Model card for CLAP: Contrastive Language-Audio Pretraining ... (see the usage sketch below)
- huggingface.co
- 2025-05-06
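As a rough illustration of how a CLAP checkpoint can be used for audio-text matching with `transformers`, here is a minimal sketch; the checkpoint name `laion/clap-htsat-unfused` and the 48 kHz input rate are assumptions, not details taken from the truncated excerpt above.

```python
import numpy as np
import torch
from transformers import ClapModel, ClapProcessor

# Assumed checkpoint name; any CLAP checkpoint on the Hub should work the same way.
ckpt = "laion/clap-htsat-unfused"
model = ClapModel.from_pretrained(ckpt)
processor = ClapProcessor.from_pretrained(ckpt)

# Dummy one-second 48 kHz mono clip and two candidate captions.
audio = np.random.randn(48_000).astype(np.float32)
texts = ["a dog barking", "rain falling on a roof"]

inputs = processor(text=texts, audios=audio, sampling_rate=48_000,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Audio-to-text similarity logits; higher means a better caption match.
print(outputs.logits_per_audio.softmax(dim=-1))
```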
WavLM-Base (Microsoft's WavLM): the base model pretrained on 16 kHz sampled speech audio. When using ... (see the loading sketch below)
- huggingface.co
- 2025-05-06
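The excerpt notes that the input must be 16 kHz speech; a minimal sketch of extracting frame-level features with `transformers` follows, assuming the checkpoint name `microsoft/wavlm-base`.

```python
import torch
from transformers import AutoFeatureExtractor, WavLMModel

ckpt = "microsoft/wavlm-base"  # assumed checkpoint name
extractor = AutoFeatureExtractor.from_pretrained(ckpt)
model = WavLMModel.from_pretrained(ckpt)

# One second of dummy 16 kHz mono audio.
waveform = torch.randn(16_000)
inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, 768)
print(hidden_states.shape)
```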
Latvian BERT-base-cased model. Citation: @inproceedings{Znotins-Barzdins:2020:BalticHLT, author = "A. Znot...
- huggingface.co
- 2025-05-06
CANINE-s (CANINE pre-trained with subword loss): pretrained CANINE model on 104 languages using a mask... (see the encoding sketch below)
- huggingface.co
- 2025-05-06
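Since CANINE operates directly on Unicode characters rather than a subword vocabulary, a minimal encoding sketch is shown here; the checkpoint name `google/canine-s` is an assumption.

```python
import torch
from transformers import CanineModel, CanineTokenizer

ckpt = "google/canine-s"  # assumed checkpoint name
tokenizer = CanineTokenizer.from_pretrained(ckpt)
model = CanineModel.from_pretrained(ckpt)

# The tokenizer maps each Unicode character to a code point, no subwords involved.
inputs = tokenizer(["CANINE works on raw characters."], padding=True,
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # one vector per input character
```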
E5-small: Text Embeddings by Weakly-Supervised Contrastive Pre-training. Liang Wang, Nan Yang, Xiaolong... (see the embedding sketch below)
- huggingface.co
- 2025-05-06
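A minimal sketch of producing E5-style sentence embeddings follows; the checkpoint name `intfloat/e5-small`, the `query:`/`passage:` prefixes, and the mean-pooling step reflect the usual E5 usage convention and are assumptions here rather than details from the truncated excerpt.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

ckpt = "intfloat/e5-small"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt)

texts = ["query: how do contrastive text encoders work",
         "passage: E5 is trained with weakly-supervised contrastive pre-training."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    out = model(**batch).last_hidden_state

# Mean-pool over non-padding tokens, then L2-normalize.
mask = batch["attention_mask"].unsqueeze(-1).float()
emb = (out * mask).sum(dim=1) / mask.sum(dim=1)
emb = F.normalize(emb, p=2, dim=1)
print(emb[0] @ emb[1])  # cosine similarity between query and passage
```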
bert-base-cased-conversational: Conversational BERT (English, cased, 12-layer, 768-hidden, 12-heads, 1... (see the loading sketch below)
- huggingface.co
- 2025-05-06
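A minimal loading sketch follows; the repository name `DeepPavlov/bert-base-cased-conversational` is an assumption based on the card title.

```python
import torch
from transformers import AutoModel, AutoTokenizer

ckpt = "DeepPavlov/bert-base-cased-conversational"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt)

inputs = tokenizer("hey, how's it going?", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
print(hidden.shape)
```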
SEW-D-tiny (SEW-D by ASAPP Research): the base model pretrained on 16 kHz sampled speech audio. When using ... (see the feature-extraction sketch below)
- huggingface.co
- 2025-05-06
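As with WavLM, SEW-D expects 16 kHz input; the sketch below extracts hidden states, assuming the checkpoint name `asapp/sew-d-tiny-100k` and a default `Wav2Vec2FeatureExtractor` for waveform normalization.

```python
import torch
from transformers import SEWDModel, Wav2Vec2FeatureExtractor

ckpt = "asapp/sew-d-tiny-100k"  # assumed checkpoint name
model = SEWDModel.from_pretrained(ckpt)

# Default raw-waveform normalizer; assumed to match the checkpoint's preprocessing.
extractor = Wav2Vec2FeatureExtractor(sampling_rate=16_000)

waveform = torch.randn(16_000)  # one second of dummy 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state
print(features.shape)
```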
WavLM-Base-Plus (Microsoft's WavLM): the base model pretrained on 16 kHz sampled speech audio. When ...
- huggingface.co
- 2025-05-06
Korean-Sentence-Embedding: https://github.com/BM-K/Sentence-Embedding-is-all-you-need
- huggingface.co
- 2025-05-06
Releasing Hindi ELECTRA model: this is a first attempt at a Hindi language model trained with Google R...
- huggingface.co
- 2025-05-06