SPECTER 2.0
SPECTER 2.0 is the successor to SPECTER and is capable of generating task specific embedd...
- huggingface.co
- 2025-05-05
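Since the snippet is cut off, here is a minimal sketch of producing SPECTER-style paper embeddings, assuming the `allenai/specter2_base` checkpoint and the usual title-[SEP]-abstract input format; SPECTER 2.0's task-specific adapters are omitted for brevity, so this is an illustration rather than the model's full recommended pipeline.

```python
from transformers import AutoTokenizer, AutoModel

# Checkpoint name assumed; SPECTER 2.0 additionally offers task adapters not loaded here
tokenizer = AutoTokenizer.from_pretrained("allenai/specter2_base")
model = AutoModel.from_pretrained("allenai/specter2_base")

# SPECTER-style input: title and abstract joined with the tokenizer's SEP token
papers = [{"title": "BERT", "abstract": "We introduce a new language representation model."}]
texts = [p["title"] + tokenizer.sep_token + (p.get("abstract") or "") for p in papers]

inputs = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state[:, 0, :]  # [CLS] vector as the paper embedding
```

`embeddings` then has one row per input paper and can be compared with cosine similarity for retrieval-style tasks.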
Vision Transformer (base-sized model)
Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 ...
- huggingface.co
- 2025-05-05
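A short sketch of using the ImageNet-21k pre-trained ViT as a feature extractor, assuming the `google/vit-base-patch16-224-in21k` checkpoint; the blank image is a stand-in for a real photo.

```python
import numpy as np
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

# Stand-in for a real image; any PIL image works
image = Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
# Sequence of 196 patch tokens (14x14 grid) plus one [CLS] token
features = outputs.last_hidden_state
```

Because this checkpoint has no fine-tuned classification head, `features` (or its `[CLS]` slice) is typically fed to a downstream head trained on the task at hand.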
(BART) is the text-infilling language model used in the paper, trained on 40 GB of text. Derived from it... Shared by [optional]: Heewon (Haven) Jeon. License: ... The model should not be used to intentionally create hostile or alienating environments for people. A large body of research has explored the bias and...
- huggingface.co
- 2025-05-05
Motivation
This model is based on anferico/bert-for-patents – a BERT-LARGE model (see next secti...
- huggingface.co
- 2025-05-05
KoBART-base-v1
from transformers import PreTrainedTokenizerFast, BartModel
tokenizer = PreTrainedToken...
- huggingface.co
- 2025-05-06
Erlangshen-SimCSE-110M-Chinese
GitHub: Fengshenbang-LM
Docs: Fengshenbang-Docs
Brief Introduction: based on s...
- huggingface.co
- 2025-05-06
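A minimal sketch of SimCSE-style sentence similarity with this model, assuming the `IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese` checkpoint implied by the card's title and using [CLS] pooling (the pooling choice is an assumption, not taken from the truncated card):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Checkpoint name assumed from the model card title
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese")
model = AutoModel.from_pretrained("IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese")

def embed(text: str) -> torch.Tensor:
    """Return a SimCSE-style sentence embedding ([CLS] pooling, an assumed choice)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[:, 0]

a = embed("今天天气真好")   # "The weather is great today"
b = embed("今天是晴天")     # "It is sunny today"
sim = torch.cosine_similarity(a, b).item()
```

Cosine similarity over these embeddings is the standard way SimCSE-trained encoders are evaluated and used.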
X-CLIP (base-sized model)
X-CLIP model (base-sized, patch resolution of 16) trained on Kinetics-400. ...
- huggingface.co
- 2025-05-06
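A sketch of zero-shot video classification with X-CLIP, assuming the `microsoft/xclip-base-patch16` checkpoint matches this card (base size, patch 16, Kinetics-400) and that it samples 8 frames per clip; random frames stand in for a real video.

```python
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

# Checkpoint assumed to correspond to the card: base size, patch 16, Kinetics-400
processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch16")
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch16")

# 8 dummy RGB frames as a stand-in for a sampled video clip
video = list(np.random.randint(0, 255, (8, 224, 224, 3), dtype=np.uint8))
labels = ["playing guitar", "swimming", "cooking"]

inputs = processor(text=labels, videos=video, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# CLIP-style video-to-text similarity scores, one probability per label
probs = outputs.logits_per_video.softmax(dim=1)
```

As with CLIP, classification is done by scoring the video against free-form label prompts, so the label set can be changed without retraining.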