# cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR


The following script converts a list of entity names into embeddings, using the [CLS] token representation:

```python
import numpy as np
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-UMLS-2020AB-all-lang-from-XLMR").cuda()

# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]

bs = 128  # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
    toks = tokenizer.batch_encode_plus(all_names[i:i+bs],
                                       padding="max_length",
                                       max_length=25,
                                       truncation=True,
                                       return_tensors="pt")
    toks_cuda = {}
    for k, v in toks.items():
        toks_cuda[k] = v.cuda()
    cls_rep = model(**toks_cuda)[0][:, 0, :]  # use CLS representation as the embedding
    all_embs.append(cls_rep.cpu().detach().numpy())

all_embs = np.concatenate(all_embs, axis=0)
```
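
As a rough illustration of how such embeddings are typically used for entity linking, the sketch below encodes a query mention with the same model and retrieves its nearest neighbour among `all_names` by cosine similarity. The `encode` helper, the query string, and the choice of cosine similarity are illustrative assumptions, not part of the original card.

```python
# Illustrative sketch (not from the original card): nearest-neighbour lookup
# over the embeddings computed above, using cosine similarity.

def encode(names):  # hypothetical helper wrapping the encoding loop shown above
    toks = tokenizer.batch_encode_plus(names, padding="max_length", max_length=25,
                                       truncation=True, return_tensors="pt")
    toks_cuda = {k: v.cuda() for k, v in toks.items()}
    return model(**toks_cuda)[0][:, 0, :].cpu().detach().numpy()

query_emb = encode(["covid19 infection"])  # a query mention to link

# cosine similarity between the query and every candidate name
sims = (query_emb @ all_embs.T) / (
    np.linalg.norm(query_emb, axis=1, keepdims=True) * np.linalg.norm(all_embs, axis=1))

best = int(sims[0].argmax())
print(all_names[best], sims[0][best])  # closest candidate name and its similarity score
```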


For more details about training and evaluation, see the SapBERT [GitHub repo](https://github.com/cambridgeltl/sapbert).

### Citation

```bibtex
@inproceedings{liu2021learning,
	title={Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking},
	author={Liu, Fangyu and Vuli{\'c}, Ivan and Korhonen, Anna and Collier, Nigel},
	booktitle={Proceedings of ACL-IJCNLP 2021},
	month = aug,
	year={2021}
}
```