Instructions for using LTP/base2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LTP/base2 with Transformers:

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("LTP/base2", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Xet hash: c718b98893dfb60f7cd77856d5f13d779d9f5a5899f4734caba3a38c60b49c8d
- Size of remote file: 660 MB
- SHA256: c10f249fc2f6d84d66d90c4b75f9cfa8beb7773a855821fec4773941c66afc4a
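The SHA256 checksum above can be used to confirm that a downloaded copy of the weights is intact. A minimal sketch using Python's standard `hashlib`, assuming the file has been saved locally as `model.bin` (hypothetical path):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the published checksum (local path is hypothetical):
# expected = "c10f249fc2f6d84d66d90c4b75f9cfa8beb7773a855821fec4773941c66afc4a"
# assert sha256_of_file("model.bin") == expected
```

Chunked reading keeps memory use constant, which matters for a 660 MB file.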
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, accelerating uploads and downloads.