Instructions to use tslim1/Fin-R1-mlx-8Bit with libraries, notebooks, and local apps. Follow these links to get started.
- Libraries
- MLX
How to use tslim1/Fin-R1-mlx-8Bit with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir Fin-R1-mlx-8Bit tslim1/Fin-R1-mlx-8Bit
```
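Once downloaded, the model can be run with the `mlx-lm` package. A minimal sketch, assuming `pip install mlx-lm` has been run; the prompt text and `max_tokens` value are illustrative assumptions, not part of the model card:

```python
from mlx_lm import load, generate

# Load the 8-bit quantized model and its tokenizer from the local
# directory created by the download step (mlx-lm can also fetch the
# repo from the Hub by name).
model, tokenizer = load("Fin-R1-mlx-8Bit")

# Generate a completion; sampling defaults come from the model's
# generation config unless overridden here.
prompt = "Explain the difference between gross margin and net margin."
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```

`load` returns the model and tokenizer as a pair, so the same call works whether the weights are local or pulled from the Hub on first use.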
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
Generation config (243 Bytes, commit e78c4ee):

```json
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [
    151645,
    151643
  ],
  "pad_token_id": 151643,
  "repetition_penalty": 1.05,
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8,
  "transformers_version": "4.40.2"
}
```
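The sampling defaults above can be inspected programmatically with only the standard library. A minimal sketch; the inline JSON mirrors the config shown above:

```python
import json

# Generation config as shipped with the model
config_text = """
{
  "bos_token_id": 151643,
  "do_sample": true,
  "eos_token_id": [151645, 151643],
  "pad_token_id": 151643,
  "repetition_penalty": 1.05,
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8,
  "transformers_version": "4.40.2"
}
"""
config = json.loads(config_text)

# The model samples (do_sample) with moderate temperature, nucleus
# sampling (top_p), top-k filtering, and a mild repetition penalty.
print(config["do_sample"])
print(config["temperature"], config["top_p"], config["top_k"])
```

Note that either of the two `eos_token_id` values ends generation, and the pad token reuses the BOS/EOS id 151643, which is common for Qwen-family tokenizers.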