Instructions for using BarelyFunctionalCode/Janus-Pro-1B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers
How to use BarelyFunctionalCode/Janus-Pro-1B with Transformers:
```python
# Load model directly
from transformers import MultiModalityCausalLM

model = MultiModalityCausalLM.from_pretrained("BarelyFunctionalCode/Janus-Pro-1B", dtype="auto")
```

- Notebooks
  - Google Colab
  - Kaggle
The model's image processor configuration:

```json
{
  "background_color": [127, 127, 127],
  "do_normalize": true,
  "image_mean": [0.5, 0.5, 0.5],
  "image_processor_type": "VLMImageProcessor",
  "image_size": 384,
  "image_std": [0.5, 0.5, 0.5],
  "min_size": 14,
  "processor_class": "VLChatProcessor",
  "rescale_factor": 0.00392156862745098
}
```
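The `rescale_factor` above is 1/255, so with `do_normalize` enabled the preprocessing maps 8-bit pixel values to [0, 1] and then, since `image_mean` and `image_std` are both 0.5 per channel, to roughly [-1, 1]. A minimal sketch of that arithmetic (illustrative only; the actual `VLMImageProcessor` implementation may differ in detail):

```python
# Per-channel normalization implied by the config above (a sketch,
# not the actual VLMImageProcessor code).
RESCALE_FACTOR = 0.00392156862745098  # == 1/255 from the config
IMAGE_MEAN = 0.5
IMAGE_STD = 0.5

def normalize_pixel(value: int) -> float:
    """Map an 8-bit channel value into the model's input range."""
    rescaled = value * RESCALE_FACTOR           # [0, 255] -> [0.0, 1.0]
    return (rescaled - IMAGE_MEAN) / IMAGE_STD  # [0.0, 1.0] -> [-1.0, 1.0]

print(normalize_pixel(0))    # -1.0
print(normalize_pixel(255))  # ~1.0
print(normalize_pixel(127))  # near 0.0: the mid-gray background_color
```

Note that `background_color` (127, 127, 127) normalizes to approximately zero, so padded regions contribute near-neutral values to the model input.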