How to use NingLab/GeLLMO-P4-Mistral with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="NingLab/GeLLMO-P4-Mistral")

# Load model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("NingLab/GeLLMO-P4-Mistral", dtype="auto")
How to use NingLab/GeLLMO-P4-Mistral with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "NingLab/GeLLMO-P4-Mistral"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "NingLab/GeLLMO-P4-Mistral",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
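The same request can be issued from Python using only the standard library. The sketch below mirrors the curl payload for the vLLM server started above; the prompt is just a placeholder, and the endpoint assumes the default vLLM port:

```python
import json
import urllib.request

# Mirror of the curl payload for the OpenAI-compatible /v1/completions endpoint
payload = {
    "model": "NingLab/GeLLMO-P4-Mistral",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending the request requires the vLLM server from the step above to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```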
How to use NingLab/GeLLMO-P4-Mistral with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "NingLab/GeLLMO-P4-Mistral" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "NingLab/GeLLMO-P4-Mistral",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Alternatively, start the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "NingLab/GeLLMO-P4-Mistral" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "NingLab/GeLLMO-P4-Mistral",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
How to use NingLab/GeLLMO-P4-Mistral with Docker Model Runner:
docker model run hf.co/NingLab/GeLLMO-P4-Mistral
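Whichever serving route is used, the model expects an instruction-style prompt describing the molecule optimization task. The exact prompt template is defined in the GeLLMO repository; the helper below is only a hypothetical sketch of what such an instruction might look like, with a made-up wording and property names:

```python
# Hypothetical instruction builder for multi-property molecule optimization.
# The real prompt format is specified in the GeLLMO repository; treat this
# wording as a placeholder, not the template the model was trained on.
def build_instruction(source_smiles: str, properties: list[str]) -> str:
    props = ", ".join(properties)
    return (
        f"Optimize the following molecule to improve {props} "
        f"while preserving structural similarity: {source_smiles}"
    )

prompt = build_instruction("CCO", ["solubility", "permeability"])
# pipe(prompt, max_new_tokens=512)  # requires the model loaded as shown above
```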
For instructions to run the model, please refer to our repository.
While our models are designed for research and drug discovery applications, they come with ethical and safety considerations. We urge users to adopt best practices, including toxicity prediction pipelines, ethical oversight, and responsible AI usage policies, to prevent harmful applications of this model.
If you use the trained model checkpoints, datasets, or other resources, please use the following citation:
@article{dey2025gellmo,
  title={GeLLMO: Generalizing Large Language Models for Multi-property Molecule Optimization},
  author={Vishal Dey and Xiao Hu and Xia Ning},
  year={2025},
  journal={arXiv preprint arXiv:2502.13398},
  url={https://arxiv.org/abs/2502.13398},
}
Base model
mistralai/Mistral-7B-v0.3