Instructions to use Jarvis1111/MiniGPT4-RobustVLGuard with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Jarvis1111/MiniGPT4-RobustVLGuard with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Jarvis1111/MiniGPT4-RobustVLGuard")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Jarvis1111/MiniGPT4-RobustVLGuard", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Jarvis1111/MiniGPT4-RobustVLGuard with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Jarvis1111/MiniGPT4-RobustVLGuard"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Jarvis1111/MiniGPT4-RobustVLGuard",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/Jarvis1111/MiniGPT4-RobustVLGuard
```
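The same OpenAI-compatible endpoint can also be queried from Python. A minimal sketch using only the standard library, mirroring the curl request above (the host, port, prompt, and sampling values are the ones assumed in that snippet, not requirements of the model):

```python
# Query a locally running vLLM server via its OpenAI-compatible
# /v1/completions endpoint. Assumes the server started above is
# listening on localhost:8000.
import json
import urllib.request

payload = {
    "model": "Jarvis1111/MiniGPT4-RobustVLGuard",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def query(url="http://localhost:8000/v1/completions"):
    """POST the completion request and return the generated text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

Any OpenAI-style client library pointed at the same base URL would work equally well.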
- SGLang
How to use Jarvis1111/MiniGPT4-RobustVLGuard with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Jarvis1111/MiniGPT4-RobustVLGuard" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Jarvis1111/MiniGPT4-RobustVLGuard",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Jarvis1111/MiniGPT4-RobustVLGuard" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Jarvis1111/MiniGPT4-RobustVLGuard",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use Jarvis1111/MiniGPT4-RobustVLGuard with Docker Model Runner:
```shell
docker model run hf.co/Jarvis1111/MiniGPT4-RobustVLGuard
```
Safeguarding Vision-Language Models: Mitigating Vulnerabilities to Gaussian Noise in Perturbation-based Attacks
Welcome! This repository hosts the official implementation of our paper, "Safeguarding Vision-Language Models: Mitigating Vulnerabilities to Gaussian Noise in Perturbation-based Attacks."
Paper link: arxiv.org/abs/2504.01308
Project page:
What's New?
We propose state-of-the-art solutions to enhance the robustness of Vision-Language Models (VLMs) against Gaussian noise and adversarial attacks. Key highlights include:
- Robust-VLGuard: A pioneering multimodal safety dataset covering both aligned and misaligned image-text pair scenarios.
- DiffPure-VLM: A novel defense framework that leverages diffusion models to neutralize adversarial noise by transforming it into Gaussian-like noise, significantly improving VLM resilience.
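To illustrate the intuition behind transforming adversarial noise into Gaussian-like noise (a conceptual sketch, not the paper's implementation): the standard DDPM forward process scales any small adversarial perturbation down while injecting isotropic Gaussian noise that quickly dominates it, which is what a diffusion-based purifier exploits before denoising.

```python
# Conceptual sketch of diffusion-based purification:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,  eps ~ N(0, I)
# The forward process drowns a small structured adversarial perturbation
# in isotropic Gaussian noise; a denoiser then recovers a clean-looking input.
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, alpha_bar_t, rng):
    """Add Gaussian noise per the DDPM forward process q(x_t | x_0)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

# A clean "image" plus a small sign-pattern (FGSM-like) perturbation.
clean = rng.standard_normal((32, 32))
adv_delta = 0.05 * np.sign(rng.standard_normal((32, 32)))
adversarial = clean + adv_delta

alpha_bar = 0.5  # an assumed mid-range timestep, for illustration
xt = forward_diffuse(adversarial, alpha_bar, rng)

# The perturbation is attenuated by sqrt(alpha_bar) while the injected
# noise has std sqrt(1 - alpha_bar): as t grows (alpha_bar shrinks), the
# adversarial signal's share of the total variance vanishes.
perturb_scale = np.sqrt(alpha_bar) * np.abs(adv_delta).max()
noise_std = np.sqrt(1.0 - alpha_bar)
print(perturb_scale < noise_std)  # the Gaussian noise dominates
```

The residual perturbation that survives denoising then looks like the mild Gaussian noise that a robustly fine-tuned VLM already tolerates.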
Key Contributions
- Conducted a comprehensive vulnerability analysis revealing the sensitivity of mainstream VLMs to Gaussian noise.
- Developed Robust-VLGuard, a dataset designed to improve model robustness without compromising helpfulness or safety alignment.
- Introduced DiffPure-VLM, an effective pipeline for defending against complex optimization-based adversarial attacks.
- Demonstrated strong performance across multiple benchmarks, outperforming existing baseline methods.
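For concreteness, the Gaussian pixel perturbation at the center of the vulnerability analysis can be sketched as follows (illustration only; the image shape and `sigma` are assumed values, not numbers from the paper):

```python
# Apply i.i.d. Gaussian pixel noise to an image in [0, 1] -- the simple
# corruption that the vulnerability analysis shows mainstream VLMs are
# surprisingly sensitive to.
import numpy as np

def add_gaussian_noise(image, sigma, rng):
    """Perturb an image with Gaussian noise and clip back to the valid range."""
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(42)
img = rng.uniform(0.0, 1.0, size=(3, 224, 224))  # a random RGB "image"
noisy = add_gaussian_noise(img, sigma=0.1, rng=rng)
```

Training on image-text pairs augmented with noise of this kind is the mechanism by which Robust-VLGuard-style fine-tuning improves robustness.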