Multimodal
Ministral 3 8B Instruct
Mistral AI's versatile 8 billion parameter instruction-tuned model
Memory Requirement: 8GB RAM
Precision: FP8
Size: 5GB
Jetson Inference - Supported Inference Engines
Container: ghcr.io/nvidia-ai-iot/vllm:latest-jetson-orin

Run Command:

sudo docker run -it --rm --pull always --runtime=nvidia --network host ghcr.io/nvidia-ai-iot/vllm:latest-jetson-orin vllm serve mistralai/Ministral-3-8B-Instruct-2512

Model Details
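Once the container is up, vLLM serves an OpenAI-compatible HTTP API (on port 8000 by default). The following is a minimal sketch of querying the model through that endpoint, assuming the server started by the command above is reachable at localhost:8000; the prompt and `max_tokens` value are illustrative choices, not part of this model card.

```python
import json
import urllib.request


def build_chat_request(prompt: str,
                       model: str = "mistralai/Ministral-3-8B-Instruct-2512") -> dict:
    """Build an OpenAI-compatible chat-completions payload for the vLLM server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,  # illustrative cap on the reply length
    }


def query(prompt: str, base_url: str = "http://localhost:8000") -> str:
    """POST the request to vLLM's /v1/chat/completions and return the reply text."""
    payload = build_chat_request(prompt)
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the server speaks the OpenAI wire format, the same payload also works with any OpenAI-compatible client library pointed at the Jetson's address.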
Mistral AI's Ministral 3 8B Instruct is the default instruction-tuned variant of the Ministral 3 family, balancing capability and efficiency.
The Ministral 3 Instruct model offers the following capabilities:
- Vision: Enables the model to analyze images and provide insights based on visual content, in addition to text.
- Multilingual: Supports dozens of languages, including English, French, Spanish, German, Italian, Portuguese, Dutch, Chinese, Japanese, Korean, and Arabic.
- System Prompt: Maintains strong adherence and support for system prompts.
- Agentic: Offers best-in-class agentic capabilities with native function calling and JSON output.
- Edge-Optimized: Delivers best-in-class performance at a small scale, deployable anywhere.
- Apache 2.0 License: Open-source license allowing usage and modification for both commercial and non-commercial purposes.
- Large Context Window: Supports a 256k context window.
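The function-calling capability listed above can be exercised through the same OpenAI-compatible API by declaring tools in the request. A sketch of such a payload follows; the `get_weather` tool is a hypothetical example invented for illustration, not something defined by the model card.

```python
import json


def build_tool_call_request(prompt: str) -> dict:
    """OpenAI-style chat payload declaring one tool the model may choose to call."""
    return {
        "model": "mistralai/Ministral-3-8B-Instruct-2512",
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical tool for illustration
                    "description": "Look up the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }


payload = build_tool_call_request("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

When the model elects to call the tool, the response's `choices[0].message.tool_calls` carries the function name and JSON arguments, which the caller executes and feeds back as a `tool` role message.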