Tutorial - Introduction
Overview
Our tutorials are divided into categories based roughly on model modality, i.e. the type of data to be processed or generated.
Text (LLM)
| text-generation-webui | Interact with a local AI assistant by running an LLM with oobabooga's text-generation-webui |
| llamaspeak | Talk live with Llama using Riva ASR/TTS, and chat about images with LLaVA! |
Text + Vision (VLM)
Give your locally running LLM access to vision!
Vision Transformers (ViT)
| EfficientViT | MIT Han Lab's EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction |
| NanoSAM | NanoSAM, a SAM model variant capable of running in real time on Jetson |
| NanoOWL | OWL-ViT optimized to run in real time on Jetson with NVIDIA TensorRT |
| SAM | Meta's SAM, the Segment Anything model |
| TAM | TAM, the Track-Anything model, an interactive tool for video object tracking and segmentation |
Vector Database
| NanoDB | Interactive demo showing the impact of a vector database that handles multimodal data |
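The core operation behind a vector database like NanoDB is nearest-neighbor search over embeddings produced by a multimodal model (such as CLIP), so images and text queries share one embedding space. A toy sketch in plain Python, where the three-dimensional embeddings and file names are made up for illustration (real embeddings have hundreds of dimensions, and NanoDB accelerates the search on the GPU):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; higher means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(index, query, k=2):
    """Return the k index entries whose embeddings are most similar to the query."""
    ranked = sorted(index, key=lambda item: cosine_similarity(item["embedding"], query),
                    reverse=True)
    return ranked[:k]

# Hypothetical index of image embeddings (values invented for this sketch).
index = [
    {"id": "cat.jpg", "embedding": [0.9, 0.1, 0.0]},
    {"id": "dog.jpg", "embedding": [0.8, 0.3, 0.1]},
    {"id": "car.jpg", "embedding": [0.0, 0.2, 0.9]},
]
query = [0.85, 0.2, 0.05]  # hypothetical embedding of the text "a photo of a pet"
print([hit["id"] for hit in search(index, query)])  # → ['cat.jpg', 'dog.jpg']
```

Because text and images are embedded into the same space, the same search retrieves images from a text query, which is what makes the database "multimodal".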
Audio
Tips
| Knowledge Distillation |
| SSD + Docker |
| Memory optimization |
About NVIDIA Jetson
Note
We are mainly targeting Jetson Orin generation devices for deploying the latest LLMs and generative AI models.
|  | Jetson AGX Orin 64GB Developer Kit | Jetson AGX Orin Developer Kit | Jetson Orin Nano Developer Kit |
| --- | --- | --- | --- |
| GPU | 2048-core NVIDIA Ampere architecture GPU with 64 Tensor Cores | 2048-core NVIDIA Ampere architecture GPU with 64 Tensor Cores | 1024-core NVIDIA Ampere architecture GPU with 32 Tensor Cores |
| RAM (CPU+GPU) | 64GB | 32GB | 8GB |
| Storage | 64GB eMMC (+ NVMe SSD) | 64GB eMMC (+ NVMe SSD) | microSD card (+ NVMe SSD) |
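When choosing a kit for LLM work, a quick back-of-the-envelope check is whether the quantized model weights fit in the unified (CPU+GPU) memory listed above. A rough sketch of that arithmetic, with the caveat that it counts weights only (KV cache, activations, and runtime overhead add more, so leave headroom):

```python
def model_memory_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    """Approximate weight-only memory footprint in GB:
    params * bits-per-weight / 8 bits-per-byte.
    This is a lower bound; real usage is higher."""
    return params_billions * bits_per_weight / 8

print(model_memory_gb(7, 4))   # 3.5  -> a 4-bit 7B model can fit the 8GB Orin Nano
print(model_memory_gb(70, 4))  # 35.0 -> a 4-bit 70B model calls for the 64GB AGX Orin
```

This is why the note above targets the Orin generation: its larger unified memory lets a single board hold models that would not fit on earlier 4GB/8GB Jetson devices at useful precision.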