nm-testing/Llama-3.1-8B-Instruct-KV-Cache-FP8
8B • Updated • 3
nm-testing/TinyLlama-1.1B-Chat-v1.0-NVFP4-test132
0.7B • Updated • 2
nm-testing/TinyLlama-1.1B-Chat-v1.0-awq-asym-test-awq-asym
0.3B • Updated • 2
nm-testing/TinyLlama-1.1B-Chat-v1.0-NVFP4-1105
Updated
nm-testing/TinyLlama-1.1B-Chat-v1.0-NVFP4-test011
Updated
nm-testing/TinyLlama-1.1B-Chat-v1.0-NVFP4-test
Updated
nm-testing/Kimi-Linear-48B-A3B-Instruct-FP8-DYNAMIC
49B • Updated • 11
nm-testing/llama2.c-stories42M-pruned2.4
Updated • 292
nm-testing/gpt-oss-20B.eagle3.unconverted-drafter
nm-testing/random-weights-llama3.1.8b-2layer-eagle3-unconverted
Updated • 201
nm-testing/Llama-4-Scout-17B-16E-Instruct-BLOCK-FP8
Text Generation • 109B • Updated • 4
nm-testing/Llama-4-Maverick-17B-128E-Instruct-block-FP8
Text Generation • Updated • 12
nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK
Text Generation • Updated
nm-testing/Qwen3-30B-A3B-FP8-block
Text Generation • 3B • Updated • 3
nm-testing/granite-4.0-h-small-FP8-dynamic-test
Updated
nm-testing/tiny-testing-random-weights
584k • Updated • 2.89k
nm-testing/Llama4-Maverick-Eagle3-Speculators-64k-vocab
nm-testing/Llama-3.1-8B-Instruct-KV-FP8-tensor-static_minmax
8B • Updated • 5
nm-testing/Llama-3.1-8B-Instruct-QKV-FP8-attn_head-static_minmax
8B • Updated • 3
nm-testing/Llama-3.1-8B-Instruct-KV-FP8-attn_head-static_minmax
8B • Updated • 4
nm-testing/Llama-3.1-8B-Instruct-QKV-FP8-tensor-static_minmax
8B • Updated • 4
nm-testing/Llama-3.1-8B-Instruct-QKV-FP8-Head
8B • Updated • 2
nm-testing/Llama-3.1-8B-Instruct-QKV-FP8-Tensor
8B • Updated • 1
nm-testing/Llama-3.1-8B-Instruct-KV-FP8-Tensor
8B • Updated • 3
nm-testing/NVIDIA-Nemotron-Nano-9B-v2-quantized.w4a16
2B • Updated • 39
nm-testing/Qwen3-VL-8B-Instruct-W4A16
3B • Updated • 29
nm-testing/Qwen3-VL-8B-Instruct-NVFP4
6B • Updated • 29.3k • 3
nm-testing/Qwen3-VL-4B-Instruct-NVFP4
3B • Updated • 5 • 2