We collaborated with NVIDIA to teach you how we made LLM training ~25% faster!

Learn how 3 optimizations help your home GPU train models faster:
1. Packed-sequence metadata caching
2. Double-buffered checkpoint reloads
3. Faster MoE routing

Guide: https://unsloth.ai/blog/nvidia-collab
GitHub: https://github.com/unslothai/unsloth
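To give a flavor of the first optimization, here is a minimal, hypothetical sketch of packed-sequence metadata caching. The idea: packed attention kernels need the cumulative offsets of each sequence in a packed batch, and when many batches share the same length layout, those offsets can be memoized instead of recomputed every step. The function name and cache strategy below are illustrative assumptions, not Unsloth's actual implementation; see the linked guide for the real details.

```python
from functools import lru_cache

# Hypothetical sketch: cache the prefix-sum offsets ("cu_seqlens") that packed
# attention kernels use to find sequence boundaries. Keyed on the tuple of
# sequence lengths, so batches with a repeated layout skip recomputation.
# This is an illustrative example, not Unsloth's API.


@lru_cache(maxsize=1024)
def cached_cu_seqlens(lengths: tuple) -> tuple:
    """Return cumulative offsets marking where each packed sequence starts."""
    offsets = [0]
    for n in lengths:
        offsets.append(offsets[-1] + n)
    return tuple(offsets)


# First call with a given layout computes; repeats hit the cache.
print(cached_cu_seqlens((3, 5, 2)))  # (0, 3, 8, 10)
```

In a real trainer the cached metadata would typically live on the GPU as a tensor; the savings come from skipping both the host-side recomputation and the host-to-device copy for repeated layouts.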