Open to Collab

DedeProGames PRO

DedeProGames

AI & ML interests

Thinking and Agentic Finetuning

Recent Activity

reacted to SeaWolf-AI's post with šŸ”„ about 1 hour ago
šŸ”„ 128 Blackwell GPUs — Thank You, Hugging Face

I've been awarded 128 NVIDIA Blackwell GPUs through NIPA (Korea's National IT Industry Promotion Agency). Sharing this here first — because Hugging Face is where it all started.

I design LLM architectures from scratch. HF was my lab — dissecting Transformers internals, analyzing thousands of checkpoints, iterating on Spaces with global feedback. Our FINAL Bench reached #5 globally in HF dataset popularity, and this research is exactly what earned the GPU grant.
šŸ‘‰ https://huggingface.co/spaces/FINAL-Bench/Leaderboard

These 128 Blackwells will scale AETHER-Net — our Proto-AGI architecture (Emergence Engine Ā· Meta-Cognition Ā· SLAI Ā· Multi-Intelligence Ā· Synergy & Critique) — validated at 0.8B params with MoE expansion to 2.1B. Next stop: 166B.

People I must thank:
- @John6666 — Guardian of this ecosystem. Never misses a forum question, takes an interest in every project, and is active 24/7. I've genuinely wondered if you're a machine. Remarkable.
- @bartowski — Master of quantization and the hidden infrastructure of open-source LLMs. Countless experiments were possible thanks to you.
- @SaylorTwift — You see what others miss. Insight that cuts to the essence. Deep respect.

My promise: AETHER-Net design docs, training recipes, checkpoints, and failure logs will all be shared here openly. šŸ¤—

Thank you, Hugging Face. Let's turn the next page together. šŸš€

vidraft Ā· VIDRAFT Ā· Ginigen AI

#OpenScience #HuggingFace #ProtoAGI #AETHER #LLMArchitecture #Blackwell #NIPA
reacted to SeaWolf-AI's post with šŸ¤ about 1 hour ago
updated a model about 7 hours ago
OrionLLM/GRM2-3b

Organizations

Hugging Science Ā· Orion LLM Labs