Open Superintelligence Lab
Making AI development accessible through scalable superintelligence research
Our Vision
Any company or individual, even with no technical experience, should be able to download this repository and run it on their GPU setup, whether that is 1 GPU or 1 million GPUs. The system will automatically detect your hardware configuration, tune hyperparameters for optimal performance, and run the best possible training, with or without manual configuration on your part.
Auto-Scaling
Seamlessly scale from a single GPU to massive distributed clusters
Auto-Tuning
Intelligent hyperparameter optimization for your hardware
Zero-Config
Works out of the box with automatic hardware detection
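To give a flavor of the zero-config idea, here is a minimal sketch of automatic hardware detection. This is a hypothetical illustration, not the project's actual implementation: the function name `detect_hardware` and the strategy of counting GPUs via `nvidia-smi` are assumptions for the sake of the example.

```python
import os
import shutil
import subprocess

def detect_hardware():
    """Report CPU and GPU resources visible on this machine.

    Hypothetical sketch of the zero-config detection described
    above; the real system would feed this into auto-tuning.
    """
    info = {"cpus": os.cpu_count() or 1, "gpus": 0}
    # If the NVIDIA driver tools are on PATH, count visible GPUs.
    if shutil.which("nvidia-smi"):
        try:
            out = subprocess.run(
                ["nvidia-smi", "--list-gpus"],
                capture_output=True, text=True, check=True,
            )
            info["gpus"] = len(out.stdout.strip().splitlines())
        except subprocess.CalledProcessError:
            pass  # driver present but unusable; fall back to CPU-only
    return info

print(detect_hardware())
```

A launcher built on this could pick single-GPU, multi-GPU, or CPU-only training automatically, so the user never edits a config file by hand.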
Blueberry LLM 🫐
Our flagship Mixture of Experts (MoE) language model implementation. Clone, install dependencies, and train your own language model with a single command. Perfect for researchers and developers looking to experiment with cutting-edge AI architectures.