Weekly Research Writing
This is the highest impact on humanity you can possibly have.
Our goal is to encourage as many people as possible to write 1 science / engineering blog or social media post per WEEK for the REST OF THEIR LIVES.
- Making 3 blog posts will not amount to much, but with weekly blog posts for the next 50 YEARS you will have a MASSIVE impact.
- Start today. Even if your first posts are not your best, you will improve fastest by doing it (it's questionable whether you will improve at all by delaying).
- Post fundamental things like science or engineering that will be useful for A LONG time.
- This will allow you to LEARN (explaining something in a blog post forces you to understand it), and you will have a big impact on the world even if you do it only in your free time.
- Posting once a week is optimal - it allows more time for a higher-quality post while staying regular. Readers / viewers might get fatigued by daily or too-frequent posts.
Become AI Researcher
Even high-schoolers are capable of learning to do AI research with some time.
You may go into study mode or do projects. I recommend doing both and following your curiosity.
Don't chase perfection and get overwhelmed by the difficulty. Chase progress.
Progress might feel slow at times, but that feeling is misleading; just put in the hours and you will get there (weekly or even more frequent blog posts are an excellent way to progress).
LLMs (Gemini, ChatGPT) can help you learn fast and well since they know these fundamentals; just ask them to explain things.
Here are some things to learn for AI research (you will eventually know all of this, so study whatever feels interesting):
Math
- Linear Algebra: Tensors, matrix multiplication, and vector spaces.
- Calculus: Derivatives, partial derivatives, and the Chain Rule.
- Probability & Statistics: Random variables, distributions.
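These topics connect directly to the tools below. As a small illustrative sketch (the function f is just an arbitrary example), here is the chain rule from calculus checked numerically with PyTorch's autograd:

```python
import torch

# f(x) = sin(x^2); by the chain rule, df/dx = cos(x^2) * 2x
x = torch.tensor(1.5, requires_grad=True)
y = torch.sin(x ** 2)
y.backward()                      # autograd applies the chain rule for us

manual = torch.cos(x ** 2) * 2 * x
print(torch.allclose(x.grad, manual))  # True
```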
PyTorch
- Tensors: view(), transpose(), reshape().
- Operations: torch.matmul, einsum, indexing, slicing.
- Autograd: Automatic differentiation, gradients.
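The bullets above can be tried out in a few lines. A minimal sketch (the tensor values are arbitrary) showing view, transpose, matmul, einsum, and autograd together:

```python
import torch

a = torch.arange(6.0)           # shape (6,)
b = a.view(2, 3)                # reshape without copying data: shape (2, 3)
c = b.transpose(0, 1)           # swap dims 0 and 1: shape (3, 2)

# matmul and the equivalent einsum expression
m = torch.matmul(b, c)          # (2, 3) @ (3, 2) -> (2, 2)
e = torch.einsum('ij,jk->ik', b, c)
print(torch.allclose(m, e))     # True

# autograd: gradient of a scalar loss w.r.t. a tensor
w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()
loss.backward()                 # d(sum w^2)/dw = 2w
print(torch.allclose(w.grad, 2 * w))  # True
```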
Fundamentals
- Components: Weighted sums, biases, neurons.
- Activations: ReLU, Sigmoid, Tanh, SwiGLU.
- Training: CrossEntropy, MSE, KL Divergence, Backpropagation.
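The fundamentals above fit together in one training step. A toy sketch (sizes and data are made up): weighted sums plus biases form the neurons, ReLU is the activation, cross-entropy is the loss, and backward() performs backpropagation:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

x = torch.randn(4, 8)                  # batch of 4 inputs with 8 features
target = torch.tensor([0, 1, 2, 1])    # class labels for 3 classes
W1 = torch.randn(8, 16, requires_grad=True)
b1 = torch.zeros(16, requires_grad=True)
W2 = torch.randn(16, 3, requires_grad=True)
b2 = torch.zeros(3, requires_grad=True)

hidden = F.relu(x @ W1 + b1)           # weighted sums + biases, ReLU activation
logits = hidden @ W2 + b2
loss = F.cross_entropy(logits, target) # CrossEntropy loss
loss.backward()                        # backpropagation fills every .grad

# one manual gradient-descent step
with torch.no_grad():
    for p in (W1, b1, W2, b2):
        p -= 0.1 * p.grad
```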
Architectures
- Transformers: Attention Mechanism, Queries, Keys, Values.
- Vision: CNNs, spatial hierarchies, kernels.
- State Space Models: SSMs, Mamba.
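The attention mechanism at the heart of Transformers is short enough to write out. A minimal sketch of scaled dot-product attention (shapes are illustrative; real implementations add batching, heads, and masking):

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # softmax(Q K^T / sqrt(d)) V
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5  # how much each query matches each key
    weights = F.softmax(scores, dim=-1)          # each row sums to 1
    return weights @ v                           # weighted sum of the values

q = torch.randn(4, 16)   # 4 query vectors of dim 16
k = torch.randn(6, 16)   # 6 key vectors
v = torch.randn(6, 16)   # 6 value vectors
out = attention(q, k, v)
print(out.shape)  # torch.Size([4, 16])
```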
Ideas
- Explain high-level concepts or intuition
- Detail the mathematical foundations and proofs
- Implement the logic in PyTorch or Triton
- Build data pipelines and synthetic environments
- Run experiments and analyze the results
- Review and improve existing work