How I keep up with AI progress (and why you must too)
Thijs Verreck
Last Updated: 21st July, 2025
AI is moving faster than any technology I've ever seen. But most people completely misunderstand it.
People either think it's all hype or that it replaces everything. Both are wrong because they don't understand what's actually happening.
The problem is the information environment is terrible. If you're not careful about your sources, you'll get either breathless hype or dismissive takes. Neither helps you understand what's real.
I've put together a list of sources that actually matter. If you're starting from scratch, this is where I'd begin.
Two rules
- Read the original sources. The farther you get from the actual AI labs and researchers, the more noise you encounter. Assume all reporting is wrong unless it comes from the primary source.
- Follow people who know what they're talking about. I've listed people who engage with AI honestly and have built real things.
Start here
Simon Willison's Blog
If I could only follow one source, this would be it. Simon co-created Django and created Datasette. He writes about:
- What new AI can actually do
- Real applications people are building
- Security and ethics issues
Good examples: The Lethal Trifecta, LLMs in 2024
Andrej Karpathy
Former Director of AI at Tesla and a founding member of OpenAI. The best person to follow if you want to understand how these models actually work. His 3.5-hour video explaining LLMs is incredible and surprisingly accessible.
He writes about:
- How AI actually works under the hood
- What new capabilities mean
- Cultural observations (he coined "vibe coding" and "jagged intelligence")
Examples: Deep Dive into LLMs like ChatGPT, How I Use LLMs
Read the labs directly
The AI companies sometimes hype things up, but their official posts have the most accurate information about what their models can do.
Follow announcements from OpenAI, Google DeepMind, Anthropic, DeepSeek, Meta AI, xAI and Qwen.
What to look for:
- Announcement posts give you the overview (example)
- Engineering blogs show you how to actually use them (Anthropic, OpenAI, Gemini)
- System cards have the detailed benchmarks and limitations (example)
- Research papers show the technical details (DeepSeek R1, Anthropic)
When someone makes a wild claim about AI capabilities, ignore them and read the original source.
The cookbooks are good starting points but not always the best way to do things. We're all still figuring this out. Your own experience trumps everything.
Also worth following smaller players: Nous Research, Allen AI, Prime Intellect, Pleias, Cohere, Goodfire.
People building real things
These people actually build AI applications. They know what works and what doesn't.
Hamel Husain
ML engineer who runs a consultancy. Great at explaining evals and how to improve AI systems.
Examples: Your AI Product Needs Evals, LLM Eval FAQ
Shreya Shankar
Researcher at UC Berkeley. Writes about AI engineering and what she learns from experiments.
Examples: Data Flywheels for LLM Applications, Short Musings on AI Engineering
Jason Liu
Created Instructor. Knows RAG and evals better than almost anyone.
Examples: The RAG Playbook, Common RAG Mistakes
Eugene Yan
Principal Applied Scientist at Amazon. Goes deeper into the ML fundamentals behind AI applications.
Examples: Task-Specific LLM Evals that Do & Don't Work, Intuition on Attention
What We've Learned From A Year of Building with LLMs
Collection of practitioners (including everyone above) sharing what they've learned building AI systems.
Chip Huyen
Wrote AI Engineering. Great at explaining how to build AI systems in production.
Examples: Common pitfalls when building generative AI applications, Agents
Omar Khattab
Created DSPy. Thinks about better abstractions than just prompting.
Examples: A Guide to Large Language Model Abstractions, Twitter post on better abstractions
Kwindla Hultman Kramer
CEO of Daily, created Pipecat. Best source for voice AI.
Examples: Voice AI and Voice Agents: An Illustrated Primer, Advice on Building Voice AI
Han Chung Lee
ML engineer with clear writing about AI techniques and dev tools.
Examples: MCP is not REST API, Poking around Claude Code
Jo Kristian Bergum
Founder of vespa.ai. Best commentary on the "R" in RAG.
Example: Search is the natural abstraction for augmenting AI
David Crawshaw
Co-founder of Tailscale. Writes about programming with AI from a software engineering perspective.
Examples: How I program with LLMs, How I program with Agents
Alexander Doria / Pierre-Carl Langlais
Trains LLMs at Pleias. Good insights into training processes and where things are heading.
Examples: The Model is the Product, A Realistic AI Timeline
Nathan Lambert's "Interconnects"
Post-training lead at Allen AI. Technical analysis of AI training and deployment.
Examples: What comes next with Reinforcement Learning, Reinforcement learning with random rewards
Ethan Mollick
Researcher on AI's effects on work and education. Practical guides for everyday use.
Examples: Using AI Right Now: A Quick Guide, Making AI Work
Arvind Narayanan and Sayash Kapoor's "AI Snake Oil"
Princeton computer scientists who cut through AI hype and doom with data.
Examples: AGI is not a milestone, Evaluating LLMs is a minefield
News sources
I don't follow much news, but these are clean sources for AI developments.
Twitter / X
Twitter is where AI conversations happen. It can be toxic, but you can use it well.
Shawn Wang (swyx) / AI news
swyx curates industry trends on Latent Space and runs AI News, a daily summary of AI developments across platforms.
Dwarkesh Patel
The best AI podcast. Dwarkesh asks good questions of the people who matter.
Deeper stuff
LessWrong / AI Alignment Forum
Technical discussions about AI safety and alignment. More detailed than mainstream Twitter.
Examples: Claude plays Pokémon breakdown, The Waluigi Effect
Gwern
Encyclopedic writing about AI. He predicted LLM scaling early. Dense but fascinating.
Examples: The Scaling Hypothesis, You could have invented transformers
Prompt researchers
Janus, Wyatt Walls, Claude Backrooms
Researchers who explore LLM boundaries with unusual prompts to understand their hidden behaviors.
Examples: Anomalous tokens reveal model identities, the void
Is this too much work?
Not really. I spend maybe 15-20 minutes scanning Twitter like reading a newspaper. Some things catch my eye, others I skip or save for later.
My Twitter feed has thoughtful commentary that helps me figure out what's worth attention. When someone shares something interesting, I follow them and check out their other work. It's like discovering new music.
I actually enjoy this. I grew up on science fiction, and watching AI get built in real time is endlessly fascinating.
I hope this gets you as excited as I am.