How I keep up with AI progress (and why you must too)

Thijs Verreck

Last Updated: 21st July, 2025

AI is moving faster than any technology I've ever seen. But most people completely misunderstand it.

People either think it's all hype or that it will replace everything. Both takes are wrong, because neither grasps what's actually happening.

The problem is the information environment is terrible. If you're not careful about your sources, you'll get either breathless hype or dismissive takes. Neither helps you understand what's real.

I've put together a list of sources that actually matter. If you're starting from scratch, this is where I'd begin.

Two rules

Start here

Simon Willison's Blog

Link

If I could only follow one source, this would be it. Simon created Django and Datasette, and he writes constantly about what LLMs can and can't actually do.

Good examples: The Lethal Trifecta, LLMs in 2024

Andrej Karpathy

Twitter and YouTube

Former Director of AI at Tesla and a founding member of OpenAI. The best person to follow if you want to understand how these models actually work. His 3.5-hour video explaining LLMs is incredible and surprisingly accessible.

Examples: Deep Dive into LLMs like ChatGPT, How I Use LLMs

Read the labs directly

The AI companies sometimes hype things up, but their official posts are still the most accurate source on what their models can do.

Follow announcements from OpenAI, Google DeepMind, Anthropic, DeepSeek, Meta AI, xAI and Qwen.

What to look for:

When someone makes a wild claim about AI capabilities, ignore them and read the original source.

The labs' cookbooks are good starting points, but not always the best way to do things. We're all still figuring this out, and your own experience trumps everything.

Also worth following smaller players: Nous Research, Allen AI, Prime Intellect, Pleias, Cohere, Goodfire.

People building real things

These people actually build AI applications. They know what works and what doesn't.

Hamel Husain

Link

ML engineer who runs a consultancy. Great at explaining evals and how to improve AI systems.

Examples: Your AI Product Needs Evals, LLM Eval FAQ

Shreya Shankar

Link

Researcher at UC Berkeley. Writes about AI engineering and what she learns from experiments.

Examples: Data Flywheels for LLM Applications, Short Musings on AI Engineering

Jason Liu

Link

Created Instructor. Knows RAG and evals better than almost anyone.

Examples: The RAG Playbook, Common RAG Mistakes

Eugene Yan

Link

Principal Applied Scientist at Amazon. Goes deeper into the ML fundamentals behind AI applications.

Examples: Task-Specific LLM Evals that Do & Don't Work, Intuition on Attention

What We've Learned From A Year of Building with LLMs

Link

Collection of practitioners (including everyone above) sharing what they've learned building AI systems.

Chip Huyen

Link

Wrote AI Engineering. Great at explaining how to build AI systems in production.

Examples: Common pitfalls when building generative AI applications, Agents

Omar Khattab

Website and Twitter

Created DSPy. Thinks about better abstractions than just prompting.

Examples: A Guide to Large Language Model Abstractions, twitter post on better abstractions

Kwindla Hultman Kramer

Blogs and Twitter

CEO of Daily, created Pipecat. Best source for voice AI.

Examples: Voice AI and Voice Agents: An Illustrated Primer, Advice on Building Voice AI

Han Chung Lee

Link

ML engineer with clear writing about AI techniques and dev tools.

Examples: MCP is not REST API, Poking around Claude Code

Jo Kristian Bergum

Link

Founder of vespa.ai. Best commentary on the "R" in RAG.

Example: Search is the natural abstraction for augmenting AI

David Crawshaw

Link

Co-founder of Tailscale. Writes about programming with AI from a software engineering perspective.

Examples: How I program with LLMs, How I program with Agents

Alexander Doria / Pierre-Carl Langlais

Link

Trains LLMs at Pleias. Good insights into training processes and where things are heading.

Examples: The Model is the Product, A Realistic AI Timeline

Nathan Lambert's "Interconnects"

Link

Post-training lead at Allen AI. Technical analysis of AI training and deployment.

Examples: What comes next with Reinforcement Learning, Reinforcement learning with random rewards

Ethan Mollick

Link

Researcher on AI's effects on work and education. Practical guides for everyday use.

Examples: Using AI Right Now: A Quick Guide, Making AI Work

Arvind Narayanan and Sayash Kapoor's "AI Snake Oil"

Link

Princeton CS professors who cut through AI hype and doom with data.

Examples: AGI is not a milestone, Evaluating LLMs is a minefield

News sources

I don't follow much news, but these are clean sources for AI developments.

Twitter / X

Twitter is where AI conversations happen. It can be toxic, but you can use it well.

Shawn Wang (swyx) / AI news

Twitter / AI news

swyx curates industry trends on Latent Space and runs AI News, a daily summary of AI developments across platforms.

Dwarkesh Patel

Link

The best AI podcast. Dwarkesh asks sharp questions of the people who matter.

Deeper stuff

LessWrong / AI Alignment Forum

LessWrong / AI Alignment Forum

Technical discussions about AI safety and alignment. More detailed than mainstream Twitter.

Examples: Claude plays Pokémon breakdown, The Waluigi Effect

Gwern

Link

Encyclopedic writing about AI. He predicted LLM scaling early. Dense but fascinating.

Examples: The Scaling Hypothesis, You could have invented transformers

Prompt researchers

Janus, Wyatt Walls, Claude Backrooms

Researchers who probe LLM boundaries with unusual prompts to surface hidden behaviors.

Examples: Anomalous tokens reveal model identities, the void

Is this too much work?

Not really. I spend maybe 15-20 minutes a day scanning Twitter, the way you'd read a newspaper. Some things catch my eye; others I skip or save for later.

My Twitter feed has thoughtful commentary that helps me figure out what's worth attention. When someone shares something interesting, I follow them and check out their other work. It's like discovering music.

I actually enjoy this. I grew up on science fiction, and watching AI get built in real time is endlessly fascinating.

I hope this gets you as excited as I am.