I'm More Optimistic About LLMs When the Press Releases Are Only Incrementally Better
A coworker friend of mine asked me yesterday, “You seem to be publishing a ton recently [on building with LLMs]. Conscious decision to post more? or are you just hacking more?”
Honestly, it’s both. I’ve gotten really optimistic about what large language/image/video models can do right now, in no small part because I feel like their progress is actually slowing down.
Yesterday, we saw:
- the latest Anthropic release is a 2% improvement
- OpenAI is open-sourcing models
- Google's product releases are light on implementation details
I’m no astrologer, but that constellation gives me reason to believe the LLM zeitgeist is shifting from base-model capability growth to good ol’ software engineering skills: data plumbing, tooling, and abstractions. I have a ton of ideas on where to push and improve in those directions, because that’s what I’m good at.
There’s this scene from one of my all-time favorite TV shows - Patriot - that sums up the shift for me. (And it features my good friend Andy as a one-off character!)
When all the LLM labs are releasing new step-change-in-capability models every season, with new architectures and behavior patterns, I feel like Andy’s character (the one asking the question): struggling to stay afloat, unable to build with confidence on shifting sands. When the major labs only make incremental progress, I feel like Michael Dorman (the main speaker): confident, optimistic, a plumber guiding the fluid intelligence like water.
But who knows, they could release a game-changer tomorrow.