I'm More Optimistic About LLMs When The Press Releases are Only Incrementally Better
A coworker friend of mine asked me yesterday, “You seem to be publishing a ton recently [on building with LLMs]. Conscious decision to post more? or are you just hacking more?”
Honestly, it’s both. I’ve gotten so optimistic about what large language/image/video models can do right now, in part because I feel like their progress is actually slowing down.
Yesterday, we saw:
I’m no astrologer, but that constellation gives me reason to believe the LLM zeitgeist is shifting from base-model capability growth to good ol’ software engineering skills: data plumbing, tooling, and abstractions. I have a ton of ideas on where to push and improve in those directions, because that’s what I’m good at.
There’s this scene from one of my all-time favorite TV shows - Patriot - that sums up the shift for me. (And it features my good friend Andy as a one-off character!)
When all the LLM labs are releasing new step-change-in-capability models every season, with new architectures and behavior patterns, I feel like Andy’s character (the one asking the question): struggling to stay afloat, unable to build with confidence on shifting sands. When the major labs only make incremental progress, I feel like Michael Dorman (the main speaker): confident, optimistic, a plumber guiding the fluid intelligence like water.

But who knows, they could release a game-changer tomorrow.
Josh Beckman