The Problem Wasn't the Model
I spent the first three months treating AI like a calculator. Type a question, get an answer. Occasionally useful, mostly generic.
You're probably sick of being told to learn AI tools. But this isn't that post.
The agent hype isn't about AI capability. It's about human org design, and we're copying the wrong thing.
Five things AI can't replace: trust, context, distribution, taste, and liability. These aren't product categories. They're structural layers that become more valuable as AI gets better, not less.
You've probably noticed I'm writing a lot. Here's why I keep going anyway.
The piece I wrote on harnesses described five component layers that every production agentic system needs: orchestration and planning, an execution environment, validation and quality control, context and memory management, and logic and interaction.
For most of human history, the most valuable place to be was the middle.
Nobody designed the org chart as a management philosophy. It was infrastructure.
The LLM is the engine. The harness is the car.
Unpopular opinion: building AI workflows that stop for human confirmation isn't careful design. It might just be discomfort dressed up as caution.
Dad. Yogi. Searching for what to build next. That's the order for a reason.
Two CEOs used AI to cut headcount dramatically. They made opposite bets to get there.