Hiya you!
The headlines are noisy. The tools we use are changing daily. And most people in your world are either overwhelmed or over-claiming.
Here's what I keep coming back to: optimism is not naivety. It's a long game.
And the thinking this week backs it up.
Judgment is the real limiter.
Ethan Mollick reframed the AI conversation in a way your peers need to hear.
As AI takes on longer, more complex tasks, the number of judgment calls it makes -- based on your intent -- multiplies. And those judgment calls compound.
His line: "Judgment may be a bigger limiter on agentic work than hallucinations."
Sit with that for a second.
The real problem isn't that the machine makes things up. It's that the machine can't replace what you know, how you read a room, or what you're trying to build.
Hallucinations are correctable. Poor judgment quietly compounding over a 20-step autonomous task? That's a different problem -- and no model upgrade fully solves it.
This is the reframe your audiences need. The conversation has been stuck on "AI makes stuff up." The real conversation is: AI can't replace your judgment, your taste, or your voice.
That's not a warning. That's an opening.
So what does that actually require of you?
If judgment is the real limiter -- and it is -- the question isn't how good is the AI?
It's: how clear are you?
What is your ruling principle? The one that governs how you make calls when the situation is ambiguous, the stakes are high, and no one's watching? Not your values as a list on a website. The actual filter you run decisions through.
Here's why this matters more than most people realize.
Anthropic's alignment team -- the people building Claude -- wrote what they call a "soul document" for the model. A set of character traits, values, and ruling principles that govern how it exercises judgment in the wild. The philosopher behind it, Amanda Askell, frames it simply: if the agent is going to make judgment calls on your behalf, you have to encode what good judgment looks like. You have to give it a soul.
That's not a metaphor. It's an engineering problem.
And it's yours too.
The AI is going to inherit your judgment. It will act on your intent, extend your reasoning, and make dozens of downstream decisions based on what it infers about what you care about. If you haven't made your ruling principles explicit -- to yourself, let alone to the tools you're using -- the compounding goes in the wrong direction.
So take 10 minutes this week. Not to optimize your workflow. Not to test a new tool.
Write down your rules. The real ones. What do you refuse to trade away? What's the one principle that, if violated, means the output is wrong -- regardless of how polished it looks?
That's your soul document. And right now, it matters more than any prompt.