The Wrong Question

29 Jul 2025

There are two grand goals of AI research, and we’re fixated on the wrong one. The question that matters isn’t whether AI can think, but whether it helps us think better.

Since the earliest days of cybernetics, we’ve been captivated by building systems that can replace human judgment entirely. Yet there’s always been a parallel goal focused on extending human capabilities instead. William Ross Ashby wrote about Intelligence Amplification (IA, not AI) back in 1956.

While autonomous agents dominate current headlines, the most notable commercial AI successes have come from the amplification side: sophisticated models working with, not instead of, human judgment.

Practical wisdom

Aristotle had a word for the kind of judgment we need: phronesis (fro-NEE-sis), usually translated as ‘practical wisdom’. It’s the ability to navigate complex trade-offs and find balance between competing values rather than following rigid rules. Every technology professional exercises phronesis daily: balancing user experience against security, innovation against reliability, speed against correctness.

What concerns me about our AI moment is that we risk outsourcing this judgment to algorithms. When AI suggests solutions, do we thoughtfully evaluate them, or do we gradually lose the situational awareness needed for proper oversight?

Recent research demonstrates the difference: investors who developed their own investment thesis first and then received AI feedback achieved better outcomes and reported 67% satisfaction, versus just 43% for those who simply reviewed AI recommendations. The first group maintained their agency while leveraging AI’s capabilities, and it showed in their results.

The control we need

As leader of JUXT’s AI Chapter, I’ve observed first-hand how easy it is to fall into ‘vibe coding’: accepting AI suggestions uncritically and gradually losing situational awareness. It’s seductive because it feels productive. But when something goes wrong (like the recent Replit AI agent that deleted a production database), who’s accountable?

In every regulatory framework now and for the foreseeable future, humans are. To exercise effective oversight, we need context, not just conclusions.

The tools we use shape how we think. When vendors pitch AI as a “teammate,” they’re pushing us toward unpredictable, opaque systems. Steve Jobs famously said the computer could be “a bicycle for our minds”: a tool we invent that helps us travel further and faster than we can on our own. And bicycles are nothing like people: they amplify what we can do through the reliable, predictable mechanics necessary for human mastery and responsibility.

Why this matters

I explored these themes at last month’s XT25 conference, drawing on insights from medicine and ethics to cut through the AI hype. The talk goes beyond technology to address what I believe is the core challenge: maintaining our phronesis in an age of increasingly sophisticated automation.


The most urgent question isn’t whether AI can think for us, but whether it helps us think better. If you’re grappling with these questions in your own work, whether in financial services, ML product development, or elsewhere, I’d love to continue the conversation. Drop me an email or connect on LinkedIn.