The Two-Steve Problem
Why AI Agents Need Both Vision and Technical Depth

A newsletter for designers and design technologists navigating the AI agent revolution
The Tale of Two Design Approaches
Last week, Nick Moran, GP at New Stack Ventures, shared an insight that's been stuck in my head: "Everyone wants Jobs, but they invest in Wozniak." While he was talking about founders, this dynamic perfectly captures what's happening in AI agent design right now.
We're seeing two distinct approaches emerge:
The Jobs Approach: Beautiful interfaces, obsessive user experience focus, and intuitive interactions that "just work"
The Wozniak Approach: Technical sophistication, complex prompt engineering, and impressive AI capabilities under the hood
But here's the thing – great AI agents need both.
Why Most AI Agents Feel Broken
I've been trying every AI agent I can get my hands on, and the same problems surface immediately with immature agents.
The purely Jobs-inspired agents look gorgeous but fumble basic tasks. They're designed like consumer apps – clean, minimal, friendly – but they can't handle the messy reality of actual work: a UX design agent that can't tie its screens to a human team's product requirements or user research insights, or a research assistant that gets confused by follow-up questions.
The purely Wozniak-inspired agents are technically impressive but unusable. They can process complex data and execute sophisticated workflows, but their interfaces feel like programming languages. You need a manual to ask for help.
The disconnect happens because each approach misunderstands what AI agents actually are — collaborators on human teams.
What AI Agents Really Need to Eat
This is where the Wozniak perspective becomes critical. Unlike traditional software, AI agents are fundamentally hungry systems. They need to be fed:
Context: Not just the immediate request, but understanding of the user's goals, constraints, and working style
Data: Structured information about the domain, previous interactions, and relevant background knowledge
Feedback loops: Continuous input about what's working and what isn't, so the agent can adapt
Clear boundaries: Understanding of what it should and shouldn't do, when to ask for help, and how to fail gracefully
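One way to make this "diet" concrete is to sketch it as a data structure. This is a minimal illustration, not any real framework's API – every field and method name here is an assumption for the sake of the example:

```python
from dataclasses import dataclass, field

@dataclass
class AgentDiet:
    """What an agent must be 'fed' before it can do useful work.
    All names are illustrative, not taken from a real framework."""
    # Context: the user's goals, constraints, and working style
    user_goals: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    # Data: domain knowledge and prior interactions
    domain_docs: list = field(default_factory=list)
    history: list = field(default_factory=list)
    # Feedback loops: (interaction, succeeded?) signals the agent can learn from
    feedback: list = field(default_factory=list)
    # Clear boundaries: the explicit set of actions the agent may take
    allowed_actions: set = field(default_factory=set)

    def can_do(self, action: str) -> bool:
        """Fail gracefully: refuse anything outside the boundary."""
        return action in self.allowed_actions

# A Notion-style assistant might be bounded like this:
diet = AgentDiet(allowed_actions={"edit_page", "summarize"})
print(diet.can_do("summarize"))         # inside the boundary
print(diet.can_do("delete_workspace"))  # outside it – the agent should refuse
```

The point of the sketch is that boundaries are data you design up front, not behavior you hope emerges.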
Consider Notion's AI assistant. It works because it's been fed the structure of your workspace, understands your content patterns, and has clear boundaries about what it can edit. Compare that to a generic "AI writing assistant" that knows nothing about your context.
The Data Science Strategy Behind Great Agents
Here's what the best AI agent designers understand: the interface is just the tip of the iceberg.
The real design work happens in:
Information Architecture: How do you structure data so the AI can find what it needs quickly? What gets stored, what gets discarded, what gets elevated?
Interaction Patterns: How do you design conversations that gradually build the agent's understanding? How do you handle ambiguity and clarification?
Feedback Systems: How does the agent learn from each interaction? What signals indicate success or failure?
Context Management: How much history should the agent remember? When should it forget things? How does it balance personalization with privacy?
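The context-management question above can be sketched as one simple policy among many: keep a bounded window of recent turns and let the oldest fall away. The class and the window size are assumptions for illustration, not a recommendation:

```python
from collections import deque

class ContextWindow:
    """Bounded conversation memory: remembers recent turns, forgets old ones.
    A deliberately simple policy – real systems also summarize or rank turns."""
    def __init__(self, max_turns: int = 5):
        # deque with maxlen drops the oldest entry automatically
        self.turns = deque(maxlen=max_turns)

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def render(self) -> str:
        """Flatten remembered turns into a prompt prefix for the model."""
        return "\n".join(f"{role}: {text}" for role, text in self.turns)

ctx = ContextWindow(max_turns=3)
for i in range(5):
    ctx.add("user", f"message {i}")
# Only the 3 most recent turns survive; messages 0 and 1 are forgotten
print(ctx.render())
```

Even this toy version forces the design questions the section raises: the window size trades personalization against privacy, and "forgetting" is a policy you choose, not an accident.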
This isn't just technical work – it's design strategy that requires understanding both human behavior and AI capabilities.
Taste Meets Technology: Three Examples
Example 1: Perplexity's Search Agent
Jobs element: Clean, conversation-focused interface that feels natural
Wozniak element: Sophisticated source verification and citation systems that most users never see
Result: Search that feels magical but maintains accuracy
Example 2: GitHub Copilot
Jobs element: Seamless integration into existing workflows, invisible until needed
Wozniak element: Deep understanding of code context, patterns, and developer intent
Result: AI assistance that enhances rather than disrupts existing practices
Example 3: Midjourney's Creative Agent
Jobs element: Simple text-to-image interface accessible to non-artists
Wozniak element: Complex parameter systems and community-driven prompt engineering
Result: Professional-quality creative output with approachable entry points
What This Means for Your Next Project
Next time you're designing an AI agent, ask yourself:
What does this agent need to "eat" to be effective? Map out the data, context, and feedback it requires.
How can the interface gradually teach the agent about the user? Design interactions that build understanding over time.
Where should you hide complexity vs. expose it? Some users want simple magic, others need detailed control.
How will you handle the inevitable failures? AI agents will misunderstand – how does your design recover gracefully?
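The last question – graceful failure – can be sketched as a confidence-gated flow: below a threshold, the agent asks a clarifying question instead of guessing. The function, its signature, and the threshold value are all made up for illustration:

```python
def respond(query: str, confidence: float, threshold: float = 0.7) -> str:
    """Confidence-gated response: ask rather than guess when unsure.
    The 0.7 threshold is an illustrative assumption, not a tuned value."""
    if confidence >= threshold:
        return f"Acting on: '{query}'"
    # Graceful failure: surface the uncertainty and hand control back
    return f"I'm not confident I understood '{query}'. Could you clarify?"

print(respond("resize all icons to 24px", 0.9))  # clear request, agent acts
print(respond("make it pop", 0.3))               # vague request, agent asks
```

The recovery path is a designed interaction, which is exactly where the Jobs and Wozniak halves have to meet.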
The future belongs to AI agents that feel as intuitive as Jobs-designed products but work as reliably as Wozniak-built systems.