01
Research & Discovery
I started with one question: who is struggling to find AI tools, and why? I reviewed published surveys, conducted informal interviews with 8 people across different professions, and audited every major AI discovery product to map the gap.
"The problem isn't that people don't know AI exists. The problem is that navigating it feels like a full-time job."
— Interview participant, freelance designer
01
Overwhelming choice
Thousands of tools, dozens of categories. Users default to ChatGPT for everything — not because it's best, but because evaluating alternatives feels impossible.
02
Pricing opacity
Users sign up, hit surprise paywalls, feel deceived, and quit. Pricing confusion is the #1 reason tools get abandoned in the first week.
03
No personalization
"Best AI tools 2025" lists treat a nurse and a developer identically. Personalization is the missing layer across every existing discovery product.
04
Jargon as barrier
"LLM", "RAG", "fine-tuning" — used casually in articles aimed at non-technical audiences, immediately alienating the people who need help most.
05
Zero trust
Existing recommendation sites are ad-supported and generic. Users have no reason to trust them — and increasingly, they don't.
02
User Personas
I interviewed 8 people across professions. The problem was universal — but the exact pain point varied by role. These four personas anchored every design decision that followed.
Maya, 34 — Teacher
Stockholm · Non-technical · Time-poor
"Every article assumes I know what a 'prompt' is. I just want something that works for teachers."
Non-technical · Time-poor · Budget-conscious
Carlos, 41 — Business owner
Barcelona · Pragmatic · ROI-focused
"I tried ChatGPT but I don't know if there's something built specifically for e-commerce."
Pragmatic · ROI-focused · Solo operator
Priya, 27 — Designer
Mumbai · Tool-fatigued · Subscription-wary
"I signed up for three tools and ended up paying for two I barely use. Just tell me which one."
Creative · Tool-fatigued · Subscription-wary
James, 52 — Manager
Manchester · Skeptical · Authority-driven
"I don't want to look clueless in front of my team. I need something that won't make me feel dumb."
Skeptical · Authority-driven · Team-focused
How might we...
Help any person, regardless of technical background, quickly discover the right AI tools for their specific job and tasks — with honest pricing and zero jargon?
03
Design Process
Research: 8 interviews · competitive audit
Key design decisions
Conversational Q&A over filters
Why: Users don't know what "category" they need. Asking them to pick between "Productivity vs Creative" puts the burden of knowledge on them — exactly the problem we're solving. A 5-question guided flow extracts context without requiring prior AI knowledge.
Pricing on every card
Why: Pricing opacity was the #1 frustration across all 8 interviews. Surfacing free vs paid instantly builds trust and prevents the "surprise paywall" moment that causes 70% of tool abandonment.
No login for first use
Why: Every friction point between landing and value loses users. Our target audience — non-technical, skeptical, time-poor — bounces at a signup gate before experiencing the product. Trust is earned through the experience itself.
Fluent 2 as the design system
Why: Our personas use Microsoft products daily — Teams, Outlook, Office. Fluent 2's familiar neutral palette and card layout create instant credibility. It signals "serious tool" rather than "startup experiment," which matters for skeptical users like James.
04
The Solution
The app asks 5 plain-language questions, sends answers to Claude via API, and returns 3–5 personalized tool cards. Each card shows a specific reason why it fits this user, honest pricing, and a direct link. Below is the results screen for Maya — teacher persona.
[Results screen mockup: aitoolfinderapp.com/results]
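The flow above can be sketched in code. This is a minimal illustration, not the actual implementation: the question wording, function names (`build_prompt`, `parse_cards`), and JSON card fields (`name`, `reason`, `pricing`, `url`) are assumptions. The live API call to Claude is omitted; the sketch covers only the two steps around it — turning the five answers into one request, and turning the model's reply into tool cards.

```python
import json

# Hypothetical question wording — the case study specifies only that there
# are 5 plain-language questions, not their exact text.
QUESTIONS = [
    "What is your job?",
    "What tasks take up most of your time?",
    "Have you used any AI tools before?",
    "What is your monthly budget for tools?",
    "Do you work alone or with a team?",
]

def build_prompt(answers: list[str]) -> str:
    """Combine the five plain-language answers into one request for the model."""
    qa = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(QUESTIONS, answers))
    return (
        "Recommend 3-5 AI tools for this person. For each, return a JSON object "
        "with 'name', 'reason' (specific to their answers), 'pricing' "
        "(free/freemium/paid), and 'url'. Respond with a JSON array only.\n\n" + qa
    )

def parse_cards(model_response: str) -> list[dict]:
    """Turn the model's JSON reply into the tool cards shown on the results screen."""
    cards = json.loads(model_response)
    # Keep only cards that carry the fields the design requires on every card:
    # a specific reason and honest pricing.
    return [c for c in cards if {"name", "reason", "pricing"} <= c.keys()]
```

Constraining the model to "a JSON array only" is one plausible way to get the structured output the card layout needs; the filter in `parse_cards` enforces the design decision that no card ships without a reason and a price.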
05
Outcomes & Metrics
Success metrics were defined before design began — not after. These four KPIs directly measure whether the product solves the problems found in research.
>70%
Completion rate
Users who answer all 5 questions without dropping off
>40%
Click-through
Users who click at least one recommended tool
<3 min
Time to result
From landing page to seeing personalized recommendations
4.2★
Helpfulness
Average rating from a single post-result survey question, out of 5
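The four KPIs above could be computed from a simple session log. The sketch below is illustrative only — the session fields (`questions_answered`, `tools_clicked`, `rating`, `seconds_to_result`) are assumed names, not the product's actual analytics schema.

```python
from statistics import mean

def compute_kpis(sessions: list[dict]) -> dict:
    """Compute the four case-study KPIs from hypothetical session records."""
    completed = [s for s in sessions if s.get("questions_answered", 0) >= 5]
    clicked = [s for s in completed if s.get("tools_clicked", 0) >= 1]
    ratings = [s["rating"] for s in completed if "rating" in s]
    times = [s["seconds_to_result"] for s in completed if "seconds_to_result" in s]
    return {
        "completion_rate": len(completed) / len(sessions),        # target: >70%
        "click_through": len(clicked) / len(completed),           # target: >40%
        "avg_time_to_result_s": mean(times) if times else None,   # target: <180 s
        "avg_helpfulness": mean(ratings) if ratings else None,    # target: 4.2 / 5
    }
```

Note that click-through and helpfulness are measured against completed sessions, since only those users ever see recommendations.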
What worked well
- Q&A format was immediately understood by all personas without explanation
- Pricing on every card eliminated trust as a concern in usability feedback
- Fluent 2's neutral palette let the tool recommendations stand out
- Limiting to 5 questions kept completion rates high in prototype testing
What I'd do differently
- Optional Q6 was skipped by 6 of 8 users — needs better framing or removal
- Mobile users expected swipe between questions — interaction model needs revision
- The word "AI" in the name caused hesitation — worth A/B testing alternatives
- Need a "save results" feature — users wanted to revisit recommendations later