Ideas Lab Methodology
Every idea gets 4 questions, 5 expert lenses, and a verdict.
The Ideas Lab finds real problems from 9 sources, runs them through the FL Method, validates with real-world evidence from Google and Reddit, and tells you whether to build, validate, or skip.
01
Find
Real problems from 9 sources. Reddit, Product Hunt, X, Hacker News, GitHub, YC, Google Trends, and more.
02
Research + Score
Google Search for competitors and market signals. Reddit for community sentiment. Then 4 questions across 5 frameworks.
03
Verdict
BUILD, VALIDATE FIRST, or SKIP. Every idea gets a clear recommendation backed by evidence.
The FL Method
4 questions. 100 points.
The FL Method is the home court scorer. It breaks every idea into four weighted pillars that together answer one question: is this worth a solo builder's time?
Is the pain real?
Are people actively searching for solutions, complaining online, or paying for workarounds? Real pain leaves evidence.
Is there room?
Is the current market underserving this need? Are existing solutions clunky, overpriced, or missing entirely?
Would people act?
Would the target audience actually pay for a better solution? Evidence of spending behavior in adjacent categories matters more than survey answers.
Can I ship?
Can a solo builder ship an MVP in weeks, not months? Technical complexity, API availability, and existing tools all factor in.
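The four pillars above can be sketched as a weighted sum that lands on the FL Method's 100-point scale. The pillar names come from this section; the per-pillar weights below are hypothetical, since the document does not publish the actual weighting.

```python
# Sketch of the FL Method's weighted-pillar scoring.
# Max points per pillar are HYPOTHETICAL illustrations;
# only the 100-point total comes from the methodology.
PILLARS = {
    "pain_is_real": 30,      # Is the pain real?
    "room_in_market": 25,    # Is there room?
    "people_would_act": 25,  # Would people act?
    "solo_buildable": 20,    # Can I ship?
}  # maxes sum to 100

def fl_score(ratings: dict[str, float]) -> float:
    """Combine per-pillar ratings (0.0 to 1.0) into a 0-100 score."""
    return sum(PILLARS[p] * ratings[p] for p in PILLARS)

idea = {"pain_is_real": 0.8, "room_in_market": 0.6,
        "people_would_act": 0.5, "solo_buildable": 0.9}
print(round(fl_score(idea), 1))
```

A rating of 1.0 on every pillar yields the full 100 points; weak pillars drag the total down proportionally.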
Builder's Note
These four questions are the ones I kept coming back to before starting anything. Not “is it cool?” or “could it go viral?” but “is the pain real, and can I actually build this?” The point system just formalized what was already a gut check.
Validation Layer
Scores mean nothing without evidence.
After scoring, every idea goes through a validation layer. We search for real-world evidence that either supports or contradicts the score. This is what separates an AI opinion from a data-backed recommendation.
Evidence Search
Google Search for competitors, pricing, and market signals. Reddit for community sentiment. Real data from real conversations, not surveys.
Competitive Intel
How many competitors exist? Are they well-funded or vulnerable? What are users complaining about in their reviews? Where are the gaps?
Confidence Rating
Each idea gets a confidence level (high, medium, low) based on how much evidence was found. Low evidence means the score is less reliable, not that the idea is bad.
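The confidence tiers described above could be sketched as a simple mapping from how much supporting evidence was found. The high/medium/low tiers come from this section; the numeric cutoffs below are hypothetical.

```python
# Sketch of the evidence-based confidence rating.
# Cutoffs are HYPOTHETICAL; only the three tiers are specified.
def confidence(evidence_count: int) -> str:
    """Map the amount of real-world evidence found to a confidence tier."""
    if evidence_count >= 10:   # hypothetical cutoff
        return "high"
    if evidence_count >= 4:    # hypothetical cutoff
        return "medium"
    return "low"

# A "low" tier flags an unreliable score, not a bad idea.
print(confidence(12), confidence(5), confidence(1))
```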
The Verdict
Every idea gets a clear recommendation.
After 4 questions, 5 expert lenses, and real-world validation, the system synthesizes everything into one of three verdicts.
BUILD: Strong signal across the board. The idea clears enough bars to justify building an MVP. Ship something small and test with real users.
VALIDATE FIRST: The signal is there but not conclusive. Talk to potential users, run a landing page test, or find more evidence before investing build time.
SKIP: Not enough signal to justify the time investment. Move on to the next idea. The best builders are fast at saying no.
Score interpretation guide
Strong signal across all dimensions. This is rare. If you see it, pay attention.
Solid fundamentals with minor gaps. Worth exploring with a focused MVP.
Promising but unproven. Needs more research or a different angle before committing time.
Significant gaps in the scoring. Could work with a major pivot, but risky as-is.
Weak signal across most dimensions. Not worth pursuing without a fundamentally different approach.
Expert Perspectives
4 additional lenses, each scoring 0 to 100.
Beyond the FL Method, every idea is evaluated through four additional AI frameworks. Each brings a different perspective. None of these frameworks are endorsed by or affiliated with the people who inspired them. They are our interpretation, adapted for solo builder idea evaluation.
Value Equation
Weight: 20%. Dream outcome multiplied by perceived likelihood of achievement, divided by time delay and effort.
Inspired by the value equation framework. Adapted for solo builder idea evaluation.
Dream Outcome
20 pts. How desirable is the end result? The bigger the transformation, the higher the perceived value.
Perceived Likelihood
25 pts. Does the buyer believe it will actually work for them? Social proof, specificity, and guarantees move this needle.
Time to Result
15 pts. How quickly does the user see results? Faster wins beat bigger promises.
Effort & Sacrifice
20 pts. How much work does the buyer have to put in? Lower effort means higher perceived value.
Execution Speed
20 pts. Can a solo builder launch fast enough to capture the window? Time kills deals.
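The equation this lens is inspired by can be written directly: dream outcome times perceived likelihood, divided by time delay times effort. The illustrative 1-10 inputs below are not part of the methodology, and note that the lens's fifth component (Execution Speed) sits outside the equation itself.

```python
# Sketch of the value equation behind this lens:
# value = (dream outcome * perceived likelihood) / (time delay * effort)
# Inputs are ILLUSTRATIVE 1-10 ratings, not methodology values.
def value_equation(dream: float, likelihood: float,
                   time_delay: float, effort: float) -> float:
    """Bigger outcome and belief raise value; delay and effort lower it."""
    return (dream * likelihood) / (time_delay * effort)

print(value_equation(dream=8, likelihood=6, time_delay=2, effort=3))
```

Halving the time delay doubles the value, which is why "Time to Result" rewards fast wins over big promises.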
One-Person Business
Weight: 20%. Personal brand leverage. Evaluates whether the idea resonates with a solo creator audience and positions the builder as someone others want to learn from.
Inspired by the one-person business philosophy. Adapted to score personal brand leverage.
Curiosity Factor
25 pts. Does the idea spark genuine curiosity? Ideas that make people think "I want to know more" have natural pull.
Identity Resonance
20 pts. Does this align with who the builder is? Authenticity compounds over time.
Audience Overlap
15 pts. Does the idea connect to an existing audience or community? Built-in distribution matters.
Leverage Potential
15 pts. Can this be turned into content, courses, or templates? One effort, multiple outputs.
Monetization Path
15 pts. Is there a clear path from free value to paid product? Free forever is not a business model.
Uniqueness
5 pts. Is there a non-obvious angle? Differentiation through personal experience.
Platform Fit
5 pts. Does the format fit existing platforms? Build where the audience already hangs out.
Viral Frameworks
Weight: 20%. Hook strength and shareability. Optimizes for the first 3 seconds and the impulse to send it to a friend.
Inspired by micro-SaaS and viral content frameworks. Adapted for idea-level evaluation.
Problem Clarity
20 pts. Can you explain the problem in one sentence? Clear problems get shared.
Hook Strength
25 pts. Does the idea stop the scroll? Pattern interrupts and unexpected angles win.
Shareability
20 pts. Would someone send this to a friend? The "you need to see this" factor.
Monetization Fit
15 pts. Is the revenue model obvious? Subscriptions, one-time, usage-based.
Risk Level
10 pts. What is the downside of trying? Low-risk ideas get more attempts.
Testability
10 pts. Can you validate this with a landing page and $50? Fast experiments win.
Builder Lens
Weight: 20%. Builder credibility. Evaluates whether the idea demonstrates real usage, real metrics, and real decisions. Penalizes vaporware.
Inspired by startup evaluation criteria. Adapted for pre-launch idea assessment.
Problem Evidence
15 pts. Is there quantifiable evidence that this problem exists at scale? Data beats opinions.
Market Timing
15 pts. Is this the right moment? Too early is as bad as too late.
Founder-Market Fit
15 pts. Does the builder have an unfair advantage? Domain knowledge, existing audience, or technical edge.
Defensibility
15 pts. What stops someone from copying this in a weekend? Data, community, or switching costs.
Simplicity
15 pts. Can you explain the business model on a napkin? Complexity is a red flag at the idea stage.
Growth Potential
15 pts. Does the idea have natural expansion paths? Good ideas grow into ecosystems.
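Combining the five lenses into one overall score could look like the weighted average below. Each additional lens carries the 20% weight stated in its header; the FL Method is assumed (not stated in this document) to carry the remaining 20%.

```python
# Sketch of aggregating the five 0-100 lens scores.
# The fl_method weight is an ASSUMPTION; the other four
# weights come from the section headers.
WEIGHTS = {
    "fl_method": 0.20,           # assumed
    "value_equation": 0.20,
    "one_person_business": 0.20,
    "viral_frameworks": 0.20,
    "builder_lens": 0.20,
}

def overall(scores: dict[str, float]) -> float:
    """Weighted average of per-lens scores, each on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

scores = {"fl_method": 70, "value_equation": 60,
          "one_person_business": 55, "viral_frameworks": 65,
          "builder_lens": 50}
print(round(overall(scores), 1))
```

With equal weights this reduces to a plain average, so no single lens can carry a weak idea on its own.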
Do Your Own Research
The scoring system is a starting point, not a business plan. AI models can miss context, overweight trends, or undervalue niche markets. Use the scores as one input alongside your own research, conversations with potential users, and judgment. The best ideas still require a human who cares enough to build them well.
Every idea is free to browse. Sign up to see full scores and analysis.