77 million scientific papers exist. 4,000 more are published every single day. Nobody has built the system to make sense of them yet. We're doing it, and we need someone who can build the core of that intelligence.
The volume is crushing. No human can read what's relevant anymore. The signal is buried under noise.
Credibility is invisible. An n=12 observational study and a randomized trial of 10,000 participants look identical in a headline. Most people can't tell the difference.
The alternatives are broken. Social media rewards virality, not rigor. AI summarizers don't have judgment. Nobody's doing evaluation — just translation.
Magnify gives people a personalized daily feed of new research, curated to their interests, evaluated for quality, and translated into what it actually means for their lives.
Before a study changes your behavior, five questions need answers: Was it well-designed? Is the sample meaningful? Does it replicate? Does it conflict with prior evidence? Should you actually act on it? No platform answers any of them. We will.
This isn't another RSS reader with a summary button. The interesting part is building the evaluation engine: AI quality analysis, a rigorous statistical framework for judging research, a solid backend to hold it together, and a matching system that learns what each user actually cares about.
Ranking scientific papers isn't like ranking web pages. Study design, methodology, sample size, replication status, citation quality, and contradiction with existing evidence all matter differently depending on the domain. You'll need to build systems that reason about all of it.
Using AI to parse what a study actually claims, how it was designed, and whether those claims hold up — going well beyond what a headline conveys.
Building a principled scoring framework: sample size, effect size, study design, reproducibility, and conflict with consensus. Not a citation count dressed up as credibility.
Ingesting and processing thousands of new papers daily. Designing the systems that make real-time evaluation possible and keep it reliable as we grow.
Connecting users to research that's relevant to them, then learning from how they engage to get better over time. Relevance and credibility, not just recency.
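One way to sketch that "relevance and credibility, not just recency" ordering (the multiplicative form, decay half-life, and recency floor here are assumptions for illustration):

```python
# Hypothetical feed-ranking score: credibility gates relevance, so a highly
# relevant but methodologically weak study still ranks low, and recency decays
# toward a floor instead of dominating the ordering.
def feed_rank(relevance: float, credibility: float, age_days: float,
              half_life_days: float = 14.0) -> float:
    """Toy score; relevance and credibility are in [0, 1]."""
    recency = 0.5 ** (max(age_days, 0.0) / half_life_days)  # exponential decay
    # Floor of 0.3 keeps strong older work from vanishing entirely.
    return relevance * credibility * (0.3 + 0.7 * recency)
```

The multiplicative structure is the point of the design choice: under a purely additive score, a viral but flimsy paper could buy its way up the feed with relevance alone.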
"Anyone can summarize a paper. Very few people can tell you whether it's worth reading."
We're not pitching you a napkin. We have a working MVP, a waitlist with genuine interest, and a clear thesis about what we're building.
You've probably thought about why most people can't evaluate research, and what it would take to fix it. You're not looking for a job. You're looking for the problem that's worth the next five years.
This is a founding role with real equity. It's early. It's high risk. The upside is real if we execute. We're looking for someone who understands that and chooses it anyway, because the problem is worth it.
Send a short note about what you've built and what about this problem interests you. No resume required to start — we'll ask for one if we want to keep talking.
magnify.app.support@gmail.com → Stanford area. In-person preferred. Remote considered for the right person.