Magnify
Founding CTO
Open Role · Stanford Area · Full-Time

Build the brain
behind trusted science.

77 million scientific papers exist. Roughly 4,000 more are published every single day. Nobody has built the system to make sense of them yet. We're doing it, and we need someone who can build the core of that intelligence.

Science is broken at the interface layer.

4K+ New studies daily
77M Papers on PubMed
~25% Of adults are scientifically literate
🔬

The volume is crushing. No human can read what's relevant anymore. The signal is buried under noise.

⚠️

Credibility is invisible. An n=12 observational study and a randomized trial of 10,000 look identical in a headline. Most people can't tell the difference.

📱

The alternatives are broken. Social media rewards virality, not rigor. AI summarizers don't have judgment. Nobody's doing evaluation — just translation.

A credibility layer between people and science.

Magnify gives people a personalized daily feed of new research, curated to their interests, evaluated for quality, and translated into what it actually means for their lives.

The Core Insight

Before a study changes your behavior, five questions need answers: Was it well-designed? Is the sample meaningful? Does it replicate? Does it conflict with prior evidence? Should you actually act on it? No platform answers any of them. We will.

This isn't another RSS reader with a summary button. The interesting part is building the evaluation engine: AI quality analysis, a rigorous statistical framework for judging research, a solid backend to hold it together, and a matching system that learns what each user actually cares about.

This is genuinely difficult CS.

Ranking scientific papers isn't like ranking web pages. Study design, methodology, sample size, replication status, citation quality, and contradiction with existing evidence all matter differently depending on the domain. You'll need to build systems that reason about all of it.

AI Quality Analysis

Using AI to parse what a study actually claims, how it was designed, and whether those claims hold up — going well beyond what a headline conveys.
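To make "parse what a study actually claims" concrete, here is a minimal sketch of the kind of structured target an extraction step might fill in. The schema, field names, and example values are all hypothetical illustrations, not Magnify's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ParsedStudy:
    """Hypothetical target schema for AI extraction from a paper."""
    title: str
    claim: str                 # the central causal or associative claim
    design: str                # e.g. "rct", "observational", "meta-analysis"
    sample_size: int
    preregistered: bool = False
    limitations: list[str] = field(default_factory=list)

# A headline carries none of this; the extraction step recovers it.
study = ParsedStudy(
    title="Coffee and longevity",
    claim="Coffee intake is associated with lower all-cause mortality",
    design="observational",
    sample_size=12,
)
```

Everything downstream (scoring, ranking, translation into plain language) keys off a structured record like this rather than off raw text.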

Statistical Rigor Engine

Building a principled scoring framework: sample size, effect size, study design, reproducibility, and conflict with consensus. Not a citation count dressed up as credibility.
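As a rough illustration of what "a principled scoring framework" could look like, here is a sketch that combines a few quality signals into one score, with weights that vary by domain (echoing that these factors matter differently depending on the field). Every weight, signal name, and saturation constant below is illustrative, not Magnify's actual framework.

```python
DESIGN_STRENGTH = {          # illustrative ordering of evidence strength
    "meta-analysis": 1.0,
    "rct": 0.9,
    "cohort": 0.6,
    "case-control": 0.5,
    "observational": 0.4,
    "case-report": 0.2,
}

DOMAIN_WEIGHTS = {           # how much each signal matters, per domain
    "nutrition": {"design": 0.40, "sample": 0.30, "replication": 0.30},
    "default":   {"design": 0.35, "sample": 0.25, "replication": 0.40},
}

def sample_score(n: int, saturation: int = 10_000) -> float:
    """Diminishing returns: n=12 scores near zero, n=10,000 near 1.0."""
    return min(n / saturation, 1.0) ** 0.5

def credibility(design: str, n: int, replicated: bool,
                domain: str = "default") -> float:
    """Weighted blend of design strength, sample size, and replication."""
    w = DOMAIN_WEIGHTS.get(domain, DOMAIN_WEIGHTS["default"])
    return (w["design"] * DESIGN_STRENGTH.get(design, 0.3)
            + w["sample"] * sample_score(n)
            + w["replication"] * (1.0 if replicated else 0.0))
```

Under this toy scoring, the n=12 observational study and the replicated 10,000-person RCT from the headline example land far apart, which is exactly the distinction a headline hides.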

Backend Infrastructure

Ingesting and processing thousands of new papers daily. Designing the systems that make real-time evaluation possible and keep it reliable as we grow.
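One property an ingestion pipeline at this volume needs is idempotence: daily batches overlap, and re-running a job must not create duplicates. A minimal sketch, with a placeholder record shape and an in-memory store standing in for a real database:

```python
def ingest(batch: list[dict], store: dict) -> int:
    """Upsert papers keyed by DOI; return how many were new.

    The dict store is a stand-in for a real datastore; the key point
    is that replaying an overlapping batch changes nothing.
    """
    new = 0
    for paper in batch:
        doi = paper.get("doi")
        if not doi:
            continue  # in practice, quarantine records without a stable ID
        if doi not in store:
            new += 1
        store[doi] = paper  # last write wins; a real system would version
    return new
```

Deduplicating on a stable identifier like the DOI is what lets thousands of daily papers flow through repeatedly without the evaluation layer seeing the same study twice.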

Personalization and Matching

Connecting users to research that's relevant to them, then learning from how they engage to get better over time. Relevance and credibility, not just recency.
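"Relevance and credibility, not just recency" can be sketched as a ranking score that multiplies the two, plus a naive feedback update. The flat topic-weight vectors and learning rate below are illustrative stand-ins for a real embedding model, not a description of Magnify's matcher.

```python
def relevance(user: dict, paper_topics: dict) -> float:
    """Dot product of the user's interest weights and the paper's topics."""
    return sum(user.get(t, 0.0) * w for t, w in paper_topics.items())

def rank(user: dict, papers: list[dict]) -> list[dict]:
    """Order papers by relevance x credibility, highest first."""
    return sorted(
        papers,
        key=lambda p: relevance(user, p["topics"]) * p["credibility"],
        reverse=True,
    )

def record_engagement(user: dict, paper_topics: dict, lr: float = 0.1) -> None:
    """Nudge interest weights toward topics the user engages with."""
    for t, w in paper_topics.items():
        user[t] = user.get(t, 0.0) + lr * w
```

Because credibility multiplies relevance, a viral but weak study ranks below a rigorous one on the same topic, which is the inversion of the social-media incentive described above.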

"Anyone can summarize a paper. Very few people can tell you whether it's worth reading."

Early. Real. Moving.

We're not pitching you a napkin. We have a working MVP, a waitlist with genuine interest, and a clear thesis about what we're building.

MVP live on TestFlight
Active waitlist
Near Stanford

You want to build this.
Not manage it. Build it.

You've probably thought about why most people can't evaluate research, and what it would take to fix it. You're not looking for a job. You're looking for the problem that's worth the next five years.

You own the technical roadmap. All of it.

Real Talk

This is a founding role with real equity. It's early. It's high risk. The upside is real if we execute. We're looking for someone who understands that and chooses it anyway, because the problem is worth it.

Founding Role · Equity + Salary · In-Person Preferred

Sound like you?

Send a short note about what you've built and what about this problem interests you. No resume required to start — we'll ask for one if we want to keep talking.

magnify.app.support@gmail.com

Stanford area. In-person preferred. Remote considered for the right person.