April 13, 2026

Every New Feature is an Experiment

3 min read

Founders come to me with big ideas all the time. "We need an export feature. Beautiful PDF, custom branding, our logo in the header, the whole thing." Cool. How do you know your customers want a PDF?

Most don't. Most think they do. And the gap between thinking and knowing usually costs your engineering team weeks.

This is the part of MVP thinking that founders nod at and then quietly ignore: every new feature is an experiment. Every shipped thing is a hypothesis you're testing. Your job is not to build the right answer on the first try. Your job is to figure out the cheapest possible way to find out what the right answer even is.

A real example

I was working with a team where customers had been asking for an export feature. The founder wanted to invest real engineering time into a polished Excel file with formatting, formulas, the works.

We shipped a CSV instead. Quick, dirty, no formatting, just the data dumped into a downloadable file. It took an afternoon.

Customers complained. But not about the format. Not one customer asked for an Excel file. They complained about the columns. "I need this field, not that one. Why isn't this metric in here? Can you split this column out?"

That was the win. We got the real feedback by spending four hours on the wrong-looking version of the right thing. If we had built the polished export first, we would have spent two weeks on a beautiful spreadsheet that still had the wrong columns. The complaints would have been the same. We just would have arrived at them slower and poorer.
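For a sense of scale, the afternoon-sized version looks something like this. A minimal sketch in TypeScript with Express; fetchRows(), the route, and the sample data are hypothetical stand-ins, not the actual code from that engagement:

```ts
// Quick-and-dirty CSV export: no formatting, no formulas, just the data.
import express from "express";

const app = express();

// Hypothetical stand-in for the real data source.
async function fetchRows(): Promise<Record<string, string | number>[]> {
  return [
    { id: 1, account: "Acme", mrr: 1200 },
    { id: 2, account: "Globex", mrr: 640 },
  ];
}

app.get("/export.csv", async (_req, res) => {
  const rows = await fetchRows();
  const headers = Object.keys(rows[0] ?? {});

  // Deliberately naive joining: no quoting, no escaping. The experiment
  // is about which columns people need, not about CSV edge cases.
  const body = [
    headers.join(","),
    ...rows.map((row) => headers.map((h) => String(row[h] ?? "")).join(",")),
  ].join("\n");

  res.setHeader("Content-Type", "text/csv");
  res.setHeader("Content-Disposition", 'attachment; filename="export.csv"');
  res.send(body);
});

app.listen(3000);
```

The sloppiness is the point. Every hour spent on escaping rules or styling would have delayed the feedback that actually mattered.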

Why this matters more for AI products

Everything you build at an AI startup rests on assumptions that are probably wrong. Your assumptions about what users want the model to do. Your assumptions about which workflow they will trust the AI inside of. Your assumptions about how much hand-holding they need.

You don't know which of those assumptions are wrong. That's the whole point. The only way to find out is to put something in front of a real user and watch what they do.

A clickable prototype in Claude or Figma counts. A hacky flow that's mostly hardcoded counts. Even a button that does nothing except log the click counts. The format of the test does not matter. The information you get back does.
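That last one is sometimes called a fake-door test, and it's smaller than it sounds. A minimal sketch in TypeScript, assuming a hypothetical track() helper, an existing #export-pdf button, and an /events endpoint; none of it is from a real product:

```ts
// Fake-door test: the button exists and logs intent, nothing more.
// track() is a hypothetical stand-in for whatever analytics you already run.
function track(event: string, props: Record<string, unknown> = {}): void {
  navigator.sendBeacon("/events", JSON.stringify({ event, ...props }));
}

document
  .querySelector<HTMLButtonElement>("#export-pdf")
  ?.addEventListener("click", () => {
    track("export_pdf_clicked");
    // Be honest with the user; the click count is the experiment's data.
    window.alert("PDF export is coming soon. We're measuring interest.");
  });
```

The alert is the honest part. The beacon is the data.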

How to actually do this

When a founder pitches me a new feature, I run a mental checklist:

- What is the hypothesis we're testing?

- What is the absolute minimum we can ship to test it?

- What do we expect users to do, and what would surprise us?

- What do we do with the answer either way?

If you can't answer the first one, you're not building a feature. You're building a guess in expensive clothing.

If you can answer it but the minimum version still takes six weeks, you're doing too much. There is almost always a smaller test hiding underneath.

If you don't know what would surprise you, you're not actually paying attention to the result. You're just shipping.

The founders who skip this

I've watched founders insist on building the most complex version of every feature. The ones who can't imagine a smaller version achieving the same vision. The ones who think shipping ugly is beneath their brand.

Most of those companies aren't around anymore.

That's not a moral judgment, it's a survival math problem. Startups have a finite amount of engineering time and a finite amount of runway. Every week you spend building the wrong polished thing is a week you didn't spend learning what to build next.

What to do this week

Pick the next feature on your roadmap. Write down the hypothesis it's testing in one sentence. Then ask your team what the cheapest, ugliest, fastest version of that test looks like.

If the answer is "a CSV instead of a PDF," ship the CSV. See what people actually complain about. Then build the right thing, once you know what right is.

Want to talk through your roadmap? Get in touch.