
I think the confusion sneaks in because “measuring” is something you do several times, and each iteration has a different connotation.

Once you “find” a candidate change, you measure it to see if what you did made things worse, and you put it back if it did; or maybe you try it in combination with other changes to see if its value is complementary.

But people fuck up all the time reading the initial telemetry, which is often where I come in. I get tired of hearing people say, “we’ve done all we can, look how flat this chart is,” and hand someone my beer. You won’t find all of the good candidates in the percentage-of-run-time list. That’s not all the data that’s there, and not every change that works even needs to be supported by the initial data. It only needs to be supported by the delta afterward.
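To make that last point concrete, here is a minimal sketch of judging a change by its delta. The bench/judge helpers and the toy workloads are made up for illustration, not from any particular codebase or profiler:

    import statistics
    import time

    def bench(fn, repeats=30):
        # Time repeated runs and keep the median, which resists noisy outliers.
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    def judge(baseline, candidate):
        # The verdict rests on the measured delta, not on where the code sat
        # in the original profile.
        before, after = bench(baseline), bench(candidate)
        verdict = "keep" if after < before else "put it back"
        print(f"{verdict}: {before:.4f}s -> {after:.4f}s")

    # Toy stand-ins for the workload before and after a candidate change.
    judge(lambda: sum(i * i for i in range(100_000)),
          lambda: sum(map(lambda i: i * i, range(100_000))))

The verdict comes from the before/after comparison, not from whether the hot spot ever showed up in the initial chart.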



