Profiling to find suboptimal code is perfectly fine. Then you need to figure out how to fix it. Many people don't understand how performance optimization works, so they blindly add caching, improve constant time by invoking more low-level methods, etc. This obviously doesn't work, yet intuitively (to those people, anyway) it should produce good results.
That's why the mantra exists: don't trust your intuition, don't believe it when it says these changes improve performance, instead measure that performance and only apply changes that work. This is also perfectly fine, but this is a double-edged sword, and I've seen people go too far in this direction.
For example, they refuse to make any changes that don't immediately improve performance according to the profiler. If they modify one computation and performance decreases, they abandon that path altogether. They treat optimization as a game with a dense fog of war, refusing to use deductive reasoning (and, of course, intuition) to pursue changes that, according to the profiler at least, are not immediately rewarding.
I think there's a related problem where profiling/measurements can be made poorly and not reflect the real world.
Eg: indexing or partitioning a database table may appear to make things slower if you don't have both a representative amount of data and representative query patterns when you're measuring the change.
You should still measure your changes, but sometimes you need to be careful about measuring them in the right way, and possibly simulating a future context (eg: more scale) before drawing a conclusion.
Intuition about how the context will evolve, and what effect that might have on the tradeoffs of different approaches, is helpful.
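The indexing point above can be sketched as a toy benchmark (SQLite, with made-up table and column names): on a handful of rows an index buys nothing measurable, so you have to populate a representative volume of data before timing the query.

```python
import sqlite3
import time

def timed_query(conn, sql, args=()):
    # Return (rows, elapsed_seconds) for a single query execution.
    start = time.perf_counter()
    rows = conn.execute(sql, args).fetchall()
    return rows, time.perf_counter() - start

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")

# Simulate a *representative* data volume, not the 50 rows in a dev database.
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    ((i, i % 1000) for i in range(200_000)),
)

query = "SELECT count(*) FROM orders WHERE customer_id = ?"
before, t_before = timed_query(conn, query, (42,))

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after, t_after = timed_query(conn, query, (42,))

assert before == after  # same answer either way; only the timing differs
print(f"full scan: {t_before:.6f}s  indexed: {t_after:.6f}s")
```

Run the same script with 50 rows instead of 200,000 and the two timings become indistinguishable (or the index even loses, once you count its maintenance cost on writes), which is exactly the measurement trap described above.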
Sounds like a tricky balancing act. There are things that are extremely difficult to "game out." CPUs are very complicated. There are optimizations that seem like they could be cache friendly in theory, but aren't in practice.
Sure, but avoiding CPU cache misses is hardly the only form of optimization. Some optimizations are blatantly obvious. As an example, there was some code I was working on that called the database every single time it was executed. The data it was pulling from the database got updated once a year, and the entire dataset fit into about 4 KB of memory. Caching it in memory was an obvious fix, and given how fast the rest of the code ran once it had this data, it was basically a 500% speedup that anyone with half a brain could have roughly estimated.
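A minimal sketch of that kind of fix (the function names, the data, and the refresh interval are all hypothetical stand-ins): load the rarely-changing dataset once and only re-fetch after it expires.

```python
import time

def fetch_rates_from_db():
    # Stand-in for the real database call; in the anecdote the data
    # changed about once a year and fit in ~4 KB of memory.
    fetch_rates_from_db.calls += 1
    return {"US": 0.07, "DE": 0.19, "JP": 0.10}
fetch_rates_from_db.calls = 0

_cache = None
_cached_at = 0.0
CACHE_TTL = 24 * 3600  # refresh at most daily; generous for yearly data

def get_rates():
    # Return the cached dataset, hitting the database only when the
    # cache is empty or stale.
    global _cache, _cached_at
    if _cache is None or time.monotonic() - _cached_at > CACHE_TTL:
        _cache = fetch_rates_from_db()
        _cached_at = time.monotonic()
    return _cache

for _ in range(1000):
    get_rates()            # hits the database exactly once
print(fetch_rates_from_db.calls)  # 1
```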
"Intuitively" literally means without having to learn something.
Adding caches or switching to lower-level calls is definitely something learned, and I wouldn't call it "intuitive".
What I think you are referring to is that sometimes simply reading and understanding the code can tell you where the problem really is. Still, my experience is that you want to measure before and after, at least to identify the general area you should be looking at; more often than not, I could figure out what needed optimizing, and how, without getting detailed profiles.
I did end up being surprised a few times, but it was mostly due to external code, like buggy library implementations that didn't do what they promised (e.g. an async library actually serializing everything through a badly placed mutex).
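The "badly placed mutex" failure mode can be demonstrated in a few lines (a hypothetical client, not the library from the anecdote): every operation grabs one shared lock, so ten "concurrent" tasks degrade to strictly sequential execution.

```python
import asyncio

class BadAsyncClient:
    # Hypothetical async client that wraps every operation in a single
    # shared lock -- the "badly placed mutex" anti-pattern.
    def __init__(self):
        self._lock = asyncio.Lock()
        self._in_flight = 0
        self.max_in_flight = 0

    async def fetch(self, i):
        async with self._lock:          # serializes all callers
            self._in_flight += 1
            self.max_in_flight = max(self.max_in_flight, self._in_flight)
            await asyncio.sleep(0.01)   # stands in for real I/O
            self._in_flight -= 1
            return i

async def main():
    client = BadAsyncClient()
    await asyncio.gather(*(client.fetch(i) for i in range(10)))
    return client.max_in_flight

# Despite 10 concurrent tasks, at most one is ever actually in flight.
result = asyncio.run(main())
print(result)  # 1
```

A profiler pointed at one request would show the I/O wait and nothing suspicious; only looking at concurrency (or the library's source) reveals why throughput never scales.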
At the same time, it's wrong to draw conclusions from a single case or a single profile (one code path under one set of circumstances). But what you are describing simply sounds like bad engineering: the fact that you can misuse a tool does not make the tool bad, it's still the engineer wielding it who's at fault.
And yet their statement makes perfect sense to me.
Caching and lower-level calls are generic solutions that work everywhere, but they are also generally the last and worst way to optimise (which is why they need such careful analysis, since they so often have the opposite effect).
Better is to optimise the algorithms, where actual profiling is a lesser factor. Not a zero factor of course, as a rule of thumb it’s probably still wise to test your improvements, but if you manage to delete an n^2 loop then you really don’t need a profiler to tell you that you’ve made things better.
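A toy instance of that kind of algorithmic win (not from the thread): replacing a nested duplicate scan with a set turns O(n^2) into O(n), and you don't need a profiler to predict the direction of the change.

```python
def has_duplicates_quadratic(items):
    # O(n^2): compares every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): one pass, remembering what we've already seen.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

# Both agree on the answer; only the growth rate differs.
data = list(range(10_000)) + [5]
assert has_duplicates_quadratic(data) == has_duplicates_linear(data) == True
```

Measuring afterwards is still cheap insurance (constant factors and memory pressure can surprise you), but the asymptotic argument stands on its own.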