Agree with this, but it’s not what I concluded from the OP. Architectural decisions made from the start are where most optimizations should happen. I remember from school some kids who wrote this super-optimized loop, and the teacher asked: do you really have to do that same calculation on every iteration?
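To illustrate the kind of thing the teacher meant, here’s a hypothetical sketch (the function and numbers are made up) of an invariant calculation that should be hoisted out of the loop rather than micro-optimized inside it:

```typescript
// Hypothetical example of the teacher's point: the loop body is "optimized",
// but it still redoes an invariant calculation on every iteration.
function totalWithTax(prices: number[], taxRatePercent: number): number {
  let total = 0;
  for (const price of prices) {
    const multiplier = 1 + taxRatePercent / 100; // recomputed every iteration
    total += price * multiplier;
  }
  return total;
}

// The structural fix: hoist the invariant out of the loop and compute it once.
function totalWithTaxHoisted(prices: number[], taxRatePercent: number): number {
  const multiplier = 1 + taxRatePercent / 100;
  let total = 0;
  for (const price of prices) {
    total += price * multiplier;
  }
  return total;
}
```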
But in the real world, code bases are massive, and it is hard to predict when worlds collide. Most things do not matter until they do. So measuring is the way to go, I believe.
Measuring is also useless once someone has introduced bottom-up caching.
There’s so much noise at that point that even people who would usually catch problems start to miss them.
The usual response to this is, “well, you can turn caching off to do profiling,” but that doesn’t hold, because once people know they can get a value from the cache they stop passing it on the stack. So your function that calls A() three times when it should have called it twice? You now find it’s being called ten times.
And the usual response to that is, “well, it’s free now, so who cares?” Except it’s not free. Every cache miss now either costs you multiple fetches or requires much more complex cache bookkeeping, which is more overhead, and every hit marks that entry as most recently used, making it more likely that other entries get evicted.
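A minimal sketch of how that creep happens, with invented names (getConfig, renderPage, and so on): once the cache exists, helpers stop taking the value as a parameter and re-fetch it themselves, so one logical lookup becomes many, and every one of them touches the cache’s recency bookkeeping:

```typescript
// All names here are hypothetical; the point is the call pattern, not the API.
interface Config { theme: string }

const cache = new Map<string, Config>();

function loadConfigSomehow(): Config {
  return { theme: "dark" }; // stand-in for the expensive call, i.e. A()
}

function getConfig(): Config {
  // Every call goes through the cache: a hit bumps recency, a miss pays again.
  let config = cache.get("config");
  if (!config) {
    config = loadConfigSomehow();
    cache.set("config", config);
  }
  return config;
}

function renderHeader(config: Config) { /* ... */ }
function renderBody(config: Config) { /* ... */ }
function renderFooter(config: Config) { /* ... */ }

// Before: the value is fetched once and passed on the stack.
function renderPage(config: Config) {
  renderHeader(config);
  renderBody(config);
  renderFooter(config);
}

// After: each call site "just asks the cache", so the profiler sees many
// cheap-looking lookups instead of one value being passed down, and each
// helper is free to call getConfig() again internally.
function renderPageCached() {
  renderHeader(getConfig());
  renderBody(getConfig());
  renderFooter(getConfig());
}
```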
For instance, in NodeJS, concurrent fetches for the same resource often go into a promise cache, but now the closure context for the promise is captured in the cache, and it doesn’t take much to get v8 to keep a bunch of data alive that’s never actually used again. I’ve had to fix that a few times. Hundreds of megabytes in one case, because it kept an entire request handler in scope.
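A hedged sketch of one way that failure mode can arise (the identifiers are invented, and the exact retention behavior depends on v8 internals): the cached promise’s callbacks are built inside the request handler’s scope, so they can end up closing over the request, and the cache keeps all of it alive for as long as the entry and its callbacks are retained. Closing over only the primitives you need avoids it:

```typescript
// Invented names throughout; this sketches the capture pattern, not a real API.
interface IncomingRequest { url: string; headers: Record<string, string> }
interface Resource { data: unknown }

const inflight = new Map<string, Promise<Resource>>();

// Leaky shape: the cached callback chain closes over logError, which closes
// over `req`, so the cache entry can keep the whole request (and everything
// it references) reachable for as long as the callbacks are retained.
function getResourceLeaky(req: IncomingRequest): Promise<Resource> {
  const key = req.url;
  const logError = (err: unknown) =>
    console.error("fetch failed", req.url, req.headers, err);

  let p = inflight.get(key);
  if (!p) {
    p = fetch(key)
      .then((res) => res.json() as Promise<Resource>)
      .catch((err) => { logError(err); throw err; });
    inflight.set(key, p);
  }
  return p;
}

// Safer shape: copy out the primitives before building the cached closure,
// so the cache entry holds nothing from the request scope.
function getResource(url: string): Promise<Resource> {
  let p = inflight.get(url);
  if (!p) {
    p = fetch(url).then((res) => res.json() as Promise<Resource>);
    inflight.set(url, p);
  }
  return p;
}
```

In real code you would also want to drop the inflight entry once the promise settles, otherwise the map itself becomes the thing that never lets go.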
And I forgot the worst part, which is that most of these workflows assume A() will return the same answer for the duration of the interaction, and that’s just not true. Passing value objects on the stack guarantees snapshotting: for the duration of the call sequence, all of the code sees the same A. Not so with the cache.
You may still run into problems where you expect A and B to have a relationship between them that doesn’t hold if there’s a gap between looking them up, but that’s often less likely, or less severe, than if, for instance, half a page has its data in state S and half of it in state T.
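A small sketch of the snapshotting difference (hypothetical account/rendering names): when the value is passed on the stack, the whole call sequence sees one consistent A; when each step re-reads through the cache, an update or eviction in between can leave half the output in state S and half in state T:

```typescript
// Hypothetical names; the point is where the lookup happens, not the domain.
interface Account { id: string; balance: number }

const accountCache = new Map<string, Account>();

function getAccountCached(id: string): Account {
  // Stand-in for a cache lookup that falls back to a fresh fetch on a miss.
  return accountCache.get(id) ?? { id, balance: 0 };
}

function renderSummary(a: Account) {
  console.log(`balance: ${a.balance}`);
}
function renderTransactions(a: Account) {
  console.log(`history for ${a.id}, as of balance ${a.balance}`);
}

// Snapshot style: A is looked up once and passed down, so both halves of the
// page describe the same value.
function renderStatement(account: Account) {
  renderSummary(account);
  renderTransactions(account);
}

// Cache style: each step re-reads A, so a concurrent update or an eviction
// plus refetch between the two calls can make the halves disagree.
function renderStatementCached(accountId: string) {
  renderSummary(getAccountCached(accountId));
  renderTransactions(getAccountCached(accountId));
}
```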