Here's the main issue I have with this reasoning: it builds a fault into the program that doesn't need to be there.
If n is usually less than 100, then sure, maybe the brute-force approach makes sense. But what's the impact of using the hash table first? If n really is usually less than 100, this method is usually slower, but is that a problem? You should use a profiler to find out whether it is or isn't; it may be entirely tolerable performance-wise.
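For concreteness, here's a minimal sketch of the kind of choice we're talking about (the names and the membership-test framing are just my own illustration, not anything specific anyone upthread wrote):

```python
# Hypothetical illustration of the trade-off: which items of `needles`
# also appear in `haystack`?

def common_items_scan(needles, haystack):
    # Brute force: `in` on a list is a linear scan, so this is O(n*m).
    # For tiny inputs the constant factors can make this the faster option.
    return [x for x in needles if x in haystack]

def common_items_hashed(needles, haystack):
    # Hash table first: one O(m) pass to build the set, then O(1) average
    # lookups, so roughly O(n + m). Keeps working if the inputs grow.
    seen = set(haystack)
    return [x for x in needles if x in seen]

if __name__ == "__main__":
    a = list(range(0, 1000, 3))
    b = list(range(0, 1000, 7))
    assert common_items_scan(a, b) == common_items_hashed(a, b)
```

The point being that the two versions are roughly the same amount of code; only the growth behaviour differs.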
On the other hand, if your assumption that n < 100 doesn't hold up and you do start seeing n > 100, or 1000, or 10000, now all of a sudden this method becomes a fault in the application you developed, one that needs to be rewritten.
What's devious about the "n > 100" problem is that when you first wrote the code, you may well have been correct that n < 100. But time and inputs change the way code is executed.
A lot of the code I've fixed has been due to this. I've only seen the opposite case, where searching a list/array really was the better option, a handful of times.
To me, premature optimization is using an approach you know to be algorithmically inferior, even when it's no simpler to write than the superior approach, because you know that for small n it is faster, without knowing whether small n will actually be the case.
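And "without knowing whether that will be the case" is something you can actually check rather than assume. A rough sketch of what that check might look like (hypothetical workload, re-declaring the two toy functions so it runs on its own; timings will vary by machine and data):

```python
# Hedged sketch: measure the small-n trade-off instead of assuming it.
import timeit

def scan(needles, haystack):
    return [x for x in needles if x in haystack]   # O(n*m) list scan

def hashed(needles, haystack):
    seen = set(haystack)                           # O(m) build, O(1) average lookups
    return [x for x in needles if x in seen]

for n in (10, 100, 1000):
    data = list(range(n))
    t_scan = timeit.timeit(lambda: scan(data, data), number=20)
    t_hash = timeit.timeit(lambda: hashed(data, data), number=20)
    print(f"n={n:>5}: scan={t_scan:.5f}s  hashed={t_hash:.5f}s")
```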
> It builds a fault into the program that doesn't need to be there.
Not really. You're just exploring an unknown frontier. Once the frontier is known and you actually understand what you're trying to solve, you're still going to put in the right algorithm.
I mean, you might not if you're vibe coding, which I guess is apparently a thing somehow, but I'm assuming we're talking about actual engineering here, not the sordid tales of cowboys. The "premature optimization" line was definitely not directed at those who think code is a feeling.
When the frontier is unknown, whatever algorithm gets something operating as soon as possible is the right one. The given example may be so contrived that the hash map would be what you see as the quickest and dirtiest solution anyway, but with a little more complexity, loops do tend to be easier to reason about on a first pass.
> Now all of a sudden this method becomes a fault in the application you developed, one that needs to be rewritten.
In the same way that finding out a blue button doesn't resonate with users requires you to rewrite your application to make the button green. Such is the life of building software. But that's why engineers get paid the big bucks – nobody else is stupid enough to do it.
> A lot of the code I've fixed has been due to this.
You're not really fixing it, you're updating it to reflect the information you gleaned during the exploratory phase. I mean, unless you're coding on vibes or something, but, again, I'm assuming actual engineering is taking place here. Nobody should be coding non-prototypes guided by their feelings.