Disagree
> faster for small n (which is the common case)
How can you know that? And for small n, the choice hardly matters anyway.
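The honest answer is to measure rather than guess. Here is a minimal JMH sketch of how you would check the "faster for small n" claim, assuming the two options under discussion are a linear scan over a list versus a HashSet lookup (the class and benchmark names are mine, not from the thread):

```java
import java.util.*;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

// Hypothetical micro-benchmark: compare a linear scan against a hash
// lookup across the "small n" range the claim is about.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class SmallNLookup {
    @Param({"4", "16", "64"})  // the "small n" sizes in question
    int n;

    List<Integer> list;
    Set<Integer> set;
    Integer probe;

    @Setup
    public void setup() {
        list = new ArrayList<>();
        for (int i = 0; i < n; i++) list.add(i);
        set = new HashSet<>(list);
        probe = n - 1; // last element: worst case for the linear scan
    }

    @Benchmark
    public boolean linearScan() { return list.contains(probe); }

    @Benchmark
    public boolean hashLookup() { return set.contains(probe); }
}
```

Until someone runs something like this on their actual data shapes, "faster for small n" is an assumption, not a fact.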
> can not run out of memory due to the hash table
If n gets large enough that hash table size becomes a problem, then n is also large enough that you need the hash table, or some other structure that brings the complexity down from O(n^2).
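To put the two shapes side by side, here is a minimal sketch of a duplicate check done both ways, assuming "option 1" means the nested-loop scan and "option 2" the hash-based version (the method names are hypothetical):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Objects;
import java.util.Set;

final class DuplicateCheck {
    // Option 1 (assumed): nested-loop scan. O(n^2) comparisons,
    // no extra memory, needs only equals().
    static <T> boolean hasDuplicateQuadratic(List<T> items) {
        for (int i = 0; i < items.size(); i++) {
            for (int j = i + 1; j < items.size(); j++) {
                if (Objects.equals(items.get(i), items.get(j))) {
                    return true;
                }
            }
        }
        return false;
    }

    // Option 2 (assumed): hash set. O(n) expected time, O(n) extra
    // memory, and it needs a consistent hashCode()/equals() pair.
    static <T> boolean hasDuplicateHashed(List<T> items) {
        Set<T> seen = new HashSet<>(items.size() * 2); // size hint avoids rehashing
        for (T item : items) {
            if (!seen.add(item)) { // add() returns false if already present
                return true;
            }
        }
        return false;
    }
}
```

The nested loop touches no extra memory and needs only equals(); the hash version also needs hashCode(), which is exactly the "hashable" point conceded below.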
> compiles to less code
This is just nonsense. Any codebase of significant size already has a hash table implementation linked in, and unless you are using a language like C++ (where each template instantiation can generate fresh code), the code-size cost of using a hash table at any given call site is constant. The cost is already paid.
The actual amount of compiled code produced by making the switch is identical.
> does not need item to be hashable
True.
There would be reasons not to choose option 2, but it should be the default until it's been proven to be a problem. And even then, the next step is almost certainly not going to be option 1.
But then, this could also be language- and application-specific. I deal primarily with backend Java services, where memory is plentiful and allocation is fast. For my applications, algorithmic complexity is very often the better indicator of what good code should be.