I agree with many of the comments here, including some that disagree with each other. The difference is context: in most situations the starting point is far from optimal, and any of N better choices is a good improvement.
That doesn't seem to be what this post is talking about. It seems to be talking about well-worn areas where people are trying to improve the state of the art. An example that illustrates it for me is DeepMind's AlphaTensor finding a better way to multiply matrices[0] in 2022. It wasn't a brute-force solution, but the scale of it makes it appear so.
> On 4x4 matrices, AlphaTensor unexpectedly discovered a solution with 47 multiplication steps, an improvement over the 49 required with Strassen’s algorithm of 1969, albeit restricted to mod 2 arithmetic. Similarly, AlphaTensor solved 5x5 matrices with 96 rather than Strassen's 98 steps. Based on the surprising discovery that such improvements exist, other researchers were quickly able to find a similar independent 4x4 algorithm, and separately tweaked Deepmind's 96-step 5x5 algorithm down to 95 steps in mod 2 arithmetic and to 97[24] in normal arithmetic.[25] Some algorithms were completely new: for example, (4, 5, 5) was improved to 76 steps from a baseline of 80 in both normal and mod 2 arithmetic.
This, to me, shows that direct profiling and observation wouldn't have led to the optimization. The improvements needed a sort-of-but-not-actually brute-force effort from many people trying, while also being clever with their attempts.
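For context on where the 49 baseline comes from, here's a minimal sketch of Strassen's 2x2 scheme in Python (variable names are mine, not from the paper; plain numbers assumed). It uses 7 multiplications instead of the naive 8, and applying it recursively to a 4x4 matrix split into 2x2 blocks costs 7 * 7 = 49 multiplications, the count AlphaTensor cut to 47:

    def strassen_2x2(A, B):
        # Unpack 2x2 matrices given as nested lists.
        (a, b), (c, d) = A
        (e, f), (g, h) = B
        # Strassen's seven products (the naive method needs eight).
        m1 = (a + d) * (e + h)
        m2 = (c + d) * e
        m3 = a * (f - h)
        m4 = d * (g - e)
        m5 = (a + b) * h
        m6 = (c - a) * (e + f)
        m7 = (b - d) * (g + h)
        # Recombine with additions only; e.g. m1 + m4 - m5 + m7
        # expands to a*e + b*g, the top-left entry of A @ B.
        return [[m1 + m4 - m5 + m7, m3 + m5],
                [m2 + m4, m1 - m2 + m3 + m6]]

The trade is more additions for fewer multiplications, which is why the multiplication count is the metric these searches optimize: at scale, the multiplications are what recurse.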
[0] https://en.wikipedia.org/wiki/Matrix_multiplication_algorith...