This week, Sakana AI, an Nvidia-backed startup that’s raised a whole lot of tens of millions of {dollars} from VC companies, made a exceptional declare. The corporate mentioned it had created an AI system, the AI CUDA Engineer, that might successfully velocity up the coaching of sure AI fashions by an element of as much as 100x.
The one drawback is, the system didn’t work.
Users on X quickly discovered that Sakana’s system truly resulted in worse-than-average mannequin coaching efficiency. According to one user, Sakana’s AI resulted in a 3x slowdown — not a speedup.
What went improper? A bug within the code, in keeping with a post by Lucas Beyer, a member of the technical workers at OpenAI.
“Their orig code is improper in [a] delicate manner,” Beyer wrote on X. “The actual fact they run benchmarking TWICE with wildly totally different outcomes ought to make them cease and suppose.”
In a postmortem published Friday, Sakana admitted that the system has discovered a option to “cheat” (as Sakana described it) and blamed the system’s tendency to “reward hack” — i.e. establish flaws to attain excessive metrics with out conducting the specified aim (rushing up mannequin coaching). Comparable phenomena has been noticed in AI that’s trained to play games of chess.
In accordance with Sakana, the system discovered exploits within the analysis code that the corporate was utilizing that allowed it to bypass validations for accuracy, amongst different checks. Sakana says it has addressed the difficulty, and that it intends to revise its claims in up to date supplies.
“Now we have since made the analysis and runtime profiling harness extra strong to eradicate lots of such [sic] loopholes,” the corporate wrote within the X publish. “We’re within the strategy of revising our paper, and our outcomes, to mirror and talk about the consequences […] We deeply apologize for our oversight to our readers. We are going to present a revision of this work quickly, and talk about our learnings.”
Props to Sakana for proudly owning as much as the error. However the episode is an efficient reminder that if a declare sounds too good to be true, especially in AI, it in all probability is.