Ars has reached out to HackerOne for comment and will update this post if we get a response.
“More tools to strike down this behavior”
In an interview with Ars, Stenberg said he was glad his post, which generated 200 comments and nearly 400 reposts as of Wednesday morning, was getting around. “I am super happy that the issue [is getting] attention so that possibly we can do something about it [and] educate the audience that this is the state of things,” Stenberg said. “LLMs cannot find security problems, at least not like they are being used here.”
This week has seen four such misguided, clearly AI-generated vulnerability reports seemingly seeking either reputation or bug bounty payments, Stenberg said. “One way you can tell is it is always such a nice report. Friendly phrased, perfect English, polite, with nice bullet points … an ordinary human never does it like that in their first writing,” he said.
Some AI reports are easier to spot than others. One accidentally pasted their prompt into the report, Stenberg said, “and he ended it with, ‘and make it sound alarming.’”
Stenberg said he had “talked to [HackerOne] before about this” and has reached out to the service this week. “I would like them to do something, something stronger, to act on this. I would like help from them to make the infrastructure around [AI tools] better and give us more tools to strike down this behavior,” he said.
In the comments of his post, Stenberg, trading comments with Tobias Heldt of open source security firm XOR, suggested that bug bounty programs could potentially use “existing networks and infrastructure.” Security reporters paying a bond to have a report reviewed “could be one way to filter signals and reduce noise,” Heldt said. Elsewhere, Stenberg said that while AI reports are “not drowning us, [the] trend is not looking good.”
Stenberg has previously blogged on his own site about AI-generated vulnerability reports, with more details on what they look like and what they get wrong. Seth Larson, security developer-in-residence at the Python Software Foundation, added to Stenberg’s findings with his own examples and suggested actions, as noted by The Register.
“If this is happening to a handful of projects that I have visibility for, then I suspect that this is happening on a large scale to open source projects,” Larson wrote in December. “This is a very concerning trend.”