A recently launched Google AI model scores worse on certain safety tests than its predecessor, according to the company’s internal benchmarking.
In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses 4.1% and 9.6%, respectively.
Text-to-text safety measures how frequently a model violates Google’s guidelines given a prompt, while image-to-text safety evaluates how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.
In an emailed statement, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”
These surprising benchmark results come as AI companies move to make their models more permissive, that is, less likely to refuse to respond to controversial or sensitive subjects. For its latest crop of Llama models, Meta said it tuned the models not to endorse “some views over others” and to reply to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to avoid taking an editorial stance and to offer multiple perspectives on controversial topics.
Sometimes, these permissiveness efforts have backfired. TechCrunch reported Monday that the default model powering OpenAI’s ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a “bug.”
According to Google’s technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims that the regressions can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked.
“Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations,” reads the report.
Scores from SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch’s testing of the model via the AI platform OpenRouter found that it will uncomplainingly write essays in support of replacing human judges with AI, weakening due process protections in the U.S., and implementing widespread warrantless government surveillance programs.
Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google gave in its technical report demonstrate the need for more transparency in model testing.
“There’s a trade-off between instruction-following and policy following, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google’s latest Flash model complies with instructions more while also violating policies more. Google doesn’t provide much detail on the specific cases where policies were violated, although they say they are not severe. Without knowing more, it’s hard for independent analysts to know whether there’s a problem.”
Google has come under fire for its model safety reporting practices before.
It took the company weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report eventually was published, it initially omitted key safety testing details.
On Monday, Google released a more detailed report with additional safety information.