OpenAI, the creator of ChatGPT, has developed a text watermarking tool that could help it comply with the Artificial Intelligence (AI) Act, but the company has yet to release it, reportedly because it fears losing users.
The EU AI Act requires providers of AI systems, like ChatGPT, that generate synthetic audio, image, video, or text content to mark the system's outputs as detectable as artificially generated or manipulated.
The Act entered into force on 1 August, but this requirement applies from 2 August 2026.
On Sunday (4 August), the Wall Street Journal (WSJ) quoted sources who said OpenAI has been ready to deploy a text watermarking tool for a year but is hesitant to do so because it fears losing users.
In a blog post update on Sunday, OpenAI confirmed that it has developed a "highly accurate" text watermarking tool that it "continue[s] to consider as we research alternatives".
However, the company said it is currently weighing the risks of such a tool. For example, "our research suggests the text watermarking method has the potential to disproportionately impact some groups," such as non-native English speakers.
It also doubts the tool's efficacy against globalised tampering, such as translation systems and rewording with another generative model, "making it trivial to circumvention by bad actors".
According to the WSJ, OpenAI has been debating and conducting surveys since November 2022 to decide whether to deploy the watermarking tool.
In one survey, nearly 30% of loyal ChatGPT users said they would use the chatbot less if OpenAI deployed watermarks and a rival did not. However, another survey conducted by OpenAI found that, in the abstract, four out of five people worldwide supported the idea of an AI detection tool.
Watermarking as a tool
The tool would digitally stamp any content created by ChatGPT, making it harder for people to misuse AI-generated content or use it to cheat. The misuse of such content has caused major problems for employers, teachers, and professors, who have called for ways to clamp down on the issue.
The WSJ reported that the watermarking tool does not leave visible traces in written text, but when that text is checked by an AI detection tool, it is flagged as AI-generated content.
Current AI detection tools are somewhat unreliable and can give mixed results, making it harder for teachers and professionals to flag AI-generated content.
The AI Act
Watermarking is mentioned in the draft AI Pact, a set of voluntary commitments companies can sign up to in order to prepare for compliance with the AI Act. The Commission hopes to hold a pledge-signing event in September where companies publicly commit to the Pact.
"The organisations may commit to […] to the extent feasible, design generative AI systems so that AI-generated content is marked by technical solutions, such as watermarks and metadata identifiers," the draft pact reads.
As recently as last week, OpenAI publicly stated its commitment to comply with the EU's AI Act, which requires providers of generative AI models like ChatGPT to "ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."
But as for specifics on how the watermarking tool could help OpenAI with AI Act compliance, a company spokesperson pointed Euractiv to the 4 August blog post.
The Commission did not respond to Euractiv's request for comment by the time of publication.
[Edited by Rajnish Singh/Alice Taylor]