EU Officials Suggest All AI-Generated Content be Labelled
European Union (EU) officials are proposing additional measures to promote transparency in artificial intelligence (AI) tools, including OpenAI’s ChatGPT, to tackle the spread of fake news. The Vice President for Values and Transparency at the European Commission highlighted the need for companies deploying generative AI tools to label their content and implement safeguards against disseminating AI-generated disinformation.
On June 5, 2023, the European Commission’s Vice President for Values and Transparency, Vera Jourova, told reporters that companies deploying generative AI tools such as ChatGPT and Bard with the potential to generate misinformation should place labels on their content in an effort to combat fake news.
Vice President Jourova stated: “Signatories who have services with the potential to disseminate AI-generated disinformation should, in turn, put in place technology to recognise such content and clearly label this to users.”
Jourova added that she had asked the 44 companies and organisations signed on to the bloc’s voluntary code of practice against disinformation to adhere to a new track in the pact to apply such labelling. Companies that integrate generative AI into their services should also build in safeguards to prevent malicious actors from exploiting them for disinformation purposes.
Companies Integrating AI into Their Services
In 2018, the EU created a Code of Practice on Disinformation, a self-regulatory instrument through which tech industry players agree on standards to combat disinformation.
It states:
“The 2022 Code of Practice is the result of the work carried out by the signatories. It is for the signatories to decide which commitments they sign up for, and it is their responsibility to ensure the effectiveness of their commitment’s implementation… Signatories committed to taking action in several domains, such as demonetising the dissemination of disinformation, ensuring the transparency of political advertising, empowering users, enhancing the cooperation with fact-checkers and providing researchers with better access to data.”
Vice President Jourova noted that companies and other users “should report on new safeguards for AI this upcoming July.”
TikTok, Meta and Google’s YouTube platform have signed the EU’s 2022 Code of Practice on Disinformation. However, Twitter, which was recently purchased by billionaire Elon Musk and has dramatically cut its staff, announced last month that it was withdrawing from the code. Jourova warned that the platform “should anticipate more scrutiny from regulators since its withdrawal from the code of practice.”
What Led to the EU Setting up AI Regulations?
As experts at Bitcode Method noted in an interview, the Vice President’s statement comes as the European Union prepares its forthcoming EU AI Act, a comprehensive set of guidelines for the public use of AI and the companies deploying it.
On April 28, 2023, legislators in the EU pushed forward a draft bill classifying the risks of AI tools and requiring developers of generative AI applications to disclose the use of any copyrighted material. The bill stated:
“The high-risk tools will not be banned entirely, though they will be subjected to stricter transparency procedures. In particular, generative AI tools, including ChatGPT and Midjourney, will be obliged to disclose any use of copyrighted materials in AI training.”
In response to the bill’s current status, Svenja Hahn, a Member of the EU Parliament, called it a middle ground “between too much surveillance and over-regulation,” one that protects citizens while fostering innovation and boosting the economy. Georgina Bulkeley, Director for EMEA financial services solutions at Google Cloud, said that AI is “too important not to regulate.”
On April 25, 2023, the EU’s data watchdog warned of a difficult predicament for US-based AI companies that run afoul of the General Data Protection Regulation (GDPR). Wojciech Wiewiórowski, the European Data Protection Supervisor, said that the rapid pace of development in the AI space means data protection regulators must be better prepared for the issues that will arise.
Wiewiórowski noted:
“OpenAI currently finds itself between a European rock and a US hard place, legally speaking…The European approach is connected with the purpose for which you use the data. So when you change the purpose for which the data is used, and especially if you do it against the information you provide people with, you are in breach of law.”
OpenAI’s ChatGPT has become the fastest-growing consumer application in history and set off a race among tech companies to bring generative AI products to market. However, concerns are mounting about the potential abuse of the technology and the possibility that bad actors and even governments may use it to produce far more disinformation than before. In the meantime, European officials are urging companies to take proactive measures and adopt a voluntary code of conduct to ensure the responsible development and deployment of generative AI technology.