By Yola Verbruggen
Hyper-realistic deepfakes – recordings that replace an individual’s face or voice with that of someone else, while appearing real – are time-consuming and expensive to create. The introduction of new and more readily accessible generative artificial intelligence (AI) tools might be about to change this. The prospect has heightened concern about the misuse of synthetic media – media produced by AI or with its assistance – and lawmakers are scrambling to find ways to regulate the technology and mitigate its abuse.
A recently released AI chatbot, ChatGPT, is capable of producing creative writing based on data drawn from the internet, and has already stirred fears among educators that students will use it to cheat. ‘What until recently was the preserve of research labs is now in our pockets’, says Henry Ajder, an expert on deepfakes and generative AI and presenter of the BBC series ‘The Future Will Be Synthesised’.
Deep synthesis technology uses AI to generate text, images, and video and audio content. With the potential for abusive uses of the technology – for example, to create non-consensual pornography or to defraud people – it’s no surprise that legislators want to put regulations in place. Dana Green is Chair of the IBA Media Law Committee and Senior Counsel at The New York Times Company. She says that, in reality, the issues deepfakes cause don’t currently ‘come up that often. Though, with more accessible applications, this can accelerate quickly.’
In some jurisdictions, new laws have been introduced to curtail the abuse of deep synthesis technology. None, however, goes as far as China’s. In January, China enacted a far-reaching law to regulate the technology. It requires that AI-manipulated media be labelled, that data sourcing be transparent and that consent be obtained from the person whose likeness is being used. ‘While China’s regulation is pioneering, by the same token, it’s an Orwellian Ministry of Truth. They decide what’s real’, says Ajder.
‘China’s approach may not be practical or legally acceptable in other jurisdictions, but that doesn’t mean deepfakes operate in a legal vacuum’, says Green. ‘Though the tech is new, the harm is not. Existing legal regimes already provide many remedies for abusive, fraudulent, or defamatory use of deepfakes – in ways that are compatible with the [US] First Amendment and international human rights conventions.’
Existing legal regimes already provide many remedies for abusive, fraudulent, or defamatory use of deepfakes
Dana Green
Chair, IBA Media Law Committee
Bills banning non-consensual deepfake pornography have been introduced in several US states and in other countries, including South Korea and the UK. New York State established the right to control the commercial exploitation of a performer’s digitally manipulated likeness for 40 years after their death. In California, an individual can be prosecuted for spreading deepfakes aimed at deceiving voters or damaging the reputation of a political candidate if an election is 60 days or less away. In Texas, it’s an offence to make and share deepfake videos that could harm candidates running for public office or influence elections.
In Europe, the proposed AI Act will introduce transparency obligations for the use of deepfakes, while the Digital Services Act, which will come into effect in 2024, imposes tight regulations on internet platforms to prevent online harm. However, Ajder believes that ‘the problem with legislation is that you are appealing to the people who are not going to be the problem’.
Media and tech companies are working together with non-governmental organisations and others – through initiatives such as the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity – to ensure that a secure data log is created for media such as images and audio, recording their origins. The approach has its limits, however. ‘Though such a programme can show the authenticity of the media, it cannot determine whether it is fake or not’, says Ajder.
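To make the idea of such a provenance log concrete, the sketch below illustrates – in deliberately simplified form, not as the actual C2PA specification or API – how a media file’s origin can be made tamper-evident: the file is hashed, the hash is bundled with creation metadata, and the bundle is signed. The key, field names and functions here are hypothetical; real provenance standards use asymmetric signatures and embed the credentials in the file itself.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared key for illustration only; production systems
# use asymmetric (public-key) signatures tied to a verified identity.
SIGNING_KEY = b"publisher-secret-key"

def create_provenance_record(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Build a tamper-evident provenance record for a media file."""
    record = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "tool": tool,
        "created_at": int(time.time()),
    }
    # Sign the canonical serialisation so any later change to the file
    # or to the record itself invalidates the signature.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Return True only if the record is unaltered and matches the file."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record["signature"])
        and hashlib.sha256(media_bytes).hexdigest() == record["media_sha256"]
    )

# Example: a record made at capture time can later be checked by anyone
# holding the key; any edit to the image breaks verification.
photo = b"...raw image bytes..."
rec = create_provenance_record(photo, creator="Newsroom", tool="CameraApp 1.0")
assert verify_provenance(photo, rec)
assert not verify_provenance(photo + b"edited", rec)
```

Note the limit Ajder describes: a passing verification proves only that the file and its record are unchanged since signing, not that the scene the file depicts is real.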
Partnership on AI – a non-profit coalition of academic, civil society, industry and media organisations – is creating a code of ethics for companies working with deep synthesis technology. Such a code is intended to provide a blueprint for these businesses, but it won’t be enforceable.
When deepfakes began to appear online, concerns arose about the use of fake evidence in courts. ‘But we’ve only seen the flipside: people claiming that real or legitimate imagery is falsified’, says Matthew Ferraro, counsel at WilmerHale in Washington, DC, who advises on matters related to defence and national security, cybersecurity and crisis management. This phenomenon is known as the ‘liar’s dividend’: the ability to deny the authenticity of a piece of media based on the mere knowledge that deepfakes can be created. In proceedings related to the storming of the US Capitol on 6 January 2021, some of the accused have already – unsuccessfully – used this argument in their defence.
Regulating deep synthesis technology could impair positive uses of the technology, of which there are many. Synthetic media could be used to give a voice to people whose speech is impaired by ALS, a progressive neurodegenerative disease. It could also be used to complete films if an actor dies before shooting has finished, or to avoid the need for reshoots. The MIT Media Lab and UNICEF [the UN Children’s Fund] recently created a project called Deep Empathy, which transforms images of Western cities into warzones to create empathy for victims of disasters taking place far from viewers’ homes. ‘Regulations should not focus on the tech, but on how to regulate the media’s effects’, concludes Ferraro.
Image credit: Sono Creative/AdobeStock.com
Source: ibanet.org