Zali Steggall MP Speaks on the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024

1 July 2024


I rise to strongly support this bill, the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, but I also say we need to go further in addressing deepfakes. Whilst this bill addresses deepfakes in a very specific context—non-consensual sexual material, which is horrendous, and I welcome that it will soon be a crime—we need to address the broader question of deepfakes and the use of AI. This is especially so when it comes to advertising, where, for example, a consumer is misled into thinking an image presented to them is real when in fact it is AI generated, and indeed to all forms of non-consensual use of deepfake material that takes a person's name, image and likeness. I'll come to that later.

In that respect I move:

That all words after "That" be omitted with a view to substituting the following words:

"the House:

(1) notes that:

(a) deepfakes present inherent risks including the spread of misinformation, threats to privacy and security, altering election outcomes and misleading and deceiving voters;

(b) Australians have lost over $8 million to scams linked to online investment trading platforms, often using deepfakes;

(c) the Australian Electoral Commissioner has warned that Australia is not ready for its first Artificial Intelligence election that is coming within the next 12 months;

(d) deepfakes of Australian politicians have already been used; and

(e) the Commonwealth Electoral Amendment (Voter Protections in Political Advertising) Bill 2023, introduced by the member for Warringah on 13 November 2023, bans the use of deepfakes in political advertising; and

(2) calls on the Government to take immediate action to extensively ban the creation and transmission of deepfakes without consent of the subject, to address the risks deepfakes present.

In relation to the sexual material caught by this bill, we absolutely need to send a strong and unambiguous message that distributing non-consensual deepfake sexual material is horrendous and will be a crime. A person's identity is their own and must be protected. Non-consensual sharing of sexual material is a disturbing and damaging form of abuse. Victims face humiliation and mental distress, and it affects their reputation and relationships. They experience intense emotional distress, including anxiety, depression and a profound sense of violation. The psychological and social effects of deepfake pornography are severe, and they can affect anyone, from students—perhaps someone unsuspecting and fairly anonymous—to someone like Taylor Swift, whose image and likeness, we know, were recently the subject of deepfakes. We know that such material can spread rapidly online, making it nearly impossible for victims to remove the content. This persistent digital footprint can have long-lasting repercussions on their personal and professional lives, impacting their relationships, career prospects and mental health. This bill is crucial for safeguarding the dignity, privacy and safety of all Australians, particularly women and girls.

While I welcome the bill, the government is well behind other jurisdictions when it comes to grappling with the broader issues that deepfakes present. As we enter this era of AI technology, deepfake pornography has become an increasingly prevalent issue. Manipulation of images and videos has been an issue for some time, but advances in technology over recent years have made it far easier to produce this sort of fake content—content so realistic that it is indistinguishable from the real thing to the naked eye. What once required significant time, skill and resources can now be done instantaneously, with minimal effort or skill required. Advances in AI have taken this even further, allowing the rapid generation of content that is extremely realistic and incredibly difficult to identify as fake. This form of online abuse involves using artificially generated images to create fake videos or images that depict individuals in sexually explicit scenarios without their consent. The statistics are quite shocking: 96 per cent of all deepfake videos online are pornographic; 99 per cent of deepfake pornography targets women; and there was a 2,000 per cent rise in 2023 in the number of websites generating non-consensual sexual material using AI—a 2,000 per cent rise!

We need to go further: not only should deepfakes of this nature be banned but we need to think about where deepfakes are using and usurping another person's name, identity or likeness without their consent. That's why this amendment is important—for the government to be on notice that it must do more to look at the broader picture of deepfakes, across all the different ways in which they're used.

Let's consider the broader impact in relation to this bill. A 2024 Australian study found that more than 52 per cent of boys and 32 per cent of girls reported having viewed pornography by the age of 14. For most parents, that's quite shocking, because it's now of a completely different nature. This isn't about the magazines and things from previous eras; it's of a wholly different nature, and far more confronting. This statistic is alarming because of its implications for young people's perceptions of sex, consent and relationships—of what forms a healthy, respectful relationship. Exposure to pornography at a young age, particularly material generated by AI—and, certainly, non-consensual use of images—can distort understanding of healthy and respectful relationships. Further, our current crisis of intimate partner violence, particularly against women, demands strong and immediate preventative action against harmful material that distorts young people's understanding of consent and of how to treat their partners respectfully. This bill sends a clear message that technology should not be used as a tool for abuse. With this bill, the Criminal Code is being amended to make it an offence to use a carriage service to transmit non-consensual sexual material.

Governments are failing when it comes to preventing children from accessing AI-generated pornography websites. These websites are legal in Australia, and nothing is being done to prevent access by children. That is another area that must be addressed. As reported this weekend in the Sydney Morning Herald, more than 100,000 people use the Undress AI website every day. According to the article, users upload a photo and choose from picture settings like 'nude', 'BDSM' or 'sex' and from age options including 'subtract five'—even that concept is quite revolting—which uses AI to make the subject look five years younger. This is a supply chain issue. It's not just the users; it's the application companies, the app stores that allow these apps on their platforms, the search engines and even the credit card companies that facilitate payment for and use of these tools.

There are also broader considerations. I know that this bill is framed in the context of the distribution of sexual content without consent and does not directly regulate all deepfakes created using the identity of people without their consent, but the bill before us does not define 'deepfake', nor does it address the many other risks that deepfakes pose, beyond sexual consent. That is why the amendment I've moved is important and why it should be addressed.

The government states that there are a number of workstreams on foot to tackle deepfakes in other parts of government, including under the industry department and through further electoral reform, but greater action and ambition are needed. As a former professional athlete, I know from experience that the one thing you have to attract sponsors and earn a living as an athlete is the sale of your name, image and likeness, and these are clauses you have in your contracts. But what we now see with deepfakes is that that can be usurped, taken over without consent, to great detriment. It's important that we recognise that a person owns their own IP and that their consent is required for it to be used or manipulated. That contractual right needs to be recognised, pulled into legislation and applied to all deepfakes created using a person's name, image, likeness or voice without their consent. No deepfakes should be permitted to the extent that they infringe on an individual's right to their own identity.

We are living in an advanced technological and digital age. It has brought immense and very fast progress but also new challenges that demand our attention and that we, as lawmakers, should be addressing. AI is moving faster than any regulator can keep up with, but that's not an excuse; government needs to get on the front foot. Other countries, including the United States, the European Union, China and the United Kingdom, are acting faster than Australia in combating and regulating the use of non-consensual deepfakes. Just this month, Republican and Democrat senators co-sponsored a bill to require social media companies to take down deepfakes within two days of their being reported. The European Union has passed the world's first comprehensive AI law, including a requirement for deepfakes to carry a watermark informing users that they are fake images. Put yourself in the seat of a consumer who sees an advertisement: it might look unrealistic, but without a watermark, such as the European Union has legislated, how would a person know what is real and what is fake? I very much urge the government to consider the European Union's legislation on the use of watermarks to inform viewers that what they are looking at is an AI-generated deepfake, not a real image.

The stakes are high, and a road map for implementing further steps needs to be considered by the government. There are always implications with technological advancements; sometimes they are for good, and sometimes they are deeply concerning. It's imperative that we recognise the inherent risks of deepfakes, including the spread of misinformation—again, consumers may not even be aware that they are looking at deepfakes—threats to privacy and security, the altering of election outcomes, and the misleading and deceiving of voters. I've been a long-term advocate for protecting voters from misinformation and disinformation, and I've introduced a private member's bill, the Commonwealth Electoral Amendment (Voter Protections in Political Advertising) Bill 2023, which bans the use of deepfakes in political advertising. The integrity of our electoral process and campaigning is threatened, alongside Australians' democratic right to choose their government and elect their representatives. The Australian Electoral Commission has said it doesn't have the tools to detect or deter AI-generated misinformation at the upcoming federal election and that Australia is not ready for its first AI election. That is concerning, yet we still haven't seen action from the government, despite a commitment to do so, either to debate the voter protections bill that I have tabled or to introduce its own legislation to ensure there is no misinformation or deceptive and misleading political advertising.

We have seen huge amounts of misinformation and disinformation. Often it is identifiable as misinformation or disinformation because it comes from a source that does not look like someone you would trust; deepfakes remove that safeguard. And it is used not only in elections. We are seeing huge losses for individuals and corporations as a result of scams, and scams often use deepfakes. Only a few years ago, the biggest scams a company faced were false invoices; now it is deepfakes. British engineering firm Arup lost $25 million earlier this year when a deepfake video call featuring company executives was used to convince an employee to transfer money; the others on the call looked and sounded like colleagues the employee recognised.

In March this year, the ACCC reported that last year Australians lost some $8 million to online investment trading platform scams, and, only last week, Channel 7's YouTube channel was hacked and rebranded with fake Tesla content to promote a cryptocurrency scheme. The National Anti-Scam Centre is warning consumers to be much more aware of fake news articles and deepfake videos of public figures that endorse and link to online investment trading platform scams, particularly on social media—especially those using deepfakes of Hollywood actors and other high-profile people to lend credibility to financial scams.

In Australia, deepfakes of politicians, including senior federal government ministers, are being used to promote these fake financial schemes as well. So I welcome the work being done by the government, including their interim response to the safe and responsible AI consultation, and the establishment of the artificial intelligence experts group. I also acknowledge the work done by the Australian Human Rights Commission on adopting AI in Australia, which calls on the government to adopt watermarking as a priority.

The risks that AI presents in embedding past biases and inequalities, including those of gender and race, must also be highlighted; such biases are present in up to 44 per cent of AI models. Embedded in those models is the idea that the norm is generally a white Caucasian male. For example, doctors are depicted as men, nurses as women, and poor people as people of colour. That is what happens in those AI models. It is an issue that has been highlighted not only by the Australian Human Rights Commissioner but also by the United Nations and other jurisdictions.

I fully support an extensive ban on deepfakes without consent and I fully support this bill as a first step in creating a safer online environment for all Australians, particularly women. We need further, faster action from the government on a number of fronts with AI, so I urge them to act.

Let us not forget the need for education and awareness. We need to make sure young people are aware of the dangers of creating and sharing deepfake content, and of the legal ramifications that will come as a result of this legislation, and that implementation of this bill is accompanied by robust support for victims, offering psychological, legal and social assistance. I commend the bill to the House, but I also urge the government to take note of the amendment I have moved.