Artificial intelligence


Actors and artists need to negotiate now to protect their labour in anticipation of how powerful AI will become as the technology expands exponentially, a TIFF Industry Conference panel heard on Friday.

The Perspectives session ‘AI and film: bridging the gap between innovation and responsibility’ at Glenn Gould Studio also addressed broader ethical issues of representing marginalised voices in the creation of the technology, consumer protection, copyright and data privacy.

The panel of four speakers referenced the rapid pace at which venture capital has been flooding into the space for years, and noted that government intervention will emerge amid a growing docket of lawsuits over unauthorised use of content.

Patricia Thaine, the CEO and co-founder of Private AI, got to the heart of one of striking SAG-AFTRA’s key demands – AI regulation – in its stalled talks with the studios and streamers.

“We’re not talking about how good AI is at the moment; we’re talking about what kind of negotiation [actors] have to fight for to protect themselves for a future where it might be even better than it is now,” said Thaine, who has a background in language modelling technology.

She added, “This is very much about what points will I have to concede if I’m not fighting for it now.”

Claire Leibowicz, head of the AI and media integrity programme at the Partnership on AI, said: “What does it mean for my likeness to be used in perpetuity… my colleagues who work on the labour impact of AI have underscored the need for consent to be a key part in how people’s likenesses are used.”

Legal minds are focused on copyright protection, a nascent albeit fiercely contested field, and the debate over governmental intervention is ongoing amid a lack of strict guidelines.

Japanese publishers, some politicians and other interested parties are paying close attention after their government said earlier this summer that it does not consider the training of AI systems on copyrighted material to be an infringement.

In the US, Getty Images is suing Stability AI, creator of the art generator Stable Diffusion, after the stock photography company alleged Stability AI infringed copyright when it copied 12 million of its images to build a rival database.

The comedian and actor Sarah Silverman is suing Meta and OpenAI – the creator of ChatGPT, which launched in late 2022 – after her memoir The Bedwetter was used to train their AI systems without her say-so.

Mia Shah-Dand, CEO of Lighthouse3 and founder of Women In AI Ethics, focused her opening comments on the representation of women and marginalised voices. She founded the organisation in 2018 to promote diverse voices in the space and encourage AI technologists to be inclusive in their endeavours.

“In AI so many ethical issues are based on the exclusion of voices… so when you build facial recognition technologies they’re not going to recognise those faces,” Shah-Dand said.

“If you believe somebody’s life doesn’t matter,” she continued, “you’re more likely to create a system that will put them out of a job or make them homeless. Technologies are built with the values of their founders, so their biases seep into the systems they’re building, which is why we can’t leave this work to just technologists that are mostly male.”

The feverish activity of venture capital and the growing hordes of AI technologists hung over the panel. Moderator Will Douglas Heaven, senior editor for AI at MIT Technology Review, noted that when he visited OpenAI he saw that the company’s engineers believe AI is the most important technology ever invented.

“They also had a sense of inevitability about this,” he said. “If they don’t make it, somebody else will. The future is not deterministic. What tech is made, who makes it and how it is used is something we should all have a say in.”

However, at several junctures of Friday’s panel, the speakers conceded they simply did not know how developments would unfold in the years ahead.

Private AI’s Thaine said the lines were still blurred when it comes to distinguishing between content generated by humans and by AI. “We do not have the technology infrastructure to determine sources,” she said, adding: “It can be done in some cases and not in others.”

On the subject of fake content online, Shah-Dand said, “Misinformation is being weaponised against women; there aren’t enough tools out there [to protect women].”

Screen is the Perspectives media partner.