Actors’ new spec aims to defeat attack of the AI clones

AI + ML

RSL Media expands machine-readable licensing rules to cover AI use of identities and creative works

AI models can take your written work, your voice, and even your likeness, using them as training material and to create content that looks exactly like it came from you. Now, some actors are promoting a new licensing spec designed to protect their famous faces, and yours too.

The newly formed public benefit non-profit RSL Media is extending the Really Simple Licensing (RSL) spec, developed by the RSL Internet Collective, with the draft RSL Media Human Consent Standard (RSL-MEDIA) 1.0, which aims to cover creative works as well as people’s names, likenesses, voices, and other identity attributes.

The initial launch allows people to sign up and reserve an identifier that will serve as a key to structured data entered into the RSL Media public registry, scheduled to launch next month.

The registry will allow people to verify their identities, set permissions governing the use of their works and likeness, encode those permissions for machine consumption, and verify that AI systems are checking declared permissions.

Whether there will be any legal consequences for AI services that ignore registry settings remains to be seen. The data broker industry in the US hasn’t exactly suffered due to the notional existence of “privacy rights.” And public concern about non-consensual AI nudification and explicit deepfakes hasn’t really put an end to that form of technological abuse or punished the social media sites distributing it. But this time, Hollywood has shown up.

“AI technologies are expanding rampantly, essentially unchecked and unregulated,” said celebrated actress and RSL Media co-founder Cate Blanchett, in a statement. 

“In order for humans to remain in front of these technologies, consent must be the first consideration. RSL Media is a simple, effective and free solutions-based technology for facilitating and activating consent. It’s also the industry’s first practical solution where people everywhere, not just public figures, can assert control over how their work is used by AI.”

Nikki Hexum, co-founder and CEO of RSL Media, said, “AI can’t respect rights it can’t see, and this means human consent is virtually invisible in this new digital era. The right to decide whether AI can use your work or identity should not be reserved for only those who can afford lawyers or have platforms big enough to be heard; it is a basic human right.”

That’s not entirely correct. Rights do not need to be seen to be respected; due diligence prior to using material that may be copyrighted is expected. Ignorance of copyright does not excuse infringement, even if it might mitigate potential liability. 

AI model makers could have chosen to respect rights by default, by seeking permission to use data for training. They could have chosen to seek permission to crawl websites and could have heeded existing signals to crawlers like the Robots Exclusion Protocol. They could have chosen to abide by the requirements of open source software licenses in harvested code.
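The Robots Exclusion Protocol mentioned above is the simplest of those existing signals: a plain-text robots.txt file served from a site's root. As one concrete illustration, a publisher wanting to bar OpenAI's documented training crawler, GPTBot, while leaving other bots alone could publish something like this (other vendors' crawlers use different user-agent names, which have to be listed individually):

```text
# robots.txt at https://example.com/robots.txt

# Block OpenAI's training crawler from the whole site
User-agent: GPTBot
Disallow: /

# All other crawlers may fetch everything
User-agent: *
Disallow:
```

The protocol's well-known weakness, and the reason specs like RSL exist at all, is that compliance is entirely voluntary: a crawler that ignores the file faces no technical barrier.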

They did not do so, because Silicon Valley prefers to ask forgiveness rather than seek permission. Permission is expensive; there wouldn’t be much of an AI industry if that were the norm. The law may be one of the things broken by those applying Meta’s shelved mantra “move fast and break things.”

So far, industry disinterest in seeking permission has worked well – AI companies have been held to account in only a few of the hundred-plus lawsuits objecting to AI content capture. 

The underlying RSL standard is slowly gaining adoption. The RSL Collective says more than 1,500 media organizations, brands, technology companies, and standards groups now support it following the launch of RSL 1.0 last December, and the relevant RSL XML file can be seen at sites like The Guardian.
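For a sense of what such a file looks like: an RSL declaration is an XML document, discoverable via a site's robots.txt, that states licensing terms for crawlers in machine-readable form. The sketch below is illustrative only; the element and attribute names are assumptions for this example, not quoted from the RSL 1.0 spec, which should be consulted for the real schema:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only: element names are assumptions,
     not taken verbatim from the RSL 1.0 specification -->
<rsl xmlns="https://rslstandard.org/rsl">
  <!-- Terms covering everything under this URL prefix -->
  <content url="https://example.com/articles/">
    <license>
      <!-- The use being licensed: AI training, in this sketch -->
      <permits type="usage">train-ai</permits>
      <!-- The compensation the publisher asks for in return -->
      <payment type="attribution"/>
    </license>
  </content>
</rsl>
```

The point of the format is that a compliant crawler can parse these terms before ingesting content, rather than the publisher having to negotiate with each AI company individually.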

While it’s unclear what impact the RSL has had on AI biz behavior, extending the RSL to cover personal identity with the RSL-MEDIA standard may stir broader interest in AI rules and their enforcement.

Or it may just affirm the XKCD comic about specifications and how they proliferate. There are already several similar protocols: TDM AI and TDMRep, Spawning’s ai.txt, AI Preferences, not to mention a few that focus solely on images and commercial offerings like Cloudflare’s Pay per crawl.

But RSL Media may have a leg up thanks to the involvement of high-profile celebrities like Blanchett and endorsements from similarly well-known peers.

“Of course artists and cultural creatives will inevitably be involved with AI,” said Dame Emma Thompson in a statement. “At the moment, however, AI is merely stealing from us all. This is an urgent and essential initiative. It’s also eminently doable, so let’s do it without delay.” ®

Editor’s note: This story was amended post-publication with clarification about the relationship between RSL Media and the RSL Internet Collective.
