The cat is out of the bag, and despite many years of warning before this and similar technology became widely available, nobody was really prepared for it; everyone is now acting solely in their own best interests (or what they believe those to be). The biggest failure, I think, is that despite the warning signs appearing long beforehand, every single country failed to enact legislation that could meaningfully protect people, their identities, and their work, while still leaving enough room for research and the beneficial use of generative AI (or at least for finding beneficial use cases).
In a way, this is the flip side of giving everyone such easy access to cutting-edge technology like machine learning. I don’t want technology itself to become the target of censorship, but where it is used in a way that harms people, as in the examples in the article and many more, there should be mechanisms, legal and otherwise, for victims to fight back effectively.
nasty things people do with AI [trigger warning]
“I went on to this stream because somebody gave me a heads up and I went on and heard my own voice reading rape porn. That’s the level of stuff we’ve had to deal with since this game came out and it’s been horrible, honestly.”
Amelia Tyler.
I cannot imagine going into a stream of someone playing a game you have poured your heart and soul into for years, and hearing your own voice reading stuff like that.
Edit: fixing spoiler tag.
Don’t know if it’s just me, but this spoiler tag doesn’t work in either Sync or Boost.
Works in Voyager now! Didn’t used to, but was updated recently.
I use jerboa and it is working (I used the toolbar to generate it, but had to fix it because my mobile keyboard is a massive PITA for any corrections and I haven’t had time to find something new).
Anyway, looks like Sync and Boost are not Lemmy-markdown-compatible.
Working for Connect on Android.
I feel there needs to be more nuance in how this AI is used.
For commercial settings (including streaming), permission from the voice actors must be given first, or, at the very bare minimum, the actors must be monetarily compensated at their full rates for the amount of time their voice lines are used.
However, if I want to mod Baldur’s Gate 3 for fun and add a new companion to the game without any expectation of profit, then as long as my use of the Narrator’s and other companions’ voice lines doesn’t stray from the established style of the game, I should be allowed to use AI to create those voice lines until I secure funding (through donations or Patreon) to actually hire the voice actors themselves.
I disagree. It would be better to set a precedent that using people’s voices without permission is not okay. Even in your example, you’re suggesting that you would have a Patreon while publishing mods that contain voice clips made using AI. In this scenario, you’ve made money from these unauthorized voice recreations. It doesn’t matter if you’re hoping to one day hire the VAs themselves, in the interim you’re profiting off their work.
Ultimately though, I don’t think it matters if you’re making money or not. I got caught up in the tech excitement of voice AI when we first started seeing it, but as we’ve had the strike and more VAs and other actors sharing their opinions on it I’ve come to be reminded of just how important consent is.
In the OP article, Amelia Tyler isn’t saying anything about making money off her voice, she said “to actually take my voice and use it to train something without my permission, I think that should be illegal”. I think that’s a good line to draw.
From the quotes in the article, I have to agree with drawing that line. On the one hand, making a non-profit mod using AI-generated voices has no opportunity cost to the actors since they wouldn’t have been hired for that anyway. On the other hand, and this is why I am leaning against training AI voices off people at all without permission, it can cause actual harm to the actor to hear themselves saying things they would otherwise be offended by and wouldn’t ever say in reality. In other words, the AI voices can directly harm people (and already have, according to the article at least).
And we thought identity theft was shitty before. I hope we’ll have better tools for identifying AI voices in the future; right now there are cases where I have a hard time telling an actual person from a faked voice.
This problem cannot be solved by detection tools, because those same tools can be used to train the AI to produce more realistic output (adversarial training): once a detector exists, a generator can be tuned against it until it no longer trips the detector.
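To make that dynamic concrete, here is a minimal, purely hypothetical toy in Python (nothing to do with real voice models): a threshold "detector" learns to separate real from fake 1-D samples, and the "generator" then shifts its output toward the real distribution to evade that exact detector. Detector accuracy falls toward a coin flip.

```python
import numpy as np

# Hypothetical toy illustrating the adversarial loop (not a real voice
# model): a detector learns to separate real from fake samples, then
# the generator adapts to evade it.

rng = np.random.default_rng(0)

def best_detector_accuracy(real, fake):
    """Sweep thresholds and return the best accuracy a simple
    'above threshold = fake' detector can achieve."""
    thresholds = np.linspace(-2.0, 5.0, 200)
    accs = [(np.mean(real <= t) + np.mean(fake > t)) / 2 for t in thresholds]
    return max(accs)

real = rng.normal(0.0, 1.0, 10_000)  # stand-in for genuine recordings
fake_mean = 3.0                      # generator starts off obviously fake
history = []

for step in range(5):
    fake = rng.normal(fake_mean, 1.0, 10_000)
    acc = best_detector_accuracy(real, fake)
    history.append(acc)
    print(f"step {step}: fake mean {fake_mean:.2f}, detector accuracy {acc:.2f}")
    # Generator update: move output toward the real distribution -- a
    # crude, gradient-free stand-in for adversarial training.
    fake_mean *= 0.5
```

In this sketch the detector starts out highly accurate and ends up barely better than guessing, which is the point of the comment above: publishing a better detector just hands the other side a better training signal.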