Haha. Maybe.
I doubt the VCs will provide much follow-up funding if they can’t control the code base, but weirder things have happened.
There are a lot of scams around AI and there’s a lot of very serious science.
While generative AI gets all the attention there are many other fields of AI that you probably use on a regular basis.
The reason we don’t see the rest of the AI iceberg is that it’s mostly interesting when you have enormous amounts of data to analyze, and that doesn’t apply to regular people. Most of the valuable AIs (as in, they’ve been proven to make or save a bunch of money) do stuff like inventory optimization, protein expression simulation, anomaly detection, or classification.
It’s otherwise a fairly well written article but the title is a bit misleading.
In that context, scare quotes usually mean that generative AI was trained on someone’s work and produced something strikingly similar. That’s not what happened here.
This is just regular copyright violation and unethical behavior. The fact that it was an AI company is mostly unrelated to their breaches. The author covers 3 major complaints, and only one of them even mentions AI; that complaint isn’t about what the AI did, it’s about what was done with the result. As far as I know, the APL2.0 itself isn’t copyrighted in any restrictive sense, and nobody cares if you copy or alter the license text itself. The problem is that you can’t just remove the APL2.0 from some work it’s attached to.
Just started playing Shapez 2. https://shapez2.com/
Hot damn, is that addictive.
I keep wondering if information like this will change anyone’s mind about Disney.
It seems like all Iger has to do is throw a little shade at Trump or DeSantis and everyone instantly believes that Disney is some sort of bastion of progressive thought that doesn’t have a vile history of exploitation.
What’s the most prominent instance of a studio being forced to use Sweet Baby Inc.?
Maybe.
There have been a number of technologies that provided similar capabilities, at least initially.
When photography, audio recording, and video recording were first invented, people didn’t understand them well. That made it really easy to create believable fakes.
No modern viewer would be fooled by the Cottingley Fairies.
The sound effects in old radio shows and movies wouldn’t fool modern audiences either.
Video effects that stunned audiences at the time just look old fashioned now.
I expect that, over time, people will learn to recognize the low-effort scams. Eventually we’ll reach an equilibrium where most people won’t fall for them, while skilled scammers will still target gullible people and get away with it.