

And the data is not available. Knowing the weights of a model doesn’t really tell us much about its training costs.
A license that requires source. And since then there have been many different licenses, all with the same requirement. Giving someone a binary for free and saying they’re allowed to edit the hex codes and redistribute it doesn’t mean it’s open source. A license to use and modify is necessary but not sufficient for something to be open source. You need to provide the source.
“Open source” is not a license, it’s a description. Things can be free with no license restrictions and still not be “open source”.
A freely available and unencumbered binary (e.g., the model weights) isn’t the same thing as open source. The source is the data. You can’t rebuild the model without the data, nor can you verify that it wasn’t intentionally biased or crippled.
Why stop there? The digital computer was introduced in 1942 and methods for solving linear equations were developed in the 1600s.
All of my artist friends also found it soul-sucking, they just needed to make (real) money. Friends of friends with the occasional $20 to spare for a commission just don’t pay the bills. I think the only artist friends I have who make a living off their chosen medium and don’t hate their job are lifestyle photojournalists.
What? AlexNet wasn’t a breakthrough in that it used GPUs, it was a breakthrough for its depth and performance on image recognition benchmarks.
We knew GPUs could speed up neural networks in 2004. And I’m not sure that was even the first.
None of these appeals to relative complexity, low-level structure, or training corpora bears on whether a human or a NN “knows” the meaning of a word in some special way. A lot of your description of what “know” means could be mistaken for a description of how Word2Vec encodes words. This just indicates ignorance of how ML language processing works. It’s not remotely on the same level as a human brain, but your view of how it works and where it fails is just wrong.
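For what it’s worth, the “encoding” point can be made concrete. Here’s a toy sketch (hand-picked 3-dimensional vectors, not real Word2Vec output, which has hundreds of learned dimensions) of how embedding models represent word meaning as geometry:

```python
import math

# Hand-picked toy vectors for illustration only. Real Word2Vec embeddings
# are learned from co-occurrence statistics, but the geometry works the same.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.2, 0.1],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 for similar directions, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Related words end up closer together in the vector space than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]) >
      cosine(embeddings["king"], embeddings["apple"]))  # True
```

“Knowing” that king is more like queen than like apple falls out of the geometry, which is exactly the kind of relational description of meaning being attributed to humans here.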
No, not even remotely. And that’s kind of like citing “the first program to run on a CPU” as the start of development for any new algorithm.
It isn’t. People design a scene and then change and refine the prompt to add elements. Some part of it could be refreshing the same prompt, but that’s just like a photographer taking multiple photos of a scene they’ve directed to catch the right flutter of hair or a dress or a creative director saying “give me three versions of X”.
Ready to get back to my original questions?
I don’t disagree, just pointing out that it’s not “good riddance” for a lot of artists that depend on that to have any job in art.
No, both of those examples involve both design and selection, which is reminiscent of the AI art process. They’re not just typing in “make me a pretty image” and then refreshing a lot.
Is a photographer an artist? They need to have some technical skill to capture sharp photos with good lighting, but a lot of the process is designing a scene and later selecting among the photos from a shoot for which one had the right look.
Or to step even further from the actual act of creation, is a creative director an artist? There’s certainly some skill involved in designing and recognizing a compelling image, even if you were not the one who actually produced it.
The problem is that shit art is what employs a lot of artists. Like, in a post-scarcity society no one needing to spend any of their limited human lifespan producing corporate art would be awesome, but right now that’s one of the few reliable ways an artist can actually get paid.
I’m most familiar with photography as I know several professional photographers. It’s not like they love shooting weddings and clothing ads, but they do that stuff anyway because the alternative is not using their actual expertise and just being a warm body at a random unrelated job.
I’m comparing ChatGPT’s initial benchmarks to its capabilities today. Observable improvements have been made in less than two years. Even if you want to track time from the development of modern LLM transformers (“Attention Is All You Need”/BERT), it’s still a short history with major gains (AlexNet isn’t really meaningfully related). These haven’t been incremental changes on a slow and steady march to AI sometime in the sci-fi-scale future.
Except when it comes to LLMs, the fact that the technology fundamentally operates by probabilistically stringing together the next most likely word in the sentence, based on how frequently those words appeared in the training data, is a fundamental limitation of the technology.
So long as a model has no regard for the actual, you know, meaning of the word, it definitionally cannot create a truly meaningful sentence.
This is a misunderstanding of what “probabilistic word choice” can actually accomplish and of the non-probabilistic components that are incorporated into these systems. People also make mistakes and don’t actually “know” the meanings of words.
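For concreteness, here’s a toy sketch of what “probabilistic word choice” means mechanically. The probabilities and the context string are invented; the key difference from this lookup table is that a real LLM computes its distribution over the whole vocabulary from the entire context with a neural network, not from raw word frequency:

```python
import random

# Invented toy distribution, purely illustrative. A real LLM's probabilities
# are conditioned on the full context, which is where the "understanding"
# debate actually lives.
next_token_probs = {
    "the cat sat on the": {"mat": 0.6, "floor": 0.3, "moon": 0.1},
}

def sample_next(context: str, rng: random.Random) -> str:
    """Sample one next token according to the toy distribution."""
    probs = next_token_probs[context]
    tokens = list(probs)
    return rng.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]

print(sample_next("the cat sat on the", random.Random(0)))
```

The sampling step itself is trivial; nothing about it forbids the distribution from encoding meaning-sensitive structure.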
The belief system that humans have special cognizance unlearnable by observation is just mysticism.
Bonus trivia, sometimes you may see a downvote on a Beehaw post. As far as I understand the system, that’s because someone on your server downvoted the thing. The system then sends it off to Beehaw to be recorded on the “real” post and Beehaw just doesn’t apply it.
This is a post on the Beehaw server. They don’t propagate downvotes.
Except those things didn’t really solve any problems. Well, dotcom did, but that actually changed our society.
AI isn’t vaporware. A lot of it is premature (so maybe overblown right now) or just lies, but ChatGPT is 18 months old and look where it is. The core goal of AI is replacing human effort, which IS a problem wealthy people would very much like to solve and has a real monetary benefit whenever they can. It’s not going to just go away.
They don’t seem to actually identify the cookies as tracking (as opposed to just identifying that the account can bypass further challenges), just assuming that any third party cookie has a monetary tracking value.
It also appears to be unreviewed and unpublished a few years later. Just being in paper format and up on arXiv doesn’t mean that the contents are reliable science.