There tend to be three AI camps. 1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it. And 3) “Write an A-Level paper on the themes in Shakespeare’s Romeo and Juliet.”
I propose a fourth: AI is now about as good as it's going to get, and that's neither as good nor as bad as its fans and haters think, and you're still not going to get an A on that paper.
You see, now that people have been using AI for everything and anything, they’re beginning to realize that its results, while fast and sometimes useful, tend to be mediocre.
My take is that LLMs can speed up some work, like paraphrasing, but most of the time saved gets diverted to verifying the output.
LLMs are super cool. You provide text A and text B, embed each one, add a little cosine similarity or something, and you've got a distance between the two texts.
Right, and they also generate text. I guess embeddings aren't really new, though.
Well, the embeddings are nice anyway. They make it easy to do semantic search over text (or even images and other kinds of inputs). Not sure what that has to do with the general public, but it's great if you're writing a search tool.
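As a sketch of what that looks like in practice: assuming you already have embedding vectors from some model (the toy numbers below are made up, not real model output), the cosine-similarity step is just a few lines.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # near 1.0 means similar direction (similar meaning),
    # near 0.0 means roughly unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical stand-ins for embeddings an actual model would produce:
text_a = [0.9, 0.1, 0.3]   # e.g. "the cat sat on the mat"
text_b = [0.8, 0.2, 0.4]   # e.g. "a cat is sitting on a rug"
text_c = [-0.7, 0.9, -0.2] # e.g. "quarterly earnings rose 4%"

print(cosine_similarity(text_a, text_b))  # high: similar texts
print(cosine_similarity(text_a, text_c))  # lower: unrelated texts
```

A semantic search tool is essentially this, repeated: embed the query, embed every document, and rank documents by their cosine similarity to the query.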