Little bit of everything!

Avid Swiftie (come join us at [email protected])

Gaming (Mass Effect, Witcher, and too much Satisfactory)

Sci-fi

I live for 90s TV sitcoms

  • 23 Posts
  • 1.1K Comments
Joined 2 years ago
Cake day: June 2nd, 2023

  • They said they were open to it, but it was zero priority for them to do themselves; essentially, “submit a PR if you want it”. A shame really, since their interface is great and the setup is so easy. If they implemented either XMPP or Matrix I would switch immediately. All of my friends want a Discord clone that “just” works, but no one wants to go to this server for this group and then log in to that server for that group. They want a single-pane interface like what Discord offers.

    Shortsighted not to implement that IMO.

  • Leopold Aschenbrenner (born 2001 or 2002[1]) is a German artificial intelligence (AI) researcher and investor. He was part of OpenAI’s “Superalignment” team before he was fired in April 2024 over an alleged information leak, which Aschenbrenner disputes. He has published a popular essay called “Situational Awareness” about the emergence of artificial general intelligence and related security risks.[2] He is the founder and chief investment officer (CIO) of Situational Awareness LP, a hedge fund investing in companies involved in the development of AI technology.

    Wikipedia

    So, I’m calling bullshit. I’ve read the papers and kept up on everything. I run AI models myself, and I’ve built my own agents and agentic workflows. It keeps coming back to a few big things that, unless they’ve suddenly had another massive breakthrough, I don’t see changing.

    • LLMs have already been trained on the vast majority of the data that’s out there, and they still hallucinate. The papers say it takes exponentially more data to improve them on a linear trajectory, so to double performance we’d need something like the current amount of data squared (see the back-of-the-envelope sketch after this list).
    • LLMs and agentic flows are very cool, and very useful for me, but they’re incredibly unreliable. And that’s just how models work: they’re black boxes. You can say “that didn’t work” and it’ll weight that as a bad option next time, but the error rate is never going to be zero. Businesses are learning (see Taco Bell and several others) that AI is not code. It is not true or false. It’s probably true or probably false. That doesn’t work when you’re putting in an order or deciding how much money to spend (second sketch below).
    • We’ve certainly plateaued with AI for the time being. There will be more things that come out, but until the next major leap we’re pretty much here. GPT-5 proved that: it was mediocre, it was… the next version. They promised something groundbreaking, but there just isn’t any more ground to break with current AI. Like I said, agents were kind of the next thing, and we’re already using them.
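
    Quick back-of-the-envelope on the data point above. This is my own toy assumption, not a number from any paper: if “performance” grows roughly like the log of the training-data size, then doubling performance literally means squaring the data, because log(d²) = 2·log(d).

```python
import math

# Toy illustration of "exponentially more data for linear gains".
# Assumption (mine, purely illustrative): performance ~ log10(training tokens).
current_data = 10e12                        # pretend ~10 trillion tokens
performance = math.log10(current_data)      # 13.0 "performance units"

data_to_double = current_data ** 2          # squaring the data...
print(math.log10(data_to_double))           # ...gives 26.0, i.e. exactly 2x
```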
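
    And on the “probably true or probably false” point, here’s what I mean, with a mocked-up stand-in for a model (not a real API call, just sampling to make the point): plain code gives the same answer every time, while a sampled model only gives you the right answer most of the time.

```python
import random

MENU = {"taco": 2.50, "burrito": 6.00}

def total(order):
    # Plain code: deterministic, the same order always totals the same amount.
    return sum(MENU[item] for item in order)

def llm_total(order, p_correct=0.95):
    # Stand-in for a sampled model answer (illustrative only, not a real model call):
    # the output is drawn from a distribution, so it's only *probably* right.
    exact = total(order)
    return exact if random.random() < p_correct else exact + random.choice([-1.0, 2.5])

print(total(["taco", "burrito"]))                          # 8.5, every single time
print([llm_total(["taco", "burrito"]) for _ in range(5)])  # usually 8.5... usually
```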