• 0 Posts
  • 13 Comments
Joined 2 years ago
Cake day: July 1st, 2023


  • Hmm. I had pretty much the same experience, and wondered about having multiple conversation agents for specific tasks, but didn’t get around to trying that out. Currently, I am using it without an LLM, albeit with GPU-accelerated Whisper (and other custom CV tasks for camera feeds). This gives me fairly accurate STT, and I have defined a plethora of sentence variations for hassil (the intent matcher), so I often get the correct match. There are options for optional words and or-alternatives, for instance:

    sentences:
      - "(start|begin|fire) [the] [one] vacuum clean(er|ing) [robot] [session]"
    

    So this would match “start vacuum”, but also “fire one vacuum cleaning session”.
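
    Here is roughly how such a sentence can be wired into a full custom intent in Home Assistant; the file path, intent name, and vacuum entity id below are placeholders for illustration, not my actual setup:

      # config/custom_sentences/en/vacuum.yaml (hypothetical path/filename)
      language: "en"
      intents:
        StartVacuum:                # hypothetical intent name
          data:
            - sentences:
                - "(start|begin|fire) [the] [one] vacuum clean(er|ing) [robot] [session]"

      # configuration.yaml: run an action when the intent matches
      intent_script:
        StartVacuum:
          action:
            - service: vacuum.start
              target:
                entity_id: vacuum.living_room   # hypothetical entity id
          speech:
            text: "Starting the vacuum."

    The sentence file only tells hassil what to match; the intent_script block is what actually starts the vacuum and speaks a confirmation.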

    Of course, this is substantial effort initially, but once configured and debugged (punctuation is poison!) it works pretty well. As an aside, using the ATOM Echo satellites gave me a lot of errors, simply because the microphones are bad. With a better-quality satellite device (the Voice Preview Edition) the success rate is much higher, almost flawless.

    That all said, if you find a better intent matcher or another solution, please do report back, as I am very interested in an easier approach that does not require me to think of all possible sentences ahead of time.


  • With this new driver, system LEDs, audio LEDs, extra keyboard keys, keyboard backlight, and other features are working.

    And this comes over a year after the hardware was released. Our whole office skipped this generation for new hardware because Linux support wasn’t ready. Additionally, reports were that performance on Linux was (and is) abysmal given the hardware’s capabilities. Generally, I feel it was a mistake to prioritize all the new ARM and AI CPUs for Windows, with lagging and shit Linux support until now, as the most enthusiastic would-be customers are AI devs/researchers, and they often prefer some Linux variant because it “just works” with most tooling. The ‘normie’ office Windows user does not give two shits about locally accelerated inference. Why chipmakers fumbled the ball so badly with the new AI accelerator / NPU CPUs is beyond me.