• Daniel Quinn@lemmy.ca
    9 months ago

    Nifty! I wrote something similar a couple of years ago using Vosk for the STT side. My project went a little further though, automating navigation of the programs it started. So you could say: “play the witcher” and it’d check whether The Witcher was available in a local Kodi instance, and if not, figure out which streaming service was carrying it and launch the page for it. It’d also let you run arbitrary commands and user plugins too!
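    The Kodi-then-streaming fallback described above can be sketched roughly like this. Everything here is illustrative: the helper names and the hard-coded data are stand-ins, not the author’s actual code, and a real version would query Kodi’s JSON-RPC API and a streaming-availability service instead.

    ```python
    # Hypothetical sketch of "play <title>" routing: local Kodi library
    # first, then a streaming-service fallback. Stub data throughout.

    def kodi_has(title: str) -> bool:
        # A real version would call Kodi's JSON-RPC API
        # (e.g. VideoLibrary.GetTVShows) instead of this stub set.
        local_library = {"the witcher"}
        return title.lower() in local_library

    def streaming_url_for(title: str) -> str | None:
        # Stand-in for a streaming-availability lookup.
        catalogue = {"severance": "https://example.com/watch/severance"}
        return catalogue.get(title.lower())

    def handle_utterance(utterance: str) -> tuple[str, str]:
        """Route a recognized phrase like 'play the witcher' to a target."""
        if not utterance.lower().startswith("play "):
            return ("unknown", utterance)
        title = utterance[5:].strip()
        if kodi_has(title):
            return ("kodi", title)   # would invoke Kodi's Player.Open
        url = streaming_url_for(title)
        if url:
            return ("browser", url)  # would open the streaming page
        return ("not-found", title)
    ```

    The nice property of routing on a tuple like this is that arbitrary commands and user plugins slot in as extra branches before the final fallback.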

    I ran into two big problems though that more or less killed my enthusiasm for developing it: (1) some of the functionality relied on pyautogui, but with the Linux desktop’s transition to Wayland, the features I depended on were disappearing. (2) I wanted to package it for Flatpak, and it turns out that Flatpak doesn’t play well with Python. I was also trying to support both arm64 and amd64, which it turns out is really hard too (omg the pain of doing this for the Pi).

    Anyway, maybe the project will serve as some inspiration.

    • caseyweederman@lemmy.ca
      9 months ago

      Very necessary. I very much want to be rid of “Okay Google”, but open-source alternatives like Mycroft keep getting shut down.

      • currycourier@lemmy.world
        9 months ago

        Shoot, Mycroft got shut down? I remember looking into it a while ago and filing it away as a future project, RIP. I know Home Assistant has one now too.

        • Fungah@lemmy.world
          9 months ago

          I’ve been fiddling with Home Assistant’s voice thing a bit, and like everything Home Assistant, the process has been frustrating and bordering on Kafkaesque. I bought the ATOM Echo devices they recommend, which don’t seem to make the best Google Home replacements, and I’m struggling to figure out how to get Home Assistant to pipe the sound out through another device, which would actually make them useful.

          Admittedly this might be simpler if all I wanted to do was say things and have stuff happen with a default voice model, but I fine-tuned my own TTS voice model(s) and want to be able to use them for controlling Home Assistant as well as for general inference when I feel like it.

          I’ve spent some time, not a lot but some, trying to find out which devices can be media players, under what conditions, and how (or whether) you can use ESPHome to pipe audio through a media player / use USB mics as microphones for the voice stuff.
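          For what it’s worth, one pattern for getting the sound out of the ATOM Echo and onto a bigger speaker is to catch ESPHome’s `voice_assistant` `on_tts_end` trigger, which hands you the URL of the generated TTS clip, and pass it to another Home Assistant media player via a service call. This is only a sketch: the entity IDs are placeholders, and option names may differ between ESPHome versions, so check the current `voice_assistant` docs before copying it.

          ```yaml
          # Sketch (untested): redirect Assist's TTS audio from an ATOM Echo
          # to another media player. Entity IDs below are placeholders.
          voice_assistant:
            microphone: echo_microphone   # placeholder id of the Echo's mic component
            on_tts_end:
              - homeassistant.service:
                  service: media_player.play_media
                  data:
                    entity_id: media_player.living_room_speaker  # your real player
                    media_content_id: !lambda 'return x;'        # URL of the TTS clip
                    media_content_type: music
          ```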

          I’m kind of at a loss as far as understanding what the actual intention was for Home Assistant’s Year of the Voice, so I’ve been thinking that offloading some of my goals to a container or VM on the server running Home Assistant on Proxmox may be a better path forward. I came across this post just in time, it seems.