• frog 🐸@beehaw.org
    5 months ago

    Sure, you should be free to make one. But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!), there are valid questions about whether that will harm them rather than help - and grieving people do not always make the most rational decisions. They can very easily be convinced that interacting with AI-you would be good for them, when it actually prolongs their grief and makes them feel worse. Grieving people are vulnerable, and I don’t think AI companies should be free to prey on the vulnerable, which is a very, very realistic outcome of this technology. Because that is what companies do.

    So I think you need to ask yourself not whether you should have the right to make an AI version of yourself for those who survive your death… but whether you’re comfortable with the very likely outcome that an abusive company will use their memories of you to exploit their grief and prolong their suffering. Do you want to do that to people you care about?

    • FaceDeer@fedia.io
      5 months ago

      But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!)

      You can stop right there, you’re just imagining a scenario that suits your prejudices. Of all the applications for AI I can imagine that would be better served by a model entirely under my control, this would be at the top of the list.

      With that out of the way, the rest of your rhetorical questions are moot.