• DarkNightoftheSoul@mander.xyz
    42 points · 8 months ago

    Is it because humans treat black- and white-sounding names differently?

    Edit: It’s because humans treat black- and white-sounding names differently.

  • Sonori@beehaw.org
    18 points · 8 months ago

    What, a system that responds with the next most likely word to be used on the internet treats people of color differently? No, I simply can’t believe it to be true. The internet is perfectly colorblind and equitable after all. /s

  • Political Custard@beehaw.org
    15 points · 8 months ago

    Shit in… shit out, or to put it another way: racism in… racism out.

    I propose we create another LLM… a Left Language Model.

  • luciole (he/him)@beehaw.org
    11 points · 8 months ago

    Can you start by providing a little background and context for the study? Many people might expect that LLMs would treat a person’s name as a neutral data point, but that isn’t the case at all, according to your research?

    Ideally, when someone submits a query to a language model, what they would want to see, even if they add a person’s name to the query, is a response that is not sensitive to the name. But at the end of the day, these models just create the most likely next token, or the most likely next word, based on how they were trained.

    LLMs are being sold by tech gurus as lesser general AIs, and this post speaks at least as much about LLMs’ shortcomings as it does about our lack of understanding of what is actually being sold to us. A quick sketch of what that “most likely next token” behaviour looks like is below.
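
    For anyone curious, here is a minimal sketch of what the quoted explanation means in practice. This is my own illustration, not the study’s methodology: it uses the small public “gpt2” model via the Hugging Face transformers library, and the names and prompt are just example values chosen for demonstration.

    ```python
    # Minimal sketch (not the study's methodology): compare a causal LM's
    # next-token distribution when only the name in the prompt changes.
    # Assumes the "transformers" library, PyTorch, and the small "gpt2" model.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def top_next_tokens(prompt, k=5):
        """Return the k most likely next tokens (and probabilities) after the prompt."""
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]  # logits for the very next token
        probs = torch.softmax(logits, dim=-1)
        top = torch.topk(probs, k)
        return [(tokenizer.decode(idx.item()), round(p.item(), 4))
                for idx, p in zip(top.indices, top.values)]

    # Identical prompts except for the (hypothetical example) name; any divergence
    # in the two distributions comes purely from how the model was trained.
    for name in ["Emily", "Lakisha"]:
        prompt = f"{name} applied for the job and the recruiter thought she was"
        print(name, top_next_tokens(prompt))
    ```

    If the model were genuinely “neutral” about names, the two printed distributions would match; in practice they usually don’t, which is the point the researchers are making.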