• deathbird@mander.xyz
    5 days ago

    Kinda sorta.

    AI, or rather LLMs, can barf out a lot of passable text quickly. That can be useful as a starting point for something useful, if a human mind is willing and able to review and repair it. It’s like having an idiot intern.

    But the number of people who use LLMs in a way that reflects an understanding of their limitations is vanishingly small. Most people just don’t assume that something that looks valid needs to be fully and critically reviewed. That’s why we’ve had multiple cases of lawyers having ChatGPT write their legal briefs based on hallucinated legal precedent.

    • 0xD@infosec.pub
      5 days ago

      That’s not a problem of the technology though, that’s human idiocy.

      • deathbird@mander.xyz
        4 days ago

        On the one hand, absolutely, human idiocy.

        On the other hand, as a society it behooves us to think about how to stop idiots from hurting themselves and others. With IT, and in the context of corporate marketing hype, I am deeply concerned about politicians using AI, or allowing AI to be used, to do things poorly and thus hurt people, simply because they have too much faith in the tool or its salesmen. Like, for example, rewriting the Social Security database.