I don’t read my replies

  • 9 Posts
  • 167 Comments
Joined 2 years ago
Cake day: July 6th, 2023

  • “When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts,” the blog post notes. “If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.”

    See? Even the people who make AI don’t trust it with important decisions. And the “trained” humans don’t even see it if the AI doesn’t flag it first. This is just a microcosm of why AI is always the weakest link in any workflow.

    This is exactly the use case for an LLM, and even OpenAI can’t make it work.



  • yesman@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · English · 30 up, 2 down · 2 months ago

    found no evidence to date that Microsoft’s Azure and AI technologies have been used to target or harm people in the conflict in Gaza.

    -Microsoft, in May

    Dear Microsoft: if you had to go looking for evidence, that implies your software could totally be used to harm people; it just isn’t in this case. As far as you know.