• werefreeatlast@lemmy.world
    · 9 months ago

    If this works, why not have an AI automatically investigate judges and government officials? The AI could indicate, for example, whether a judge needs to recuse him- or herself… that came up several times this year. And for politicians, the AI would tell us if they are lying, or if they are allowing or taking part in corruption. For this purpose, they should wear a microphone and camera the entire time they are government officials. Don’t like it? Too bad, that’s the law. Right?

    • UnderpantsWeevil@lemmy.world
      · 9 months ago

      why not have an AI automatically investigate Judges and government officials

      Because the power is supposed to originate with said judges/officials. The AI tool is a means of justifying their decisions, not a means of exerting authority over them. If you give the AI power over the judges/officials, why would they want to participate in that system? If they were proper social climbers, they would instead aim to be CEOs of AI companies.

  • Dizzy Devil Ducky@lemm.ee
    · edited · 9 months ago

    I have a sneaking suspicion that if police in places like America start using AI to review bodycam footage, they’ll just “pay” someone to train the AI so that it always says the officer was in the right when killing innocent civilians, so the footage never gets flagged. That, or they’ll do something equally shady and suspicious.

    • UnderpantsWeevil@lemmy.world
      · 9 months ago

      These algorithms already have a comical bias towards the folks contracting their use.

      Case in point, the UK Home Office recently contracted with an AI firm to rapidly parse through large backlogs of digital information.

      The Guardian has uncovered evidence that some of the tools being used have the potential to produce discriminatory results, such as:

      • An algorithm used by the Department for Work and Pensions (DWP) that an MP believes mistakenly led to dozens of people having their benefits removed.

      • A facial recognition tool used by the Metropolitan Police that has been found to make more mistakes recognising black faces than white ones under certain settings.

      • An algorithm used by the Home Office to flag up sham marriages that has been disproportionately selecting people of certain nationalities.

      Monopoly was a lie. You’re never going to get that Bank Error In Your Favor. It doesn’t happen. The House (or the Home Office, in this case) always wins when these digital tools are employed, because the money for the tool is predicated on these agencies clipping benefits and extorting additional fines from the public at large.