AKA “shit, looks like now we need to re-hire some of those engineers”
TBH those same colleagues were probably just copy-pasting code from the first Google result or Stack Overflow answer, so arguably AI did make them more productive at what they do
About 20 new cases of gender violence arrive every day, each requiring investigation. Providing police protection for every victim would be impossible given staff sizes and budgets.
I think the machine learning is not the key part; the quote above is. All of these ~20 people a day come to the police for protection. A very small minority of them might just be paranoid, but I’m sure most of them have already had some bad shit done to them by their partner and (in an ideal world) would all deserve some protection. The algorithm’s “success” is defined in the article as reducing the probability of repeat attacks, especially the ones that eventually lead to death.
The police are trying to focus on the ones deemed to be most at risk. Can a well-trained algorithm reduce that risk better than the judgement of a possibly overworked or inexperienced human handling the complaint? I’ll take that. But people are going to die anyway. Just, hopefully, fewer of them, and I don’t think it’s fair to blame the machine when they do.
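To make the triage idea concrete, here is a minimal sketch of what risk-based prioritization could look like. The features, scores, and capacity number are all hypothetical, not details from the article; the actual system presumably uses a much richer model:

```python
# Hypothetical sketch of risk-based triage for incoming complaints.
# The risk scores and protection capacity below are illustrative only.
from dataclasses import dataclass


@dataclass
class Complaint:
    case_id: str
    risk_score: float  # assumed output of a trained classifier, in [0, 1]


def triage(complaints: list[Complaint], capacity: int) -> list[Complaint]:
    """Assign the limited protection slots to the highest-risk cases."""
    ranked = sorted(complaints, key=lambda c: c.risk_score, reverse=True)
    return ranked[:capacity]


# ~20 new cases a day, but only a handful of protection slots.
cases = [Complaint("A", 0.91), Complaint("B", 0.12), Complaint("C", 0.67)]
for c in triage(cases, capacity=2):
    print(f"protect case {c.case_id} (risk {c.risk_score:.2f})")
```

The hard part, of course, is not the sorting but producing a risk score that beats an overworked human’s judgement.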
I have to admit it was a solid idea, though. Dick pics should be one of the best training sets you can find on the internet, and you can assume that the most prolific senders are the ones with the lowest chance of having an STI (or any real-life sexual activity).
Just wanted to point out that the Pinterest examples are conflating two distinct issues: low-quality results polluting our searches (in that they are visibly AI-generated) and images that are not “true” but are very convincing.
The first one (search result quality) should theoretically be Google’s main job, except that they’ve never been great at it with images. Better-quality results should move closer to the top as the algorithm and some manual editing do their job; crappy images (including bad AI ones) should sink towards the bottom.
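As a toy illustration of that kind of demotion, assuming some upstream detector already assigns each image a likelihood of being low-quality AI output (the field names and penalty weight here are entirely made up):

```python
# Toy re-ranking sketch: discount relevance by an assumed AI-likelihood
# score from an upstream detector. All names and weights are hypothetical.
def rerank(results: list[dict], penalty: float = 0.5) -> list[dict]:
    """Sort by relevance, discounted by the assumed ai_likelihood score."""
    def score(r: dict) -> float:
        return r["relevance"] * (1.0 - penalty * r["ai_likelihood"])
    return sorted(results, key=score, reverse=True)


results = [
    {"url": "real-photo.jpg", "relevance": 0.8, "ai_likelihood": 0.1},
    {"url": "ai-slop.png", "relevance": 0.9, "ai_likelihood": 0.95},
]
for r in rerank(results):
    print(r["url"])  # real-photo.jpg now outranks the slightly more "relevant" slop
```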
The latter issue (the “reality” of the result) is the one I find more concerning. As AI-generated results get better and harder to tell from reality, how would we know that the search results for anything aren’t a convincing spoof just coughed up by an AI? But I’m not sure this is a search-engine or even an internet-specific issue. The internet is clearly more efficient at spreading information quickly, but any video seen on TV or image quoted in a scientific article has to be viewed much more skeptically now as well.
I think they don’t matter when it comes to outrage, because outrage explodes in ways that are hard to predict. I mean, I can see the problem with the ad now that it has been pointed out to me. After reading about it repeatedly, I find it bad and ridiculous and what were they thinking? But at first look, as a test audience member, I would probably have rated it “meh, ok”.