

I saw a paper a while back that argued that AI systems are being used as "moral crumple zones". For example, an AI used for health insurance decisions allows the company to reject medically necessary procedures without employees incurring as much moral injury in the process (even low-level customer service reps are likely to find comfort in being able to defer to the system). It's an interesting concept that I've thought about a lot since I found it.
I think you make some compelling points overall, but fair use has always been more complex than this. Intent is taken into account when evaluating whether something is fair use, but so is the actual impact. "Fair use" is a designation applied to the overall situation, not to any single factor, so a stated purpose can't by itself make something fair use.