California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush...
Not a legal expert, but this use case doesn’t seem very fair. Copying the content for a journalism class or for critique makes logical sense. You don’t need to know anything about the details of a given legal doctrine to understand this.
This is just a tech-enabled copying device.
I strongly disagree with your analogy. Anyone can set up a blog covering the exact same niche topic; you would not have to give any kickback to anyone or ask for permission. Am I missing something here?
We’re saying the same thing here. It’s just that your characterization of gen AI as a “tech-enabled copying device” isn’t accurate. You should read this, which breaks down how all this works.
I agree with the high-level socio-political commentary around sectoral bargaining and the discussion around the technical and social limitations of copyright law.
I still disagree with the notion that developing AIBlog 2000, an SEO-optimized slop generator, falls under fair use (in terms of principles, not necessarily legal doctrine).
Academics programmatically going through the blog contents to analyze how perceptions of the niche topic changed over time? That sounds reasonable.
Someone creating a commercial review aggregation service that scrapes the blog to find reviews and even includes review snippets (with links to the source) and metadata? Sure.
Spambot 3000, where the only goal is to leverage your work to shit out tech-enabled copies for monetization, does not seem like fair use or even beneficial for broader society.
Perhaps the first two examples are not possible without the third one and we have to tolerate Spambot 3000 on that basis, but that’s not the argument that was provided in this thread.
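For what it’s worth, the academic example above is a real and common workflow. Here’s a toy sketch of that kind of programmatic trend analysis; the corpus, terms, and function name are all invented for illustration, and a real study would use a scraped archive and something more robust than keyword counts:

```python
from collections import Counter

# Hypothetical corpus: (year, post_text) pairs standing in for a scraped blog archive.
posts = [
    (2019, "The new espresso machines are a gimmick and overpriced."),
    (2019, "Manual brewing beats any machine, full stop."),
    (2021, "Espresso machines have improved; the results are impressive."),
    (2021, "I was a skeptic, but these machines are genuinely impressive now."),
]

def term_frequency_by_year(corpus, terms):
    """Count how often each term appears per year - a crude proxy for shifting perception."""
    counts = {}
    for year, text in corpus:
        bucket = counts.setdefault(year, Counter())
        lowered = text.lower()
        for term in terms:
            bucket[term] += lowered.count(term)
    return counts

trend = term_frequency_by_year(posts, ["gimmick", "impressive"])
print(trend[2019]["gimmick"], trend[2021]["impressive"])  # 1 2
```

Note the analysis only ever reports aggregate counts; none of the blog’s text is republished, which is part of why this kind of use is widely considered fair.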
One of the factors in the fair use analysis is the effect on the market for the original work. If your spambot is really shitting up the place, you may very well run afoul of the doctrine.