Your comment made my day. Thanks.
Anyone spreading this misinformation and trying to gatekeep being an artist after the avant-garde movement doesn’t have an ounce of education in art history. Generative art, warts and all, is a vital new form of art that’s shaking things up, challenging preconceptions, and getting people angry - just like art should.
Entertainment.
Their policy could never stop anyone in the first place.
Using copyrighted works without permission isn’t automatically illegal, and it shouldn’t be. You should check out this article by Kit Walsh, a senior staff attorney at the EFF, and this open letter by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries.
Someone dumb enough could easily flatten someone backing up with that bug.
Or just not show people what you’re typing.
I can’t tell if this is a joke or not.
A computer like that is useful outside of work. I’d pay for it out of pocket if I had to.
The only thing I got from this is that bro loves ads more than anything in the world.
I accept regulations are real, but not all ways to help people require you to deal with regulations. I’m still waiting on that proof, by the way.
There are more ways to help people than making medical software. Rather than suggesting they could focus on simpler things, you jump straight to all projects running afoul of FDA regulations, which is pretty telling. All while still not having provided a single example of a project halted by FDA order.
Which projects have been shut down by FDA order?
Open source AI is huge, and I don’t think you need FDA approval to distribute a model. Where are you even getting that from?
What about open source projects?
This isn’t just about research into AI; what some people want will impact all research, criticism, analysis, and archiving. Please re-read the letter.
You should read this letter by Katherine Klosek, the director of information policy and federal relations at the Association of Research Libraries.
Why are scholars and librarians so invested in protecting the precedent that training AI LLMs on copyright-protected works is a transformative fair use? Rachael G. Samberg, Timothy Vollmer, and Samantha Teremi (of UC Berkeley Library) recently wrote that maintaining the continued treatment of training AI models as fair use is “essential to protecting research,” including non-generative, nonprofit educational research methodologies like text and data mining (TDM). If fair use rights were overridden and licenses restricted researchers to training AI on public domain works, scholars would be limited in the scope of inquiries that can be made using AI tools. Works in the public domain are not representative of the full scope of culture, and training AI on public domain works would omit studies of contemporary history, culture, and society from the scholarly record, as Authors Alliance and LCA described in a recent petition to the US Copyright Office. Hampering researchers’ ability to interrogate modern in-copyright materials through a licensing regime would mean that research is less relevant and useful to the concerns of the day.
It should be fully legal because it’s still a person doing it. Like Cory Doctorow said in this article:
Break down the steps of training a model and it quickly becomes apparent why it’s technically wrong to call this copyright infringement. First, the act of making transient copies of works – even billions of works – is unequivocally fair use. Unless you think search engines and the Internet Archive shouldn’t exist, you should support scraping at scale: https://pluralistic.net/2023/09/17/how-to-think-about-scraping/
Making quantitative observations about works is a longstanding, respected and important tool for criticism, analysis, archiving and new acts of creation. Measuring the steady contraction of the vocabulary in successive Agatha Christie novels turns out to offer a fascinating window into her dementia: https://www.theguardian.com/books/2009/apr/03/agatha-christie-alzheimers-research
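To make “quantitative observations about works” concrete, here’s a minimal sketch of the kind of text-and-data-mining measurement described above: counting the distinct vocabulary of each text in a collection. The `novels` directory and file layout are hypothetical; any plain-text corpus would do.

```python
# Minimal sketch: measure the distinct vocabulary of each text in a folder.
# The "novels" directory is a hypothetical stand-in for any plain-text corpus.
import re
from pathlib import Path

def vocabulary_size(text: str) -> int:
    """Return the number of distinct word types in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words))

for path in sorted(Path("novels").glob("*.txt")):
    text = path.read_text(encoding="utf-8")
    print(f"{path.name}: {vocabulary_size(text)} distinct words")
```

The output is a handful of numbers about each work, which is exactly the sort of observation researchers compared across Christie’s novels.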
The final step in training a model is publishing the conclusions of the quantitative analysis of the temporarily copied documents as software code. Code itself is a form of expressive speech – and that expressivity is key to the fight for privacy, because the fact that code is speech limits how governments can censor software: https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech/
That’s all these models are: someone’s analysis of the training data in relation to each other, not the data itself. I feel like this is where most people get tripped up. Understanding how these things work makes it all obvious.
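As a toy illustration (emphatically not how a real LLM is trained, just the smallest possible example of the same idea): training reads the data transiently and distills it into learned parameters, and those parameters are the artifact that gets published, not the data. Everything here is made up for the sketch.

```python
# Toy "training run": the data is read, statistics are distilled into
# parameters, and only the parameters survive as the published artifact.
import random

# Hypothetical training data: noisy samples of the relationship y ≈ 2x + 1.
data = [(x / 1000, 2 * (x / 1000) + 1 + random.uniform(-0.1, 0.1))
        for x in range(1000)]

# The entire "model" is two numbers, fitted by plain gradient descent.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(1000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# What gets shipped is this summary of the data's statistics, not the data.
print(f"learned parameters: w={w:.2f}, b={b:.2f}")  # roughly w=2, b=1
```

The thousand data points never leave the machine; the output is a couple of numbers describing how they relate. Real models have billions of parameters instead of two, but the relationship between the training data and the published weights is the same in kind.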
They don’t train on random social media posts. Everything is sorted and approved.
Here’s a video explaining how diffusion models work, and here’s an article by Kit Walsh, a senior staff attorney at the EFF.