Protecting Artists from AI Technologies

Hello,
We are the Concept Art Association, an advocacy organization for artists working in entertainment. Our board member, Karla Ortiz, has been one of the leaders in our industry fighting back against the unethical practices happening in the AI text-to-image space. As an organization and as individuals we deeply care about this issue, not just for those actively working as visual artists, but for future generations of artists and for the preservation of our creative industries. But before we dive into our plan…

What are text-to-image AI/ML models?
A text-to-image model takes input from a user in the form of a natural language prompt and produces an image matching that prompt. To gain that capability, the model must be trained on a huge collection of images, media, and text descriptions scraped from the web and assembled into a “dataset,” from which it extracts and encodes an intricate statistical survey of the dataset's items. When given a prompt, the model generates an image by assembling visual data that best simulates the statistical correlations between the text and the images in its dataset, in order to produce "acceptable" results.

Some of this data is the copyrighted work of artists and the private data of the public. Because these models produce derivative works based on probability and statistics, they are prone to reproducing the biases, stereotypes, and copyrighted works present within their datasets. Essentially, the technology could be described as “an advanced photo mixer” generating potential derivations based on statistical probability. **We are committed to an accurate description of the technology and the issues surrounding it, so we decided to update our language for a more detailed look at the issues. Read the original verbiage below.**
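To make the description above concrete, here is a minimal, illustrative sketch of text-to-image generation using the open-source diffusers library. The checkpoint name and settings are assumptions for demonstration; commercial systems differ in scale and implementation, but the basic flow is the same: a text prompt conditions a model whose weights encode statistics learned from scraped image-text data.

```python
# Minimal sketch of text-to-image generation with an open-source
# diffusion pipeline (Hugging Face diffusers). The checkpoint name is
# illustrative only; commercial systems differ in scale and detail.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained pipeline whose weights encode statistical
# correlations learned from billions of scraped image-text pairs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The prompt conditions the model; the output image is assembled from
# the statistical patterns extracted from the training dataset.
prompt = "a castle on a cliff at sunset, concept art"
image = pipe(prompt).images[0]
image.save("generated.png")
```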


What are these unethical practices?
Images and text descriptions across the internet are gathered through a practice called data mining, or data scraping. This technique allows AI/ML companies to build the massive datasets necessary to train these AI/ML models.
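For illustration only, here is a minimal sketch of what scraping image-text pairs from a single web page can look like. The URL and function name are hypothetical; this is a generic example of the technique, not the actual pipeline used by any company or dataset discussed here.

```python
# Illustrative sketch of image-text scraping: fetch a page and pair
# each image URL with its alt-text caption. Generic technique only;
# not the actual pipeline of any company or dataset named above.
import requests
from bs4 import BeautifulSoup

def scrape_image_text_pairs(url: str) -> list[tuple[str, str]]:
    """Return (image_url, caption) pairs found on a single page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    pairs = []
    for img in soup.find_all("img"):
        src = img.get("src")
        alt = img.get("alt", "")
        if src:  # keep every image, captioned or not
            pairs.append((src, alt))
    return pairs

# Hypothetical usage:
# pairs = scrape_image_text_pairs("https://example.com/gallery")
# Web-scale datasets are built by running this kind of collection
# over billions of pages.
```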

Stability AI funded the creation of the largest and most widely used dataset, LAION-5B. The LAION-5B dataset, originally created on the pretext of “research,” contains 5.8 billion text-image pairs, including copyrighted data and private data gathered without the knowledge or permission of artists, individuals, and businesses.

Midjourney, Stability AI, Prisma Labs (Lensa AI), and other AI/ML companies are using these research datasets, which contain private and copyrighted data, for profit. They did so without any individual’s knowledge or consent, and certainly without compensation.

What do we plan to do about it?
Firstly, there are lots of things we can all do about it. Just because it's out in the world and happening doesn’t mean we can’t come together as a community and push back. We urgently want to take this conversation to D.C. and educate government officials and policymakers on the issues facing the creative industries if this technology is left unchecked. The speed at which this is moving means we also need to move quickly. Working alongside a lobbyist, some potential solutions/asks would be:

  • Updating IP and data privacy laws to address this new technology
  • Updating laws to define careful and specific use cases for AI/ML technology in the entertainment industries, e.g. ensuring that no more than a small percentage of the creative workforce consists of AI/ML models, or similar protections. Also updating laws to ensure artists’ intellectual property is respected and protected alongside these new technologies.
  • Requiring AI companies to adhere to a strict code of ethics, as advocated by leading AI Ethics organizations.
  • Requiring AI companies to work alongside creative labor unions, industry coalitions, and industry groups to ensure fair and ethical use of their tools.
  • Asking governments to hold Stability AI accountable for knowingly releasing irresponsible open-source models to the public with no protections.

Getting to D.C.
We have had several conversations with lobbyists in D.C. who specialize in creators’ rights, and it's a long and expensive road. However, with the help of the community, we know we can do it and that it is worth fighting for. Below is a projected budget of what we will need for the first year of this fight:

Staff:
$187,500 for one year of a full-time lobbyist in D.C.
$40,000 for a full-time employee to coordinate and handle asset creation for all policy and PR needs of the movement
$10,700 for federal and state payroll taxes for the employee

Software:
$144 email
$216 website

D.C. Educational Event for Legislators
$3,000 airfare
$1,000 accommodations
$200 costume models
$100 for fliers
$200 social media advertising
$3,000 food, drinks & rentals

Additional meetings in D.C.
$10,000 - flights, accommodation, and expenses.

Additional Needs
$3,000 Copyright Alliance membership
$3,000 Copyright Alliance scholarship for other artist advocacy orgs
$7,940 GoFundMe service fees

$270,000 total ask (including GoFundMe service fees)
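For transparency, here is a small sketch verifying that the line items above add up to the total ask (the labels are ours; the figures are taken from the budget above):

```python
# Quick check that the budget line items listed above sum to the total ask.
line_items = {
    "lobbyist": 187_500,
    "full-time employee": 40_000,
    "payroll taxes": 10_700,
    "email": 144,
    "website": 216,
    "event airfare": 3_000,
    "event accommodations": 1_000,
    "costume models": 200,
    "fliers": 100,
    "social media advertising": 200,
    "food, drinks & rentals": 3_000,
    "additional D.C. meetings": 10_000,
    "Copyright Alliance membership": 3_000,
    "Copyright Alliance scholarship": 3_000,
    "GoFundMe service fees": 7_940,
}
print(sum(line_items.values()))  # -> 270000
```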

So what next? The future of AI text-to-image models.

We are not anti-tech, and we know this technology is here to stay one way or another, but there are more ethical ways for these models to co-exist with visual artists. This is what we will be proposing future models look like:
  • Ensure that all AI/ML models specializing in visual works, audio works, film works, likenesses, etc. use public domain content or legally purchased stock photo sets. This could potentially mean current companies shift to public domain content, even destroying their current models.
  • Urgently remove all artists’ work from datasets and latent spaces via algorithmic disgorgement, and immediately shift plans to public domain models so that opt-in becomes the standard.
  • Opt-in programs that offer artists payment (upfront sums and royalties) every time an artist’s work is utilized for a generation, including training data, deep learning, the final image, the final product, etc. AI companies should also offer true removal of an artist’s data from their AI/ML models in case licensing contracts are breached.
  • AI companies should pay all affected artists a sum per generation, to compensate and back-pay artists whose works and names were utilized without permission for as long as the company has been for-profit.


Coming together as a community

Outside of this GoFundMe, we want to nurture a grassroots movement and empower individuals and other artist orgs to speak up and mobilize. If you are not currently in a position to donate, here is a list of things you can do for free:

  • Reach out to your local, state and federal politicians to push for updated IP and data privacy laws to address this new technology and to protect creative workers.
  • Call out irresponsible companies' unethical use of creators’ intellectual property and personal data when you see it.
  • Avoid using AI/ML models that use unethical datasets for now. You are literally feeding the beast!
  • Invest in or volunteer with advocacy organizations working toward change.
  • Send feedback or complaints to government agencies (like the U.S. FTC or the EU Data Protection Authorities (DPAs)) and urge them to hold irresponsible companies accountable.
  • Stay educated and informed, and share your knowledge with your peers.
  • Share our GoFundMe with everyone!
  • Stay motivated, and continue to create your art and share it with the world!

Learn more about what's been happening in the movement:
Concept Art Association Townhall 2: With the U.S. Copyright Office https://www.youtube.com/watch?v=7u1CeiSHqwY&ab_channel=ConceptArtAssociation

Steven Zapata’s Video Essay:

Further reading:

1. Concept Art Association in the Guardian
2. NBC News:
3. ARTnews on Lensa AI’s unethical practices:
4. Latin Times, AI as an authenticity threat to art
5. New York Times:
6. Washington Post:
7. Upworthy:
8. CNN:
9. USA Today
10. Slate
11. Buzzfeed
12. Nintendo magazine interview:
13. AI users intentionally copy an artist’s work:
14. Concerns about deepfakes from AI
15. Montreal AI Ethics Institute article against Unstable Diffusion.
16. Forbes:
17. NPR AirTalk with Larry Mantle.
18. Medical records found in the LAION database
19. Vice, databases contain nonconsensual porn:
20. An AI user tries to copyright AI-generated work. The user was eventually denied by the U.S. Copyright Office, but is currently appealing the decision.
21. LAION-5B database search engine (find out if your work has been trained on!)
22. Stable Diffusion-generated imagery and prompt search engine.
23. More ways to find out if your work has been used for training:
24. AI cannot forget:

We as artists have a reputation for just letting things happen. Together we can break that myth and advocate for a future that doesn't exclude us. Join us in this fight and help shape the future of the arts!

**Original verbiage for “What are text-to-image AI/ML models?”:** AI/ML text-to-image models work by entering a prompt or several prompts, the model then responds by searching its huge collection of images, media, and text descriptions, known as “data sets”. It then finds the connections between the visual data, text data, and “prompts” given by the user, and proceeds to generate an image based on the prompt that was given. Essentially, it could be described as “an advanced photo mixer” generating potential derivations based on statistical probability.