Name the artwork behind AI art with this algorithmically clever tool!

Update: The creators of the tool posted a Twitter thread explaining in more detail how the tool works and reiterating that it works as intended only on AI-generated images. They also noted that "obviously there is a lot of room for improvement here" and that not everything in the public version of the tool is working yet.

Stable Attribution, which for now appears to be a rather unstable tool for the job, is still in beta; determining where an AI learned to create what, or which of the billions of images it was trained on went into the creation of another image, is not an easy process. Perhaps this tool will morph over time into something useful for attribution. Or perhaps the company behind this AI image generator will find another way to placate the artists who demand some recognition for the work that makes it all happen. But that may just be wishful thinking on my part on behalf of the artists.

Original article: The popularity of AI art tools has skyrocketed in the past year, with millions of people using incredibly impressive tools like DALL-E 2 and Stable Diffusion to generate images seemingly out of nowhere from text prompts. Stable Attribution wants everyone to know more about the human-made art from which AI art ultimately derives.

Stable Attribution is an algorithm that can sniff out the likely source images behind a piece of AI art. It is a kind of reverse-engineering algorithm that finds the human-made artwork that went into training the AI, something that could be very important to artists in their ongoing feud with AI image generation tools. Stable Attribution may offer artists a way to regain some control over the use of their images.

An example of how this might work: I enter the prompt "giant PC roaming the woods looking for fresh PC parts to consume" into the AI image generation tool Stable Diffusion, and the AI spits out the following image.

I then download this image and drop it into Stable Attribution, which spits out a collection of images that it suggests were used in Stable Diffusion's training and referenced in the creation of my prompt image. In this case: product banners, images of Spanish vocational schools, product lifestyle shots, and much more.

If any of these images are yours or known to you, you may submit a link for proper credit.

Stable Attribution works by decoding an AI-generated image into the most similar examples it can find in available datasets. Some models, like Stable Diffusion, are trained on publicly available datasets such as LAION, and Stable Attribution can index a copy of the dataset for cross-referencing. The same is not possible for DALL-E 2, however, because OpenAI's training data is not publicly available.
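Stable Attribution has not published its matching pipeline, but a plausible way to build something similar is to embed the generated image with a model like CLIP and run a nearest-neighbour search over precomputed embeddings of a public dataset. The sketch below assumes exactly that approach; the CLIP checkpoint, the precomputed embedding files, and all file names are illustrative assumptions, not the tool's actual implementation.

```python
# A minimal sketch of the kind of nearest-neighbour lookup a tool like
# Stable Attribution *might* perform. The embedding files and names below
# are hypothetical stand-ins for an indexed copy of a public dataset.
import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# CLIP maps images into an embedding space where visually and semantically
# similar pictures end up close together.
clip = SentenceTransformer("clip-ViT-B-32")

# Hypothetical: precomputed embeddings for an indexed dataset copy
# (e.g. a LAION subset), plus the source URL for each row.
# Shapes: (N, 512) and (N,).
dataset_embeddings = np.load("laion_subset_embeddings.npy")
dataset_urls = np.load("laion_subset_urls.npy", allow_pickle=True)

def likely_sources(generated_image_path: str, top_k: int = 10) -> list[tuple[str, float]]:
    """Return the top_k dataset images most similar to the generated image."""
    query = clip.encode(Image.open(generated_image_path))
    # Cosine similarity between the query and every indexed image.
    sims = dataset_embeddings @ query / (
        np.linalg.norm(dataset_embeddings, axis=1) * np.linalg.norm(query)
    )
    best = np.argsort(-sims)[:top_k]
    return [(dataset_urls[i], float(sims[i])) for i in best]

for url, score in likely_sources("giant_pc_in_the_woods.png"):
    print(f"{score:.3f}  {url}")
```

Note that a search like this only finds look-alikes in an indexed dataset; as the article goes on to explain, similarity is evidence, not proof, that a given image actually contributed to the output.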

Stable Attribution is also not yet a perfect algorithm for attribution.

"Version 1 of the Stable Attribution algorithm is not perfect, partly because the training process is noisy and the training data contains many errors and redundancies. Stable Attribution says.

How similar the output image is to an image used in training is not a perfect way to discern what was actually referenced in creating that exact image. AI algorithms are becoming increasingly complex, and the more complex they are, the harder it becomes to trace any single output back to specific training images.

"But this is not an impossible problem," Stable Attribution continues. [23] [24] As AI expert Alex J. Champander points out, this can be a useful tool for AI artists as well as for artists who feel their copyrights have been infringed. If you are selling art generated by AI, you Is there a case for a copyright claim against you; is the responsibility for using the AI tool on you, or rather on the company that created the AI tool? Is training on a dataset using copyrighted material unfair and in violation of copyright?" These are issues that have no clear precedent yet, but are sure to be debated for years to come.

It comes down primarily to training. AI image tools use large amounts of data for training, the process of teaching an AI to do something, which in the case of Stable Diffusion and DALL-E 2 is to produce an image that matches a given description. There is nothing inherently wrong with training an AI to do something, unless that something is morally ambiguous or outrageously evil. In this case, these AIs are trained to generate simple, mostly harmless images. What's wrong with that?

Nothing, necessarily. The training images themselves are not stored; rather, the URLs and descriptions of the images are stored in a database and later fed into the algorithm. These datasets can contain millions of URL and description tag pairs, and are most often sold or provided by third-party dataset collection companies.
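As a concrete illustration of that structure, here is roughly what such a dataset looks like in practice. The column names follow the public LAION metadata convention (URL and TEXT); the rows themselves are made-up examples.

```python
# Illustrative only: a LAION-style dataset stores URL/caption pairs,
# not the images themselves. These rows are invented for the example.
import pandas as pd

rows = pd.DataFrame(
    [
        {"URL": "https://example.com/forum-upload-2008.jpg",
         "TEXT": "digital painting of a forest at dusk"},
        {"URL": "https://example.com/contest-entry.png",
         "TEXT": "portrait of a woman, oil on canvas"},
    ]
)

# A training pipeline would download each URL and pair the image with its
# caption; the artist behind the URL is typically never consulted.
for row in rows.itertuples():
    print(f"{row.TEXT!r} <- {row.URL}")
```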

The problem that artists have noted with these AI art tools is that these datasets are often filled with copyrighted images. The datasets may contain art that you uploaded to a forum in 2008 or art that you once entered into an online competition. They may also contain images that are offensive and must be removed. It is even possible that a dataset contains a picture of you; if you don't want the AI to be trained on your likeness, you must ask the dataset company to remove it.

It's pretty tricky. Laws have not kept up with AI tools, but multiple lawsuits have already been filed against StabilityAI, OpenAI, and many AI businesses like them for copyright infringement. Artists feel that their art styles are being copied by AI and used to prop up multi-billion dollar companies. Whatever the copyright status of these images, the artists never consented to their use and rarely receive a penny for their contributions.

However, there have been some attempts to make AI art tools more equitable for artists. Shutterstock is currently partnering with OpenAI to offer AI art, and pays royalties to artists whose work is used in its creation. Stable Diffusion also plans to allow artists to opt out of future versions of the tool, though it never offered such an option in the first place; Stable Diffusion has no claim to these images in any legal sense, even if their use in training is something of a gray area.

These court cases will not be resolved for some time, and even when they are, a single ruling will not end the matter. The law has a lot of catching up to do regarding AI, and image generation is only a small part of the discussion.
