AI image generator Civitai under fire for creating NSFW images of children



AI image generator Civitai recently came under fire for being used to create problematic images. Some of them may even constitute child pornography, which led tech provider OctoML to sever ties with Civitai.

About Civitai

Civitai uses AI technology to generate different kinds of images, from illustrations to realistic photos. However, some people use it to cause harm, creating fake, inappropriate images. Some of these depict real people, including celebrities, and of course, they were made without their consent.

Civitai received $5 million in funding from venture capital firm Andreessen Horowitz. The platform has been criticized for encouraging these harmful uses with its “bounties” feature, and it looks like things have gone way too far.

What happened?

404 Media reviewed leaked documents and discovered that the AI model, which was supposed to be safe, was actually producing unethical images. As I mentioned, some could reportedly even be categorized as sexual images of children.

OctoML, the company whose technology powers Civitai’s image generation, reportedly first blocked all NSFW content from being created on Civitai. Then, it decided to end the business relationship entirely. “We have decided to terminate our business relationship with Civitai,” OctoML told 404 Media. “This decision aligns with our commitment to ensuring the safe and responsible use of AI.”

The Founder’s View

Civitai founder Justin Maier told VentureBeat that he knew some people were making NSFW content. However, he added that they were also helping to improve the technology. “We could have prevented that stuff from being posted, but I felt like it would put us at risk of hampering the development of the community too early,” he said. “We’re kind of at the center of open-source AI development around images.”

“People that are there to make these NSFW things are creating and pushing for these models in ways that kind of transcend that use case,” Maier said. “It’s been valuable to have the community even if they’re making things that I’m not interested in, or that I prefer not to have on the site.”

This situation raises further questions about how AI should be used responsibly and about the obligations of those who build and provide AI technology. We’ve already seen AI-generated images used for extortion, misogynistic apps, propaganda, and fake news. It’s a big issue, and I’m afraid it’s only going to get bigger and more complex as AI keeps evolving.

via PetaPixel
