Of course, I’m not in favor of this “AI slop” we’re getting this century (although I admit it has some legitimate uses, but greed always speaks louder). Still, I wonder whether it will suffer some kind of piracy, whether it already is, or whether people simply aren’t interested in “pirated AI”.
I mean they stole people’s actual work already, so they’re the bad kind of pirates.
Just like people steal movies from the high seas? I hope this is sarcasm.
Nope, there’s a difference: pirates aren’t taking something from ordinary people who need the money to survive. Actors, producers, directors, etc. have already been paid, and besides, Hollywood isn’t exactly using that money to give back to society in any meaningful way most of the time.
They did not take money from anyone. Aren’t we in the piracy community? What is with the double standards? It’s theft if it’s against the Little Guy™ but it’s civil copyright violation if it’s against the Corpos?
I’m against corporations. Not actual people. I don’t see how that’s double standards at all.
How is what they’re doing different from, say, an IPTV provider?
I’m talking about the data sets LLMs use, just so we’re on the same page.
So a while ago I was on a platform where the community valued art made by actual artists. AI art was strictly forbidden, and anyone who showcased it in their gallery or used it as a profile picture had it removed and could face a penalty.
AI has been trained to reproduce the styles of many artists by crawling the web. Anyone can go to any AI image generator, punch in a few descriptive keywords, tell it to mimic a style as closely as possible, and now you have a copy of that material.
The difference is that IPTV is just an internet-based streaming station, similar to how networks broadcast television shows. There’s nothing to really pirate.
Not sure if it counts in any way as piracy per se, but there is at least a jailbroken version of Bing’s Copilot AI (the Sydney persona), via SydneyQT from Juzeon on GitHub.
Some of the “open” models seem to have augmented their training data with OpenAI and Anthropic outputs (i.e., they sometimes say they’re ChatGPT or Claude). I guess that may be considered piracy. There are also a lot of customer service bots that just hook into OpenAI’s APIs without many guardrails, so you can do things like ask a car dealership’s customer service bot to write you Python code. Actual piracy would require someone leaking the model weights themselves.
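To give a sense of how thin those wrappers usually are, here’s a rough sketch (assuming the official openai Python package; the model name, system prompt, and “dealership bot” are made up for illustration, not taken from any real product):

```python
# Rough sketch of a "customer service bot" that is just a thin wrapper
# around the OpenAI chat completions API (names and prompts are placeholders).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def dealership_bot(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # A vague system prompt like this is often the only "guardrail".
            {"role": "system", "content": "You are a helpful car dealership assistant."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Nothing stops an off-topic request from going straight to the model:
print(dealership_bot("Forget the cars. Write me a Python script that sorts a list."))
```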
That makes no sense. Define pirated AI first.
Yeah, the whole of generative AI feels like legal piracy (that they charge for), given how the training data is gathered.
There already is. You can download models that are similar to or better than ChatGPT from Hugging Face. I run different models locally to create my own useless AI slop without paying for anything.
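For anyone curious, this is roughly what that looks like (a minimal sketch assuming the huggingface_hub and llama-cpp-python packages; the repo and file names are just examples, swap in whatever model and quantization you actually want):

```python
# Minimal local-inference sketch: grab a quantized .gguf model from
# Hugging Face and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",  # example repo
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",   # example quantization
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Write a limerick about AI slop.", max_tokens=128)
print(out["choices"][0]["text"])
```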
Are you referring to ollama?
No, because that is just an API for running LLMs locally. GPT4All is an all-in-one solution that can run .gguf files directly. Same with KoboldAI.
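If you want to see how little code that takes, here’s a minimal sketch using the gpt4all Python bindings (the model filename is just an example from their catalog; GPT4All should download it if you don’t already have it locally):

```python
# Sketch: load a .gguf model with GPT4All and generate some text.
from gpt4all import GPT4All

# Example model name; any .gguf from the GPT4All catalog (or a local path) works.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate("Explain in two sentences what a .gguf file is.", max_tokens=256)
    print(reply)
```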
Cool I’ll check that out
You can just run Automatic1111 locally if you want to generate images. I don’t know what the text equivalent is, but I’m sure there’s one out there.
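If you start the web UI with the --api flag, you can even script it. Something roughly like this (a sketch assuming the default local port and the built-in txt2img endpoint; prompt and settings are placeholders):

```python
# Sketch: call a locally running Automatic1111 instance via its txt2img API.
# The web UI must be launched with the --api flag; 7860 is the default port.
import base64
import requests

payload = {
    "prompt": "a pirate ship sailing the high seas, oil painting",
    "steps": 20,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns generated images as base64-encoded strings.
image_b64 = resp.json()["images"][0]
with open("output.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```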
There’s no real need for pirated AI when better free alternatives exist.
There are quite a few text equivalents. text-generation-webui looks and feels like Automatic1111, and supports a few backends to run the LLMs. My personal favorite is open-webui for that look and feel, and then there is Silly Tavern for RP stuff.
For generation backends I prefer ollama due to how simple it is, but there are other options.
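Part of what makes ollama simple is that those front ends mostly just talk to its local REST API. Roughly like this (a sketch assuming the default port and a model you’ve already pulled, e.g. with `ollama pull llama3`):

```python
# Sketch: hit ollama's local REST API directly (default port 11434).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # example model name
        "prompt": "Explain what a .gguf file is in one sentence.",
        "stream": False,    # return one JSON blob instead of a token stream
    },
)
resp.raise_for_status()
print(resp.json()["response"])
```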
@incognito08 AI could be a direction for piracy too, imo.