Thanks, I just hope my $5000 will be enough
The book was kind of all over the place, couldn’t seem to pick a thread and follow it until halfway through, and then they kill off the main character and the rest is just lectures and moralizing? It had a fun bit at the end, though.
But they’ll take all of our incredibly desirable jobs!
On top of that, it doesn’t have to power the city for a day, it only has to store unused energy produced during off-peak hours while the sun is shining and/or wind is blowing.
Clearly they only think bigger, better batteries are magic and physics defying. The batteries we have now are the best batteries that physics allows for, and they can’t be made more or bigger because… We already used up all the stuff for them. Yeah, that tracks.
Geothermal too is possible
Pinky promise I’ll stop fighting you if you just let me have all your stuff and stop asking other people for help.
Okay but real question… Can dolphins sneeze?
“most moral army in the world” is a bit like the driest lake.
ask for full price with the bugs and multiplayer disabled
At least the bugs are disabled /s
Well, the term “deepfake” is literally from the ai boom, but I understand you to mean that doctored images made to look like someone was doing porn when they weren’t were already a thing.
And yeah, it very much was. But unless you were already a high profile individual like a popular celebrity, or mayyybe if you happened to be attractive to the one guy making them, they didn’t tend to get made of you, and certainly not well. Now, anyone with a crush and a photo of you can make your face and a pretty decent approximation of your naked body move around and make noises while doing the nasty. And they can do it many orders of magnitude faster and with less skill than before.
So no, you don’t need ai for it to exist and be somewhat problematic, but ai makes it much more problematic.
Add to that the fact that before ai, unless you’re already pretty famous, no one cares enough to make nonconsensual porn of you. After, anyone vaguely attracted to you can snap or find a few pictures and do a decent job of it without any skill or practice.
On its own, it’s just the same as hate for porn. But there’s also deep fake porn, ai porn of real people, and that’s potentially far more problematic.
But they’re the only ones who agree with meeeee!
1st, I didn’t just say 1000x harder is still easy, I said 10x or 1000x harder would still be easy compared to the multiple different jailbreaks in this thread, a reference to your saying it would be “orders of magnitude harder”
2nd, the difficulty of seeing the system prompt being 1000x harder only makes it take 1000x longer if that difficulty is the only bottleneck
3rd, if they are both LLMs they are both running on the principles of an LLM, so the techniques that tend to work against them will be similar
4th, the second LLM doesn’t need to be broken to the extent that it reveals its system prompt, just to be confused enough to return a false negative.
And the second LLM is running on the same basic principles as the first, so it might be 2 or 4 times harder, but it’s unlikely to be 1000x. But here we are.
You’re welcome to prove me wrong, but I expect if this problem was as easy to solve as you seem to think, it would be more solved by now.
Maybe. But have you seen how easy it has been for people in this thread to get gab AI to reveal its system prompt? 10x harder or even 1000x isn’t going to stop it happening.
It would see it. I’m merely suggesting that it may not successfully notice it. LLMs process a prompt by translating the words into vectors, then the relationships between the words into vectors, then the entire prompt into a single vector, and then using that resulting vector to produce a result. The second LLM you’ve described would be trained such that the vectors for prompts that do contain the system prompt point towards “true”, and the vectors for prompts that don’t point towards “false”. But enough junk data, in the form of unrelated words with unrelated relationships, could drag the prompt vector too far from “true” towards “false”. Basically, you’re making a prompt that doesn’t have the vibes of one that contains the system prompt, as far as the second LLM is concerned.
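A toy sketch of that dilution idea, with everything (the 2-D “embeddings”, the token list, the threshold, the mean-pooling) made up for illustration — real LLM detectors don’t work on a fixed lookup table, but the geometry is the same: a detector that pools token vectors and compares the pooled prompt vector against a “contains the system prompt” direction can be pushed under its threshold by enough unrelated junk tokens, even though the leaked content is still right there in the prompt.

```python
import math

# Hypothetical 2-D "embeddings" for a handful of tokens.
# Leak-related tokens point roughly along (1, 0); junk points elsewhere.
EMBED = {
    "system": (1.0, 0.1),
    "prompt": (0.9, 0.2),
    "secret": (0.95, 0.0),
    "banana": (-0.2, 1.0),   # junk
    "weather": (-0.3, 0.9),  # junk
    "llama": (-0.1, 1.1),    # junk
}

LEAK_DIRECTION = (1.0, 0.1)  # direction the detector was "trained" on
THRESHOLD = 0.8              # made-up decision boundary

def mean_pool(tokens):
    """Collapse a whole prompt into one vector by averaging token vectors."""
    vecs = [EMBED[t] for t in tokens]
    n = len(vecs)
    return (sum(x for x, _ in vecs) / n, sum(y for _, y in vecs) / n)

def cosine(a, b):
    dot = a[0] * b[0] + a[1] * b[1]
    return dot / (math.hypot(*a) * math.hypot(*b))

def detector_flags(tokens):
    """True if the pooled prompt vector points close enough to the leak direction."""
    return cosine(mean_pool(tokens), LEAK_DIRECTION) >= THRESHOLD

leaky = ["system", "prompt", "secret"]
diluted = leaky + ["banana", "weather", "llama"] * 3  # same leak, buried in junk

print(detector_flags(leaky))    # the bare leak is flagged
print(detector_flags(diluted))  # the diluted prompt slips under the threshold
```

The leak tokens are still in the second prompt; averaging them with nine unrelated tokens just drags the pooled vector away from the direction the detector keys on. That’s the “doesn’t have the vibes” failure mode in miniature.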
censoring that’s just gonna drive them into echo chambers
Also, we’re not talking about censoring the speech of individuals here, we’re talking about an ai deliberately designed to sound like a reliable, factual resource. I don’t think it’s going to run off to join an alt right message board because it wasn’t told to do any “both-sides-ing”
In the US, every employment contract has a line where any “invention” of an employee belongs to the company, so