Jojo, Lady of the West

  • 0 Posts
  • 29 Comments
Joined 7 months ago
cake
Cake day: March 4th, 2024

help-circle











  • Well, the word deep fake is literally from the ai boom, but I understand you to mean doctored images to make it look like someone was doing a porn when they didn’t was already a thing.

    And yeah, it very much was. But unless you were already a high profile individual like a popular celebrity, or mayyybe if you happened to be attractive to the one guy making them, they didn’t tend to get made of you, and certainly not well. Now, anyone with a crush and a photo of you can make your face and a pretty decent approximation of your naked body move around and make noises while doing the nasty. And they can do it many orders of magnitude faster and with less skill than before.

    So no, you don’t need ai for it to exist and be somewhat problematic, but ai makes it much more problematic.





  • 1st, I didn’t just say 1000x harder is still easy, I said 10 or 1000x would still be easy compared to multiple different jailbreaks on this thread, a reference to your saying it would be “orders of magnitude harder”

    2nd, the difficulty of seeing the system prompt being 1000x harder only makes it take 1000x longer of the difficulty is the only and biggest bottleneck

    3rd, if they are both LLMs they are both running on the principles of an LLM, so the techniques that tend to work against them will be similar

    4th, the second LLM doesn’t need to be broken to the extent that it reveals its system prompt, just to be confused enough to return a false negative.




  • It would see it. I’m merely suggesting that it may not successfully notice it. LLMs process prompts by translating the words into vectors, and then the relationships between the words into vectors, and then the entire prompt into a single vector, and then uses that resulting vector to produce a result. The second LLM you’ve described will be trained such that the vectors for prompts that do contain the system prompt will point towards “true”, and the vectors for prompts that don’t still point towards “false”. But enough junk data in the form of unrelated words with unrelated relationships could cause the prompt vector to point too far from true towards false, basically. Just making a prompt that doesn’t have the vibes of one that contains the system prompt, as far as the second LLM is concerned