Google has developed a tool called SynthID that watermarks AI-generated images in a way that is imperceptible to humans but detectable by a dedicated detection model. The watermark is embedded directly in pixel values without noticeably changing the image. SynthID is launching first for Google Cloud customers, who can embed and detect the watermark in images produced by the Imagen generator on Vertex AI. While aimed initially at flagging deepfakes, the tool could also help businesses verify AI-generated images used for everyday tasks like product descriptions. Google hopes SynthID could become a web-wide standard, while acknowledging that others are working on detection methods too. The launch also marks the start of an arms race: people will try to circumvent the system, so it will need to keep improving. Overall, SynthID is a first step toward greater transparency around AI-generated content online.
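DeepMind has not published how SynthID embeds its signal, so code can only gesture at the general idea. The sketch below is a minimal illustration in Python/NumPy of the classic spread-spectrum approach to invisible watermarking: hide a keyed, low-amplitude pseudo-random pattern in the pixel values and recover it by correlation. The names `embed_watermark` and `detect_watermark`, the seed, and the strength constant are all made up for this sketch; none of this is SynthID's actual API or algorithm.

```python
# Toy spread-spectrum watermark: a keyed +/-STRENGTH pattern added to pixel
# values, invisible to the eye but recoverable by correlating with the key.
# Purely illustrative; SynthID's real embedding scheme is not public.
import numpy as np

SEED = 42        # shared secret between embedder and detector (assumption)
STRENGTH = 2.0   # amplitude in 8-bit pixel units, small enough to be imperceptible

def _pattern(shape):
    """Deterministic +/-1 pattern derived from the secret seed."""
    return np.random.default_rng(SEED).choice([-1.0, 1.0], size=shape)

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Add the keyed pattern to a grayscale uint8 image."""
    marked = image.astype(np.float64) + STRENGTH * _pattern(image.shape)
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, threshold: float = 1.0) -> bool:
    """Correlate against the keyed pattern; marked images score near STRENGTH."""
    centered = image.astype(np.float64) - image.mean()
    return float((centered * _pattern(image.shape)).mean()) > threshold

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
    print(detect_watermark(img))                   # unmarked image -> False
    print(detect_watermark(embed_watermark(img)))  # marked image   -> True
```

A naive per-pixel pattern like this is easy to disturb; a scheme that survives cropping and compression would need to embed the signal in a more robust representation than raw pixels, which is presumably part of what DeepMind's learned models are doing.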

  • jet@hackertalks.com · 10 months ago

    The title here is misleading. It cannot detect AI-generated images; it can only detect watermarked images.

  • conciselyverbose@kbin.social · 10 months ago

    And of course it will be impossible to remove a watermark that programs can detect programmatically but humans can’t see, right?

    I mean, go for it if you want. We’re already, today, past the point where a photo or video in and of itself constitutes reliable evidence due to how close known tools can get. You need to show chain of custody like you would any other forensic evidence, including a credible original source on the record, for it to be actually reliable. Faking anything is absolutely plausible.

  • cmnybo@discuss.tchncs.de · 10 months ago

    How well will the watermark survive resizing and compression? What about sharpening or blurring the image?

    If the watermark is robust, it would be nice if web browsers could flag any AI-generated images, as long as the watermark detection can be done 100% client side.
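Whether a mark survives those transformations is an empirical question. As a rough illustration only, the snippet below runs the toy watermark from the earlier sketch through JPEG re-encoding and downscale/upscale round trips and reports whether it is still detected (Python with NumPy and Pillow; `jpeg_roundtrip` and `resize_roundtrip` are hypothetical helpers, and a naive per-pixel pattern like that one generally degrades far faster than whatever SynthID actually does).

```python
# Rough robustness check for the toy watermark sketched earlier.
# Assumes embed_watermark and detect_watermark from that sketch are in scope.
# A naive per-pixel pattern tends to fade under compression and resizing;
# a production scheme has to do much better than this.
import io
import numpy as np
from PIL import Image

def jpeg_roundtrip(img: np.ndarray, quality: int) -> np.ndarray:
    """Re-encode the image as JPEG at the given quality and decode it again."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

def resize_roundtrip(img: np.ndarray, scale: float) -> np.ndarray:
    """Downscale then restore the original size so the keyed pattern lines up."""
    h, w = img.shape
    small = Image.fromarray(img).resize((int(w * scale), int(h * scale)))
    return np.array(small.resize((w, h)))

if __name__ == "__main__":
    marked = embed_watermark(np.random.randint(0, 256, (256, 256), dtype=np.uint8))
    print("untouched:", detect_watermark(marked))
    for q in (95, 75, 50):
        print(f"jpeg q={q}:", detect_watermark(jpeg_roundtrip(marked, q)))
    for s in (0.9, 0.5):
        print(f"resize x{s}:", detect_watermark(resize_roundtrip(marked, s)))
```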

  • AutoTL;DR@lemmings.world (bot) · 10 months ago

    🤖 I’m a bot that provides automatic summaries for articles:

    It’s called SynthID, and it’s designed to essentially watermark an AI-generated image in a way that is imperceptible to the human eye but easily caught by a dedicated AI detection tool.

    “But it’s robust to various transformations — cropping, resizing, all of the things that you might do to try and get around normal, traditional, simple watermarks.” As SynthID’s underlying models improve, Hassabis says, the watermark will be even less perceptible to humans but even more easily detected by DeepMind’s tools.

    SynthID is rolling out first in a Google-centric way: Google Cloud customers who use the company’s Vertex AI platform and the Imagen image generator will be able to embed and detect the watermark.

    They may not be quite as viscerally important as fake Trump mug shots or a swagged-out pope, but these are the ways AI is already showing up in day-to-day business.

    It could even vary by topic: maybe you don’t much care if the Slides background you’re using was created by humans or AI, but “if you’re in hospitals scanning tumors, you really want to make sure that was not a synthetically generated image.”

    “It would be premature to think about the scaling and the civil society debates until we’ve proven out that the foundational piece of the technology works.” That’s the first job and the reason SynthID is launching now.


    Saved 83% of original text.

  • Dizzy Devil Ducky@lemm.ee · 10 months ago

    As these companies roll out these kinds of watermarks, we need people to build tools to counter them. Can’t trust the large corporations not to abuse their watermarking tools to suppress anything they dislike.