• capt_kafei@lemmy.ca · 6 months ago

    Damn, it is actually scary that they managed to pull this off. The backdoor came from the second-largest contributor to xz too, not some random drive-by.

      • Alex@lemmy.ml · 6 months ago

        It’s looking more like a long game to compromise an upstream.

        • cjk@feddit.de · 6 months ago

          Either that or the attacker was very good at choosing their puppet…

          • Alex@lemmy.ml · 6 months ago

            Well, the account is focused on one particular project, which makes sense if you expect to get burned at some point and don’t want all your other exploits to be detected. It looks like there was a second sock-puppet account involved in the original attack-vector support code.

            We should certainly audit other projects for similar changes from other pseudonymous accounts.

    • Alex@lemmy.ml · 6 months ago (edited)

      Time to audit all their contributions, although it looks like they mostly contributed to xz. I guess we’ll have to wait for comments from the rest of the team, or to see whether the whole org needs to be considered compromised.
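
      A starting point for that audit: git can enumerate every commit from a given identity. A throwaway demo repo stands in for xz here, and "mallory@example.com" is a placeholder for the account under audit.

      ```shell
      # Demo repo standing in for the real project; the author address
      # is a placeholder for the account being audited.
      git init -q audit-demo && cd audit-demo
      git -c user.name=mallory -c user.email=mallory@example.com \
          commit -q --allow-empty -m "suspicious change"

      # List everything that identity committed, newest first.
      git log --author="mallory@example.com" --oneline
      ```

      Keep in mind that author identities are self-reported, so this only scopes the audit; it can’t prove a commit’s true origin.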

      • sim642@lemm.ee · 6 months ago

        Assuming that it’s just that person, that it’s their actual name and that they’re in the US…

      • ugjka@lemmy.world · 6 months ago (edited)

        there will be a federal investigation if the culprit turns out to be a foreign actor (just speculation on my part)

  • chameleon@kbin.social · 6 months ago

    This is a fun one we’re gonna be hearing about for a while…

    It’s fortunate it was discovered before any major releases of non-rolling-release distros were cut, but damn.

    • rolaulten@startrek.website · 6 months ago

      That’s the scary thing. It looks like this narrowly missed getting into Debian and RH. And downstream of their downstreams is… everything.

    • mumblerfish@lemmy.world · 6 months ago

      Gentoo just reverted to the last tarball signed by an author other than the one who appears responsible for the backdoor. That person has been on the project for years, though, so it may be worth reverting even further back than just 5.6.*; Gentoo went all the way back to 5.4.2.
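
      On Gentoo, pinning below the suspect releases can be done with a package mask; a minimal sketch (adjust the version bound if, like Gentoo, you want to go further back than 5.6.0):

      ```text
      # /etc/portage/package.mask
      # Keep the backdoored 5.6.x releases (and anything newer) off the system
      >=app-arch/xz-utils-5.6.0
      ```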

    • flying_sheep@lemmy.ml · 6 months ago

      The backdoor only gets inserted when building an RPM or DEB. So while updating frequently is a good idea, it won’t change anything for Arch users today.

        • flying_sheep@lemmy.ml · 6 months ago

          No, read the link you posted:

          Arch does not directly link openssh to liblzma, and thus this attack vector is not possible. You can confirm this by issuing the following command:

          ldd "$(command -v sshd)"
          

          However, out of an abundance of caution, we advise users to remove the malicious code from their system by upgrading either way.
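
          For a quick local check, the version test can be sketched like this. `XZ_VER` is hard-coded for illustration; the commented substitution for reading it from `xz --version` is an assumption about that command’s output format.

          ```shell
          # The two releases known to ship the backdoor.
          XZ_VER="5.6.1"   # or e.g.: XZ_VER="$(xz --version | awk 'NR==1{print $4}')"
          case "$XZ_VER" in
            5.6.0|5.6.1) MSG="WARNING: known-backdoored xz release $XZ_VER" ;;
            *)           MSG="xz $XZ_VER is not among the known-bad releases" ;;
          esac
          echo "$MSG"
          ```

          Combine it with the `ldd` check above to see whether your sshd links liblzma at all.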

        • progandy@feddit.de · 6 months ago

          Those getting the most recent software versions, so nothing that should be running on a server.

        • Laser@feddit.de · 6 months ago

          Fedora 41, Fedora Rawhide, Debian Sid are the currently known affected ones AFAIK.

        • flying_sheep@lemmy.ml · 6 months ago

          I think it needs to be

          • rolling release (because it was caught so quickly that it hasn’t made its way into any cadence-based distro yet)
          • using the upstream Makefile task to build an RPM or DEB (because the compromised build script directly checks for that, and therefore doesn’t trigger for a destdir build like Gentoo’s or Arch’s)
          • using the upstream-provided tarball, as opposed to the one GitHub generates or a git clone (because only the former contains the compromised Makefile; running autotools yourself is safe)

          The first two points mean that only rolling-release RPM and DEB distros like Debian Sid and Fedora are candidates. I didn’t check whether they use the Makefile and the compromised tarballs.

    • chameleon@kbin.social · 6 months ago

      Won’t help here; this backdoor is entirely reproducible. That’s one of the scary parts.

      • OsrsNeedsF2P@lemmy.ml · 6 months ago (edited)

        The backdoor wasn’t in the source code, only in the distributed binary. So reproducible builds would have flagged the tar as not coming from what was in Git.

        • chameleon@kbin.social · 6 months ago

          Reproducible builds generally work from the published source tarballs, as those tend to be easier to mirror and archive than a Git repository is. The GPG-signed source tarball includes all of the code to build the exploit.

          The Git repository does not include the code to build the backdoor (though it does include the actual backdoor itself, the binary “test file”, it’s simply disused).

          Verifying that the tarball and Git repository match would be neat, but is not a focus of any existing reproducible build project that I know of. It probably should be, but quite a number of projects have legitimate differences in their tarballs, often pre-compiling things like autotools-based configure scripts and man pages so that you can have a relaxed ./configure && make && make install build without having to hunt down all of the necessary generators.
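
          Such a tarball-vs-git check could look roughly like this, ignoring the legitimately generated files. Tiny placeholder trees stand in for a real checkout and release here.

          ```shell
          # Placeholder trees: "checkout/" stands in for the git tag's contents,
          # "release/" for the unpacked source tarball.
          mkdir -p release checkout
          printf 'int main(void){return 0;}\n' > checkout/main.c
          printf 'int main(void){return 0;}\n' > release/main.c
          echo generated > release/configure   # autotools output, tarball-only

          # Exclude the files autotools generates for releases; anything else
          # that differs is a red flag worth manual review.
          if diff -r -x configure -x 'Makefile.in' -x .git checkout release; then
            RESULT="no unexpected differences"
          else
            RESULT="tarball diverges from git"
          fi
          echo "$RESULT"
          ```

          The hard part in practice is exactly that exclude list: each project generates a different set of files for its release tarballs.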

          • flying_sheep@lemmy.ml · 6 months ago

            Time to change that tarball thing. Git repos come with built-in checksums; that should be the way to go.
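
            That’s the idea behind git’s content addressing: a tag resolves to an immutable tree hash, so anything derived from it is verifiable (throwaway demo repo below; names are placeholders).

            ```shell
            # Throwaway repo: every object is addressed by the hash of its content.
            git init -q checksum-demo && cd checksum-demo
            echo hello > file.txt
            git add file.txt
            git -c user.name=demo -c user.email=demo@example.com commit -qm "release 1.0"
            git tag v1.0

            # The tree hash depends only on the tracked content, not on timestamps.
            git rev-parse 'v1.0^{tree}'

            # A release archive generated from the tag inherits that guarantee.
            git archive --format=tar v1.0 | tar -tf -
            ```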

            • tal@lemmy.today · 6 months ago

              Honestly, while the way they deployed the exploit helped hide it, I’m not sure that they couldn’t have figured out some similar way to hide it in autoconf stuff and commit it.

              Remember that the attacker had commit privileges to the repository, was a co-maintainer, and the primary maintainer was apparently away on a month-long vacation. How many parties other than the maintainer are going to go review a lot of complicated autoconf stuff?

              I’m not saying that your point’s invalid. Making sure that what comes out of the git repository is what goes to upstream is probably a good security practice. But I’m not sure that it really avoids this.

              Probably a lot of good lessons that could be learned.

              • It sounds like social engineering, including maybe use of sockpuppets, was used to target the maintainer, to get him to cede maintainer status.

              • Social engineering was used to pressure package maintainers to commit.

              • Apparently automated software testing did trip on the changes, like some fuzz-testing software at Google, but the attacker managed to get changes committed to avoid it. This was one point where a light really did get aimed at the changes. That being said, the attacker here was also a maintainer, and I don’t think that the fuzzer folks consider themselves responsible for identifying security holes. And while it did highlight the use of ifunc, it sounds like that was legitimately a bug. Still, it might be worth having some kind of security examination take place when fuzzing software trips, especially if the fuzzing software isn’t under the control of a project’s maintainer (as it was not, here).

              • The changes were apparently aimed at getting in shortly before the Ubuntu freeze; the attacker was apparently recorded asking about and ensuring that Ubuntu fed off Debian testing. Maybe more attention needs to be paid to things that go in shortly before a freeze.

              • Part of the attack was hidden in autoconf scripts. Autoconf, especially with generated data going out the door, is hard to audit.

              • As you point out, using a chain that ensures that a backdoor that goes into downstream also goes into git would be a good idea.

              • Distros should probably be more careful about linking stuff to security-critical binaries like sshd. Apparently this was very much not necessary to achieve what they wanted to do in this case; it was possible to have a very small amount of code that performed the functionality that was actually needed.

              • Unless the systemd-notifier changes themselves were done by an attacker, it’s a good bet that the Jia Tan group and similar are monitoring software, looking for dependencies like the systemd-notifier introduction. Looking for similar problems that might affect similar remotely-accessible servers might be a good idea.

              • It might be a good idea to have servers run their auth component in an isolated module. I’d guess that it’d be possible to have the portion of sshd that accepts incoming connections (and is exposed to the outside, unauthenticated world) run as an isolated process; that’d be kind of inetd-like functionality. The portion that performs authentication (also exposed to the outside world) could be a second isolated process, and the code that runs only after authentication succeeds could run separately again, with only that last portion bringing in most libraries.

              • I’ve seen some arguments that systemd itself is large and complicated enough that it lends itself to attacks like this. I think that maybe there’s an argument that some sort of distinction should be made between more- or less-security-critical software, and different policies applied. Systemd alone is a pretty important piece of software to be able to compromise. Maybe there are ways to rearchitect things to be somewhat more-resilient and auditable.

              • I’m not familiar with the ifunc mechanism, but it sounds like attackers consider it to be a useful route to hide injected code. Maybe have some kind of auditing system to look for that.

              • The attacker modified the “in the event of an identified security hole” directions to discourage disclosure to anyone except the project for a 90-day embargo period, and made himself the contact point. That would have provided time to continue to use the exploit. In practice, perhaps software projects should not be the only contact point – perhaps it should be the norm to both notify software projects and a separate, unrelated-to-a-project security point. That increases the risk of the exploit leaking, but protects against compromise of the project maintainership.
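
              The inetd-like split suggested above already has a modern analogue in socket activation; a minimal sketch (unit names and port are illustrative, not a hardened config):

              ```ini
              # sshd-demo.socket — the listener runs outside sshd entirely
              [Socket]
              ListenStream=2222
              Accept=yes

              [Install]
              WantedBy=sockets.target

              # sshd-demo@.service — one short-lived process per connection,
              # which is where tighter sandboxing and minimal linkage could live
              [Service]
              ExecStart=/usr/sbin/sshd -i
              StandardInput=socket
              ```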

              • flying_sheep@lemmy.ml · 6 months ago

                You’re right, there are more parts to it, especially social engineering. Maybe there are other ways to hide a payload, but there aren’t many avenues. You have to hide the payload in a binary artefact, which is pretty suspicious unless it’s in a well-scrutinized cryptography or compression lib.

                Then that payload has to be executed for some reason, which means you need a really good reason to embed it (e.g. something like widevine), or you have to modify the build script.

        • Virulent@reddthat.com · 6 months ago

          Not exactly: it was in the source tarball available for download from the releases page, but not in the git source tree.

    • Daniel Quinn@lemmy.ca · 6 months ago

      Why didn’t this become a thing? Surely in 2024, we should be able to build packages from source and sign releases with a private key.

      • Natanael@slrpnk.net · 6 months ago

        It’s becoming more of a thing, but a lot of projects are so old that they haven’t been able to fix their entire build process yet.

  • Doombot1@lemmy.one · 6 months ago

    ELI5 what does this mean for the average Linux user? I run a few Ubuntu 22.04 systems (yeah yeah, I know, canonical schmanonical) - but they aren’t bleeding edge, so they shouldn’t exhibit this vulnerability, right?

  • lemmyreader@lemmy.ml · 6 months ago

    ty for sharing.

    #showerthoughts The problem is in upstream and has only entered Debian Sid/unstable. Does this mean that, for example, bleeding-edge Arch (btw) sshd users are already compromised?

    • tal@lemmy.today · 6 months ago (edited)

      Apparently the backdoor reverts back to regular operation if the payload is malformed or the signature from the attacker’s key doesn’t verify. Unfortunately, this means that unless a bug is found, we can’t write a reliable/reusable over-the-network scanner.

      Maybe not. But it does mean that you can write a crawler that slams the door shut for the attacker on any vulnerable systems.

      EDIT: Oh, maybe he just means that it reverts for that single invocation.