• 1 Post
  • 17 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • Unless something changed in the specification since I read it last, the attested environment payload only contains minimal information. The only information the browser is required to send about the environment is that: this browser is {{the browser ID}}, and it is not being used by a bot (e.g. headless Chrome) or automated process.
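
    As a purely illustrative sketch (the field names here are mine, not the spec’s), the verdict a site receives boils down to something like:

    // Hypothetical shape of an attestation verdict. Field names are
    // illustrative and not taken from the WEI spec.
    interface EnvironmentVerdict {
        browser: string;        // an identifier for the attested browser
        notAutomated: boolean;  // "not a bot / headless / automated session"
        signature: string;      // the attester's signature over the above
    }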

    Depending on how pedantic people want to be about the definition of DRM, that makes it both DRM and not DRM. It’s DRM in the sense that it’s “technology to control access to copyrighted material” by blocking bots. But, it’s not DRM in the sense that it “enables copyright holders and content creators to manage what users can do with their content.”

    It’s the latter definition that people colloquially know DRM as being. When they’re thinking about DRM and its user-hostility, they’re thinking about things like Denuvo, HDCP, always-online requirements, and so forth: technologies that restrict how a user interacts with content after they download or buy it.

    Calling web environment integrity “DRM” is at best being pedantic about a definition that the average person doesn’t use, and at worst, trying to alarm/incite/anger readers by describing it with an emotionally-charged term. As it stands right now, once someone is granted access to content gated behind web environment integrity, they’re free to use it however they want. I can load a website that enforces WEI and run an adblocker to my heart’s content, and it can’t do anything to stop that once it serves me the page. It can’t tell the browser to disable extensions, and it can’t enforce the integrity of the DOM.

    That’s not to say it’s harmless or can’t be turned into user-hostile DRM later, though. There are a number of privacy, usability, ethical, and walled-garden-ecosystem concerns with it right now. If it ever gets to widespread implementation and use, they could later amend it to require sending an extra field that says “user has an adblocker installed”. With that knowledge, a website could refuse to serve me the page—and that would be restricting how I use the content, in the sense that my options then become their way (with disabled extensions and/or an unmodified DOM) or the highway.

    The whole concept of web environment integrity is still dubious and reeks of ulterior motives, but my belief is that calling it “DRM” undermines efforts to push back against it. If the whole point of its creation is to lead the way to future DRM efforts (in the latter sense of the term), having a crowd of people raising pitchforks over something they incorrectly claim it does just gives proponents of WEI an excuse to say “the users don’t know what they’re talking about” and ignore our feedback as mob mentality. Feedback pointing out current problems and properly articulating future concerns is a lot harder to sweep under the rug.


    The image needs to have already been downloaded the moment the client even fetches it, or you can use the image to track whether a particular user is online/has read the message.

    Oh wow… That’s an excellent point. And even if the client downloads it the moment it fetches the message, that would still be enough to help determine when somebody is using Lemmy. I don’t think advertisers would have a reason to do that¹, but I wouldn’t put it past a malicious individual to use it to create a schedule of when somebody else is active.

    ¹ It’s probably easier for them to host their own instance and track the timestamp of when somebody likes/dislikes comments and posts since that data is shared through federation.
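
    To illustrate the schedule-tracking concern: the sender controls the image host, so all it takes is logging when the image gets requested. A minimal sketch of such an endpoint (Node, purely hypothetical):

    import {createServer} from "node:http";

    createServer((req, res) => {
        // The timestamp alone reveals when the recipient's client loaded the message.
        console.log(`${new Date().toISOString()} ${req.socket.remoteAddress} ${req.url}`);
        res.writeHead(200, { "content-type": "image/png" });
        res.end(); // the actual image bytes don't matter for the tracking
    }).listen(8080);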

    This needs to be implemented in the backend. Images already get downloaded to and served from the server’s pict-rs store in some instances, so there’s code to handle this problem already.

    That would be ideal, I agree. This comment on the GitHub issue explains why some instances would want the ability to disable it, though. If it does eventually get implemented, having Sync as a fallback for instances where media proxying is disabled would be a major benefit for us Sync users.

    A small side note: that comment also points out the risk of a media proxy downloading illegal media. I don’t necessarily think lj would need to worry about it in the same way, though. From my understanding, the risk is that an instance would download the media immediately after receiving a local or federated post pointing to it. An on-demand proxy would (hopefully) not run the same risk, since it would require action (or really bad timing) on the part of a user.
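
    For what it’s worth, the on-demand approach is simple enough to sketch. Something along these lines (hypothetical, with no validation, size limits, or caching) means the third-party host only ever sees the proxy’s IP, and nothing gets fetched until a client actually asks for it:

    import {createServer} from "node:http";

    // Sketch of an on-demand image proxy (assumes Node 18+ for global fetch).
    createServer(async (req, res) => {
        // e.g. GET /proxy?url=https://example.com/image.png
        const target = new URL(req.url ?? "/", "http://localhost").searchParams.get("url");
        if (!target) {
            res.writeHead(400);
            res.end();
            return;
        }
        // A real proxy would validate the target and cache the result.
        const upstream = await fetch(target); // the proxy's IP, not the user's
        res.writeHead(upstream.status, {
            "content-type": upstream.headers.get("content-type") ?? "application/octet-stream",
        });
        res.end(Buffer.from(await upstream.arrayBuffer()));
    }).listen(8080);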

    On the other hand, such a system would also pose a privacy problem: suppose someone foolishly believes Lemmy’s messaging feature is secure and sends a message with personal pictures (nudes, medical documents, whatever). Copying that data around to other servers probably isn’t what you want.

    Fair, but it’s a bit of a moot point. Sending the message between instances is already copying that data around, and even if it’s between two users of a single instance, it’s not end-to-end encrypted. Instance admins can see absolutely everything their users do.

    Orbot can do per-app VPNs for free if you’re willing to take the latency hit.

    Interesting! I wasn’t aware that there were any Android VPNs capable of doing per-app tunneling.


  • Thank you for making an informative and non-alarmist website around the topic of Web Environment Integrity.

    I’ve seen (and been downvoted for arguing against) so many articles, posts, and comments taking a sensationalized approach to the discussion around it, and it’s nice to finally see some genuine and wholly factual coverage of it.

    I really can’t overstate how much I appreciate your efforts towards ethical reporting here. You guys don’t use alarm words like “DRM,” and you went through the effort of actually explaining both what WEI does and how it poses a risk for the open web. Nothing clickbaity or ragebaity, and you don’t frame it dishonestly. Just a good, objective description of what it is in its current form and how that could be changed into everything people are worried about.

    Is there anything that someone like me could help contribute with? It seems like our goals (informing users without inciting them, so they can create useful feedback without FUD and misinformation) align, and I’d love to help out any way I can. I read the (at the time incomplete) specs and explainer for WEI, and I could probably write a couple of paragraphs going over what they promised or omitted. If you check my post history, I also have a couple of my own examples of how the WEI spec could be abused to harm users.






  • And here’s a concern about the decentralized-but-still-centralized nature of attesters:

    From my understanding, attesting is conceptually similar to how the SSL/TLS infrastructure currently works:

    • Each ultimately-trusted attester has their own key pair (e.g. root certificate) for signing.

    • Some non-profit group or corporation collects all the public keys of these attesters and bundles them together.

    • The requesting party (web browser for TLS, web server for WEI) checks the signature sent by the other party against the public keys in its bundle. If it matches one of them, the other party is trusted. If it doesn’t, they are not trusted.

    This works for TLS because we have a ton of root certificates, intermediate certificates, and signing authorities. If CA Foo is prejudiced against you or your domain name, you can always go to another of the hundreds of CAs.
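
    In code, that trust check is conceptually just “does any key in my bundle verify this signature?” A rough sketch (Node crypto with Ed25519-style keys; the names are illustrative, not from the spec):

    import {verify, KeyObject} from "node:crypto";

    // trustedAttesterKeys is the locally shipped bundle of attester public keys.
    function isAttested(payload: Buffer, signature: Buffer, trustedAttesterKeys: KeyObject[]): boolean {
        // Trusted if any bundled key verifies the signature; otherwise the
        // environment is simply treated as unverified.
        return trustedAttesterKeys.some((key) => verify(null, payload, key, signature));
    }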

    For WEI, there isn’t such an infrastructure in place. It’s likely that we’ll have these attesters to start with:

    • Microsoft
    • Apple
    • Google

    But hey, maybe we’ll have some intermediate attesters as well:

    • Canonical
    • RedHat
    • Mozilla
    • Brave

    Even with that list, though, it doesn’t bode well for FOSS software. Who’s going to attest to various browser forks, or to browsers running on operating systems that aren’t backed by corporations?

    Furthermore, if this is meant to verify the integrity of browser environments, what is that going to mean for devices that don’t support Secure Boot? Will they be considered unverified because the OS can’t ensure it wasn’t tampered with by the bootloader?


  • Adding another issue to the pile:

    Even if it isn’t the intent of the spec, it’s dangerous to allow websites to differentiate between unverified browsers, browsers attested to by party A, and browsers attested to by party B. Providing a mechanism for cryptographic verification opens the door for websites to require specific browsers.

    For a corporate example:

    Suppose we have ExampleTechFirm, a huge investor in a private AI company, ShutAI. ExampleTechFirm happens to also make a web browser, Sledge. ExampleTechFirm could exert influence on ShutAI so that ShutAI adds rate limiting to all browsers that aren’t verified with ShutAI as the attester. Now, anyone who isn’t using Sledge is being given a degraded experience. Because attesting uses cryptographic signatures, you can’t bypass this user-hostile quality of service mechanism; you have to install Sledge.
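
    Nothing in the proposal technically prevents a server from keying its quality of service on who the attester was. A hypothetical sketch of that logic (every name here is made up):

    // Hypothetical verdict a server might extract after verifying a WEI attestation.
    interface AttestationVerdict {
        verified: boolean; // did the signature check out against a trusted attester?
        attester: string;  // which attester vouched for the browser
    }

    // Requests per minute a client is allowed, keyed on the attester.
    function rateLimitFor(verdict: AttestationVerdict): number {
        if (!verdict.verified) return 10;             // unverified browsers: crippled
        if (verdict.attester !== "ShutAI") return 60; // verified, but not via Sledge
        return 100_000;                               // effectively unlimited for Sledge users
    }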

    For a political example:

    Consider that I’m General Aladeen, the leader of the country Wadiya. I want to spy on my citizens and know what all of them are doing on their computers. I don’t want to start a revolt by making it illegal to own a computer without my spyware EyeOfAladeen, nor do I have the resources to do that.

    Instead, I enact a law that makes it illegal for companies to operate in Wadiya unless their web services refuse access to Wadiyan citizens that aren’t using a browser attested to by the “free, non-profit” Wadiyan Web Agency. Next, I have my scientists create and release renamed versions of Chromium and Firefox with EyeOfAladeen bundled in them. Those are the only two browsers that are attested to by the Wadiyan Web Agency.

    Now, all my citizens are being encouraged to unknowingly install spyware. Goal achieved!



  • Circular dependencies can be removed in almost every case by splitting out a large module into smaller ones and adding an interface or two.

    In your bot example, you have a circular dependency where (for example) the bot needs to read messages, then run a command from a module, which then needs to send messages back.

        v-----------\
      bot    command_foo
        \-----------^
    

    This can be solved by making a command conform to an interface, and shifting the responsibility of registering commands to the code that creates the bot instance.

        main <---
        ^        \
        |          \
        bot ---> command_foo
    

    The bot module would expose the Bot class and the Command interface. The command_foo module would import Bot and export a class implementing Command.

    The main function would import Bot and CommandFoo, and create an instance of the bot with CommandFoo registered:

    // bot module
    export interface Command {
        onRegister(bot: Bot, command: string): void;
        onCommand(user: User, message: string): void;
    }
    
    // command_foo module
    import {Bot, Command, User} from "bot";
    export class CommandFoo implements Command {
        private bot!: Bot; // assigned when the command is registered
    
        onRegister(bot: Bot, command: string) {
            this.bot = bot;
        }
    
        onCommand(user: User, message: string) {
            this.bot.replyTo(user, "Bar.");
        }
    }
    
    // main
    import {Bot} from "bot";
    import {CommandFoo} from "command_foo";
    
    let bot = new Bot();
    bot.registerCommand("/foo", new CommandFoo());
    bot.start();
    

    It’s a few more lines of code, but it has no circular dependencies, reduced coupling, and more flexibility. It’s easier to write unit tests for, and users are free to extend it with whatever commands they want, without needing to modify the bot module to add them.
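
    As an example of the testing benefit: because CommandFoo only talks to the outside world through the injected Bot, a test can hand it a fake one. Roughly, assuming the module shapes above:

    import {Bot, User} from "bot";
    import {CommandFoo} from "command_foo";

    const replies: string[] = [];
    // Fake bot that records replies instead of sending them anywhere.
    const fakeBot = { replyTo: (_user: User, message: string) => { replies.push(message); } } as unknown as Bot;
    const someUser = {} as User;

    const command = new CommandFoo();
    command.onRegister(fakeBot, "/foo");
    command.onCommand(someUser, "/foo");
    console.assert(replies[0] === "Bar.", "command should reply through the injected bot");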


  • This may be an unpopular opinion, but I like some of the ideas behind functional programming.

    An excellent example would be where you have a stream of data that you need to process. With streams, filters, maps, and (to a lesser extent) reduction functions, you’re encouraged to write maintainable code. As long as everything isn’t horribly coupled and lambdas are replaced with named functions, you end up with a nicely readable pipeline that describes what happens at each stage. Having a bunch of smaller functions is great for unit testing, too!

    But in Java… yeah, no. Java, the JVM, and Java bytecode are not optimized for that style of programming.

    As far as the language itself goes, the lack of suffix functions hurts readability. If we have code to do some specific, common operation over streams, we’re stuck with nesting. For instance,

    var result = sortAndSumEveryNthValue(2, 
                     data.stream()
                         .map(parseData)
                         .filter(ParsedData::isValid)
                         .map(ParsedData::getValue)
                    )
                    .map(value -> value / 2)
                    ...
    

    That would be much easier to read at a glance if we had a pipeline operator or something like Kotlin extension functions.

    var result = data.stream()
                    .map(parseData)
                    .filter(ParsedData::isValid)
                    .map(ParsedData::getValue)
                    .sortAndSumEveryNthValue(2) // suffix form
                    .map(value -> value / 2)
                    ...
    

    Even JavaScript has a proposed pipeline operator to solve this kind of nesting problem.

    And then we have the issues caused by the implementation of the language. Everything except primitives is an object, and only objects can be used with generics.

    Lambda functions? Short-lived instances of anonymous classes that implement some interface.

    Generics over a primitive type (e.g. HashMap<Integer, String>)? Short-lived boxed primitives that automatically desugar to the primitive type.

    If I wanted my functional code to be as fast as writing everything in an imperative style, I would have to trust that the JIT performs appropriate optimizations. Unfortunately, I don’t. There’s a lot that needs to be optimized:

    • Inlining lambdas and small functions.
    • Recognizing boxed primitives and replacing them with raw primitives.
    • Escape analysis and avoiding heap memory allocations for temporary objects.
    • Avoiding unnecessary copying by constructing object fields in-place.
    • Converting the stream to a loop.

    I’m sure some of those are implemented, but as far as benchmarks have shown, Streams are still slower in Java 17. That’s not to say that Java’s functional programming APIs should be avoided at all costs—that’s premature optimization. But in hot loops or places where performance is critical, they are not the optimal choice.

    Outside of Java but still within the JVM ecosystem, Kotlin actually has the capability to inline functions passed to higher-order functions at compile time.

    /rant


    Back when I was in school, we had typing classes. I’m not sure if that’s because I’m younger than you and they assumed we had basic computer literacy, or older than you and they assumed we couldn’t type at all. In either case, we used Macs.

    It wasn’t until university that we even had an option to use Linux on school computers, and that’s only because they have a big CS program. Those machines are also heavily locked-down Ubuntu installs that re-image the drive on boot, so it’s not like we could tinker much or learn how to install anything.

    Unfortunately—at least in North America—you really have to go out of your way to learn how to do things in Linux. That’s just something most people don’t have the time for, and there’s not much incentive driving people to switch.


    A small side note: I’m pretty thankful for Valve and the Steam Deck. I feel like it’s been doing a pretty good job teaching people how to approach Linux.

    By going for a polished console-like experience with game mode by default, people are shown that Linux isn’t a big, scary mish-mash of terminal windows and obscure FOSS programs without a consistent design language. And by also making it possible to enter a desktop environment and plug in a keyboard and mouse, people can explore a more conventional Linux graphical environment if they’re comfortable trying that.




  • It’s a “feature,” in fact…

    Under “What to expect” on this support page, it says:

    • The phone branding, network configuration, carrier features, and system apps will be different based on the SIM card you insert or the carrier linked to the eSIM.

    • The new carrier’s settings menus will be applied.

    • The previous carrier’s apps will be disabled.

    The correct approach from a UX perspective would have been to display an out-of-box experience wizard that gives the user an option to either use the recommended defaults, or customize what gets installed.

    Unfortunately, many manufacturers don’t do that, and just install the apps unconditionally and with system-level permissions. And even if they did, it’s likely that many of the carrier apps would either have a manifest value that requires them to be installed, be unlabeled (e.g. com.example.carrier.msm.mdm.MDM), or be misleadingly named to appear essential (e.g. “Mobile Services Manager”).


  • I bought an unlocked phone directly from the manufacturer and still didn’t get the choice.

    Inserting a SIM card wiped the phone and provisioned it, installing all sorts of carrier-provided apps with system-level permissions.

    As far as I’ve found, there’s a few possible solutions:

    • Unlock the bootloader and install a custom ROM that doesn’t automatically install carrier-provided apps. (Warning: This will blow the E-fuse on Samsung devices, disabling biometrics and other features provided by their proprietary HSM).

    • Manually disable the apps after they’re forcibly installed for you. Install adb on a computer and run adb shell pm disable-user --user 0 the.app.package for every app you don’t want. If your OEM ROM is particularly scummy, it might go out of its way to periodically re-enable some of them, though.

    • Find a SIM card for a carrier that doesn’t install any apps, then insert that into a fresh phone and hope that the phone doesn’t adopt the new carrier’s apps (or wipe the phone) when you insert your actual SIM.