• Orbituary@lemmy.world · 4 months ago

    Yeah. I work in the ransomware response sector of IT security. Frankly, I neither need nor want this bloat on my devices. Hopefully I can find a way to remove it.

    • chiisana@lemmy.chiisana.net · 4 months ago

      Where you work and what you do hardly matter in this case. Unless you choose to send your request to ChatGPT (or whatever future model gets included the same way), everything happens on device or in a temporary private compute instance that's discarded once your request is done. The on-device piece only uses Neural Engine resources when you actually invoke it, so the only "bloat", so to speak, is disk space; and it wouldn't surprise me if the models are only pulled from the cloud to your device when you enable the feature, just like Siri voices in different languages.
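
      A minimal sketch of the routing behavior described above, assuming made-up types (RequestDestination, AssistantRequest, route) purely for illustration, not Apple's actual APIs:

      import Foundation

      // Illustrative types only; not Apple's real API surface.
      enum RequestDestination {
          case onDevice                 // runs locally on the Neural Engine
          case privateCompute           // temporary server instance, discarded after the request
          case externalModel(String)    // e.g. ChatGPT, only with explicit user consent
      }

      struct AssistantRequest {
          let prompt: String
          let userOptedIntoExternalModel: Bool
          let fitsOnDevice: Bool        // small enough for the local model to handle
      }

      func route(_ request: AssistantRequest) -> RequestDestination {
          // External models are only used when the user explicitly opts in.
          if request.userOptedIntoExternalModel {
              return .externalModel("ChatGPT")
          }
          // Otherwise prefer the on-device model; fall back to a temporary
          // private compute instance that keeps nothing once the request is done.
          return request.fitsOnDevice ? .onDevice : .privateCompute
      }

      // Example: a local-only request never leaves the device.
      let destination = route(AssistantRequest(prompt: "Summarize my notes",
                                               userOptedIntoExternalModel: false,
                                               fitsOnDevice: true))
      print(destination)  // onDevice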