• JetpackJackson@feddit.de · 3 months ago

    Do you have a source for the ext4 failure stuff? I use ext4 currently and want to see if there’s something I need to do now other than frequent backups.

    • kurushimi@lemmyonline.com · 3 months ago

      I used ext4 extensively in an HPC setting a few jobs ago (many petabytes). Some of the server clusters were in areas with very unreliable power grids, like Indonesia. Running fsck.ext4 became our bread and butter, but it was also nerve-wracking, because in the worst failures, the ones involving power loss or failed RAID cards, we sometimes didn’t get clean fscks. Most often this meant loss of file metadata, which was a pain to recover from. To its credit, as another quote in this thread mentions, fsck.ext4 has a very high success rate, but honestly you shouldn’t need to intervene manually as a filesystem admin in an ideal world. That’s the sort of thing next-gen filesystems attempt to provide.
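      If you want to get a feel for the fsck workflow described above without touching a live disk, you can run it against a throwaway loopback image instead (a sketch, assuming e2fsprogs is installed; `test.img` is just a scratch file name, not anything from the thread):

      ```shell
      # Create a small scratch image and format it as ext4
      # (-F lets mkfs.ext4 format a regular file instead of a block device)
      dd if=/dev/zero of=test.img bs=1M count=16 status=none
      mkfs.ext4 -F -q test.img

      # -f forces a full check even if the filesystem is marked clean;
      # -p ("preen") automatically repairs anything that is safe to fix
      e2fsck -f -p test.img

      # e2fsck exit codes: 0 = clean, 1 = errors corrected,
      # 2 = corrected but reboot advised, 4 = errors left uncorrected
      echo "e2fsck exit code: $?"
      rm -f test.img
      ```

      On real hardware the same `e2fsck -f -p /dev/…` invocation is what you’d run against an unmounted partition; the interactive mode (without `-p`) is what makes the bad nights nerve-wracking.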

    • sep@lemmy.world · 3 months ago

      Not seen an fs corruption yet. But I have only run ext4 on around 350 production servers since 2010-ish.
      Have of course seen plenty of hardware failures. But if a disk is doing the clicky, it is not another filesystem that saves you.

      Have regularly tested backups!

    • TCB13@lemmy.world · 3 months ago

      Well, a few years ago I actually did some research into that but didn’t find much. What I said was my personal experience, but now we also have companies like Synology pushing Btrfs for home and business customers, and they have analytics on that for sure… since they’re trying to move everything…