Good morning all, in today’s episode of “What I learned during work hours”…

I was playing around with wxHexEditor and realised that if something catastrophic happened, I would really struggle with any data recovery if I lost the inode tables for any drive.

A quick duckle pointed me to e2image, whose man page says:

It is a very good idea to create image files for all file systems on a system and save the partition layout (which can be generated using the fdisk -l command) at regular intervals — at boot time, and/or every week or so.
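
For the one-off version of that advice, the commands look roughly like this (device and file names are placeholders, not my actual layout):

```
# Save the partition layout as plain text
fdisk -l > partition-layout.txt

# Dump only the ext2/3/4 metadata (superblock, group descriptors, inode tables, etc.)
e2image /dev/sdb1 sdb1.e2i

# If disaster strikes, write the saved metadata back to the device
e2image -I /dev/sdb1 sdb1.e2i
```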

I couldn’t find any prebuilt solutions for this online, so I wrote a systemd service and timer to do it for me. I save the fdisk output to a text file, run e2image on a couple of drives, and compress it all together into a dated 7z that can get uploaded via rsync, Mega, Dropbox, etc.
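
The rough shape of it looks like this (unit names, paths, and the device list here are placeholders rather than my real setup):

```
# /etc/systemd/system/metadata-backup.service
[Unit]
Description=Back up ext4 metadata and partition layout

[Service]
Type=oneshot
ExecStart=/usr/local/bin/metadata-backup.sh

# /etc/systemd/system/metadata-backup.timer
[Unit]
Description=Run the metadata backup weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

#!/bin/bash
# /usr/local/bin/metadata-backup.sh
set -euo pipefail

stamp=$(date +%F)
work=$(mktemp -d)

# Partition layout, as the e2image man page suggests
fdisk -l > "$work/partition-layout.txt"

# One metadata image per ext filesystem (placeholder devices)
for dev in /dev/sda1 /dev/sdb1; do
    e2image "$dev" "$work/$(basename "$dev").e2i"
done

# Bundle everything into a dated, heavily compressible archive
7z a "/backups/metadata-$stamp.7z" "$work"/*
rm -rf "$work"
```

Enable it with systemctl enable --now metadata-backup.timer and the dated archive is ready to rsync wherever.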

The metadata image from a 500 GB drive is 8 GB, but compresses down to 40 MB. The backup takes a couple of minutes.

Unfortunately this does not work with my RAID drives, but they are RAID1 and therefore already resilient.

Update: apparently I was being a derp somehow, and it does work after all. Anyways:

My RAID drives are 16 TB, the e2image of them is 125 GB, and 7z’d it comes down to just 63 MB.

I’ll post the service, timer, and backup script in a comment, let me know if you can spot anywhere for improvements!

  • Quazatron@lemmy.world · 1 year ago

    I’m really curious as to why you’d go to all this trouble instead of using a proper file-level backup and restore solution.

    • NeoNachtwaechter@lemmy.world · 1 year ago

      instead of using a proper file level backup

      Backups do not solve everything.

      For example, I once had a bad cable, and it did a kind of sneaking, silent damage: say 5 or 50 broken files every day. Only after some weeks did I notice some of them, and there was hardly a chance to identify them each day. Sometimes there was damage to the file system, too. It took a while to find the root cause.

      Today I use ZFS with redundancy; it does the recovery all by itself, and my sleep is so much better :-)
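
      A minimal version of that setup is just a mirrored pool plus periodic scrubs, something like this (pool name and devices are only placeholders):

      ```
      # Two-disk mirror: every block is checksummed and has a second copy
      zpool create tank mirror /dev/sda /dev/sdb

      # Scrub periodically so silent corruption is detected and repaired from the good copy
      zpool scrub tank
      zpool status tank
      ```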

      • Quazatron@lemmy.world · 1 year ago

        “Proper backups” imply that you have multiple backups and a backup strategy. That could mean, for instance, doing a full backup, then an incremental/differential backup each week, and keeping one backup for each month. A bad cable would cause you trouble, no doubt, but the impact would be lessened by having multiple backup points spread over months.
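
        As a rough sketch with plain GNU tar (paths and archive names are just placeholders), the first run against an empty snapshot file is the full backup, and later runs only store what changed:

        ```
        # Full backup: the snapshot file is created fresh, so everything gets archived
        tar --create --file=full.tar --listed-incremental=home.snar /home

        # Weekly incremental: only files changed since the previous run are archived
        tar --create --file=incremental-week1.tar --listed-incremental=home.snar /home
        ```

        Rotation could then mean keeping one full archive per month and starting a new snapshot file alongside it.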

        Redundancy is not backup. Read that again.

        Redundancy is important for system resilience, but backup is crucial for continuity. Every filesystem is subject to bugs and ZFS is not special. Here’s an article from a couple of days ago. If you’re comfortable with no backups just because you have redundancy, more power to you. I wouldn’t be.

          • Quazatron@lemmy.world · 1 year ago

            Sure, all the work you do between the moment of the filesystem failure and the last backup is gone. There’s nothing that can be done to mitigate that, other than more frequent backups and/or a synchronized (mirror) system.

            Backups are just a simple way to keep you from having to explain to your partner that you lost all the pictures and videos you took over the years.

    • ∟⊔⊤∦∣≶@lemmy.nzOP · 1 year ago

      For fun and learning. It’s just another tool to go with file level backup.

      And the backup for this is 40 MB and really fast, whereas backing up the files, even compressed, would be hundreds of GB, maybe terabytes, and then you’re paying for that amount of storage online somewhere and uploading for hours…

      • Quazatron@lemmy.world · 1 year ago

        Picture this: you open and edit one of your documents and save it.

        The filesystem promptly allocates some blocks and updates the inodes. Maybe the inode table changed, maybe not. Repeat for some other files. Now your “inode backup” has a completely different picture of what is going on on your disk. If you try to recover the disk using it, all you will achieve is further corruption of the filesystem.