A year ago I built a NAS to reduce my reliance on cloud services and set up an arr stack. I went with TrueNAS Scale, which was on Bluefin at the time. In the past 12 months, TrueNAS Scale has been through FOUR major OS versions, with a fifth already announced. At least one of those involved a release-train switch, so despite diligently checking for updates in the dashboard, I was left in the dust on an obsolete OS and didn’t find out until upgrading was already a huge hassle.

I’ve been really happy with the utility of this setup, but holy smokes, how is anybody supposed to keep up with all of this? This is far from my only hobby, and I simply do not have the time, patience, or interest for a constant race to vet new release versions and fix whatever breaks every three weeks. I have enough tinkering hobbies as it is.

On top of that, there’s the whole blow-up with TrueCharts, which has also left me with an entire suite of obsolete albatrosses hanging off my NAS that I need to deal with. Am I still waiting for them to figure out an upgrade path? I don’t even know anymore.

Sorry for the rant, but I guess what I’m looking for is this: how do you keep up with the constant maintenance and updates? And where do I go from here, in February 2025, with a system running Bluefin 22.12, a 32 TB ZFS pool (RAIDZ1) that has to remain intact, and a handful of TrueCharts apps whose data I don’t want to lose (e.g. Jellyfin configs/watch history)?
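
Whatever the answer turns out to be, I’m assuming step zero is a recursive snapshot of the pool before I touch anything, so nothing is unrecoverable. A minimal sketch of what I have in mind, written in Python for legibility (the pool name is mine, and none of this is official TrueNAS tooling):

```python
#!/usr/bin/env python3
"""Pre-migration safety snapshot -- a sketch, not TrueNAS tooling."""
import subprocess
from datetime import datetime

POOL = "tank"  # placeholder: substitute your RAIDZ1 pool's name

def snapshot_pool(pool: str) -> str:
    """Recursively snapshot the pool and every dataset under it."""
    name = f"{pool}@pre-migration-{datetime.now():%Y%m%d-%H%M%S}"
    # `zfs snapshot -r` atomically snapshots the dataset and all children
    subprocess.run(["zfs", "snapshot", "-r", name], check=True)
    return name

if __name__ == "__main__":
    snap = snapshot_pool(POOL)
    print(f"created {snap}; verify with: zfs list -t snapshot")
```

Snapshots won’t save me from a botched pool import, but they at least make app-level mistakes (like a bad Jellyfin migration) reversible.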

  • Onomatopoeia@lemmy.cafe · 3 hours ago

    In the business world it’s pretty common to do staged or switchover upgrades: test the new version in a lab environment and iron out the install/config details, then upgrade a single production server and test with a small group of users. Or build new servers with the new stuff and have a set of users run on them for a while; that way you can always move those users back to a known-good server.
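
    Even at home, the “test it first” step can be a scripted smoke test you run against the lab box before rolling anything forward. A rough sketch, assuming a Jellyfin test instance (recent versions expose a /health endpoint; the hostname and port here are placeholders):

    ```python
    #!/usr/bin/env python3
    """Post-upgrade smoke test -- a sketch of the "test in a lab first" step."""
    import sys
    import urllib.request

    CHECKS = {
        # service name -> URL on the lab machine (placeholders)
        "jellyfin": "http://lab-nas.local:8096/health",
    }

    def check(name: str, url: str) -> bool:
        """Return True if the service answers its health check."""
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                status = resp.status
                body = resp.read().decode().strip()
        except OSError as exc:  # URLError/HTTPError are OSError subclasses
            print(f"FAIL {name}: {exc}")
            return False
        print(f"OK   {name}: {status} {body}")
        return True

    if __name__ == "__main__":
        results = [check(n, u) for n, u in CHECKS.items()]
        sys.exit(0 if all(results) else 1)
    ```

    If that exits non-zero, you fix the lab box instead of your production data.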

    How do you do this at home? VMs for lots of stuff, or duplicate hardware for NAS-type stuff (I’ve read of people running TrueNAS in a VM).

    To borrow from the preparedness community: if you have one you have none; if you have two you have one. As an example, the business world often runs mission-critical systems redundantly in regionally separate data centers, so a storm won’t take them down. The question is how to reproduce this idea in a home lab environment.

    • skilltheamps@feddit.org · 1 hour ago

      This is not practical for a home setup. Not because the extra hardware would be expensive, but because as soon as you have multiple systems doing the same thing, their state diverges, and for pretty much anything that is popular for selfhosting you cannot merge them again or migrate users between them without losing something. Distributed databases alone are a huge pita, and maintaining such redundant setups would be a million times more effort than just making sure that you can easily and quickly atomically roll back failed updates.
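
      To make “atomically roll back” concrete: snapshot before the update, revert on failure. A sketch of the pattern, assuming app state lives on a ZFS dataset (the dataset name and update command are placeholders, and in practice you’d stop the app before rolling back):

      ```python
      #!/usr/bin/env python3
      """Snapshot-then-rollback around an update -- a sketch of the pattern."""
      import subprocess

      DATASET = "tank/apps"  # placeholder dataset holding app state
      SNAP = f"{DATASET}@pre-update"

      def run_update_with_rollback(update_cmd: list[str]) -> None:
          subprocess.run(["zfs", "snapshot", SNAP], check=True)
          try:
              subprocess.run(update_cmd, check=True)
          except subprocess.CalledProcessError:
              # Update failed: atomically revert the dataset to the snapshot.
              # Note: `-r` destroys any snapshots newer than SNAP.
              subprocess.run(["zfs", "rollback", "-r", SNAP], check=True)
              raise
          else:
              # Update succeeded: the safety snapshot is no longer needed.
              subprocess.run(["zfs", "destroy", SNAP], check=True)

      if __name__ == "__main__":
          run_update_with_rollback(["echo", "your-update-command-here"])
      ```

      One machine plus that pattern beats two machines with diverging state.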

      • Onomatopoeia@lemmy.cafe · 24 minutes ago

        As I said: “how to reproduce this in a home setup.”

        I’m running multiple machines, paid little for all of them, and they all run at pretty low power. I replicate stuff on a schedule, and I have a cloud backup I verify quarterly.
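
        The quarterly verify is nothing fancy: restore a sample from the cloud copy and compare hashes against the originals. Roughly this shape (both paths are placeholders):

        ```python
        #!/usr/bin/env python3
        """Quarterly backup verify -- a sketch of a restore spot-check."""
        import hashlib
        from pathlib import Path

        ORIGINAL = Path("/mnt/tank/media")      # placeholder: live data
        RESTORED = Path("/tmp/restore-sample")  # placeholder: restored sample

        def sha256(path: Path) -> str:
            """Hash a file in 1 MiB chunks to keep memory use flat."""
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def verify() -> bool:
            ok = True
            for restored in RESTORED.rglob("*"):
                if not restored.is_file():
                    continue
                original = ORIGINAL / restored.relative_to(RESTORED)
                if not original.is_file() or sha256(original) != sha256(restored):
                    print(f"MISMATCH: {restored}")
                    ok = False
            return ok

        if __name__ == "__main__":
            print("sample verified" if verify() else "verification FAILED")
        ```

        A backup you’ve never restored from is a hope, not a backup.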

        If OP is thinking about how to ensure uptime (however they define it) and prevent downtime due to upgrades, then looking at how enterprise does things (the people who apply research on this very subject from universities and organizations like Microsoft and Google) would be useful.

        Nowhere did I tell OP to do things this way, and I’d thank you to not make strawmen of my words.