

So much this.
Why is Signal hosted in one location on AWS, for example? That’s the sort of thing that should be hosted in multiple places around the world with automatic failover.


Depends on who we’re talking about. Finance orgs are all about legal contracts and would be able to hold Amazon’s feet to the fire.
You don’t want to go to court against a finance company, or any very large org where contract law is their bread and butter (basically any large multinational corp).
Amazon’s not hosting just small operations.


Much of this stuff is automatic. I’ve worked with such contracted services where uptime is guaranteed: the contracts dictate the terms and conditions for refunds, we see them on a monthly basis whenever uptime is missed, and none of it is done by a person (the sketch below shows roughly how such credits get computed).
I imagine many companies have already seen refunds for the outage time, and Amazon scrambled to stop the automation around this.
They’ll have little to stand on in court for something this visible and extensive, and could easily lose their shirt in fines and penalties when a big client sues over breach of contract and then chooses not to renew.
Just because they’re big doesn’t mean all their clients are small or don’t have legal teams of their own.
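For the curious, here’s a minimal sketch of that kind of automated credit calculation. The tiers mirror AWS’s published EC2 SLA (10% / 30% / 100% service credits as monthly uptime falls); the 15-hour outage is just an example number I picked, not a figure from any contract.

```python
# A rough illustration of automated SLA service credits, not any
# specific contract. Tiers mirror AWS's published EC2 SLA
# (10% / 30% / 100% credits as monthly uptime falls).
MINUTES_PER_MONTH = 30 * 24 * 60

def monthly_uptime_pct(downtime_minutes: float) -> float:
    """Convert downtime in a 30-day month to an uptime percentage."""
    return 100.0 * (1 - downtime_minutes / MINUTES_PER_MONTH)

def service_credit_pct(uptime_pct: float) -> int:
    """Map monthly uptime to a service-credit percentage."""
    if uptime_pct >= 99.99:
        return 0
    if uptime_pct >= 99.0:
        return 10
    if uptime_pct >= 95.0:
        return 30
    return 100

# Example: a 15-hour outage in one month.
uptime = monthly_uptime_pct(downtime_minutes=15 * 60)
print(f"uptime {uptime:.2f}% -> {service_credit_pct(uptime)}% credit")
# uptime 97.92% -> 30% credit
```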


I just added a 30mm case fan to my SFF. I went with a compressor style given the space constraints and restrictions (they tend to draw more current because of the load).
It increased draw by <1 watt - it barely registers on the meter.
I don’t think fans really make much difference. My 120mm compressor-style fan is 4w on the label (which is peak load, like startup); it probably drops back to about 0.5w after startup. And that’s a huge fan in a compressor style, equivalent to a 200mm+ conventional fan. (Rough math below.)
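The back-of-the-envelope math, with assumed (not measured) label and steady-state currents for a standard 12 V fan:

```python
# Back-of-the-envelope fan wattage: P = V * I. The currents below are
# assumed typical label/steady-state values, not measurements.
volts = 12.0          # standard case-fan rail
label_amps = 0.33     # label rating, roughly startup/peak draw
steady_amps = 0.04    # typical draw once the fan is spinning

print(f"peak:   {volts * label_amps:.1f} W")   # ~4.0 W, matches the label
print(f"steady: {volts * steady_amps:.2f} W")  # ~0.48 W
```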


Yea, it’s the end of the world with Signal.
Having such a dependency just exposes yet another way their story doesn’t add up, like dropping SMS support because of engineering costs. Apparently SMS is so hard to do that there are free SMS apps.
I can’t trust them at this point.
And how does E2E require a middleman?
More like it’s their store-and-forward servers. Why those are on AWS, or more importantly not distributed with automatic failover, is a major fail, as in a “get fired” level of failure.


Ffs Signal went down with AWS??
Fossify Calendar can do this, but it’s manual, so you’d have to change the color for the event by editing the event.
Neat idea, though, coloring events by attendee - but that would only work for a single person, which is why it’s done by calendar, the thinking being that a calendar can be for a given subject.
I have calendars for myself, for household stuff, shared calendars get a unique color, etc. It’s pretty easy to move events to another calendar in Fossify: just open the event, touch Calendar (near the bottom), and select the appropriate calendar. Or choose the correct calendar when setting up the event.
I have calendars on multiple services (approx 10 calendars) - Yahoo (I know, right?), Gmail, mailbox, etc. - with more than one calendar on each service. Colors work fine with DavX and Fossify, even with Thunderbird on my laptop.


You could do this with a PDF editor, then print to PDF so it’s a new file.
Or use a PDF-to-Word converter (or similar), which would let you remove such things, though that can be tricky.
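If you’d rather script it, here’s a minimal sketch using the pypdf library (my choice of tool, not the only one): it rewrites the pages into a fresh file, dropping annotations along the way, which is similar in spirit to the print-to-PDF trick. Filenames are placeholders.

```python
# A minimal sketch using pypdf (pip install pypdf): rewrite each page
# into a brand-new file, dropping annotations (comments, highlights,
# form widgets) as we go. Filenames are placeholders.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("original.pdf")
writer = PdfWriter()

for page in reader.pages:
    if "/Annots" in page:   # annotations live under this key
        del page["/Annots"]
    writer.add_page(page)

# The new file also gets fresh document metadata, much like print-to-PDF.
with open("clean.pdf", "wb") as f:
    writer.write(f)
```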


Can you be more specific?
I first ran Proxmox on it (which ran fine, just overkill for my use case).
Now it’s Windows Server, and anything I do on it is done in a VM via VMware Workstation (since it’s free). So the host OS doesn’t see much change, and any change that breaks things can be rolled back via a VM snapshot. Proxmox with ZFS would be better for this, but I don’t need it yet.
You could run any Linux distro on it, then use KVM for virtual machines and also Docker for things like PiHole and Jellyfin (see the sketch below).
There’s a million ways to skin a cat, though I like using VMs so that if I need to move a service I just copy the VM to a new box. Even my Docker stuff is in a VM for just this reason.
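As a rough sketch of the Docker route mentioned above, here’s what spinning up PiHole and Jellyfin might look like via the Docker SDK for Python (pip install docker). Images, ports, and the media path are common published defaults, not anything from this thread.

```python
# A rough sketch using the Docker SDK for Python (pip install docker).
# Assumes Docker is already installed and running on the Linux host.
import docker

client = docker.from_env()

# Pi-hole: DNS on 53, web UI remapped to 8080 on the host.
client.containers.run(
    "pihole/pihole:latest",
    name="pihole",
    detach=True,
    ports={"53/tcp": 53, "53/udp": 53, "80/tcp": 8080},
    environment={"TZ": "UTC"},
    restart_policy={"Name": "unless-stopped"},
)

# Jellyfin: web UI on 8096, media mounted read-only.
client.containers.run(
    "jellyfin/jellyfin:latest",
    name="jellyfin",
    detach=True,
    ports={"8096/tcp": 8096},
    volumes={"/srv/media": {"bind": "/media", "mode": "ro"}},
    restart_policy={"Name": "unless-stopped"},
)
```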


I’ve run Proxmox on it, but it was overly complex and overkill for my use case.
Right now the host OS is Windows Server running VMware Workstation. PiHole runs in a VM (DietPi), which auto-starts on reboot (as does my general-purpose VM running Jellyfin). Fast setup; it runs as my DC, with VMs as needed and enough performance (though not as much as I’d like for my virtualization goals).
The next box will be my own build, since this one is limited on physical space and I have a couple of old cases with plenty of room.


For me Quicksync converts videos anywhere from 4x to 10x faster than using the GPU.
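For anyone who wants to try it, here’s a minimal sketch of driving Quick Sync from Python through ffmpeg’s QSV support. This assumes an ffmpeg build compiled with QSV and an Intel iGPU; filenames are placeholders.

```python
# A minimal sketch: driving Intel Quick Sync through ffmpeg's QSV
# support from Python. Assumes an ffmpeg build compiled with QSV and
# an Intel iGPU; filenames are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-hwaccel", "qsv",      # decode on the iGPU
        "-i", "input.mkv",
        "-c:v", "h264_qsv",     # encode on the iGPU as well
        "-preset", "fast",
        "output.mp4",
    ],
    check=True,
)
```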


For business, ECC is definitely required; I really don’t see it being needed for home use.
I’ve never run it on home boxes. I’ve had a Windows domain at home since the ’90s using desktop hardware, and it’s as stable as any SMB setup I’ve seen running on enterprise-grade hardware.


I’m using a 2019 Dell OptiPlex SFF.
With the current 8TB data drive it idles at 18w, but being Intel it can convert or transcode very quickly.
With the previous 2TB drive it idled at 12w - little more than a Pi, but far more capable.
I run my PiHole on it, plus Jellyfin, HandBrake, etc. It also has 4 VMs under VMware for some other stuff as needed (mostly testing).
Hard to beat the bang for the buck, or per watt.


Others have mentioned SFF desktops.
My current server is an old Dell OptiPlex SFF desktop. It idles at just under 20w and peaks at 80. It currently has an NVMe boot drive and an 8TB 3.5" drive.
Runs like a champ: it easily serves Jellyfin video, with transcoding, while converting videos with HandBrake (and with 2 other systems converting videos off that drive over the network).
For cost, internal space, options, and power, it’s hard to beat an SFF. If you don’t need the internal space or conversion power, then a NUC can work (the lack of sufficient cooling limits its converting capabilities).


Two ways I like to get files to/from a mobile device: Syncthing (Möbius Sync on iOS) or Resilio Sync.
Resilio has a great feature, Selective Sync, that enables arbitrarily syncing files from a remote location. Nice for grabbing specific files when needed.
Unfortunately, neither one handles any kind of reading status; they’re just file sync.


I kind of figured, because why wouldn’t you want that?


When you say sync, do you mean just the books, or reading status (what page you’re on)?


What am I looking at here?


Well, TrueNAS is a NAS OS built around ZFS (which is where the RAID-like features come from), and pretty much any Linux distro can run ZFS.


Google has always been evil. Why else was their motto “Don’t be evil”?
If you have to make such a disclaimer…