Oh, sure. I get that. Sending yourself reminders is absolutely understandable. Sending yourself documented evidence of your plans to defraud someone is entirely different.
Just an Aussie tech guy - home automation, ESP gadgets, networking. Also love my camping and 4WDing.
Be a good motherfucker. Peace.
In a 2017 email to himself, Smith calculated that he could stream his songs 661,440 times daily, potentially earning $3,307.20 per day and up to $1.2 million annually.
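For what it’s worth, those figures are internally consistent with a flat payout of half a cent per stream (my inference - the reporting only quotes the totals):

```python
# Back-of-the-envelope check of the figures quoted in the article.
streams_per_day = 661_440
daily_earnings = 3_307.20

# Implied flat per-stream payout (not stated in the article; derived here).
per_stream = daily_earnings / streams_per_day
print(f"${per_stream:.4f} per stream")   # → $0.0050 per stream

# Annualised, it lands right on the ~$1.2 million figure quoted.
annual = daily_earnings * 365
print(f"${annual:,.2f} per year")        # → $1,207,128.00 per year
```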
Great idea, but why would you email yourself about it?
Isn’t the picture from Logan?
Edit: oh, it’s called johntucker.jpg.
The casting bit is the missing piece for me.
I’ve built a RasPi with Kodi for our caravan, to use Plex and stream our free-to-air TV here in Australia (using Musk’s space innernets). I just miss being able to cast from my phone, for the occasional thing I can’t do with a Kodi add-on.
RAID5 and unlimited downloads on my 1Gbps fibre. All I back up is my library metadata itself, using a 2N+C strategy.
Name the band? Maybe some of us can pitch in.
Spread across a couple of NASes, each with 4 x 4TB drives in RAID5.
+1 to everything you just said - I’ve been using Immich for a little less (370 days, thanks to the same button). It’s feature-rich and rock solid.
Only thing I hope they add to the mobile app is the Years/Months/Days option, to make it easy to quickly group, then find, your photos. It’s the one thing that keeps me using my phone’s own Photos app (locally - no cloud sync).
Time and time again, we’ve proven the best weapon we have against corporate greed is our ability (and willingness) to share knowledge.
Do you have IDS/IPS turned on in pfSense? My OPNsense on my 1Gbps fibre will easily drop from an average of 900Mbps down to around 300-500Mbps if I turn on IDS.
I’m still using it via mbasic. It looks like shit, but I can get to my messages and reply, etc.
For channels I want to preserve, Tube Archivist. For individual videos, yt-dlp.
In your mobile browser, instead of m[dot]facebook[dot]com, try mbasic[dot]facebook[dot]com.
Very no-frills FB for mobile that lets you access Messenger. It looks like arse, but it beats using their spyware.
Yes - I do this with Pi-hole. It happens to be the same domain name that I host (very few) public services on too, so those DNS names work both inside and outside my network.
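For anyone wanting to do the same: Pi-hole’s local DNS records are just `IP hostname` pairs. You can add them in the web UI (Local DNS → DNS Records), which writes them to `/etc/pihole/custom.list`. The hostnames and IP below are made up for illustration:

```
# /etc/pihole/custom.list
# Internal clients resolve these names to the LAN reverse proxy;
# external clients get the public IP from public DNS instead.
192.168.1.10 photos.example.com
192.168.1.10 paperless.example.com
```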
It all depends on how you want to homelab.
I was into low power homelabbing for a while - half a dozen Raspberry Pis - and it was great. But I’m an incessant tinkerer. I like to experiment with new tech all the time, and am always cloning various repos to try out new stuff. I was reaching a limit with how much I could achieve with just Docker alone, and I really wanted to virtualise my firewall/router. There were other drivers too. I wanted to cut the streaming cord, and saving that monthly spend helped justify what came next.
I bought a pair of ex-enterprise servers (HP DL360s) and jumped into Proxmox. I now have an OPNsense VM for my firewall/router, and host over 40 Proxmox CTs, running (at a guess) around 60-70 different services across them.
I love it, because Proxmox gives me full separation of each service. Each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. On top of that, Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.
Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.
Let’s say there’s a new contender that competes with Immich. They offer the promise of a really cool feature no one else has thought of in a self-hosted personal photo library. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT), accessible via photos.domain on my home network.
I can spin up a Proxmox CT from my custom Debian template, use my Ansible playbook to provision Docker and all the other bits, access it in Portainer and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
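The playbook itself is nothing exotic - a minimal sketch of the Docker-provisioning part might look something like this (Debian assumed; the repo and key URLs are Docker’s standard ones, everything else is illustrative):

```yaml
# provision-docker.yml - minimal sketch; a real playbook would do a bit more.
- hosts: new_ct
  become: true
  tasks:
    - name: Install prerequisites
      ansible.builtin.apt:
        name: [ca-certificates, curl, gnupg]
        update_cache: true

    - name: Ensure the keyrings directory exists
      ansible.builtin.file:
        path: /etc/apt/keyrings
        state: directory
        mode: "0755"

    - name: Add Docker's signing key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/debian/gpg
        dest: /etc/apt/keyrings/docker.asc
        mode: "0644"

    - name: Add Docker's apt repository
      ansible.builtin.apt_repository:
        repo: "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable"

    - name: Install Docker Engine
      ansible.builtin.apt:
        name: [docker-ce, docker-ce-cli, containerd.io, docker-compose-plugin]
        update_cache: true
```

From there, Portainer (or plain `docker compose`) takes over for the service itself.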
I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos.domain hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shut down, maybe not - just in case I discover something I don’t like about the new kid on the block.
That’s a simplified example, but hopefully illustrates at least what I get out of using Proxmox the way I do.
The con for me is the cost. The initial cost of the hardware, and the cost of powering beefier kit like this. I’m about to invest in some decent centralised storage (been surviving with a couple of li’l ARM-based NASes) so I can get true HA with my OPNsense firewall (and a few other services), so that’s more cost again.
I’ve written my wiki so that, if I end up shuffling off this mortal coil, my wife can give access to one of my brothers and they can help her by unpicking all the smart home stuff.
I’m using self-hosted Wiki.js and draw.io. Works a treat, and trivial to back up with everything in Postgres.
It doesn’t have to be hard - you just need to think methodically through each of your services and assess the cost of creating/storing the backup strategy you want versus the cost (in time, effort, inconvenience, etc) if you had to rebuild it from scratch.
For me, that means my photo and video library (currently Immich) and my digital records (Paperless) are backed up using a 2N+C strategy: a copy on each of 2 NASes locally, and another copy stored in the cloud.
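In script form, 2N+C boils down to “copy to both NASes, then push one copy to the cloud”. A minimal sketch - the paths and the rclone remote name are placeholders, and the cloud step is shown as an rclone call purely as an example:

```python
"""Minimal sketch of a 2N+C backup: two local NAS copies plus one cloud copy.

Paths and the rclone remote name are placeholders - substitute your own.
"""
import shutil
import subprocess
from pathlib import Path

NAS_TARGETS = [Path("/mnt/nas1/backups"), Path("/mnt/nas2/backups")]  # the "2N"
CLOUD_REMOTE = "b2:my-backups"  # the "+C" (hypothetical rclone remote)


def backup(archive: Path) -> None:
    # 2N: one copy on each NAS.
    for target in NAS_TARGETS:
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(archive, target / archive.name)
    # +C: one copy in the cloud (requires rclone installed and configured).
    subprocess.run(["rclone", "copy", str(archive), CLOUD_REMOTE], check=True)
```

Usage is just `backup(Path("/path/to/immich-db.dump"))` from a nightly cron job or systemd timer.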
Ditto for backups of my important homelab data. I have some important services (like Home Assistant, Node-RED, etc) that push their configs into a personal GitLab instance each time there’s a change. So, I simply back that GitLab instance up using the same strategy. It’s mainly raw text in files and a small database of git metadata, so it all compresses really nicely.
For other services/data that I’m less attached to, I only back up the metadata.
Say, for example, I’m hosting a media library that might replace my personal use of services that rhyme with “GetDicks” and “Slime Video”. I won’t necessarily back up the media files themselves - that would take way more space than I’m prepared to pay for. But I do back up the databases for that service, which tell me what media files I had, and even the exact names of those files when I “found” them.
In a total loss of all local data, the inconvenience factor would be quite high, but the cost of storing full backups would far outweigh it. Using the metadata I do back up, I could theoretically just set about rebuilding the media library from there. If I were hosting something like that, that is…
I pay for Usenet - not my fault if they don’t pass it on.
Joking aside, like some others have said, I support many artists via Bandcamp.
Lol @ “some 20 years ago … ADSL from 2002”. Thanks for making me feel old!