

podman exists and doesn’t force root…
It’s yours, so no issues trusting a public instance with your searches. Pages full of settings to tweak as you like. Fewer problems with an algorithm ‘helping’ you. It aggregates results from multiple search engines that you choose; you can set up your own (or a curated) block list of crappy AI-slop sites, so if you don’t like fandom.com or something, gone. Manage your own bangs, e.g. !aa for annas-archive. Pipe it through a VPN with gluetun for better isolation. If you’ve got your head around docker already it’s more like half an hour to set up (rough sketch below), so why not?
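If it helps, a minimal sketch of the docker side (image name, port and settings path are as I recall from the searxng/searxng image docs; the host path and port here are just placeholders, and podman works the same):

```sh
# minimal, non-hardened sketch: SearXNG on localhost:8080, settings kept on the host
mkdir -p ./searxng
docker run -d --name searxng \
  -p 127.0.0.1:8080:8080 \
  -v "$(pwd)/searxng:/etc/searxng" \
  docker.io/searxng/searxng:latest
```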
Can hook it up to perplexica and a local LLM for a fully local AI search that you define, use it as an MCP server, do deep research with it…
Surprised no one pitched Elite: Dangerous. Certainly a labour of love to begin with: incredibly talented sound design, the first space sim to support VR, truly devoted to the original material… Still going pretty strong (it had a few weak years) with updates a decade later, and only two of the major, DLC-ish updates were paid…
Privately owned (no shareholders), small team that seems to love what they’re doing; they can likely go on forever. Strange how a lack of shareholders correlates with good games (/platforms, Valve!), isn’t it?
I do it with a gluetun container (more versatile) with zero issues, but you can just mainline wireguard as an interface if you prefer; that also works fine on bazzite.
know truth from fiction.
You jest, but…
You are aware that Netflix et al. put compression on their streams (usually quite a bit, in terms of bitrate)? Blu-ray rips etc. of better quality are often available on the high seas…
While I generally agree and consider this insightful, it behooves us to remember the (actual, 1930s) Nazis did it with newspapers, radio and rallies (… in a cave, with a box of scraps).
Not sure I’d use either, but may I suggest both, with a clear caveat that one or both may disappear or change? Throw stuff at the wall, see what sticks…
Thinkpads have long had first-tier linux support; in fact many models have shipped with linux for at least a decade (?), and checking that is a really good way to be sure, but you’re going to be fine with the W, P, T and X lines; with so many enthusiasts running them, many hands make light work. They were deployed (might still be) to Red Hat kernel devs for a long time, which helps things along. Fingerprint drivers tend to be proprietary and hit or miss, but passwords work.
Honestly, learning to install linux yourself and configure it to your liking is, imo, a really important part of the learning path, and you’re likely doing yourself a disservice by avoiding it. It’s part of the escape from vendor lock-in that you want. Installation is surprisingly easy now: start with something simple (Mint is often recommended these days), find a decent, recent youtube walkthrough, and you’ll probably be up and running in an hour. Find the apps you need for your workflow (which will take considerably longer). Get familiar with the terminal. The best thing you can do after that is burn it down and install a new distro, leaving any mistakes behind and keeping your list of apps. Arch if you want to get really deep into it, or Fedora / Bazzite are good choices and very stable. Best of luck.
Perhaps not saved, but I’d venture it’s the most significant nail in the coffin of the scientific publishing mafia so far, pursued with integrity and honor. The rise of open publishing that followed is very telling, and in my mind directly attributable to Alexandra’s work and its popularity; they know they need to adapt or (probably and) die.
Still need to work on the publish or perish mentality, getting negative results published, and getting corporate propaganda out of the mix, to name a few.
You can cycle the smaller drives to cold backup; that’s not a waste. You do have backups, which RAID is not, right?
Sure, it works fine for inference with tensor parallelism; USB4 / thunderbolt 4/5 is a better bet than ethernet (40Gbit+ and already there; see distributed-llama). Trash for training / fine-tuning though, which needs much higher inter-GPU bandwidth, or better yet a single GPU with more VRAM.
I run a gluetun docker (actually two, one local and one through Singapore) clientside, which is generally regarded as pretty damn bulletproof kill-switch-wise. The arr stack etc. uses this network exclusively. This means I can use foxyproxy to switch my browser up on the fly, bind things to tun0/tun1 etc., and still have direct connections as needed; it’s pretty slick (rough sketch below).
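Rough sketch of the Singapore instance, assuming a wireguard-capable provider; the provider, key and country values are placeholders (gluetun’s wiki lists the exact variables per provider), but NET_ADMIN, HTTPPROXY and port 8888 are from its docs as I remember them:

```sh
# gluetun owns the network namespace; if the tunnel drops, nothing inside it leaks
docker run -d --name gluetun-sg \
  --cap-add=NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=mullvad \
  -e VPN_TYPE=wireguard \
  -e WIREGUARD_PRIVATE_KEY="$WG_KEY" \
  -e SERVER_COUNTRIES=Singapore \
  -e HTTPPROXY=on \
  -p 127.0.0.1:8888:8888 \
  qmcgaw/gluetun

# anything that must never touch the raw connection joins that namespace
docker run -d --name qbittorrent \
  --network=container:gluetun-sg \
  lscr.io/linuxserver/qbittorrent

# foxyproxy in the browser then just points at 127.0.0.1:8888
```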
Don’t sleep on switching to nvme.
The old adage is never use v x.0 of anything, which I’d expect to go double for data integrity. Is there any particular reason ZFS gets a pass here (speaking as someone who really wants this feature)? TrueNAS isn’t merging it for a couple of months yet, I believe.
Yup (although minutes seems long, and depending on usage weekly might be fine). You can also combine it with updates, which require going down anyway.
You’ll be wanting sudo ostree admin pin 1 seeing as 0 was broken. Double check with rpm-ostree status.
Proceed to rpm-ostree update; if that does nothing, it means 0 is up to date. Personally I’d just wait for a new update using the working deployment, but you can blow away 0 and get it again if you’re keen.
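For reference, the whole dance is roughly this (assuming, as above, that 0 is the broken deployment and 1 is the one you booted into):

```sh
rpm-ostree status        # list deployments; the booted one is marked with ●
sudo ostree admin pin 1  # keep the known-good deployment around until you unpin it
rpm-ostree update        # pulls a fresh deployment; does nothing if 0 is already current
```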
Basically, you want to shut down the database before backing up. Otherwise your backup might be taken mid-transaction, i.e. broken. If it’s docker you can just docker-compose down it, back up, and then docker-compose up, or equivalent.
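Something along these lines (service path, data dir and backup destination are placeholders for whatever your setup looks like):

```sh
# stop the stack so the database files are quiescent, copy them, bring it back up
cd /opt/myservice
docker-compose down
tar czf "/backups/myservice-$(date +%F).tar.gz" ./data
docker-compose up -d
```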
Sounds like a them problem then.