And that, kids, is a great use of RAID: under some other form of data redundancy.
Great story!
🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍
I don’t know that it’ll affect the echo chamber effect; you create that through your subscriptions, and avoid it by browsing “all.” What will be impacted is the amount of simply shit content, both from idiots and from bots. Moderators’ jobs will get harder: the bots follow the people.
RAID 1 is mirroring. If you accidentally delete a file, or it becomes corrupt (for reasons other than drive failure), RAID 1 will faithfully replicate that delete/corruption to both drives. RAID 1 only protects you from drive failure.
Implement backups before RAID. If you have an extra drive, use it for backups first.
There is only one case when it’s smart to use RAID on a machine with no backups, and that’s RAID 0 on a read-only server where the data is being replicated in from somewhere else. All other RAID levels only protect against drive failure, and not against the far more common causes of data loss: user- or application-caused data corruption.
Keeping my eye out for the class action on this one.
And you are right, in all ways!
I misread the title of the post. Hazards of being subbed to both “privacy” and “piracy”.
I have a Kobo as almost exclusively the only way I read books anymore, and I’ve owned a Sony and a Nook; the Kobo has lasted the longest and I like it best. That said, why do you claim it’s the most privacy friendly?
Yeah, I use systemd for the self-host stuff, but you should be able to use docker-compose files with podman-compose with no, or only minor, changes. Theoretically. If you’re comfortable with compose, you may have more luck. I didn’t have a lot of experience with docker-compose, and so when there’s hiccups I tend to just give up and do it manually, because it works just fine that way, too, and it’s easier (for me).
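For illustration, a minimal compose file of the sort that usually works unchanged under both docker-compose and podman-compose (the image, port, and volume are placeholders, not anything from my setup; note the fully-qualified registry name, which podman wants):

```yaml
version: "3"
services:
  web:
    # Fully-qualified image name; podman doesn't assume Docker Hub
    image: docker.io/library/nginx:alpine
    ports:
      - "8080:80"
    volumes:
      # :Z relabels for SELinux hosts; harmless elsewhere
      - ./site:/usr/share/nginx/html:ro,Z
    restart: unless-stopped
```

`podman-compose up -d` on something this simple is where I'd start before trusting it with anything fancier.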
This is great additional information, much of which I didn’t know!
I’m doing the backing-up-twice thing; it’d probably be better if I backed up once and rsync’d - it’d be less computationally intensive and save disk space used by multiple restic caches. OTOH, it’d also have more moving parts and be harder to manage, and IME things that I touch rarely need to be as simple as possible because I forget how to use them in between uses.
Anyway, great response!
I started with rootless podman when I set up All My Things, and I have never had an issue with either maintaining or running it. Most Docker instructions are transposable, except that podman doesn’t assume everything lives on Docker Hub, so you always have to specify the registry host. I’ve run into a couple of edge cases where arguments are not 1:1 and I’ve had to dig to figure out what the argument is on podman. I don’t know if I’m actually more secure, but I feel more secure, and I really like not having the docker service running as root in the background. All in all, I think my experience with rootless podman has been better than my experience with docker, but at this point, I’ve had far more experience with podman.
Podman-compose gives me indigestion, but docker-compose didn’t exist or wasn’t yet common back when I used docker; and by the time I was setting up a homelab, I’d already settled on podman. So I just don’t use it most of the time, and wire things up by hand when necessary. Again, I don’t know whether that’s just me, or if podman-compose is more flaky than docker-compose. Podman-compose is certainly much younger and less battle-tested. So is podman but, as I said, I’ve been happy with it.
I really like running containers as separate users without that daemon - I can’t even remember what about the daemon was causing me grief; I think it may have been the fact that it was always running and consuming resources, even when I wasn’t running a container, which isn’t a consideration for a homelab. However, I’d rather deeply know one tool than kind of know two that do the same thing, and since I run containers in several different situations, using podman everywhere allows me to exploit the intimacy I wouldn’t have if I were using docker in some places and podman in others.
2¢
I have no opinion about rsync.net. I’d check which services restic supports; there are several, and if it supports rsync.net and that’s what you want to use, you’re golden. Or, use another backup tool that has encryption-by-default and does support rsync.net - there are a couple of options.
I would just never store any data that wasn’t meant for public consumption unencrypted on someone else’s servers. I make an exception for my VPS, but that’s only because I’m more paranoid about exposing my LAN than putting my email on a VPS.
restic, and other backup tools, are generally not always on. You run them; they back up. If you run them only once a month, that’s how often they run. The remote mounting is just a nice feature when you want to grab a single file from one of the backups.
What you’re describing is a classic backup use-case. I’m recommending the easiest, cheapest, most reliable offsite solution I’ve used. restic has been around for years, has a lot of users and a lot of eyeballs on it, and it’s OSS. There are even GUIs for it, if you’re not comfortable with the CLI. B2 is generally well-regarded, is fairly easy to figure out, and has also been around for ages. Together, they make a solid combo. I also back up with restic to a local disk and use that for accessing history - B2 is just, as you say, in case of a fire, or theft, I suppose.
I wouldn’t.
Use a proper backup tool for this, like restic. BackBlaze has reasonable rates, especially if you’re mostly write-only, and restic has built-in support for B2 and encrypts everything by default. It also supports compression, but you won’t get much out of that on media files. restic is also cross-platform and a single executable, so you can throw binaries for OSX, Linux, and Windows on a USB stick and know you can get to your backups from anywhere. It also allows you to mount a remote repository like a filesystem (on Linux, at least), and browse a backup and get at individual files without having to restore everything. It’s super handy if you screw up a single file or directory.
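A rough sketch of how you might script that, assuming restic is on your PATH and the usual B2 credentials (`B2_ACCOUNT_ID`, `B2_ACCOUNT_KEY`, `RESTIC_PASSWORD`) are already in the environment; the bucket name and paths here are placeholders, not a tested setup:

```python
import os
import subprocess

def restic_cmd(repo: str, *args: str) -> list[str]:
    """Build a restic argv for the given repository (e.g. a b2:bucket:prefix repo)."""
    return ["restic", "-r", repo, *args]

def backup(repo: str, path: str) -> None:
    """Run one backup; restic encrypts everything client-side by default.

    Assumes B2_ACCOUNT_ID / B2_ACCOUNT_KEY / RESTIC_PASSWORD are set in
    the environment, and that `restic -r <repo> init` was run once already.
    """
    subprocess.run(restic_cmd(repo, "backup", path), check=True, env=os.environ)
```

You’d call something like `backup("b2:my-bucket:backups", "/home/me/media")` from cron or a systemd timer; browsing history is then `restic -r b2:my-bucket:backups mount /mnt/restic`.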
Location services in Android are in-phone, and they’re definitely accurate and reporting to Google. I only clarified that your cell provider probably can’t locate you using triangulation via your cell signal. Turn data off, and you’re fine; otherwise, Google is tracking you - and from what I’ve read, even if you have location services turned off.
They can’t, tho. There are two reasons for this.
Geolocating with cell towers requires trilateration, and needs special hardware on the cell towers. Companies used to install this hardware for emergency services, but stopped doing so as soon as they legally could, as it’s very expensive. Cell towers can’t do triangulation by themselves, as it requires even more expensive hardware to measure angles; trilateration doesn’t work without special equipment because wave propagation delays between the cellular antenna and the computers recording the signal are big enough to utterly throw off any estimate.
An additional factor making trilateration difficult (or even triangulation, in rural cases where they did sometimes install triangulation antenna arrays on the towers) is that, since the UMTS standard, cell chips work really hard to minimize their radio signal strength. They find the closest antenna and then reduce their power until they can just barely talk to the tower; and except in certain cases they only talk to one tower at a time. This means that, at any given point, only one tower is responsible for handling traffic for the phone, and for trilateration you need three. In addition to saving battery power, it saves the cell companies money, because of traffic congestion: a single tower can only handle so much traffic, and they have to put in more antennas and computers if the mobile density gets too high.
The reason phones can use cellular signal to improve accuracy is that each phone can do its own trilateration, although it’s still not great and can be impossible because of power attenuation (being able to see only one tower - or maybe two - at a time). This is why Google and Apple use WiFi signals to improve accuracy, and why in-phone trilateration isn’t good enough: in any sufficiently dense urban or suburban environment, the combined information from all the WiFi routers the phone can see, and the cell towers it can hear, can be enough to give a good, accurate position without having to turn on the GPS chip, obtain a satellite fix (which may be impossible indoors), and suck down power. But this is all done inside and from the phone - this isn’t something cell carriers can do themselves most of the time. Your phone has to send its location out somewhere.
TL;DR: Cell carriers usually can’t locate you with any real accuracy, without the help of your phone actively reporting its calculated location. This is largely because it’s very expensive for carriers to install the necessary hardware to get any accuracy of more than hundreds of meters; they are loath to spend that money, and legislation requiring them to do so no longer exists, or is no longer enforced.
Source: me. I worked for several years in a company that made all of the expensive equipment - hardware and software - and sold it to The Big Three carriers in the US. We also paid lobbyists to ensure that there were laws requiring cell providers to be able to locate phones for emergency services. We sent a bunch of our people and equipment to NYC on 9/11 and helped locate phones. I have no doubt law enforcement also used the capability, but that was between the cops and the cell providers. I know companies stopped doing this because we owned all of the patents on the technology and ruthlessly and successfully prosecuted the only one or two competitors in the market, and yet we still were going out of business at the end as, one by one, cell companies found ways to argue out of buying, installing, and maintaining all of this equipment. In the end, the competitors we couldn’t beat were Google and Apple, and the cell phones themselves.
Sure, I could do that, but not everyone can. And you still have the problem that many of these devices don’t function well unless they can phone home: they won’t get firmware upgrades, and they expect to be controlled by a bespoke app. If you limit yourself to devices that are HA compatible without running through an external service, you cut the product choices in half.
This is good information. I had a complete failure with flashing Tasmota once, and bricked a $100 device.
I like the project, though. My biggest complaint is that - at least for what I was trying to flash - the Linux support was iffy. I was trying to flash something for HA, and the instructions assumed I had access to the computer running HA (which is a headless device in a closet in the basement - entirely impractical for doing fiddly pinning while trying to flash) or using a web browser with WebUSB - which Firefox on Linux doesn’t support. So eventually I found a completely unrelated set of instructions I could run from the CLI on my desktop, over a cable connected to said desktop, and while it appeared successful, the device is bricked. I can’t even get it into flash mode anymore.
I don’t think any of this has to do with Tasmota, except that the Linux tooling seems either weak or assumes people are running Chrome; and if you’re security conscious enough to be flashing a device to run Tasmota, you’re not running Chrome.
So I’m not doing that again. It’s a hundred bucks and two days of digging around for tooling and instructions I’d like back.
Again, not Tasmota’s fault, but it’s not super accessible.
For my CLI homies, there’s syncedlyrics.
Be advised: several Subsonic servers (including gonic and Navidrome) do not support lyric files unless they’re embedded, and syncedlyrics will only put the lyrics in .lrc files. So getting lyrics in clients can be a two-step process: download the .lrc’s, then run a script to embed them in the song files. I’ve seen a script to do the latter, but I haven’t tried it. I’ll send a patch to gonic to read .lrc files, most likely during the Christmas holiday.
I once owned a bunch of WiFi connected devices. One day I inspected my router logs and found out that they were all making calls to a bunch of services that weren’t the vendor - things like Google, and Facebook.
WiFi connected devices require connecting to a router; in most homes, this is going to be one that’s also connected to the internet - most people aren’t going to buy a second router just for their smart home, or set up a disconnected second LAN on their one router. And nearly all of these devices come with an app, which talks to the device through an external service (I’m looking at you, Honeywell, and you, Rainbird). This is a privacy shit-show. WiFi is a terrible option for smart home devices.
ZigBee, well, I haven’t had any luck with it - pairing problems which are certainly just a learning curve on my part and not an issue with the protocol. I chose ZWave myself because I read about the size and range limitations of ZigBee technology versus ZWave, but honestly I could have gone either way. Back then, there was no appreciable price difference in devices. Most hubs support both, though, and I can’t see why I wouldn’t mix them (other than I need to figure out how to get ZigBee to work).
In any case, low-power BT, ZigBee, or Zwave are all options, whereas I will not allow more WiFi smart devices in my house. I’m stuck with Honeywell and Rainbird, for… reasons… but that’s it. I don’t need to be poking more holes in my LAN security.
Honestly not the weirdest behavior you’d see on campus, and could almost be wholesome. Guy’s down on the ground interacting with his dog; what’s wrong with that? Also: it’s at night - could they see he was actually eating grass, or did it look like he was just playing with his dog?
Also also: college campus… night… couple sitting on a bench… “sitting.” College couples never only sit on secluded benches in the dark. OP probably interrupted a handy.
Seconded. OP, if you can write Markdown, Hugo will turn it into a website.
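For instance, a single Hugo content file is just Markdown with a little front matter on top (the path and title here are placeholders); drop a tree of these under `content/` and `hugo` renders the lot to static HTML:

```markdown
+++
title = "My First Post"
date = 2024-01-01
draft = false
+++

Regular **Markdown** from here on down; Hugo turns it into a page.
```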
End to end could still - especially with a company like Google - include data collection on the device. They could even “end to end” encrypt sending it to Google in the side channel. If you want to be generous, they would perform the aggregation in-device and don’t track the content verbatim, but the point stands: e2e is no guarantee of privacy. You have to also trust that the app itself isn’t recording metrics, and I absolutely do not trust Google to not do this.
They make so much of their money from profiling and ads. No way they’re not going to collect analytics. Heck, if you use the stock keyboard, that’s collecting analytics about the texts you’re typing into Signal, much less Google’s RCS.