

Yeah, I saw, but it’s an interesting topic.
Is your concern compromise of your data or loss of the server?
My guess is that most burglaries don’t wind up with people trying to make use of the data on computers.
As to loss, I mean, do an off-site backup of stuff that you can’t handle losing and in the unlikely case that it gets stolen, be prepared to replace hardware.
If you just want to keep the hardware out of sight and create a minimal barrier, you can get locking, ventilated racks. I don’t know how cost-effective that is; I’d think that it might cost more than the expected value of the loss from theft. If a computer costs $1000 and you have a 1% chance of it being stolen, you should not spend more than $10 on prevention in terms of reducing cost of hardware loss, even if that method is 100% effective.
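To make the arithmetic explicit, here’s a back-of-the-envelope sketch (the $1000 and 1% figures are just the hypothetical numbers from above, not real estimates):

```python
# Back-of-the-envelope: how much is theft prevention worth?
# Both numbers are the hypothetical ones from the text above.
hardware_cost = 1000.0    # replacement cost of the computer, in dollars
theft_probability = 0.01  # assumed chance of a burglary that takes it

# Expected hardware loss from theft; this is also the most that a
# perfectly effective countermeasure is worth, ignoring data-loss costs.
expected_loss = hardware_cost * theft_probability
print(expected_loss)  # 10.0
```

A less-than-perfect countermeasure is worth proportionally less than that ceiling.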
EDIT: My guess is also that
Mantraps that use deadly force are illegal in the United States, and in notable tort law cases the trespasser has successfully sued the property owner for damages caused by the mantrap. There is also the possibility that such traps could endanger emergency service personnel such as firefighters who must forcefully enter such buildings during emergencies. As noted in the important American court case of Katko v. Briney, “the law has always placed a higher value upon human safety than upon mere rights of property”.[5]
EDIT: I’d add that I don’t know about the “life always takes precedence over property” statement; Texas has pretty permissive use of deadly force in defense of property. However, I don’t think that anywhere in the US permits traps that make use of deadly force.
You might want to list the platform you want to use it on. I’m assuming that you’re wanting to access this on a smartphone of some sort?
Mullvad apparently uses WireGuard. Is there an Android WireGuard client that supports multiple VPNs and toggling each independently?
Clothes make the man. Naked people have little or no influence in society.
However, after returning it to the Turo owner and having the suspension damage evaluated by Tesla, the repair job was estimated to be roughly $10,000. I wouldn’t be surprised if there’s a similar situation with this accident.
Hmm. That makes me wonder.
Like, it’s hard for me or for Joe Blow to evaluate how effective a car company’s self-driving functionality is. Requires expertise, and it’s constantly changing. And ideally, I shouldn’t be the one to bear cost, if I can’t evaluate risk, because then I’m taking on some unknown cost when purchasing the car.
And the car manufacturer isn’t in a position to be objective.
But an insurer can do that.
Like, I wonder if it’d be possible to have insurers offer packages that cover cost of accidents that occur while the car is in self-driving mode. That’d make it possible to put a price tag on accidents from self-driving systems.
I mean, I’m listing it because I believe that it’s something that has some value that could be done with the information. But it’s an “are the benefits worth the costs” thing. Let’s say that you need to pay $800 and wear a specific set of glasses everywhere. Gotta maintain a charge on them. And while they’re maybe discreet compared to a smartphone, I assume that people in a role where they’re prominent (diplomacy, business deal-cutting, etc.) probably know what they look like and do, so I imagine that any relationship-building that might come from showing that you can remember someone’s name and personal details (“how are Margaret and the kids?”) would likely be somewhat undermined if they know that you’re walking around with the equivalent of your Rolodex in front of your eyeballs. Plus, some people might not like others running around with recording gear (especially in some of the roles listed).
I’m sure that there are a nonzero number of people who would wear them, but I’m hesitant to believe that as they exist today, they’d be a major success.
I think that some of the people who are building some of these things grew up with Snow Crash and it was an influence on them. Google went out and made Google Earth; Snow Crash had a piece of software called Earth that did more-or-less the same thing (albeit with more layers and data sources than Google Earth does today). Snow Crash had the Metaverse with VR goggles and such; Zuckerberg very badly wanted to make it real, and made a VR world and VR hardware and called it the Metaverse. Snow Crash predicts people wearing augmented reality gear, but also talks about some of the social issues inherent with doing so; it didn’t expect everyone to start running around with them:
Someone in this overpass, somewhere, is bouncing a laser beam off Hiro’s face. It’s annoying. Without being too obvious about it, he changes his course slightly, wanders over to a point downwind of a trash fire that’s burning in a steel drum. Now he’s standing in the middle of a plume of diluted smoke that he can smell but can’t quite see.
It’s a gargoyle, standing in the dimness next to a shanty. Just in case he’s not already conspicuous enough, he’s wearing a suit. Hiro starts walking toward him. Gargoyles represent the embarrassing side of the Central Intelligence Corporation. Instead of using laptops, they wear their computers on their bodies, broken up into separate modules that hang on the waist, on the back, on the headset. They serve as human surveillance devices, recording everything that happens around them. Nothing looks stupider, these getups are the modern-day equivalent of the slide-rule scabbard or the calculator pouch on the belt, marking the user as belonging to a class that is at once above and far below human society. They are a boon to Hiro because they embody the worst stereotype of the CIC stringer. They draw all of the attention. The payoff for this self-imposed ostracism is that you can be in the Metaverse all the time, and gather intelligence all the time.
The CIC brass can’t stand these guys because they upload staggering quantities of useless information to the database, on the off chance that some of it will eventually be useful. It’s like writing down the license number of every car you see on your way to work each morning, just in case one of them will be involved in a hit-and-run accident. Even the CIC database can only hold so much garbage. So, usually, these habitual gargoyles get kicked out of CIC before too long.
This guy hasn’t been kicked out yet. And to judge from the quality of his equipment – which is very expensive – he’s been at it for a while. So he must be pretty good.
If so, what’s he doing hanging around this place?
“Hiro Protagonist,” the gargoyle says as Hiro finally tracks him down in the darkness beside a shanty. “CIC stringer for eleven months. Specializing in the Industry. Former hacker, security guard, pizza deliverer, concert promoter.” He sort of mumbles it, not wanting Hiro to waste his time reciting a bunch of known facts.
The laser that kept jabbing Hiro in the eye was shot out of this guy’s computer, from a peripheral device that sits above his goggles in the middle of his forehead. A long-range retinal scanner. If you turn toward him with your eyes open, the laser shoots out, penetrates your iris, tenderest of sphincters, and scans your retina. The results are shot back to CIC, which has a database of several tens of millions of scanned retinas. Within a few seconds, if you’re in the database already, the owner finds out who you are. If you’re not already in the database, well, you are now.
Of course, the user has to have access privileges. And once he gets your identity, he has to have more access privileges to find out personal information about you. This guy, apparently, has a lot of access privileges. A lot more than Hiro.
“Name’s Lagos,” the gargoyle says.
So this is the guy. Hiro considers asking him what the hell he’s doing here. He’d love to take him out for a drink, talk to him about how the Librarian was coded. But he’s pissed off. Lagos is being rude to him (gargoyles are rude by definition).
“You here on the Raven thing? Or just that fuzz-grunge tip you’ve been working on for the last, uh, thirty-six days approximately?” Lagos says.
Gargoyles are no fun to talk to. They never finish a sentence. They are adrift in a laser-drawn world, scanning retinas in all directions, doing background checks on everyone within a thousand yards, seeing everything in visual light, infrared, millimeter wave radar, and ultrasound all at once. You think they’re talking to you, but they’re actually poring over the credit record of some stranger on the other side of the room, or identifying the make and model of airplanes flying overhead. For all he knows, Lagos is standing there measuring the length of Hiro’s cock through his trousers while they pretend to make conversation.
I think that Stephenson probably did a reasonable job there of highlighting some of the likely social issues that come with having wearable computers with always-active sensors running.
It’s not clear to me whether-or-not the display is fundamentally different from past versions, but if not, it’s a relatively-low-resolution display on one eye (600x600). That’s not really something you’d use as a general monitor replacement.
The problem is really that what they have to do is come up with software that makes the user want to glance at something frequently (or maybe unobtrusively) enough that they don’t want to have their phone out.
A phone has a generally-more-capable input system, more battery, a display that is for most-purposes superior, and doesn’t require being on your face all the time you use it.
I’m not saying that there aren’t applications. But to me, most applications look like smartwatch things, and smartwatches haven’t really taken the world by storm. Just not enough benefit to having a second computing device strapped onto you when you’re already carrying a phone.
Say someone messages multiple people a lot, can’t afford to have sound playing, and needs to be moving around, so they can’t have their phone on a desk in front of them with the display visible to get a visual indicator of an incoming message and who it’s from. That could provide some utility, but I think that for the vast majority of people, it’s just not enough of a use case to warrant wearing the thing if you’ve already got a smartphone.
My guess is that the reason you’d use something like this specific product (which has a camera on the thing and limited display capabilities compared to, say, XREAL’s options, so isn’t really geared up for AR applications where you’re overlaying data all over everything you see) is to try to pull up a small amount of information about whoever you’re looking at, like doing facial recognition to remember or obtain someone’s name and avoid a bit of social awkwardness. Maybe there are people for whom that’s worthwhile, but the market just seems pretty limited to me for that.
I think that maybe there’s a world where we want to have more battery power and/or compute capability with us than an all-in-one smartphone will handle, and so we separate display and input devices and have some sort of wireless communication between them. This product has already been split into two components, a wristband and glasses. In theory, you could have a belt-mounted, purse-contained, or backpack-contained computer with a separate display and input device, which could provide for more-capable systems without needing to be holding a heavy system up. I’m willing to believe that the “multi-component wearable computer” could be a thing. We’re already there to a limited degree with Bluetooth headsets/earpieces. But I don’t really think that we’re at that world more-broadly.
For any product, I just have to ask — what’s the benefit it provides me with? What is the use case? Who wants to use it?
If you get one, it’s $800. It provides you with a different input mechanism than a smartphone, which might be useful for certain applications, though I think it is less-generally useful. It provides you with a (low-resolution, monocular, unless this generation has changed) HUD that’s always visible, which a user may be able to check more-discreetly than a smartphone. It has a camera always out. For it to make sense as a product, I think that there has to be some pretty clear, compelling application that leverages those characteristics.
What the fuck is wrong with that dude?
This doesn’t meet your “human enemies” requirement, but if you’re looking for realistic firearm mechanics, you might want to look at Receiver 2. It does have procedurally-generated layouts, as per your roguelike point, and most of the game is firearm mastery.
I’d also bet against the CMOS battery, if the pre-reboot logs were off by 10 days.
The CMOS battery is used to maintain the clock when the PC is powered off. But he has a discrepancy between current time and pre-reboot logs. He shouldn’t see that if the clock only got messed up during the power loss.
I’d think that the time was off by 10 days prior to power loss.
I don’t know why it’d be off by 10 days. I don’t know the uptime of the system, but that seems like an implausible amount of drift for a PC RTC, from what I see online as likely RTC drift.
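For a rough sense of scale, a quick sanity check (the one-year uptime here is a pure assumption since the real uptime isn’t known, and ~20 ppm is just a commonly-cited ballpark spec for cheap RTC crystals):

```python
# Rough sanity check: what drift rate would be needed to explain
# a 10-day clock offset? The one-year uptime is an assumption.
uptime_seconds = 365 * 86400
offset_seconds = 10 * 86400

required_drift_ppm = offset_seconds / uptime_seconds * 1e6
print(round(required_drift_ppm))  # 27397 ppm

# A typical PC RTC crystal is specced around 20 ppm,
# which works out to under two seconds per day.
typical_drift_seconds_per_day = 20e-6 * 86400
print(round(typical_drift_seconds_per_day, 2))  # 1.73
```

So even with a year of uptime, you’d need drift three orders of magnitude worse than a normal crystal, which is why I’d look at the time source rather than the hardware.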
It might be that somehow, the system was set up to use some other time source, and that was off.
It looks like chrony is using the Debian NTP pool at boot, though, and I don’t know why it’d change.
Can DHCP serve an NTP server, maybe?
kagis
This says that it can, and at least when the comment was written, 12 years ago, Linux used it.
The ISC DHCP client (which is used in almost any Linux distribution) and its variants accept the NTP field. There isn’t another well known/universal client that accepts this value.
If I have to guess about why OSX nor Windows supports this option, I would say is due the various flaws that the base DHCP protocol has, like no Authentification Method, since mal intentioned DHCP servers could change your systems clocks, etc. Also, there aren’t lots of DHCP clients out there (I only know Windows and ISC-based clients), so that leave little (or no) options where to pick.
Maybe OS X allows you to install another DHCP client, Windows isn’t so easy, but you could be sure that Linux does.
My Debian trixie system has the ISC DHCP client installed in 2025, so it might still be a factor. Maybe a consumer broadband router on your network was configured to tell the Proxmox box to use it as an NTP server or something? I mean, bit of a long shot, but nothing else that would change the NTP time source immediately comes to mind, unless you changed NTP config and didn’t restart chrony, and the power loss did it.
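For reference, on the server side that’d look something like this ISC dhcpd.conf fragment (addresses here are made up for illustration); `option ntp-servers` is DHCP option 42:

```
# Hypothetical ISC dhcpd.conf fragment: hand out an NTP server
# (DHCP option 42) along with leases on the LAN.
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option routers 192.168.1.1;
    option ntp-servers 192.168.1.1;  # clients that honor option 42 sync here
}
```

Whether the DHCP client actually feeds that value to chrony depends on the distribution’s hook scripts, so it’d be worth checking what the Proxmox box’s client does with it.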
I don’t think that the grid frequency is used for PC timekeeping. You have internal timekeeping circuits. AC power stops at the PSU, and I don’t think that there’s any cable over which a time protocol flows from the PSU to the motherboard.
Not sure what you mean, but IIRC, he dropped out to start Facebook.
kagis
Yup.
https://en.wikipedia.org/wiki/Mark_Zuckerberg
Harvard University (dropped out)
Incidentally, some other prominent tech founders did the same. Off the top of my head:
Bill Gates (founder, Microsoft):
https://en.wikipedia.org/wiki/Bill_Gates
Harvard University (dropped out)
Michael Dell (founder, Dell Technologies):
https://en.wikipedia.org/wiki/Michael_Dell
University of Texas at Austin (dropped out)
Steve Wozniak (founder, Apple):
https://en.wikipedia.org/wiki/Steve_Wozniak
University of Colorado Boulder (expelled)
He started Apple later, didn’t directly do it from college.
He did go back to university, years later, after he’d retired from Apple, and got his bachelor’s degree.
Yeah, in all honesty, it’s not really my ideal as a quote to capture the idea. Among other things, for the quoted person it’s comparing household tasks against employment, whereas I’d generally prefer employment vs. employment for most of these.
And for the quoted person, the issue is that AI is doing work that we tend to think of as potentially-desirable, rather than in the context I’m writing about, where it’s more that science fiction often portrays AI-driven sex robots that perform for humans (think Blade Runner or A.I. Artificial Intelligence (2001)), but doesn’t really examine humans performing for AIs.
Still, it was the closest popular quote I could think of to address the idea that the split between AI and human roles in a world with AIs is not that which we might have anticipated.
Fair enough. I will point out that for the context of my comment, this is probably functionally equivalent — that is, if one has a piece of software to walk the DHT and build a list of torrents on it, it’s probably still going to be done in a fully-automated fashion.
In the broad sense that understanding of spatial relationships and objects is just kind of limited in general with LLMs, sure, nature of the system.
If you mean that models simply don’t have a training corpus that incorporates adequate erotic literature, I suppose that it depends on what one is up to and the bar one has. No generative AI in 2025 is going to match a human author.
If you’re running locally, where many people use a relatively-short context size on systems with limited VRAM, I’d suggest a long context length for generating erotic literature involving bondage implements like chastity cages. Otherwise, once information about the “on/off” status of the implement passes out of the context window, the LLM won’t have information about the state of the implement, which can lead to it generating text incompatible with that state. If you can’t afford the VRAM to do that, you might look into altering the story such that a character using such an item does not change state over the lifetime of the story, if that works for you. Or, whenever the status of the item changes, at appropriate points in the story, manually update its status in the system prompt/character info/world info/lorebook/whatever your frontend calls its system to inject static text into the context at each prompt.
My own feeling is that relative to current systems, there’s probably room for considerably more sophisticated frontend processing of objects, and storing state and injecting state about it efficiently into the system prompt. The text of a story is not an efficient representation of world state. Like, maybe use an LLM itself to summarize world state and then inject that summary into the context. Or, for specific games written to run atop an LLM, have some sort of Javascript module that runs in a sandbox, runs on each prompt and response to update its world state, and dynamically generates text to insert into the context.
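As a toy illustration of that idea (all names and structure here are invented for the example, not any real frontend’s API): keep structured world state outside the story text, update it when events happen, and render it into a short block that gets injected into the context on every prompt.

```python
# Toy sketch: structured world state lives outside the story text,
# so it never falls out of the context window.
world_state = {
    "chastity_cage": "locked",
    "location": "apartment",
}

def apply_event(state, key, value):
    """Update one piece of world state when the story changes it."""
    state[key] = value

def render_state_block(state):
    """Render compact text to inject into the context on each prompt."""
    lines = [f"- {key}: {value}" for key, value in sorted(state.items())]
    return "[Current world state]\n" + "\n".join(lines)

# An event in the story flips one piece of state...
apply_event(world_state, "chastity_cage", "unlocked")
# ...and the injected block always reflects the current state.
print(render_state_block(world_state))
```

A real implementation would also need something (a sandboxed script, or an LLM pass over the latest response) to detect state-changing events, which is the harder half of the problem.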
I expect that game developers will sort a lot of this out and develop conventions, and my guess is that the LLM itself probably isn’t the limiting factor on this today, but rather how well we generate context text for it.
Even if they were wearing a mask, new, more-capable biometric analysis could often identify humans.
I have to say that the basic concept of having Meta pay human adult content performers to perform to teach an AI about sexual performance would be kind of surreal.
“So what do you do for work?”
“I’m an exotic dancer.”
“Straight or gay establishment?”
“Err…I perform for an artificial intelligence.”
You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
— Joanna Maciejewska
I expect that Joanna would not be enthused about humans stripping for machines.
Ah. Thanks for the context.
Well, after they have product out, third parties will benchmark them, and we’ll see how they actually stack up.