

Finally, the bean counter steps down so that an engineer can take the reins again.
I mean, at least he didn’t fuck things up like so many other bean counters have. But he was only ever a bean counter.


Luigi, where are you?


…When mentally ill people are put in charge of the nation’s government…


If betting on Polymarket, you would actually have to stump up that money first, and the other party would have to do the same with whatever stake they wanted to use. Then, in order to get any kind of reasonable payback, you would need thousands of other people making bets for or against, using their own money.
The payout isn’t on someone making a bet on themselves; no one else would bet for or against that, as the stakes are so small. The payout is on large-scale events that are - ostensibly - out of the control of the bettor or bettee.
Polymarket is no different from betting on the outcomes of horse races or sports games; it just opens up the thing being bet on to anything and everything. People will still bet. The key is how “un-rigged” it appears to be.
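For concreteness, here is a minimal sketch of how a binary prediction-market bet pays out. The stake and share price are made-up numbers, and this ignores order books and fees, so treat it as illustrative only:

```python
# Simplified binary prediction-market payout (illustrative numbers only).
stake = 100.0   # the money you must stump up first
price = 0.50    # price of a YES share, somewhere between 0 and 1

shares = stake / price          # 200 shares bought at $0.50 each
payout_if_yes = shares * 1.0    # each share pays $1 if the event resolves YES
profit = payout_if_yes - stake  # $100 profit; a NO resolution loses the whole stake
```

The counterparties funding the NO side are what make that payout possible, which is why a market with only a handful of bettors pays out so little.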


As I pointed out in another root comment, the average - depending on the model being tested - tends to sit between 60% and 80%. But this is with no restriction on source materials… the LLMs are essentially pulling from world+dog in that case.
So this opens up an interesting option for users, in that hallucinations/inaccuracies can be controlled for and potentially reduced by as much as ⅔, simply by restricting the model to those documents/resources that the user is absolutely certain contain the correct answer.
I mean, 25% is still stupidly high. In any prior era, even 2.5% would have been an unacceptably high error rate for a business to stomach. But source restriction seems to be a somewhat promising guardrail for the average user doing personal work.
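The arithmetic behind that “as much as ⅔” figure, using only the numbers from the comment itself (60-80% in the wild, ~25% when source-restricted):

```python
# Figures taken from the comment above; treat them as rough, not benchmarked.
wild_low, wild_high = 0.60, 0.80   # hallucination rate with no source restriction
grounded = 0.25                    # rate when restricted to trusted documents

reduction_vs_high = 1 - grounded / wild_high  # ~0.69, i.e. roughly a 2/3 cut
reduction_vs_low = 1 - grounded / wild_low    # ~0.58 against the best-case baseline
```

So “reduced by as much as ⅔” corresponds to the worst-case 80% baseline; against a 60% baseline the cut is closer to 58%.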


How much do large language models actually hallucinate when answering questions grounded in provided documents?
Okay, this is looking promising, at least in terms of the most important qualifications being plainly stated in the opening line.
Because the rate of hallucinations/inaccuracies “in the wild” - depending on the model being tested - runs about 60-80%. But then again, that is average use on generalized data sets, not questions focused on specific documentation. So of course the “in the wild” questions will see a higher rate.
This also helps users, as it shows that hallucinations/inaccuracies can be reduced by as much as ⅔ simply by limiting LLMs to specific documentation that the user is certain contains the desired information, rather than letting them trawl world+dog.
Very interesting!


That may be the case, but the most irritating thing is that they fill all available spots with the lowest-capacity chips that meet the requested provisioning spec, instead of using the fewest higher-capacity chips needed to meet it. The latter, at least, would leave spots open for an authorized repair location to manually solder on more approved chips of compatible spec.


Read it again. It occurs even with a full system wipe and re-install from Microsoft-direct media, or even a full hard drive swap. It is wholly independent of what is on the hard drive, the only restriction being that it can only successfully run when injected into Windows.


One example of many.
You must be new to tech to not remember this. Wasn’t all that long ago.


If you have the money and want simplicity, reliability, and interoperability, go for a Mac. Just clench your sphincter and maximize the RAM; 32GB minimum ought to be appropriate for a 7-8yr lifespan of basic duties. And FFS, go for 2.5× whatever your current data uses or 1TB, whichever is larger (there are vital performance reasons for that). Don’t get the smallest storage unless third-party upgrade options exist, like for the Mac Mini M4. And remember: all RAM and a lot of storage is integrated these days, which is why you should always max it out; there is no upgrade path except wholesale replacement of the machine. CPU is largely immaterial unless you are doing truly heavy lifting like video editing or AI, so that can often be the lowest choice.
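That storage rule boils down to a one-liner; a sketch, with the thresholds taken straight from the advice above (using 1000GB as the 1TB cutoff for simplicity):

```python
def recommended_storage_gb(current_data_gb: float) -> float:
    # 2.5x your current data, or 1 TB (1000 GB), whichever is larger.
    return max(current_data_gb * 2.5, 1000.0)
```

So 200GB of current data still gets the 1TB floor, while 800GB of data pushes the recommendation up to 2TB.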
If you want freedom and a truly unconstrained system, some form of Linux/BSD on a Framework machine is the way to go. Or, if a desktop, hand-assemble it yourself.
If you are going to stick with Windows, go for a business-class Dell. Trust me, it’ll be almost as $$$$ painful as a Mac, but these little f**kers are built to last. At least you can upgrade the RAM and on-board storage, although I honestly recommend not going under 32GB for anything other than basic tasks. It’ll be a lot zippier with 32GB, even if you spend the first week tearing all the AI and built-in spyware out of Windows.


You are correct; however, they were malicious in nature and loaded on every boot from the UEFI/BIOS. They required Windows, and they aborted the install if they already existed.


Goldfish memories by most muggles and normies.
Plus the latest shiny and feature FOMO.
And then you have procurement, who are told to get the most for the least cost, allowing state-owned companies to undercut most competition. Without clearly-specified guidelines that exclude dangerous tech, most rank-and-file salarymen will be told by Dilbert bosses to order the hardware or look for a different job.


Yes, but if you are running Windows on them, do they still inject Chinese state-sponsored malware into Windows on every boot from UEFI/BIOS storage?
They were caught doing this on several occasions, to the point where Lenovo products are forbidden across significant swaths of the U.S. government and military.


And even if the Core Storage held everything straight out of the gate, you could configure initial storage via RAID-10 using only 28× 30TB drives.
In Canadian Pesos, that’s $34,000 before taxes for those drives. If the operating costs were in USD, that’s only 5 months of operating costs. Get a pair of used 4U 16-bay server boxes; almost anything built within the last decade will work well as a SAN/NAS, especially if you use a specialized FOSS NAS OS.
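A back-of-envelope check on that sizing. The drive count and total price come from the comment; the per-drive figure is just the implied average:

```python
# RAID-10 sizing sanity check, using the figures from the comment above.
drives = 28
drive_tb = 30
usable_tb = drives * drive_tb / 2   # RAID-10 mirrors every drive, halving raw capacity

total_cad = 34_000
cad_per_drive = total_cad / drives  # implied average price per 30TB drive
```

That works out to 420TB usable, which would cover an archive on the order of 390TB with a little headroom.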
A good strategy for migrating to BitTorrent would be to migrate the high value content first, so that bugs and failures ooze out of the woodwork as rapidly as possible. This would also allow you to build the NAS/SAN data storage boxes over time, one at a time, instead of all at once. And you can start with repurposed desktops as the seedbox itself and upgrade to more RAM once the BitTorrent client grows beyond the box’s initial resources. This stepwise growth would also give you the opportunity to work out any kinks and gotchas that you failed to anticipate.
For example, the BitTorrent client you choose to run on the seedbox will be of critical importance. I have found, through my own use of multiple clients, that by far the most aggressive BitTorrent client I have ever come across is BiglyBT. I am able to achieve in weeks, and sometimes even days, a ratio that most other clients require years or even decades to reach. For seeding out, there is literally nothing better.
As an example: when MyAnonamouse banned BiglyBT, I tried an experiment, downloading the same movie file with several different torrent clients. After a full year of seeding, the runner-up was qBittorrent, with a ratio of 0.2. BiglyBT? A ratio of 870.
Same file, same super-seeding, but a massive difference between BiglyBT and pretty much anything else out there.
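To put that ratio gap in concrete terms: ratio is simply upload divided by download, so for a given file size the ratios translate directly into data served. The 10GB file size here is a made-up example; only the two ratios come from the experiment above:

```python
# Seeding ratio = total uploaded / total downloaded.
file_gb = 10                            # hypothetical file size, not stated above
uploaded_qbittorrent = file_gb * 0.2    # 2 GB served over the year
uploaded_biglybt = file_gb * 870        # 8,700 GB, i.e. ~8.7 TB served
```

Same download, three and a half orders of magnitude more data pushed back into the swarm.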
It’s a shame that so many closed trackers ban BiglyBT. It is absolutely an overall benefit to the ecosystem.


…What is Myrient?
googles name
390TB of history
…Oh. Oh, no. This loss would be painful.
I mean, not a gamer, but daaaaaamn.
A structured BitTorrent system could keep most high-demand files offline after initial seeding, especially if seeding rules like the ones MyAnonamouse uses were implemented. And the low-demand ones could remain online via a seedbox from anywhere, even from the operator’s basement.
Honestly, while I don’t have funds to take over normal operations or even provide seedbox space, I can see many paths out of this problem.


That’s what I popped in to say. Use Old Reddit exclusively, first found out about this here… two days later. I have /r/all perpetually in a tab to stumble across stuff I normally never would have. Still working 100% fine.


ANYTHING cloud-connected - your doorbell, your security system, even all f**king post-2006 vehicles, regardless of manufacturer - is suspect.
And highly likely to actually be spying on you.
I’ve been working with computers since 1982, on the Internet since 1988, on the Web since 1992, and in the IT industry since 1997. The proportion of average people who don’t realize how much of their stuff is exposing them, and by how much, is frankly astounding. Almost 100% of normies are woefully ignorant, and even among IT people, those with no clue are in the majority.
And the security on this stuff that tracks you tends to be - except in rare circumstances - absolute dogshite. Sometimes it comes with no security at all, such as every unit sold having the same admin creds baked in, or all remote-access credentials being identical and non-user-editable.
This is why almost all of my stuff is hardlined, I have no IoT devices at all, and the wifi for my family’s devices is physically separate from everything else.
Don’t get me wrong; after almost three decades in IT, I love all the new shinies. But I’m not blind, and I’m not stupid.


Would love to know when the iOS app is coming out.


Businesses want AI because it solves what they perceive as a problem: how to obtain labour without having to pay said labour.
Remember: AI is meant for wealth to access labour without cost, not for labour to access wealth. It’s a golden gate meant to permanently separate the wealthy from what used to be the working class.
Even Wozniak has said that while he wasn’t a good engineer, he did know enough to be strategically savvy.