• 0 Posts
  • 205 Comments
Joined 3 years ago
Cake day: June 19th, 2023

  • If betting on Polymarket, you would actually have to stump up that money first, and the other person would have to do the same with whatever stake they wanted to use. Then, in order to get any kind of reasonable payback, you would need thousands of other people to make a bet for or against, using their own money.

    The payout isn’t on someone making a bet on themselves; no-one else would bet for or against that, as the stakes are so small. The payout is on large-scale events that are - ostensibly - out of the control of the bettor or the subject of the bet.

    Polymarket is no different from betting on the outcomes of horse races or sports games, it just opens up the thing being bet on to anything and everything. People will still bet. The key is how “un-rigged” it appears to be.
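
    For intuition, a market of this kind prices binary outcome shares between $0 and $1, with each share paying out $1 if the event resolves in its favor. A rough sketch, with hypothetical prices and stakes (not actual Polymarket figures):

    ```python
    def payout_if_correct(stake_usd: float, share_price: float) -> float:
        # Binary prediction market sketch: YES shares cost `share_price`
        # (between 0 and 1 USD) and each pays out 1 USD if the event happens.
        shares = stake_usd / share_price
        return shares * 1.0

    print(payout_if_correct(100, 0.25))  # → 400.0
    ```

    The price only drifts away from the odds you want when lots of other people trade against you, which is the “thousands of other people” part above.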



  • How much do large language models actually hallucinate when answering questions grounded in provided documents?

    Okay, this is looking promising, at least in terms of the most important qualifications being plainly stated in the opening line.

    Because the rate of hallucinations/inaccuracies “in the wild” - depending on the model being tested - runs about 60-80%. But then again, that reflects average use on generalized data sets, not questions focused on specific documentation. So of course the “in the wild” questions will see a higher rate.

    This also helps users, as it shows that hallucinations/inaccuracies can be reduced by as much as ⅔ by simply limiting LLMs to specific documentation that the user is certain contains the desired information, rather than letting them trawl world+dog.

    Very interesting!
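
    The arithmetic behind that ⅔ figure, taking the in-the-wild rates quoted above at face value (rough illustration only, not numbers from the paper):

    ```python
    # Apply a 2/3 reduction to the quoted 60-80% in-the-wild rates
    # to see the implied grounded-on-documents rates.
    for wild_rate in (0.60, 0.80):
        grounded = wild_rate * (1 - 2 / 3)
        print(f"{wild_rate:.0%} in the wild -> ~{grounded:.0%} when grounded")
    ```

    So even the worst case drops to roughly a quarter of answers, which is consistent with “limit the LLM to documentation you trust” being the single biggest lever a user has.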


  • That may be the case, but the most irritating thing is that they fill all available slots with the lowest-capacity chips that meet the requested provisioning spec, instead of meeting that spec with the fewest higher-capacity chips needed. The latter, at least, would leave slots open for an authorized repair location to manually solder on more approved chips of compatible spec.
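
    The difference between the two strategies can be sketched like this (the chip sizes and the 256 GB spec are hypothetical examples, not anything a manufacturer has confirmed):

    ```python
    import math

    def lowest_capacity_fill(spec_gb: int, sizes: list[int]) -> tuple[int, int]:
        # What the comment complains about: pick the smallest approved chip
        # and fill as many slots as it takes to reach the spec.
        size = min(sizes)
        return size, math.ceil(spec_gb / size)

    def fewest_chips(spec_gb: int, sizes: list[int]) -> tuple[int, int]:
        # The preferred strategy: largest approved chip, fewest slots used,
        # leaving the remaining slots free for later repair/upgrade work.
        size = max(sizes)
        return size, math.ceil(spec_gb / size)

    print(lowest_capacity_fill(256, [64, 128, 256]))  # (64, 4): every slot used
    print(fewest_chips(256, [64, 128, 256]))          # (256, 1): three slots left open
    ```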




  • If you have the money and want simplicity, reliability, and interoperability, go for a Mac. Just clench your sphincter and maximize the RAM; 32 GB is the minimum appropriate for a 7-8yr lifespan of basic duties. And FFS, for storage, go for what your current data uses up ×2.5 or 1 TB, whichever is larger (there are vital performance reasons for that headroom). Don’t get the smallest storage unless third-party upgrade options exist, as they do for the Mac Mini M4. And remember: all RAM and most storage is integrated these days, which is why you should always max it out; there is no upgrade path except wholesale replacement of the machine. CPU is largely immaterial unless you are doing truly heavy lifting like video editing or AI, so that can often be the lowest tier.
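
    The storage rule of thumb above, as a quick sketch (the function name is mine; the rule is the comment’s):

    ```python
    def recommended_storage_gb(current_data_gb: float) -> float:
        # Rule of thumb: 2.5x your current data, or 1 TB (1000 GB),
        # whichever is larger.
        return max(current_data_gb * 2.5, 1000.0)

    print(recommended_storage_gb(200))  # → 1000.0 (the 1 TB floor wins)
    print(recommended_storage_gb(600))  # → 1500.0 (the 2.5x rule wins)
    ```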

    If you want freedom and a truly unconstrained system, some form of Linux/BSD on a Framework machine is the way to go. Or, for a desktop, hand-assemble it yourself.

    If you are going to stick with Windows, go for a business-class Dell. Trust me, it’ll be almost as $$$$ painful as a Mac, but those little f**kers are built to last. At least you can upgrade the RAM and on-board storage, though I honestly recommend not going under 32 GB for anything other than basic tasks. It’ll be a lot more zippy with 32 GB, even if you spend the first week tearing all the AI and built-in spyware out of Windows.