My one dark hope is AI will be enough of an impetus for somebody to update DMCA
> pay once, get access to everything everywhere
> thinks about Elsevier
OH GOD PLEASE NO
This is interesting but I’ll reserve judgement until I see comparable performance past 8 billion params.
Sub-4-billion-parameter models all seem to have the same performance regardless of quantization nowadays, so it's hard to see the potential in a 3-billion one.
I seriously doubt the viability of this, but I’m looking forward to being proven wrong.
I would recommend instead to use the AI Horde: https://stablehorde.net/ It’s a collection of people hosting stable diffusion/text generation models
There’s also openrouter which can connect to ChatGPT with a token-based system. (They check your prompts for hornyposting though)
Judging by my bank account I’m transitioning to non-profit status as well.
In my experience these open models are where the real work is being done. The large supervised models like DALL-E etc. are flashier, but there's a lot more going on behind the scenes than the model itself, so it's hard to gauge the real progress being made.
You could try Guix! It’s ostensibly source based but you can use precompiled binaries as well (using the substitute system)
It's a source-first functional package distro like Nix, but it uses Scheme to define everything from the packages to the way the init system (Shepherd) works.
It’s very different from other distros but between being functional, source-first, and having shepherd, I personally love it
It’s usually not the water itself but the energy used to “systemize” water from out-of-system sources
Pumping, pressurization, filtering, purifying all take additional energy.
The problem is that notably "powerful" AIs need pretty significant hardware to run well.
As an example, I think the Snapdragon NPUs can barely handle 7B models.
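Some rough back-of-envelope math on why (the 7B figure is from above; the quantization levels are just the typical ones, and actual runtime memory also needs room for activations and KV cache):

```python
def model_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate memory just for the weights, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# A 7B-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: {model_gb(7e9, bits):.1f} GB")
# 16-bit: 14.0 GB, 8-bit: 7.0 GB, 4-bit: 3.5 GB
```

Even at 4-bit that's ~3.5 GB of weights alone, which is a big chunk of a phone's RAM before you count activations.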
This is a good move for international open source projects. With multiple lawsuits currently ongoing in multiple countries around the globe, the intellectual-property status of code made using AI isn't settled enough to open yourself up to the liability.
I've done the same internally at our company. You're free to use whatever tool you want, but if the tool you use spits out copyrighted code, and the law eventually decides that model users rather than model trainers are liable for model output, then that's on you, buddy.
Doing god's work here
Starship was still Elon's brainchild, and it is years behind schedule and threatens the viability of the entire Artemis program. Their finances are also heavily tied to the success of Starlink, which is shaky at best.
I would not say SpaceX is “on track.”
I feel like this is going to be where I disconnect in a major way from our children's generation.
They’re likely going to find it completely normal to have an LLM as a friend and I don’t think I’ll ever be able to bring myself around to that.
The irony here is palpable
It doesn’t have to be your searches, it could have just been the fact that your phone recognized you were on a road trip and that people in your ad cohort tend to want to buy shoes while on road trips.
I've worked in the algorithmic ad space before, and I can say that I've never seen evidence of phones listening in on conversations, but I have seen plenty of evidence from years ago of all your other data being used to form a terrifyingly accurate profile.
We used to do dead reckoning and GPS speed/gait profiling, and we would only need about a week's worth of GPS data to know your height, weight, sex, where you live, where you work, where your kids go to school, etc.
We would take that data, cross-reference it with data-broker info to form a profile, put you in an ad-cohort bin, and serve you up as a platform for ad-matching services to match against ad campaigns, which get even further targeted.
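To give a sense of how little data that first step needs, here's a minimal sketch of speed profiling from raw GPS fixes (my own toy illustration, not the actual system; the thresholds are made-up ballpark values):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    R = 6371000.0  # mean Earth radius, meters
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def speed_profile(fixes):
    """fixes: list of (timestamp_s, lat, lon). Returns speeds in m/s between fixes."""
    speeds = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        dt = t1 - t0
        if dt > 0:
            speeds.append(haversine_m(la0, lo0, la1, lo1) / dt)
    return speeds

def guess_mode(speeds):
    """Crude classifier on median speed: walking / running / driving."""
    s = sorted(speeds)
    median = s[len(s) // 2]
    if median < 2.5:
        return "walking"
    if median < 6.0:
        return "running"
    return "driving"
```

From nothing but timestamps and coordinates you already get travel mode; add a week of it and the home/work/school locations fall out of where the track stops overnight and during the day.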
Millions of dollars are spent hyper-targeting you, but 99 times out of 100 an inaccurate campaign is paying more, so it gets the ad space. The one time the lower-paying, hyper-focused campaign gets through, it's always scary how accurate it is.
tl;dr: Ad companies don't need to listen to your conversations to know what you want to buy; ads are usually inaccurate because the inaccurate campaign paid more.
It’s usually not a case of the phone listening but, more creepily, that your behavior before and after talking to your wife about new shoes signaled that you want to buy new shoes.
Ad algorithms are surprisingly perceptive about signals that aren’t obvious.
What do you do for file syncing, if you don't mind me asking?
ChatGPT already is multiple smaller models. Most guesses peg GPT-4 as an 8×220-billion-parameter mixture of experts, i.e. eight 220-billion-parameter models squished together.
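For anyone unfamiliar with the term: a mixture-of-experts layer routes each token through only a few of its expert sub-networks, picked by a small gating network. A minimal NumPy sketch of the idea (toy sizes and random weights, not GPT-4's actual architecture or routing):

```python
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 16, 8, 2  # hidden size, expert count, experts used per token

# Each "expert" is a small linear layer; the gate scores which ones to use.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
gate_w = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x):
    """Route token vector x through the top-k experts, weighted by a softmax."""
    logits = x @ gate_w                # one gate score per expert
    top = np.argsort(logits)[-TOP_K:]  # indices of the k highest-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()           # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D)
out = moe_forward(token)
```

The point is that only 2 of the 8 experts run per token, so the model has far more parameters than it spends compute on for any single token.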