Oh, and I typically get 16–20 tok/s running a 32b model on Ollama through Open WebUI. Also, I've run into issues with 4-bit quantization of the K/V cache on some models myself, so just FYI.
It really depends on how you quantize the model and the K/V cache. This is a useful calculator: https://smcleod.net/vram-estimator/ I can comfortably fit most 32b models quantized to 4-bit (usually Q4_K_M or IQ4_XS) on my 3090's 24 GB of VRAM with a reasonable context size. If you need a much larger context window to feed in large documents etc., you'd have to go smaller on the model (14b, 27b, etc.), get a multi-GPU setup, or use something with unified memory and a lot of RAM (like the Mac Minis others are mentioning).
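For a back-of-the-envelope version of what that calculator does, here's a small sketch. The layer count, KV head count, and head dimension below are assumptions for a typical 32b GQA model, not pulled from any specific config — swap in your model's actual numbers:

```python
# Rough VRAM estimate: quantized weights + K/V cache at a given context.
# Architecture numbers used below are ASSUMPTIONS for a generic 32b
# GQA model (64 layers, 8 KV heads, head_dim 128); check your model card.

def weights_gib(params_b: float, bits_per_weight: float) -> float:
    """Size of the quantized weights in GiB."""
    return params_b * 1e9 * bits_per_weight / 8 / 2**30

def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 ctx_len: int, bytes_per_elem: float) -> float:
    """K and V tensors for every layer, filled to the full context length."""
    return (2 * n_layers * n_kv_heads * head_dim
            * ctx_len * bytes_per_elem) / 2**30

weights = weights_gib(32, 4.5)                  # "4-bit" quants average ~4.5 bpw
kv_fp16 = kv_cache_gib(64, 8, 128, 8192, 2)     # fp16 cache, 8k context
kv_q4   = kv_cache_gib(64, 8, 128, 8192, 0.5)   # ~4-bit cache, 8k context

print(f"weights ≈ {weights:.1f} GiB")
print(f"KV cache fp16 ≈ {kv_fp16:.2f} GiB, ~4-bit ≈ {kv_q4:.2f} GiB")
```

With these assumed numbers you land around 17 GiB of weights plus a couple of GiB of cache, which is why a 4-bit 32b model with a moderate context just squeezes into 24 GB (leaving some headroom for activations and the framework's own overhead, which this ignores).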
I think we can all agree that modifications which remove censorship and propaganda on behalf of one particular country or party are valuable for the sake of accuracy and impartiality. But reading some of the example responses from the new model, I honestly find myself wondering whether they've gone a bit further than that, replacing some of the old non-responses and positive portrayals of China and the CPC with the highly critical perspective typified by Western governments hostile to China (the US in particular). Even the name of the model certainly doesn't make it sound like neutrality and accuracy are the primary aim here.
I used to daily-drive Ubuntu some years ago for work and personal use, but I've been back on Win 10 primarily for the last 4–5 years. I was considering switching back because of how much Windows sucks (despite some proprietary software only being available on it), but remembering the trouble I had with networking/printer drivers and troubleshooting those issues, and then seeing this article, is definitely making me reconsider…
It looks like it will support DLSS 3 (frame generation), so if you have a 40-series card (or almost any card, with a bit of time to install the DLSS-to-FSR3 mod) that should be a nice boost.
Looks like it now has Docling Content Extraction Support for RAG. Has anyone used Docling much?