Oh, and I typically get 16-20 tok/s running a 32B model on Ollama via Open WebUI. Also, I've personally hit issues with 4-bit quantization of the K/V cache on some models, so just FYI.
It really depends on how you quantize the model and the K/V cache as well. This is a useful calculator: https://smcleod.net/vram-estimator/ I can comfortably fit most 32B models quantized to 4-bit (usually Q4_K_M or IQ4_XS) on my 3090's 24 GB of VRAM with a reasonable context size. If you're going to need a much larger context window to input large documents etc., then you'd need to go smaller on model size (14B, 27B, etc.), get a multi-GPU setup, or go with something with unified memory and a lot of RAM (like the Mac Minis others are mentioning).
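If you want a rough back-of-envelope version of what that calculator does, here's a sketch. The model config below (64 layers, 8 KV heads via GQA, head_dim 128) is just an assumption resembling a typical 32B model, and ~4.5 bits/weight is a common ballpark for Q4_K_M; real GGUF files add some overhead on top, so treat the numbers as estimates, not exact figures.

```python
def model_vram_gib(params_b: float, bits_per_weight: float) -> float:
    # Weights only; actual GGUF files carry extra overhead (embeddings,
    # metadata, mixed-precision layers), so real usage runs a bit higher.
    return params_b * 1e9 * bits_per_weight / 8 / 2**30

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 context: int, bits_per_elem: int = 16) -> float:
    # K and V tensors, per layer, per token, at the given element width.
    # Halving bits_per_elem (e.g. 8-bit K/V quantization) halves this.
    return 2 * layers * kv_heads * head_dim * context * bits_per_elem / 8 / 2**30

# Hypothetical 32B config (assumed, not from any specific model card):
weights = model_vram_gib(32, 4.5)          # ≈ 16.8 GiB at ~Q4_K_M bit widths
kv = kv_cache_gib(64, 8, 128, 8192)        # ≈ 2.0 GiB of fp16 K/V at 8k context
print(f"weights ≈ {weights:.1f} GiB, KV cache ≈ {kv:.1f} GiB")
```

That lands around 19 GiB before overhead, which is why a 4-bit 32B model with a moderate context fits on a 24 GB card, and why pushing the context much larger is what forces you to a smaller model or more memory.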
This would be a great potential UX improvement for streaming sports feeds for sure, not having to navigate web pages and start/manage streams manually. Does anyone know if this is possible for sites serving these streams, like FirstRowSports or StreamEast?
As a 'front page of the internet' it has been a pretty great replacement for me, as it's where I go each day just to see what's going on. However, due to the smaller size you lose a lot of the activity in more niche communities, and the sheer volume of posts/comments compared to Reddit. That's the biggest downside. Still, you also lose the incessant ads, bad UI/UX decisions, and ever-accelerating late-stage-capitalism-driven enshittification, so that's a big plus.
Looks like it now has Docling Content Extraction Support for RAG. Has anyone used Docling much?