

Will do once I get the chance


Will do once I get the chance


Not using ZFS at this point, but this sounds like a good thing to keep in mind


The reason I suspected temps was that I very recently switched to a Define R6 (got it second hand), and since the start I have been a bit suspicious of how it performs thermally (in terms of noise it is actually quite OK).
I do have a fan on the drives, but one of them still goes up to 40°C (even with the front door open).
Also, when you talk about fsck, what would be good options to use for checking the drive?
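For example, would something like this (run while the partition is unmounted) be the kind of check you mean? Just a sketch assuming ext4, with /dev/sdX1 as a placeholder for the actual partition:

    # forced, read-only check of an unmounted ext4 partition (no repairs attempted)
    sudo fsck -fn /dev/sdX1
    # and a repair pass afterwards if it reports problems
    sudo fsck -fy /dev/sdX1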


I am also leaning in this direction. I just ordered a new 8 TB drive and will proceed with SMART long tests. When you talk about secure erase, are we talking about using dd to overwrite the drive from /dev/zero?
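To make sure I have the right idea, this is roughly what I am planning once the new drive arrives (sdX is a placeholder for the actual device; the dd line is the part I am unsure about):

    # start a long SMART self-test, then review the results once it finishes
    sudo smartctl -t long /dev/sdX
    sudo smartctl -a /dev/sdX

    # full overwrite with zeros (destroys everything on the drive)
    sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress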


There is certainly a very big amount of fuckery going on right now with nvidia drivers. I simply did not know it was getting this bad. I also find it very interesting that the nvidia “open source” bit got the criticism it deserves (it is just not open; there was a transfer of responsibilities, and only one small part was opened up).


I find it great that the devs of the respective projects took the time to actually explain how things work for GN. Sure, they get the attention because it’s a known channel, but on the other hand, Microsoft would never give them this much detail and attention to actually understand what’s happening.


And about running some commands with sudo: you probably should! (Unless it is not your stuff and you don’t care.)


What filter are we even talking about here? Are they filtering any automatic mentions of Lemmy? If that is the case, that is some petty shit.


That is bizarre… What could possibly be the issue with this post on Reddit?


I did know about Zabbix before, and I actually did try to install it using the Proxmox helper scripts page. Somehow, by the end I got a blank page. Hence I made this post to see more alternatives.
I do know Zabbix is very well recognized in this area; I just did not install it successfully on my previous attempt.


I will look into this one. The first impression looks interesting, thanks for mentioning it!


I will check how that one works. I was not planning to have another machine to do dashboarding, but maybe there are ways to host this as a VM or LXC and set it up that way.
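For the LXC route, I am picturing something roughly like this on the Proxmox host (just a sketch; the VMID, template version, and storage names are assumptions and would need adjusting):

    # grab a container template and create a small unprivileged container for the dashboard
    pveam update
    pveam download local debian-12-standard_12.7-1_amd64.tar.zst
    pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
        --hostname dashboard --memory 1024 --cores 1 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp \
        --rootfs local-lvm:8 --unprivileged 1
    pct start 200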


Those are good questions. I would prefer this info to be available somewhere in the Proxmox page itself. If that is not feasible for whatever reason, the second-best option is a separate service hosting a dashboard page. I did forget about Grafana, but I can look into it.
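From a quick look, spinning Grafana up in a container seems to be as simple as something like this (untested on my side, just the default port):

    # run the OSS Grafana image and expose its web UI on port 3000
    docker run -d --name grafana -p 3000:3000 grafana/grafana-oss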
I do not use models online in general, but my needs are also much smaller. The most I use my local Ollama model for is translations. I am always interested in seeing more focused models that we can use on lower-end hardware.
Did you try to do this workflow with local models? If so, in your experience, what are the better models for this?
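For reference, the translation use is literally just this kind of thing (llama3.2 here is only an example; any small model you have pulled works):

    # assumes a small model was already pulled, e.g. `ollama pull llama3.2`
    ollama run llama3.2 "Translate to English: 'obrigado pela ajuda'"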


Fixed in the BIOS, but from what I see, the dbx part is still missing for some models. They are working on it, at least.


I tried a couple of times with Jan AI and a local Llama model, but somehow it does not work that well for me.
But then again I have a 9070 XT, so not exactly optimal.


So it is not on this rack, OK. Because for a second I was thinking you were somehow able to run AI tasks with some sort of small cluster.


I nowadays have a 9070 XT in my system. I have just dabbled in this, but so far I haven't been that successful. Maybe I will read more into it to understand it better.


I have a question about the AI usage here: how do you do this? Every time I see AI usage mentioned, it involves some sort of 4090 or 5090, so I am curious what kind of AI usage you can do here.


Which begs the question: if you, as a company, do not want to support the device on systems not on the short list, why not open source the main driver and let people figure out how to make it work somewhere else? Is this such a stupid thing to wish for?