Hey everyone! I was skimming through other people's inference benchmarks and noticed that the driver version is usually mentioned. It made me wonder how relevant it actually is. My prod server runs Debian 12, so the packaged NVIDIA drivers are rather old, but I'd prefer not to mess with the drivers if it won't bring a benefit. Do any of you have experience with this, or has anyone done some testing?


I see. When I run the inference engine in a container, will the container be able to use its own CUDA version, or will it use the host's?
I'm not sure; I've tried to avoid this whole situation for the last few years :-) IIRC the container can bundle its own CUDA toolkit, but the host's driver still has to be new enough to support that CUDA version, so double-check that.
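
For what it's worth, my understanding of the split is: the driver (kernel module plus libcuda.so) always comes from the host and gets mapped into the container by the NVIDIA container toolkit, while the CUDA runtime/toolkit can ship inside the image. You can see both sides from inside the container by querying the runtime API. A minimal sketch, assuming a working nvcc (the file name versions.cu is just an example; compile with something like nvcc -o versions versions.cu):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int driverVer = 0, runtimeVer = 0;

        // Highest CUDA version the host's driver stack supports
        // (this comes from the host, even inside a container).
        cudaDriverGetVersion(&driverVer);

        // CUDA runtime version this binary was built against --
        // the part that can live inside the container image.
        cudaRuntimeGetVersion(&runtimeVer);

        // Versions are encoded as 1000*major + 10*minor.
        printf("driver supports up to CUDA %d.%d\n",
               driverVer / 1000, (driverVer % 1000) / 10);
        printf("runtime in use is    CUDA %d.%d\n",
               runtimeVer / 1000, (runtimeVer % 1000) / 10);
        return 0;
    }

If the driver number comes out lower than the runtime number, the host driver is too old for the CUDA version in the container and you'll typically see errors like cudaErrorInsufficientDriver at startup, so that's the quick sanity check before touching the host drivers at all.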