

I’m not mixing up units, but let me better explain what I mean. The max speed only applies in a best-case scenario with a single sequential reader, and it drops dramatically once you add other simultaneous operations, because the read head has to seek between different locations. Random read speeds regularly test at less than 1MB/s, and even though multiple sequential streams wouldn’t be fully random, they’d still require plenty of seeking.
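If you want to see the random-read floor on your own drive, a tool like fio makes it easy to measure. A minimal sketch, assuming Linux and a placeholder device name (/dev/sdX):

    # --readonly guards against accidental writes to the device.
    # --direct=1 bypasses the page cache so we measure the disk itself.
    fio --name=randread --filename=/dev/sdX --readonly \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio \
        --runtime=30 --time_based

On a typical 7200rpm drive, 4k random reads land in the ballpark I described, simply because each request costs a seek plus rotational latency.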
I did a little testing on a drive I have here just now to make sure I’m not completely full of shit. A single stream read was about 120MB/s, and I was surprised how well it handled multiple read streams. My drive could handle roughly 9 sequential read streams from different locations on the drive while staying above 10MB/s, so while it wasn’t reaching its max speed, it wasn’t horrible; that matched your expectations almost exactly. The real killer, though, was writing. If I added a single write stream, the read speed dropped to about 1.5MB/s, because the drive seemed to strongly prioritize writing over reading. Maybe some configuration could improve this? Interestingly, adding more read streams improved that figure, but only up to about 4.5MB/s.
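For anyone who wants to poke at their own drive, this is roughly the shape of what I ran, reconstructed from memory; /dev/sdX and /mnt/disk are placeholders, and the offsets are arbitrary:

    # Drop the page cache first so reads actually hit the platter (needs root):
    sync; echo 3 > /proc/sys/vm/drop_caches

    # 9 sequential readers, each starting at a different offset.
    # skip= is in units of bs, so these streams start ~100GB apart;
    # adjust for however big your drive is. Each dd prints its
    # throughput when it finishes.
    for i in $(seq 0 8); do
        dd if=/dev/sdX of=/dev/null bs=1M skip=$((i * 100000)) count=10000 &
    done

    # The write stream that tanked my read speeds: one sequential
    # writer to a scratch file on the same disk.
    dd if=/dev/zero of=/mnt/disk/ddtest bs=1M count=10000 oflag=direct &

    wait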
My results shouldn’t be taken too seriously; it’s just one drive and me mucking around with dd. But I think they’re still illustrative of what I was alluding to: if you’re using a single HDD for multiple things simultaneously, performance can suffer badly. Actual performance will depend on the workload, of course, and honestly the results are way better than I expected, so this isn’t likely a realistic concern at all unless you’ll be constantly writing large amounts of data to the drive.
Thanks for calling me out on this; these are really interesting results, I think.
Sorry, “on demand” is not a good way to state this; it’s just how my weird mind thinks of things. By “on demand” I mean you’re actively using the drive to store or view something. If you’re not intentionally doing something with it, the drive should be completely idle. That’s more of a target than a requirement, though. It’s a way to keep storage drives tidy and not littered with temporary cache files or databases that various services use to store runtime state. It’s just a strategy I like to take: keep bulk storage separated from the applications and services that use it.
Even if a USB drive is intended to be permanently attached, it should still be treated as a temporary component. The reason is that if something happens and the drive gets disconnected, the disruption to the system is limited. You lose your media and documents until it’s reattached, of course, but the computer keeps chugging along happily.
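On Linux with systemd, one concrete way to get that “keeps chugging along” behavior is to mark the mount as optional in /etc/fstab. A sketch, with the UUID and mount point made up:

    # nofail: boot carries on if the drive is missing.
    # x-systemd.device-timeout=5s: don't sit waiting for it at boot.
    # x-systemd.automount: mount lazily on first access, which also
    # fits the "on demand" idea above.
    UUID=1234-ABCD  /mnt/media  ext4  defaults,nofail,x-systemd.automount,x-systemd.device-timeout=5s  0  2

With nofail, a missing or unplugged drive becomes a nuisance rather than a boot failure.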
If you use it for writing log files, then its loss can disrupt those services (and also prevent the problem from being reported). It’ll also be constantly making noise, which can be annoying.
That’s my reasoning, anyway; you might prefer to do it differently.