I once worked on a supercomputer in the olden times - this was before Linux. You basically wrote your calculation application on a front-end system with a cross-compiler. It was then transferred to the target machine's RAM and ran there. Your application was the only thing running on that machine. No OS, no drivers, no interrupts (at least not that I knew of). Just your application directly on the hardware. Once your program was finished, the RAM was read back, and you could analyze the dump to extract your results.
“Clustering” in this context is a bit different. Do you have a network of some kind? Congratulations, you can do an HPC cluster.
In the context you describe, it's usually referring to HA clustering or some application-specific clustering. HPC clustering is a lot less picky: if someone can port an MPI runtime to it, you can make a go of it.
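To illustrate what that MPI runtime actually buys you: the SPMD (single program, multiple data) model, where every "rank" runs the same program on its own slice of the data and a gather step combines the partial results. A minimal sketch of that idea using only Python's standard library (a stand-in for illustration only; a real cluster would run mpi4py or C MPI over the interconnect, and the rank/size/slice names here are my own):

```python
# Sketch of the SPMD model an MPI runtime provides: each "rank" runs
# the same function on its own slice of the data, then a gather+reduce
# step combines the partial results. Threads stand in for cluster nodes.
from concurrent.futures import ThreadPoolExecutor


def rank_work(rank: int, size: int, data: list) -> int:
    # Each rank takes every size-th element starting at its own offset,
    # roughly analogous to an MPI_Scatter of the input.
    return sum(data[rank::size])


def mpi_style_sum(data: list, size: int = 4) -> int:
    with ThreadPoolExecutor(max_workers=size) as pool:
        # Launch the same program on every "rank" (SPMD).
        partials = [pool.submit(rank_work, r, size, data) for r in range(size)]
        # The gather + reduce step, analogous to MPI_Reduce with MPI_SUM.
        return sum(f.result() for f in partials)


if __name__ == "__main__":
    print(mpi_style_sum(list(range(100))))  # prints 4950
```

The point of the sketch is that the programming model is just "same code, different rank, message-passing to combine" - which is why HPC clustering ports to nearly anything with a network.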
Wait it’s all Linux? 🔫 Always has been…
OG
No it's not; other Unixes were on the list until 2017, and there were even some Windows and macOS entries for a time.
I remember a clustering software for NT, but I never heard of one for the Mac.