• remotelove@lemmy.ca
    16 hours ago

    They still probably need a ton of customization and tuning at the driver level and beyond, which open source allows for.

    I am sure there is plenty of existing “super computer”-grade software in the wild already, but a majority of it probably needs quite a bit of hacking to get running smoothly on newer hardware configurations.

    As a matter of speculation, the engineers and scientists that build these things are probably hyper-picky about how some processes execute and need extreme flexibility.

    So, I would say it’s a combination of factors that make Linux a good choice.

    • jj4211@lemmy.world
      9 hours ago

      Surprisingly, not a lot of ‘exciting tuning’: a lot of these systems are exceedingly conservative when it comes to tuning. From a software perspective, the most common “weird” thing in these systems is the affinity for diskless boot, and that mostly comes from a history of hard drives being a frequent cause of node downtime (yes, the stateless nature of diskless boot continues to be desired, but the community would likely never have bothered if not for OS HDD failures).

      To oversimplify, they also sometimes like managing the OS as a single common chroot-style image, but that’s mostly about running hundreds of thousands of copies of what should be the exact same thing over and over again, rather than any exotic nature of their workload.
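      The shared-image, diskless pattern can be sketched roughly like this. This is a provisioning fragment, not something runnable as-is (it needs root and a PXE/NFS setup); the paths, package set, and export options are illustrative assumptions, not anything from the comment above:

```shell
# Hypothetical sketch: build one "golden" root image on the management
# node, then serve it read-only to every compute node, so all nodes
# boot the exact same OS state. (Paths and packages are illustrative.)

# Populate a root filesystem tree once, into a directory:
dnf --installroot=/srv/images/compute --releasever=9 \
    install -y kernel openssh-server slurm

# Export that tree read-only over NFS. Nodes PXE-boot a kernel/initrd
# that mounts this as / (typically with a tmpfs overlay for /var, /etc
# changes, etc.), so a node reboot always returns to a clean state.
echo "/srv/images/compute *(ro,no_root_squash)" >> /etc/exports
exportfs -ra
```

      The payoff is that an OS change is made once to the image and picked up by every node on its next boot, and losing a node's local disk (if it even has one) costs nothing.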

      Linux is largely the choice because this market evolved from a largely Unix-based world in which most of the applications they used were open source, out of necessity: sites had to be able to bid out, say, Sun versus IBM versus SGI and keep working regardless of who was awarded the business. In that time frame Windows NT wasn’t even an idea, and most of these institutions wouldn’t touch ‘freeware’ for such important tasks.

      In the 90s Linux happened and, critically for this market, Red Hat and SUSE happened. Now they could have a much more vibrant and fungible set of hardware vendors, with a credible commercial software vendor that could support all of them. Bonus: you could run the distributions or their clones for free, which helped a lot of the smaller academic institutions get a reasonable shot without diverting money from hardware to software. Sure, some aggressively exotic things might have been possible compared with the prior proprietary norm, but mostly it was about the improved vendor-to-vendor consistency.

      Microsoft tried to get into this market in the late 2000s, but no one asked for them. They had poor compatibility with existing code, were more expensive, and were much worse at management at scale for headless, multi-user compute nodes.