• rmrf@lemmy.ml
      link
      fedilink
      English
      arrow-up
      5
      ·
      2 hours ago

      I think a more likely alternative would be a BSD or another Unix derivative

    • boonhet@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      10
      ·
      5 hours ago

      Most current TOP500 here

      They don’t actually list the OS there, but you can assume it’s Linux. A lot of R&D has gone into getting Linux to run well on supercomputers; it’d be cost-prohibitive to try some other OS.

  • Luffy@lemmy.ml
    link
    fedilink
    English
    arrow-up
    87
    arrow-down
    1
    ·
    17 hours ago

    If you look at it logically, it only makes sense.

    With these supercomputers, you often run very specialized hardware that you have to write custom kernels and drivers for, and if you aren’t willing to spend millions to get Microsoft to support it, your only real option is Linux.

    • IphtashuFitz@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      8 hours ago

      I managed a research cluster for a university for about 10 years. The hardware was largely commodity and not specialized, unless you call Nvidia GPUs or InfiniBand “specialized”. Linux was the obvious choice because many cluster-aware applications, both open source and commercial, run on Linux.

      We even went so far as to integrate the cluster with CERN’s ATLAS grid to share data and compute power for analyzing ATLAS data from the LHC. Virtually all the other grid clusters ran Linux, so that made it much easier to add our cluster to its distributed environment.

    • SlurpingPus@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      3 hours ago

      The competition wasn’t between Linux and Windows, but rather between Linux and dedicated server OSes like Solaris, HP-UX and whatnot (mostly variants of Unix, though idk which ones exactly).

      P.S. You get much more enjoyment from this thread if you imagine it in a thick English accent.

      • jj4211@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        5 hours ago

        And for those, it’s pretty clear. Solaris, HP-UX, Irix, AIX… they were all proprietary offerings that strove to lock users into a specific hardware stack at very high prices.

        Linux opened up the competitive field to a much broader set of businesses, making performance per dollar much more attractive. Open source also had a great deal of appeal for the academic market, a huge part of the HPC community.

    • Eggymatrix@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      44
      arrow-down
      6
      ·
      16 hours ago

      Not really. We’re not in the eighties anymore; modern supercomputers are mainly a bunch of off-the-shelf servers connected together.

      • sp3ctr4l@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        4
        ·
        edit-2
        8 hours ago

        I mean, what the first person said is true…

        … and what you have just said is true.

        There is no tension between these concepts.

        Nearly all servers run Linux, and nearly all supercomputers are some kind of locally networked cluster… that runs Linux.

        There’s… there’s no conflict here.


        In fact, this multi-computer paradigm for Linux is the core of why X11 is weird and fucky in the context of a modern, self-contained PC, and why Wayland is a thing nowadays.

        X11 is built around a paradigm where you have a whole bunch of hardware units doing the actual calculations of some kind, plus some teeny tiny bit of hardware that is basically just a display and input device… and that thin client is the only thing that even needs to load any display- or input-related code.

        You also don’t really need to worry so much about security in the display/input framework itself, because your only potential threat is basically a rogue employee at your lab, and everyone working there is some kind of trained expert.

        This makes sense if your scenario is a self contained computer research facility that is only networked to what is in its building…

        … it makes less sense, and has massive security problems, when a single machine can do all of that, that machine is networked to millions of remote devices via the modern internet, computer viruses and malware are a multi-billion-dollar industry… and the average computer user is roughly as knowledgeable as a 6th grader.

      • remotelove@lemmy.ca
        link
        fedilink
        English
        arrow-up
        34
        ·
        16 hours ago

        They still probably need a ton of customization and tuning at the driver level and beyond, which open source allows for.

        I am sure there is plenty of existing “super computer”-grade software in the wild already, but a majority of it probably needs quite a bit of hacking to get running smoothly on newer hardware configurations.

        As a matter of speculation, the engineers and scientists that build these things are probably hyper-picky about how some processes execute and need extreme flexibility.

        So, I would say it’s a combination of factors that make Linux a good choice.

        • jj4211@lemmy.world
          link
          fedilink
          English
          arrow-up
          8
          ·
          9 hours ago

          Surprisingly, not a lot of ‘exciting tuning’; a lot of these systems are exceedingly conservative on that front. From a software perspective, the most common “weird” thing is the affinity for diskless boot, and that mostly comes from a history of hard drives being a frequent cause of downtime (yes, the stateless nature of diskless boot is still desired, but the community would likely never have bothered if not for OS HDD failures). They also sometimes manage the OS kind of like a common chroot, to oversimplify, but that’s mostly about running hundreds of thousands of copies of what should be the exact same thing, rather than anything exotic about their workload.

          Linux is largely the choice because this market evolved from a largely Unix-based one in which most of the applications were open source, out of necessity: institutions had to be able to bid, say, Sun versus IBM versus SGI and keep working regardless of who was awarded the business. In that time frame, Windows NT wasn’t even an idea, and most of these institutions wouldn’t touch ‘freeware’ for such important tasks.

          In the 90s, Linux happened and, critically for this market, Red Hat and SUSE happened. Now they could have a much more vibrant and fungible set of hardware vendors, with a credible commercial software vendor that could support all of them. Bonus: you could run the distributions or clones for free, which helped a lot of smaller academic institutions get a reasonable shot without diverting money from hardware to software. Sure, some aggressively exotic things might have been possible versus the prior norm of proprietary systems, but mostly it was about improved vendor-to-vendor consistency.

          Microsoft tried to get into this market in the late 2000s, but no one asked for them. They had poor compatibility with existing code, were more expensive, and were much worse at managing headless, multi-user compute nodes at scale.

        • Treczoks@lemmy.world
          link
          fedilink
          English
          arrow-up
          13
          arrow-down
          1
          ·
          14 hours ago

          This is called a “cluster”, and it predates Linux by a decade or two, but yes.

          And what else would the supercomputers run on? Windows? You won’t get into the tops if half your computers are bluescreening while the other half is busy updating…

          The times when supercomputers were batch-oriented machines, where your calculation was the only thing running on the hardware and your software basically included the OS (or at least the parts you needed), are long over.

        • olosta@lemmy.world
          link
          fedilink
          English
          arrow-up
          17
          ·
          16 hours ago

          Some have thousands, but yes. On most of these systems:

          • Process launch and scheduling are done by a resource manager (SLURM is common)
          • Inter-process communication uses an MPI implementation (like Open MPI)
          • These inter-node communications use a low-latency (and high-bandwidth) network, dominated by InfiniBand from Nvidia (formerly Mellanox)

          What’s really peculiar by modern IT standards is that they often use old-school Unix multi-user management. Users connect to the system through SSH with their own usernames, use a POSIX filesystem, and their processes are executed under their own usernames.

          There are kernel knobs to pay attention to, but generally standard RHEL kernels are used.
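
          For concreteness, a minimal sketch of how the pieces above fit together: the user SSHes in under their own username and hands the resource manager a batch script. The partition name, node counts, and binary path below are invented for illustration; the `#SBATCH` directives and `srun` are standard SLURM.

          ```shell
          #!/bin/bash
          # Hypothetical SLURM batch script; partition, sizes, and paths are made up.
          #SBATCH --job-name=example-sim
          #SBATCH --partition=compute        # cluster-specific queue (assumption)
          #SBATCH --nodes=4                  # four compute nodes
          #SBATCH --ntasks-per-node=32       # one MPI rank per core (assumption)
          #SBATCH --time=02:00:00            # wall-clock limit

          # srun launches one MPI rank per task on the allocated nodes; the MPI
          # library then handles inter-node communication over the low-latency
          # fabric (e.g. InfiniBand) where available.
          srun ./my_mpi_app input.dat
          ```

          The user would submit this with `sbatch job.sh` and watch the queue with `squeue`, all from an ordinary SSH session.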

        • ji59@hilariouschaos.com
          link
          fedilink
          English
          arrow-up
          2
          ·
          16 hours ago

          I think that the software is specialized, but the hardware is not. They use some smart algorithms to distribute computation over a huge number of workers.
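
          As a toy illustration of that scatter/gather idea (pure Python on one machine, with a made-up workload; real supercomputers distribute ranks across nodes with MPI rather than threads):

          ```python
          from concurrent.futures import ThreadPoolExecutor

          def partial_sum(chunk):
              # Each worker handles its slice of the job independently.
              return sum(x * x for x in chunk)

          def distributed_sum_of_squares(data, workers=4):
              # Scatter: split the input into roughly equal chunks, one per worker.
              size = max(1, len(data) // workers)
              chunks = [data[i:i + size] for i in range(0, len(data), size)]
              # Farm the chunks out, then gather and reduce the partial results.
              with ThreadPoolExecutor(max_workers=workers) as pool:
                  return sum(pool.map(partial_sum, chunks))

          print(distributed_sum_of_squares(list(range(1000))))  # 332833500
          ```

          The split/compute/combine shape is the same regardless of whether the workers are threads, processes, or MPI ranks on separate nodes.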

  • Rose56@lemmy.zip
    link
    fedilink
    English
    arrow-up
    14
    ·
    13 hours ago

    Long time now. Linux runs on almost every device, not just computers and phones.

    • ripcord@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      ·
      12 hours ago

      I think they are surprised that there aren’t any other bespoke or legacy or somewhat exotic OSes being used for any of them. Or maybe BSD or something.

      • jj4211@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        9 hours ago

        The hardware is generally more mundane on that front, and even the ‘exotic’ ones are most readily accommodated by Linux (almost all x86 based, with a handful of POWER, ARM, and one that is allegedly DEC Alpha derived).

        Generally speaking, a Top500 system nowadays is a bunch of x86 servers, usually with some GPUs in them, connected by Ethernet or InfiniBand.

    • ryannathans@aussie.zone
      link
      fedilink
      English
      arrow-up
      1
      ·
      9 hours ago

      Right? Where’s FreeBSD gone? Would have thought FreeBSD would squeeze out extra performance from them.

      • jj4211@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        5 hours ago

        FreeBSD is unlikely to squeeze extra performance out of these. It’s particularly disadvantaged because the high-speed networking vendors favored in many of these systems ignore FreeBSD (Windows is at best an afterthought); only Linux is thoroughly supported.

        Broadly speaking, FreeBSD was left behind in part because of copyleft and in part by doing too good a job of packaging.

        In the 90s, if a company made a go of a commercial operating system sourced from a community, it either went FreeBSD, effectively forking it, keeping its variant closed source, and contributing nothing upstream, or went Linux and was generally forced to upstream changes by copyleft.

        Part of it may be due to the fact that a Linux installation does not come from a single upstream but is assembled from various disparate projects by a ‘distribution’. There’s no canonical set of kernel + GUI + compilers + utilities for Linux, whereas FreeBSD is a much more prescriptive project. I think that’s gotten a bit looser over time, but back in the 90s FreeBSD was a one-stop-shop, batteries-included project that maintained everything the OS needed under a single authority. Linux needed distributions, and that created room for entities like Red Hat and SUSE to make their mark.

        So ultimately, when those traditionally commercial Unix shops started seeing x86 hardware with a commercially supported Unix-alike, they could pull the trigger. FreeBSD was a tougher pitch since they hadn’t attracted something like a RedHat/SUSE that also opted into open source model of business engagement.

        Looking at the performance of these applications on these systems, it’s hard to imagine an OS doing better. Moving data is generally as close to zero-copy as a use case can get, and these systems tend to run essentially a single application at a time, so CPU and I/O scheduling hardly matter. The community used to sweat ‘jitter’, but at this point those background tasks are such a rounding error in overall system performance that they aren’t worth thinking about anymore.

      • shane@feddit.nl
        link
        fedilink
        English
        arrow-up
        1
        ·
        8 hours ago

        Phoronix used to benchmark various Linux flavors and include FreeBSD. It was never the fastest; usually some Intel-optimized distro was, IIRC.

    • Treczoks@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      ·
      14 hours ago

      I once worked on a supercomputer in the olden times - this was before Linux. You basically wrote your calculation application on a front-end system with a cross-compiler. It was then transferred to the target machine’s RAM and ran there. Your application was the only thing running on that machine. No OS, no drivers, no interrupts (at least none that I knew of). Just your application directly on the hardware. Once your program finished, the RAM was read back, and you could analyze the dump to extract your results.

    • olosta@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      15 hours ago

      No it’s not; other Unixes were on the list until 2017, and there was even some Windows and macOS for a time.

        • jj4211@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          9 hours ago

          “Clustering” in this context is a bit different. Do you have a network of some kind? Congratulations, you can do an HPC cluster.

          In the context you describe, it’s usually referring to HA clustering or some application-specific clustering. HPC clustering is a lot less picky: if someone can port an MPI runtime to it, you can make a go of it.

  • axh@lemmy.world
    link
    fedilink
    English
    arrow-up
    7
    ·
    16 hours ago

    It is said that once you install Windows on them, you will finally be able to use Internet Explorer!

  • Rooty@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    ·
    16 hours ago

    Well yeah, if you have custom or exotic hardware, you either customize an existing OS or write one from scratch. The first option is much more sensible.

    • Treczoks@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      2
      ·
      14 hours ago

      “Custom” and “exotic” are things of the past. Been there, used that. It didn’t have Linux, either.

      Nowadays, it’s more or less stock PCs (with high-end specs for CPU, RAM, GPU, etc.), but nothing that wouldn’t run a common OS. They would probably even run Windows.

      What makes them special is clustering.

      • jj4211@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        9 hours ago

        You are right, and yes, they could run Windows, but it’d be pretty silly.

        All the applications they run were written with a pure POSIX mindset, the jobs are run headless, and the legacy of much of the application code dates back to before even Windows NT was a thing.

        In the late 2000s, Microsoft actually made a concerted push to try to get into the market, and it was a laughable failure (they brought nothing to the table, had reduced ecosystem compatibility, and tried to charge more in the process).

    • axx@slrpnk.net
      link
      fedilink
      English
      arrow-up
      2
      arrow-down
      2
      ·
      14 hours ago

      Supercomputers are not made of custom or exotic hardware. They are large clusters of high end servers.

  • CentipedeFarrier@piefed.social
    link
    fedilink
    English
    arrow-up
    5
    ·
    16 hours ago

    Well shit now I want to know if quantum computers have operating systems…

    And it looks like, as of earlier this year, the answer to that… is yes. Special ones. Well, one special one so far.

    Quantum internet seems like a really bad idea, though, if the regular internet is any indication…

    https://www.livescience.com/technology/computing/worlds-first-operating-system-for-quantum-computers-unveiled-it-can-be-used-to-manage-a-future-quantum-internet

    • Treczoks@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      14 hours ago

      It’s not exactly the quantum computers themselves having an OS. As with supercomputers back in my time, there are more or less normal computers running a more or less normal OS, with the computational engine attached as a kind of device. You create and compile your “application” on that host processor, then load the “binary” onto the quantum device and execute it.

  • Novamdomum@fedia.io
    link
    fedilink
    arrow-up
    7
    arrow-down
    9
    ·
    16 hours ago

    Wow, you learnt that today RandomGS310 did you? The joy that must have given you that you immediately created an account and this was your first and only post? Good for you my totally authentic friend.

    If we could all just break character for a sec… I’ve been meaning to ask you Linux lovers a question for a while. Why is it so important to you that everyone starts using Linux? It just seems like you all put so much effort into trying to convert/coerce/shame people into using Linux and I’ve just never understood why?

    • ozymandias@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      3
      ·
      3 hours ago

      I’m fond of Linux; I also understand that most people aren’t programmers and find it daunting.
      I want people to be able to use it, and I’m really happy when it gets mainstream support, like with SteamOS.
      As to why? Because Windows spies on you, is full of viruses, and participates in monopolistic practices… having a corporation control every aspect of your operating system is a bad idea for, like, society and humanity…

    • c0dezer0@programming.dev
      link
      fedilink
      English
      arrow-up
      2
      ·
      5 hours ago

      Linux users are like vegans. 90% of Linux users (or vegans) are perfectly fine with whatever you use (or eat). It’s only the vocal 10% who cast a negative light on the entire community.

    • Optional@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      10 hours ago

      We also recommend therapy and hydration. But, y’know. In different communities.

      Same reasons though.

    • jj4211@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      8 hours ago

      To some extent, it would be nice if the userbase were broader, so the applications I want to use would have to accept that they cater to Linux users.

      To some extent it’s nice to share what you appreciate.

      Another facet is when someone complains for the 100th time about Microsoft making yet another move against their own users it feels weird that they just keep with it.

    • JasonDJ@lemmy.zip
      link
      fedilink
      English
      arrow-up
      4
      ·
      10 hours ago

      You ever see someone who’s clearly abused by their spouse/parent/whatever, and you’re like “you gotta get away from them”… but they’re like “no, it’s okay, I need this, and it’s not that bad, most of the bruises are under my shirt”?

      It’s sorta like that.

      • Novamdomum@fedia.io
        link
        fedilink
        arrow-up
        1
        arrow-down
        1
        ·
        8 hours ago

        My OS is not abusing me. I’m fine. In fact it’s been incredibly empowering for the last 40 years. I mean it WAS abuse back then, trying to install the damn thing from those 5.25" floppies. Now it’s smooth as butter and I love it because I remember the journey from then to now so clearly.

        • JasonDJ@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          ·
          5 hours ago

          No, it is abusing you. What you’re describing is akin to the conditioning and grooming most abusers are good at.

          You just got used to all the data collection and telemetry and restrictive licenses and not owning your own computer or your software. Your abuser has made you think it’s all okay.

          It’s not okay. This is the consumer abuse that the whole damn industry is getting away with.

          • Novamdomum@fedia.io
            link
            fedilink
            arrow-up
            2
            arrow-down
            1
            ·
            5 hours ago

            The urge to shout “HE CAN CHANGE! YOU DON’T KNOW HIM LIKE I DO!” is pretty strong right now 🤣

            • JasonDJ@lemmy.zip
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              1 hour ago

              I doubt MS could change for the better. People still keep on enabling them no matter how hard they hit the users or admins. Part of that is because they’ve got you reliant on them. All of your apps, all of your files, everything is in their system.

              Does this sound familiar? Sort of like how abusers will control their victims’ financial and social activity?

              Honestly I never thought of this analogy before today and it seemed to be in bad taste because of the seriousness of abuse…but the similarities are there…

    • otacon239@lemmy.world
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      15 hours ago

      The more people that use it, the more companies and individuals see that it’s a viable alternative to Windows. It’s not that we are actively affected by others using Windows, we just know how much better it is on the other side with an OS that isn’t trying to be hostile. You already know all the talking points. We just want others to stop complaining daily about how awful computers are and instead see that they don’t have to be.

      • Novamdomum@fedia.io
        link
        fedilink
        arrow-up
        4
        ·
        8 hours ago

        I get it. You’re basically lovable hippies who found a cool thing and can’t imagine why anyone would want to use something less good (in their view). Best answer to my question so far thanks :)

    • Ghoelian@piefed.social
      link
      fedilink
      English
      arrow-up
      8
      ·
      15 hours ago

      Because the alternatives are just bad for everyone. It’s not just that I don’t like the software, Microsoft is an awful company all around that deserves no one’s business. Apple isn’t that much better either.

      • Novamdomum@fedia.io
        link
        fedilink
        arrow-up
        2
        arrow-down
        3
        ·
        8 hours ago

        “Microsoft is an awful company all around that deserves no one’s business”

        Wait… all of the 228,000 people at Microsoft don’t deserve anyone’s business? That seems harsh and a little bit of a broad brush.

        • Ghoelian@piefed.social
          link
          fedilink
          English
          arrow-up
          4
          ·
          7 hours ago

          No, Microsoft, the company, deserves no ones business. I don’t have any particular opinion on most people that work there.

    • Kami@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      edit-2
      13 hours ago

      Because when people use shitty OSes, they become angry bitches like yourself.