I recently learned that Britain is spending £36 million to upgrade a supercomputer:

https://www.bbc.com/news/articles/c79rjg3yqn3o

Can’t you buy a very powerful gaming computer for only $6000?

CPU: AMD R9 9950X3D

Graphics: Nvidia RTX 5080 16GB

RAM: 64GB DDR5 6000MHz RGB

https://skytechgaming.com/product/legacy-4-amd-r9-9950x3d-nvidia-rtx-5090-32gb-64gb-ram-3

This is how this CPU is described by hardware reviewers:

AMD has reinforced its dominance in the CPU market with the 9950X3D; it appears that no competitor will be able to challenge that position in the near future.

https://www.techpowerup.com/review/amd-ryzen-9-9950x3d/29.html

If you want to add some brutal CPU horsepower to your PC, then this 16-core behemoth will certainly get the job done. It is an excellent processor on all fronts, and it has been a while since we have been able to say that in a processor review.

https://www.guru3d.com/review/ryzen-9-9950x3d-review-a-new-level-of-zen-for-gaming-pcs/page-29/

This is the best high-end CPU on the market.

Why would you spend millions on a supercomputer? Have you guys ever used a supercomputer? What for?

  • litchralee@sh.itjust.works

    An indisputable use case for supercomputers is the computation of next-day and next-week weather models. By definition, a next-day weather prediction is utterly useless if it takes longer than a day to compute. And it becomes progressively more useful with every hour shaved off the computation, since that’s more time to warn motorists to stay off the road, more time to plan evacuation routes, more time for farmers to adjust crop management, more time for everything. NOAA in the USA draws in sensor data from all of North America, and since weather is locally felt but globally influenced, even that isn’t enough for a perfect weather model. Even today, there is more data the models could consume, but they can’t, because it would make the predictions take too long. The only solution is to raise the bar yet again by expanding the supercomputers used.

    Supercomputers are not super because they’re bigger. They are super because they can do gargantuan tasks within the required deadlines.
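
    The deadline framing can be made concrete with a rough back-of-envelope sketch. Every number in it (grid size, work per cell, per-machine throughput) is an illustrative assumption, not a real model’s figure:

    ```python
    # Back-of-envelope: how much compute does a deadline-bound forecast need?
    # All numbers below are illustrative assumptions, not real model figures.

    grid_cells = 1e9           # assumed number of grid cells in the model
    timesteps = 1e5            # assumed timesteps to cover the forecast window
    flops_per_cell_step = 1e4  # assumed floating-point ops per cell per step

    total_flops = grid_cells * timesteps * flops_per_cell_step  # 1e18 FLOPs

    deadline_s = 3600                          # must finish within one hour
    required_rate = total_flops / deadline_s   # FLOP/s needed to make the deadline

    single_pc_rate = 1e12  # assumed sustained FLOP/s for one high-end desktop

    print(f"Required rate: {required_rate:.2e} FLOP/s")
    print(f"Desktops needed (ideal scaling): {required_rate / single_pc_rate:,.0f}")
    # With these assumptions: ~2.8e14 FLOP/s, i.e. hundreds of desktops even
    # before accounting for any communication overhead.
    ```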

  • Em Adespoton@lemmy.ca

    Here’s a vehicle analogy:

    If I can get a hypercar for $3 million, why does a freight train cost $32 million? It’s not like it can go faster, and it’s more limited in what it can do.

  • truthfultemporarily@feddit.org

    If your gaming computer can do x computations every month, and you need to run a simulation that requires 1000x computations, you can wait 1000 months, or have 1000 computers work on it in parallel and have it done in one month.

    • Tanoh@lemmy.world

      Keep in mind that not all workloads scale perfectly. You might have to add 1100 computers due to overhead and other scaling issues. It is still pretty good though, and most of those clusters run highly parallelised tasks, since that is what they are best suited for.

      There are other workloads that do not scale at all. It’s like the old joke in programming: “A project manager is someone who thinks that nine women can have a child in one month.”
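
      A minimal sketch of Amdahl’s law makes both points concrete; the serial fraction and per-machine overhead below are made-up values chosen only to illustrate the shape of the curve:

      ```python
      # Amdahl's law: speedup is limited by the part of the work that stays serial,
      # plus coordination overhead that grows with cluster size.
      # serial_fraction and overhead_per_machine are illustrative assumptions.

      def speedup(n_machines, serial_fraction=5e-5, overhead_per_machine=1e-8):
          """Achievable speedup on n machines for a mostly parallel workload."""
          runtime = serial_fraction + (1 - serial_fraction) / n_machines
          runtime += overhead_per_machine * n_machines  # coordination cost
          return 1 / runtime

      for n in (1, 100, 1000, 1100):
          print(f"{n:>5} machines -> {speedup(n):7.1f}x speedup")

      # With these made-up numbers, 1000 machines give roughly 943x, and it takes
      # about 1100 machines to actually reach a 1000x speedup. If serial_fraction
      # were 1.0 (the "9 women, one month" case), no number of machines would help.
      ```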

  • notsosure@sh.itjust.works

    In molecular biology, they are used, for instance, to calculate and predict protein folding. This in turn is used to create new drugs.

      • unexposedhazard@discuss.tchncs.de

        A supercomputer is not one computer. It’s a big building filled from floor to ceiling with many computers that work together. It wouldn’t have 16 cores; it would have more like thousands of cores, all acting as one big computer on a single computational task.

  • blackbelt352@lemmy.world

    A supercomputer isn’t just a single computer; it’s a lot of them networked together to greatly expand the scale of calculation. Imagine a huge data center with thousands of racks of hardware, CPUs, GPUs and RAM chips, all dedicated to managing network traffic for major websites. A supercomputer is very similar, but instead of being built to handle all the ins and outs of managing network traffic, it’s dedicated purely to doing as many calculations as possible for a specific task, such as protein folding as someone else mentioned, or something like Pixar’s Render Farm, which is hundreds of GPUs all networked together solely to render frames.

    Given how big and complex the 3D scenes in any Pixar film are, a single GPU might take 10 hours to calculate the light bounces needed to render one frame. Assuming a 90-minute run time at 24 frames per second, that’s ~130,000 frames, which is potentially 1,300,000 hours (about 150 years) to complete just one full movie render on a single GPU. With 2 GPUs working on rendering frames, you’ve cut that down to 650,000 hours. Throw 100 GPUs at the render and you’re at 13,000 hours, or about a year and a half. Pixar is pretty quiet about its numbers, but according to the Science Behind Pixar traveling exhibit, around the time of Monsters University in 2013 their render farm had about 2,000 machines with 24,000 processing cores, and it still took about two years of rendering time to finish that movie. I can only imagine how much bigger the farm has gotten since then.

    Source: https://sciencebehindpixar.org/pipeline/rendering
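
    The arithmetic above fits in a tiny helper; the 10 hours/frame and 24 fps figures are the same illustrative assumptions used in the paragraph:

    ```python
    # Render-farm arithmetic from the paragraph above, as a small helper.
    # 10 hours/frame and 24 fps are illustrative assumptions, not Pixar's numbers.

    def render_time_hours(runtime_minutes=90, fps=24, hours_per_frame=10, gpus=1):
        """Ideal (perfectly parallel) wall-clock hours to render a whole film."""
        frames = runtime_minutes * 60 * fps  # ~130,000 frames for a 90-minute film
        return frames * hours_per_frame / gpus

    for gpus in (1, 2, 100, 24000):
        hours = render_time_hours(gpus=gpus)
        print(f"{gpus:>6} GPUs: {hours:>12,.0f} hours (~{hours / 8760:,.1f} years)")

    # 1 GPU ≈ 150 years, 100 GPUs ≈ 1.5 years. Frames are independent of each
    # other, so rendering scales almost perfectly; that is exactly what a farm exploits.
    ```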

    You’re not building a supercomputer to be able to play Crysis; you’re building one to do lots and lots and lots of math that might take centuries of calculation on a single 16-core machine.

  • bingrazer@lemmy.world

    I’m a PhD student and several of my classmates use computing clusters in their work. These types of computers typically have a lot of CPUs, GPUs, or both. The types of simulations they do are essentially putting a bunch of atoms or molecules in a box and seeing what happens in order to get information which is impossible to obtain experimentally. Simulating beyond a few nanoseconds in a reasonable amount of time is extremely difficult and requires a lot of compute time. However, there are plenty of other uses.
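
    For a rough sense of scale: molecular-dynamics throughput is usually quoted in nanoseconds of simulated time per day of wall-clock time. The throughput numbers below are assumptions for illustration, not benchmarks of any particular code:

    ```python
    # How long does it take to simulate a given stretch of molecular time?
    # The ns/day throughput figures are illustrative assumptions, not benchmarks.

    def wall_clock_days(target_ns, ns_per_day):
        """Days of wall-clock time to reach target_ns of simulated time."""
        return target_ns / ns_per_day

    target = 1000  # one microsecond (1000 ns) of simulated time
    for label, ns_per_day in [("single desktop GPU", 50), ("cluster allocation", 2000)]:
        print(f"{label:>20}: {wall_clock_days(target, ns_per_day):6.1f} days")

    # With these assumptions, a microsecond-scale run takes weeks on one desktop
    # but less than a day of wall-clock time across a cluster allocation.
    ```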

    The clusters we have contain dozens of these CPUs or GPUs, and users submit jobs to them that run simultaneously. AMD CPUs have better performance than Intel’s here, and Nvidia GPUs have CUDA, which is incorporated into a lot of the software people use for these simulations.

    I’ve personally never used anything more than a desktop, though I might apply for some cluster time soon because I’ve got some datasets where certain fits take up to two days each. I don’t want to sit around for a month waiting for those to finish.

  • LordMayor@piefed.social

    Galaxy collisions, protein folding, mechanical design and much more. Big simulations of real world physics and chemistry that require massively parallel computation, problems that require insane numbers of calculations running on multiple machines that can pass data to each other very quickly.
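
    The “pass data to each other very quickly” part is usually handled by a message-passing library such as MPI. Here is a minimal sketch using mpi4py; the array size and the ring-neighbour pattern are made up purely for illustration:

    ```python
    # Minimal sketch of ranks exchanging boundary data with MPI (via mpi4py).
    # The array size and ring-neighbour pattern are made up for illustration.
    # Run with e.g.: mpirun -n 4 python halo_exchange.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank owns a chunk of a larger simulation domain.
    local = np.full(1000, float(rank))

    # Exchange edge values with neighbours in a ring (periodic boundary).
    left, right = (rank - 1) % size, (rank + 1) % size
    send_edge = local[-1:].copy()
    recv_edge = np.empty(1, dtype=local.dtype)
    comm.Sendrecv(send_edge, dest=right, recvbuf=recv_edge, source=left)

    print(f"rank {rank}: received boundary value {recv_edge[0]} from rank {left}")
    # Real codes do an exchange like this every timestep, which is why
    # supercomputer interconnects need very low latency and very high bandwidth.
    ```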

    Every supercomputing center has a website where you can read about the research being done on their machines.

    No, these can’t be done at similar scale on your desktop. Your PC can’t do that many calculations in a reasonable time. Even on supercomputers, they can take weeks.

  • remon@ani.social

    They are used to solve problems that would take years on a normal gaming PC.

  • Rekorse@sh.itjust.works

    That’s not the best on the market. I’m not sure who sells what else, but the Threadripper series is far more powerful, and more expensive.

  • Ziggurat@jlai.lu

    A huge factor is how much data you can process at a given time. Often the computation per sample of data isn’t that complicated in itself. But when you need to run over terabytes of data (say, from wide-angle telescopes or CERN-style experiments), you need a huge computer both to simulate your system accurately (how does the glue layer size affect the data?) and to process the mountain of data coming from it.

    Nowadays it’s practically speaking just a building full of standard computers, plus software that dispatches the load between the machines (which isn’t trivial, especially when you do massively parallel processing with shared memory).
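
    On a single machine, the dispatch-the-chunks idea looks roughly like the sketch below; on a real cluster a scheduler does the equivalent across thousands of nodes. The per-chunk function and chunk sizes are placeholders for illustration:

    ```python
    # Toy version of "dispatch the load between the machines": split the data
    # into chunks and process them in parallel. Here the workers are local
    # processes; process_chunk and the chunk sizes are placeholders.
    from multiprocessing import Pool

    def process_chunk(chunk):
        """Stand-in for the per-sample work (not complicated per sample)."""
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        n_samples, chunk_size = 10_000_000, 1_000_000
        chunks = [range(start, min(start + chunk_size, n_samples))
                  for start in range(0, n_samples, chunk_size)]

        with Pool() as pool:                   # one worker process per CPU core
            partial_results = pool.map(process_chunk, chunks)

        print("total:", sum(partial_results))  # combine the partial results
    ```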

  • Zwuzelmaus@feddit.org

    As you may or may not know, computers can only count from 0 to 1.

    But super-computers can do that in the best possible way.

    /s

  • SuiXi3D@fedia.io

    I’ll toss in my two cents.

    It’s mainly about handling and processing vast amounts of data. Many times more than you or I may deal with on a day to day basis. First, you have to have somewhere to put it all. Then, you’ve got to load whatever you’re working with into memory. So you need terabytes of RAM. When you’re dealing with that much data, you need beefy CPUs with crazy fast connections with a ton of bandwidth to move it all around at any kind of reasonable pace.
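
    A quick back-of-envelope sketch shows why ordinary RAM sizes run out fast; the dataset shape is an arbitrary assumption:

    ```python
    # Why "terabytes of RAM": memory footprint of holding a large dataset at once.
    # The dataset shape below is an arbitrary assumption for illustration.

    rows = 10_000_000_000  # assume ten billion records
    values_per_row = 50    # assume 50 measurements per record
    bytes_per_value = 8    # 64-bit floats

    total_bytes = rows * values_per_row * bytes_per_value
    print(f"{total_bytes / 1e12:.0f} TB just to hold the raw values in memory")
    # ~4 TB with these assumptions: roughly 60x the 64 GB in the gaming PC from
    # the original post, and real scientific datasets can be far larger still.
    ```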

    Imagine opening a million chrome tabs, having all of them actively playing a video, and needing to make sense of the cacophony of sound. Only instead of sound, it’s all text, and you have to read all of it all at once to do anything meaningful with it.

    If you make a change to any of that data, how does it affect the output? What about a million changes? All that’s gotta be processed by those beefy CPUs or GPUs.

    Part of the reason AI data centers need so much memory is that they’ve got to load increasingly large amounts of training data all at once, and then somehow make it accessible to thousands of people at once.

    But if you want to understand every permutation of whatever data you’re working with, it’s gonna take a ton of time to essentially sift through it all.

    And all of that needs hardware to run on. You have to make doubly sure that the results you get are accurate, so redundancies are built in. Extremely precise engineering of the parts, of how they’re assembled, and of how they’re ultimately used is a lot of what makes supercomputers what they are. Special CPUs, RAM with error correction, redundant connections, backups… it all takes a lot of time, space, and money to operate.