I’m kind of new to local AI and wondering what the move is here. Are they trying to pull off a Chrome/Android situation? Obviously I don’t trust any of these GAFAM giants, but I’d be really interested in running a local LLM on my M1 Max (I briefly used DeepSeek last year). My use case is mostly chat functions to help with academic and text-analysis tasks (don’t worry, I don’t blindly trust LLMs, I know what I’m doing), so recommendations are welcome.

  • Sims@lemmy.ml · 17 hours ago

    Random suspicion: It is correlated to how much compute US have. That’s the only reason I can think of. I also wondered why they would do that when US tech are grabbing the global hardware. They’d like to keep control of inference in the cloud but are missing power thus lack compute. But they also benefit from a western AI ecosystem that have much more compute available. Pure guess, but maybe something like that ?