• 0 Posts
  • 136 Comments
Joined 3 years ago
Cake day: July 18th, 2023




  • The difference everyone always ignores is that most of China’s infrastructure is new. In the US, they’d need to buy people out of their land, build new tunnels and bridges, and disrupt so much else to implement high speed rail.

    You can’t just leverage the existing rail network because it has curves and grades that are incompatible with high speed rail (the Northeast Corridor has 30-40mph limits on some curves).

    In addition, you’re competing with airplanes, which are already proven and meet current travel demand. And even if you could get the rail built, there’s no guarantee it would be any cheaper than flying (meaning low usage). As it stands today, going between major hubs on Amtrak can be more expensive and take about the same amount of time (when accounting for security/etc.)


  • I feel like people forget how much infrastructure is required to keep high speed rail running. Not only do you need the stations, but the tracks (bridges, tunnels, etc.) all need to be maintained. Additionally, when doing maintenance you can’t run the line, so you either need extra capacity so you don’t disrupt service or you end up with times you have to shut lines down.

    In comparison, planes just need a strip of flat land at takeoff and landing (you technically don’t even need an airport). Your primary bottleneck is how fast you can get planes on/off the tarmac.

    One of the other big issues in rail vs plane is that high speed rail only works at certain grades and turn radii. So for example, I believe you couldn’t convert the existing Northeast Corridor in the US to 300mph rail from end to end simply due to the geography; you’d need to create a new route. Looking it up, there are speed limits around 30-40mph for Amtrak around Baltimore and Wilmington.
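    As a rough back-of-the-envelope (assuming an effective lateral acceleration budget of about 1 m/s², which is just an illustrative figure, not a design standard), the curve problem comes down to $a = v^2 / R$:

$$
R \approx \frac{v^2}{a}:\qquad
\frac{(134\,\text{m/s})^2}{1\,\text{m/s}^2} \approx 18\,\text{km at }300\ \text{mph}
\qquad\text{vs.}\qquad
\frac{(18\,\text{m/s})^2}{1\,\text{m/s}^2} \approx 0.3\,\text{km at }40\ \text{mph}
$$

    In other words, a curve laid out for 40mph traffic is roughly (300/40)² ≈ 56 times too tight for 300mph, which is why old alignments can’t simply be reused.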










  • But Chrome, Edge, and Safari aren’t open source to my knowledge, and they make up almost the entire market. Sure, Chromium is open source, but that’s not the entire browser. Not to mention, it’s basically Internet Explorer all over again, but with Google holding the reins.

    Looking at Android, we get a glimpse of what Google is willing to do to “open source” to keep control.


  • Yeah, that’s fair. I haven’t jumped into the whole agentic side of things as I find LLMs consistently fail at lower level stuff.

    Everyone says it’s great at prototyping or writing documents, etc, but I think that’s just because people have low standards. When coding I find that it quickly messes things up or lacks good quality control (which you only notice if you’re familiar with the domain). For writing it’s fine, but the tone and language always feel off and certainly don’t sound like me.

    Either way, I would suggest playing around with them to see how they fit into how you do things. I think we’re starting to see things finally slow down on new implementations, and they aren’t going away, so it may be a good time to see if all the fuss is worth it to you.


  • The underlying issue, in my opinion, with LLMs is their nondeterministic nature. Even with the temperature (the randomness of outputs) zeroed out, you can get significantly different results from two almost identical inputs.
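    For concreteness, here’s a minimal sketch (assuming the OpenAI Python client; the model name is just a placeholder) of what “zeroing out the temperature” looks like. Even set up this way, two nearly identical prompts can come back with noticeably different answers:

```python
# Minimal sketch: temperature=0 removes sampling randomness, but it does not
# make the model insensitive to tiny changes in the input.
# Assumes the OpenAI Python client and an API key in the environment.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Summarize the benefits of high speed rail in one sentence.",
    "Summarize the benefits of high-speed rail in one sentence.",  # only the hyphen differs
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    print(response.choices[0].message.content)
```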

    However, building out an ecosystem to support a new technology is a fairly common progression. If you compare it to the internet, things like browser caches, CDNs (content delivery networks), code minifiers, etc. are all ways to help combat latency (a fundamental problem for the internet).

    As for the effectiveness of these solutions, RAG does help a lot when generating text against a select corpus. It’s what allows the linked sources in things like ChatGPT and Google’s AI results. It’s also what a lot of companies are using for searching their support pages/etc. It’s maybe not quite as good as speaking to a person, but it is faster.

    Similarly, the reasoning models and managing the model’s “context” have both shown demonstrable improvements in benchmarks.

    I’m not sure I personally believe this makes LLMs a replacement for humans in most situations, but it at least demonstrates forward progress for GenAI.


  • I think you may be mixing a couple of things together, but I’ll take a crack at this.

    When you get an AI generated response from a search engine, this is usually a modified RAG (retrieval augmented generation) approach. How this works is that the content from web pages is already pre-processed into embeddings (numerical representations of the text). When you perform a search, your search text is turned into an embedding and compared (by numerical similarity) to the websites’ embeddings to find the content most related to your search. That means the LLM only parses and processes a very small subset of the returned websites to generate its response.
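    To make that concrete, here’s a toy sketch of the retrieval step (the embed() function is a stand-in; real systems use a trained embedding model or an embeddings API):

```python
# Toy sketch of embedding-based retrieval, the core of RAG.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    # We just hash words into a fixed-size vector so the example runs
    # without any external dependencies.
    vec = np.zeros(256)
    for word in text.lower().split():
        vec[hash(word) % 256] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

# The corpus is pre-processed into embeddings ahead of time.
pages = [
    "Reset your password from the account settings page.",
    "Shipping usually takes 3-5 business days.",
    "Refunds are processed within two weeks of the return.",
]
page_vectors = np.stack([embed(p) for p in pages])

# At query time: embed the query and rank pages by similarity.
query = "how do I change my password"
scores = page_vectors @ embed(query)  # unit vectors, so dot product = cosine similarity
best = scores.argsort()[::-1][:1]     # keep only the top match

# Only this small subset of text would be handed to the LLM as context.
print("\n".join(pages[i] for i in best))
```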

    Another element you might be asking about is how these agentic AI systems handle larger tasks (things like OpenClaw). That is a bit more complicated and depends on the system’s design, but it basically boils down to two things. First, the “reasoning models” break a concept into smaller tasks, meaning the LLM only has to worry about a subset of the larger task. Second, a lot of these systems will periodically merge all past context into a compressed state that the LLM can handle (basically summaries of summaries) or add it to a database for future/faster reference.
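    A toy version of that second idea, where the summarize() helper is just a placeholder for what would really be another LLM call:

```python
# Toy sketch of "summaries of summaries": once the conversation grows past a
# budget, older turns are collapsed into a single summary turn.
MAX_TURNS = 8  # stand-in for a real token budget

def summarize(turns: list[str]) -> str:
    # Placeholder: a real agent would ask the LLM to compress these turns.
    return "Summary of earlier conversation: " + " / ".join(t[:40] for t in turns)

def compress_context(history: list[str]) -> list[str]:
    if len(history) <= MAX_TURNS:
        return history
    older, recent = history[:-MAX_TURNS], history[-MAX_TURNS:]
    # Keep recent turns verbatim; replace everything older with one summary.
    return [summarize(older)] + recent

history = [f"turn {i}: ..." for i in range(20)]
print(compress_context(history))
```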

    At the end of the day, your understanding of the limits of LLMs is correct: all the progress we’ve really seen with LLMs (over the past couple of years) has been the creation of systems that work around their limitations. The base technology isn’t getting much better, but the support around it is.