  • Other commenters correctly describe the cost analysis for using evaporative cooling, but I’ll add one more reason why it’s the preferred method when water is available: evaporating water can dissipate truly outlandish amounts of heat with very few moving parts.

    Harkening back to high school physics class, water – like all other substances – has a specific heat capacity, meaning the energy needed to increase the temperature of 1 kg of the substance by 1 degree C. The specific heat capacity of water is already quite high, at 4184 J/(kg*C), besting all the common metals and losing only to a handful of substances such as hydrogen and ammonia. In nature, this means that large bodies of water are natural moderators of temperature, because water can absorb an entire day’s worth of sunlight energy without substantially changing in temperature.

    But where water really trounces the competition is its “heat of vaporization”. This is the extra energy needed for liquid water to become vapor; simply bringing water to 100 C is not sufficient to make it airborne. Water’s heat of vaporization is roughly 2257 kJ/kg. Simplifying to where 1 kg of water is 1 liter of water, we can convert this unit into something more familiar: 0.627 kWh/L.

    What these two physical properties of water tell us is that if our city water comes out of the pipe at 20 C, then to get it to 100 C to boil, we need the difference (80) times the specific heat capacity (4184 J/(kg*C)), which is 334,720 J/kg. Using the same simplification from earlier, that comes out to 0.093 kWh/L. And then to actually make the boiling liquid become a vapor (so that it’ll float away), we need 0.627 kWh/L on top of that.

    Let that sink in for a moment: the energy to turn water into vapor (0.627 kWh/L) is nearly seven times the energy (0.093 kWh/L) needed to raise liquid water from 20 C to 100 C. That’s truly incredible, for a non-toxic, life-compatible substance that we can (but should we?) safely dump into the environment. Totaling the two values, one liter of water can dissipate about 0.72 kWh. Nice!

    In the context of a 100 megawatt data center (apparently what the industry considers the smallest “hyperscale data center”), if that facility used only evaporative cooling, the water requirement would be roughly 139,000 L/hour. That is an Olympic-size swimming pool (2.5 million liters) every 18 hours. Not nice!
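
    For anyone who wants to sanity-check that arithmetic, here’s the back-of-the-envelope in Python, assuming (as above) that 1 kg of water is 1 L:

    ```python
    # back-of-the-envelope: evaporative cooling capacity of water
    SPECIFIC_HEAT = 4184              # J/(kg*C), liquid water
    HEAT_OF_VAPORIZATION = 2_257_000  # J/kg, at 100 C
    J_PER_KWH = 3_600_000

    sensible = SPECIFIC_HEAT * (100 - 20) / J_PER_KWH  # ~0.093 kWh/L to heat 20 C water to 100 C
    latent = HEAT_OF_VAPORIZATION / J_PER_KWH          # ~0.627 kWh/L to vaporize it
    total = sensible + latent                          # ~0.72 kWh/L in all

    liters_per_hour = 100_000 / total                  # 100 MW = 100,000 kWh of heat per hour
    pool_hours = 2_500_000 / liters_per_hour           # an Olympic pool is ~2.5 million liters
    print(f"{liters_per_hour:,.0f} L/h, one pool every {pool_hours:.0f} h")  # ~138,900 L/h, ~18 h
    ```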

    And AI datacenters are only getting larger, with some reaching into the low single digits of gigawatts. But what is the alternative for cooling the more modest data center from earlier? The reality is that the universe provides only three forms of heat transfer: conduction, convection, and radiation. The heat from data centers cannot be concentrated into a laser and radiated into space, and we don’t have some sort of underground granite mountain that the data centers can conduct their heat into. Convection is precisely the idea of transferring the heat into a substance (eg water, air) and then jettisoning that substance.

    So if we don’t want to use water, then we have to use air. But on the two qualities that make water an excellent substance for evaporative cooling, air doesn’t come close – roughly 1005 J/(kg*C) and no heat of vaporization, because air is already gaseous. That means we need to move ungodly amounts of air to dissipate 100 megawatts (see the sketch below). But humanity has already invented the means to do this, by a clever structure that naturally encourages air to flow through it.
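
    As a rough sketch of just how ungodly, assuming the exhaust air is allowed to warm by 10 C (my own made-up but plausible figure):

    ```python
    # rough sketch: air throughput needed to carry 100 MW away by convection alone
    CP_AIR = 1005      # J/(kg*C), roughly, near room temperature
    AIR_DENSITY = 1.2  # kg/m^3, near sea level
    POWER_W = 100e6    # 100 MW of heat
    DELTA_T = 10       # C of allowed exhaust temperature rise (assumption)

    mass_flow = POWER_W / (CP_AIR * DELTA_T)  # ~9,950 kg of air per second
    volume_flow = mass_flow / AIR_DENSITY     # ~8,300 cubic meters per second
    print(f"{mass_flow:,.0f} kg/s of air, or {volume_flow:,.0f} m^3/s")
    ```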

    The only caveat is that the clever structure is a cooling tower, of the sort characteristic of nuclear power stations. It’s also used for non-nuclear power station cooling, but it’s most famous in the nuclear context, where generators are well into the gigawatt range. Should AI datacenters use nuclear-sized air cooling towers instead of water evaporation? It would work, but even as someone who’s not anti-nuclear, I think the optics of raising a cooling tower in rural America just to cool a datacenter would be untenable. And that’s probably why no AI datacenter has done that.

    To be abundantly clear, I’d rather not have AI datacenters at all. But since the question was why water consumption is such a big deal, it might be best to say that it’s a physics problem: there isn’t any other readily available way to provide cooling for 100+ megawatts without building a 100+ meter tower. Water is always going to be cheaper and more on-hand than concrete.


  • A license is the legal instrument which makes open source software/hardware/silicon possible, describing precisely what rights are granted or retained. The term “open source” usually means the definition propounded by the Open Source Initiative (OSI), though not in every context. At the very minimum, an OSI-compliant open source license allows any distribution of the software without having to seek additional permission from the author, requires that distribution be accompanied by access to the source code, and carries no provisos outright prohibiting the software’s use for certain endeavors.

    That last point is about the “use” of the software, and is a crucial distinction between “open source” and “source available”. Source available means the source code can be examined, but usually not freely used, modified, or redistributed. An open source license explicitly allows all uses, though possibly with additional obligations. For example, the AGPL license allows software to be used to run a server, but creates an obligation to provide the server source code to all users that connect. Whereas something like the MIT-0 license has zero additional obligations, while allowing the broadest use. When a license both qualifies as open source and preserves the user’s freedom (in the Free Software sense), it is known as a FOSS license.

    The exact verbiage of a license is the domain of lawyers, it being a legal document. But the choice of license is down to the software author or corporate owner, and is a multifaceted consideration, including marketability, compatibility with other software, and whether it’s more important that the code gets used or that it forever remains available.

    The latter is the major battleground for advocates of permissive versus copyleft licenses. Some software (eg reference cryptographic algorithms) has the priority that the absolute greatest number of people should use it, so a permissive license makes sense. Other software (eg the desktop 3D rendering suite Blender) has the priority that nobody can ever take it private by adding proprietary-only features, so copyleft makes sense.

    Choosing open source is easy, but choosing a license to effect that choice can get tricky. For authors publishing their software, the choice may very well change the course of history (eg Linux choosing GPL-2). For consumers or businesses using software, the license dictates how changes can be distributed.


  • This blog post comes to me at an interesting time, for I’ve been gathering info to rebuild my router using FreeBSD. Specifically, I bought a hard copy of The Book Of PF, 4th Edition, for configuring PF for routing and firewalling. Like all good firewalls, the PF rulesets start with blocking all traffic. But unlike the VyOS-based rules used by my outgoing Ubiquiti router, PF does not implicitly include rules for common use-cases, such as enabling hairpin NAT for Legacy IP. Nor does the syntax assume that rules are only for inbound traffic, as the shortest syntax will actually apply a rule in both directions on every interface.

    To that end, one of the tenets for configuring a PF firewall is to also filter outbound traffic, as a matter of: 1) asserting control over the network, and 2) implementing the principle of least privilege. I can reasonably accept that my home’s guest WiFi network should be fairly free-flowing for outbound traffic, but that shouldn’t apply to my IoT VLAN. Quite frankly, my IoT VLAN only allows outbound connections to four specific NTP servers hosted by ntp.org, because my thermostat has a badly-designed real-time clock and I refuse to allow network access for devices that historically never needed it. A sketch of what such a ruleset might look like follows below.
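
    Here’s a minimal pf.conf sketch of that idea; the VLAN interface name and the specific NTP hosts are hypothetical stand-ins, and note that PF resolves hostnames once, at ruleset load time:

    ```
    # minimal pf.conf sketch: default deny, least privilege for an IoT VLAN
    iot_if = "vlan20"    # hypothetical IoT VLAN interface
    ntp_hosts = "{ 0.pool.ntp.org, 1.pool.ntp.org, 2.pool.ntp.org, 3.pool.ntp.org }"

    # block everything, in both directions, on every interface
    block log all

    # IoT devices may reach only the four NTP servers, and nothing else;
    # PF is stateful by default, so the NTP replies flow back automatically
    pass in on $iot_if inet proto udp from $iot_if:network to $ntp_hosts port ntp
    ```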

    Before containers, firewalls implemented the DMZ idea, where any host running an externally-accessible service would sit within the DMZ, to prevent an intruder from infiltrating the broader LAN if something goes wrong. Your solution achieves a sort-of DMZ, but does it at the Docker host, whereas a true DMZ would segment the rest of your network off to further reduce risk; as it stands, iptables is the only line of defense.

    That said, zooming out, this caught my attention:

    The breaking point came when I wanted to host Gemini FastAPI, a project that wraps Google’s internal Gemini API into an OpenAI-compatible interface, useful for using your Gemini Pro subscription outside Google’s walled garden. The catch: it needs your browser cookies, which means full access to your Google account.

    The very premise of Gemini FastAPI seems flawed to me, if it’s trying to create a wrapper when Google clearly does not want that to exist. The challenges that you observed, such as the brittleness of IP allowlists, would suggest to me that the overall endeavor is going to be brittle, by Google’s design.

    To be clear, that doesn’t mean you shouldn’t pursue this, in the same way that yt-dlp exists for the legitimate purpose of accessing YouTube. But what both yt-dlp and Gemini FastAPI will never escape is that they only exist because Google hasn’t cracked down on them further. When every indication is that this road holds even more trouble beyond the next curve, is this what you want to invest time and effort into? There are other platforms and protocols that replace YouTube, or at least minimize one’s dependency on a clearly antagonistic host.

    At bottom, I think the question is whether connecting to Gemini is really worth all of this trouble, when they evidently don’t want you to do this, and it adds yet another dependency upon Google. Even if you believe Google is 100% benevolent and their lack of built-in support for using Gemini externally is just a minor oversight, you still have to pick which services you will base your own infrastructure upon. This is, after all, c/selfhosted.



  • Detest things like zelle which just feels like a scam to me. I absolutely wish the bank bill pay system was more advanced. Like I could have a qr code and say bill me here and they could have one to say set us up as a place to pay for with this qr code.

    Does your current bank not do this with Zelle QR codes? At least in my bank’s mobile app, it’ll happily generate a QR code for my account as a Zelle destination, which other people can scan and then pay me. Or they can scan and send me a request, which I can then accept and pay them. I use Zelle precisely when everyone else would use Venmo, because I don’t want yet-another institution to have my bank details, and since Zelle is integrated with my bank, I already have to trust them anyway.

    It’s not a bill.com-esque invoicing system, but maybe somebody will build atop Zelle to do exactly that. I will say that tying Zelle to a phone number or email is a bit limiting, though, and maybe one day there will be “usernames” for Zelle, encoded purely in QR codes.




  • Seeing as people can change their own name to whatever they want, including when there is no preceding generation with that name, then no, there’s no particular issue with suffixes on names.

    I’d like to point out that in the English-speaking world, the English (and now British) Monarchy increments the regnal number without regard for the immediately preceding generation. As in, Elizabeth II was crowned 300+ years after Elizabeth I. So it is well accepted that ordering doesn’t necessarily matter and there is no hard rule against it.


  • On the titular question, I would say that there’s a realm in English where it’s possible to omit the subject. However, the result is necessarily terse and wouldn’t be suitable for general conversation, since it distills the language down to what is essentially a series of verbs strung together. That realm is commands or instructions, with the specific example of highway/motorway signage.

    English highway signs face a unique challenge: conveying exactly what’s important while not being too long to read. There is a physical limit behind why “DO NOT ENTER” is preferred over “Automobiles May Not Turn Right Onto This Road”. In the UK, they reduce this even further to “NO ENTRY”, which is in line with the pattern of “no parking” or “no lorries over 3 tonnes”.

    Even more reduced are the words “EXIT ONLY” which means “this lane will soon terminate, vehicles in this lane will be forced to exit the highway, and vehicles should change lanes to remain on the highway”. All of that is from the very context of a road, made common through the context of driver training, signage, and lane markings.

    This is one of the reasons why a language like Japanese gets “lost” in translation

    I would argue that translation is not the exercise of converting words like-for-like, but of conveying the same meaning or experience in the target language. As an example, expletives in other languages will reference different things, be it name-calling or dishonorable comparisons in Japanese, genitalia in English, excrement in German, etc. But there is no requirement that a mild expletive in Japanese be perfectly preserved into English. Rather, the overall work, when read in English, should use an equivalently mild expletive, with proper consideration for what the original audience was. So if the Japanese source was a children’s anime with light high-school insults in the dialogue, the English translation might render these as minced oaths. The character building should be mostly identical for the English audience.

    But done only mechanically and without artistry, such a translation is going to sound very “American” and lose much of the soul of the original. IMO, this is something that older Crunchyroll translations suffered from, and fansubs did a much better job of preserving the dialogue faithfully. Even while doing this, some parts of the language are necessarily untranslatable, since things like post-nominal honorifics don’t exist in English. As a result, some fansubs might stylistically choose to always render the honorific every time – eg spelling out Kami-sama rather than translating as God.

    This is in tandem with other subtitle-specific considerations, like keeping the surname-then-given-name ordering, so that the subtitles read in the same order as the names are spoken in full (eg Kudo Shinichi), and correctly shortening to just the surname when addressed as such (eg Kudo-san or Kudo-sama).

    Even still, it could be acceptable to translate as “Mr Kudo” or “Master Kudo”, if that’s the vibe that the source material was going for. Translation is, as I understand it, a holistic work. And perhaps the best example I can cite is the English translation of The Three-Body Problem by Liu Cixin. Ken Liu did the translation, and made an explicit choice to hew towards Chinese terminology, explained in footnotes, because the ordering of the cosmic velocities (first, second, third) is more clearly a stair step towards space travel than the typical English terms “orbital velocity” and “escape velocity”.

    The English translation intentionally makes itself clear as a translation, but care was taken to make sure it is uniquely from Eastern source material yet still perfectly readable in English for someone who knows nothing of Chinese 20th Century history or much of anything about space travel. In that sense, it is accessible sci-fi, where even we Americans can understand the great work that Liu Cixin set into ink.


  • A year ago, I had this fun comment about what would happen if necromancy were possible in a courtroom setting. And I think the follow-up is relevant here:

    TL;DR: rules of evidence would still apply to the undead, and judges must take care to balance the probative value of evidence with any prejudicial quality it may carry. (to be abundantly clear, this was a schittpost lol)

    even when you have a live body on the stand about to give testimony, it is essential to lay the foundation as to who they are and their legitimacy.

    If a masked vigilante’s legitimacy as a vigilante cannot be proven independently – a tough act, since they would want to maintain their secret identity, coupled with the possibility of copycats or false-flag operatives – then a jury or the bench would be reluctant to give their testimony much value. It’d be no different than if you or I waltzed into court and proclaimed ourselves the world’s foremost expert on bitemark analysis and underwater basket weaving, but said you just have to “trust me, bro” on that claim. No reasonable person would believe it.

    Now, your question focuses on whether a masked vigilante can testify but with a proviso to protect their interests. To that question, the answer is: maybe. Sometimes a witness (eg an FBI Special Agent) can have their identity protected from being exposed in open court, while still allowing all parties (clients and their attorneys) to know the witness’s identity, for the purpose of fact checking. While there’s a fairly strong interest in trials being mostly open, a Special Agent’s identity should generally be protected, since secrecy could be the very essence of their occupation.

    And you could maybe say the same thing for a masked vigilante. But in the opposite direction, one could argue that the general public has an overwhelmingly strong interest in learning the identity of a masked vigilante, which might suggest that the court deny the vigilante the benefit of a private testimony.

    Still, the vigilante could proceed to give testimony, unmasked if need be. However, if the question were tweaked to “can a court force a masked vigilante to testify, and thus reveal their identity?”, the answer in USA law is no.

    Specifically, the Fifth Amendment guarantees that no one can be forced to testify if the testimony can be used against them later. So either the testimony isn’t taken at all, or they are immunized for anything that they testify about. Those are the only two options that satisfy the Fifth Amendment. And so, since a masked vigilante would have potential criminal liability for past crimes, the government (federal or state) cannot force them to answer questions on the stand without first granting immunity. If no immunity is offered, the vigilante can simply reply that they’re invoking their right under the Fifth Amendment, and no harm can come to them for doing so; being harmed for invoking a right would make said right a meaningless thing.

    But if the prosecutor decides that it’s super important to hear what the vigilante has to say, then they can grant immunity. Unlike a pardon, which is a gift that a person can choose to decline, immunity is unilaterally granted, and the recipient cannot reject it. Once that’s done, the witness can be forced to testify (eg jailed for contempt until they talk), since there is no longer a risk of the testimony coming back to harm them later.

    Note: those words could be entirely damning to somebody else, and there’s no such thing as “third-party Fifth Amendment” rights. This is almost always the reason why a prosecutor grants immunity: to compel the testimony from underlings or witnesses necessary to convict a bigger target. If a masked vigilante was the only witness to documents about accounting fraud, which were later burned, that testimony could be really useful when pursuing white-collar crime, even if the cost is giving up any possibility of prosecuting the property-damage crimes that the masked vigilante may have committed.


  • The thing is, the Internet routing protocol BGP delivers basically everything that a mesh network requires, except for the physical data links that carry the data. Keeping things short, BGP is a way to declare where certain IP addresses can be found. So an example BGP announcement would be something like “2608:120::/32 can be found at AS721”, where AS stands for Autonomous System, a subnetwork that is controlled by a single entity. In this case, that IPv6 range belongs to the USA Department of Defense (DoD) and AS721 is the identifier for their network.

    Now, the trick is to figure out how your own AS can reach the AS of your destination, which is no different than a mesh: the DoD’s AS721 is solely connected to AS3356 (the massive ISP named “Level 3”), which is very likely connected to the upstream AS of your link to the Internet, which means there is a valid path from your AS to the DoD.

    Whenever an intermediate AS disappears from the global Internet, its former peers will reroute through other links to maintain a path to the largest number of AS’s (as in, the Internet). In this sense, having multiple links to different AS’s is important for redundancy, and is no different than a mesh network having multiple RF paths.

    Finally, if multiple link failures occur – say, a Tier 1 ISP goes completely down – then the network becomes fragmented, but traffic within each fragment will still pass. This is akin to a mesh between two cities, where the mountain-top repeater is struck by lightning. Locals in each town can still send messages, but not over the hill to the next town.
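
    To make the mesh analogy concrete, here’s a toy sketch in Python. The adjacency map is entirely hypothetical, and real BGP exchanges path announcements rather than running a search, but reachability follows the surviving links just the same:

    ```python
    # toy model: reachability across AS links, mesh-style
    from collections import deque

    links = {  # hypothetical peering links, loosely mirroring the example above
        "my-as": ["my-isp"],
        "my-isp": ["my-as", "AS3356"],
        "AS3356": ["my-isp", "AS721", "other-tier1"],
        "AS721": ["AS3356"],
        "other-tier1": ["AS3356"],
    }

    def find_path(src, dst, down=frozenset()):
        """Breadth-first search for any surviving path, skipping failed ASes."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in links.get(path[-1], []):
                if nxt not in seen and nxt not in down:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None  # fragmented: no route from src to dst

    print(find_path("my-as", "AS721"))                   # ['my-as', 'my-isp', 'AS3356', 'AS721']
    print(find_path("my-as", "AS721", down={"AS3356"}))  # None, the Tier 1 "repeater" is down
    ```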

    Is BGP perfect? Heavens no. And it has its own issues with maliciously-crafted announcements. But everything that BGP does is analogous to what mesh networks do. It’s merely that the participants are highly commercialized today, whereas in the 80s, it was mostly universities and a few defense contractors experimenting.

    The technology is basically here, but how it gets used will dictate how history is written.


  • Setting aside the Forgejo issues for a moment, I can’t quite see the logic behind the author’s description of a “carrot disclosure”.

    As written, it’s a third option for disclosure, beyond 1) coordinated disclosure (often 90 days for the vendor to fix things) or 2) full disclosure (immediately going public, esp when the vulnerability is believed to be actively exploited). But what the author describes as the carrot is to publish only the output of a proof-of-concept, and then the onus is on the vendor to figure out both the vulnerability and the fixes.

    This seems wildly irresponsible to me, to put the effort into writing a working PoC but then to willfully withhold it, so as to basically force the vendor into a wild goose chase. And that’s the best case scenario, when the PoC is actually legit. At worst, it’s a DoS against a vendor (causing them to re-audit code to find a bug that doesn’t actually exist, eg hallucinated AI slop) or is a form of defamation to scare users away.

    Then there’s the issue of when it’s not a “vendor” per se but a group of volunteers on an open-source project, whom I will distinguish from commercial vendors as “maintainers”. Is it ethical to withhold an already-written PoC from FOSS maintainers, who often do not have the material capability to do a full-scale audit when given basically no clues?

    To be clear, I’m not a security researcher and have done zero disclosures of any form. But if I ever ran a project and received a so-called carrot disclosure, why shouldn’t I immediately call their bluff and treat it as a full disclosure? This situation seems like Schrödinger’s Cat, where the only way to rip away the uncertainty is to throw open the box. Worst case, the project suffers the reputational hit for having a legit vulnerability; best case, the vulnerability is non-existent. What this supposed “third way” purports to do is no different than sowing the seeds of fear, uncertainty, and doubt amongst users. Someone tell me how this isn’t one step away from extortion.

    I think game theory would say that any and all recipients of “carrot” disclosures should always call the bluff, immediately and vocally. I don’t see any way for such disclosures to be anything but unnecessarily antagonistic. I refuse to credit the term with any legitimacy.


  • I’m not familiar with cereal bags being accepted for recycling at grocery stores – although I’m aware that grocery store recycling in California has deep issues regarding implementation – but as for why a chip bag is different than a cereal bag, my guess is that it has to do with the former being airtight.

    Chip bags are intentionally filled with gas (usually nitrogen) in order to preserve the contents for a long shelf life. Rather conveniently, this also keeps the chips from smashing up against other chip bags in the same box, at the cost of fitting fewer bags into a shipping container. As such, chip bags have to be airtight, and Mylar is good at that, as evidenced by Mylar balloons keeping helium inside far longer than a latex balloon does (to the sadness of every electricity provider on Earth).

    Whereas I suspect the clear plastic bags used for cereal – maybe polyethylene? – have different requirements: a cereal box already provides mechanical protection against other boxes, and there is an expectation that cereals (a bona fide breakfast foodstuff, compared to chips, which have always been categorized as a snack food) will be eaten in quantities that make recyclability a priority. This is a guess.

    I also think cereals might historically have been free-floating inside the box, in the same way that dishwasher powder detergent is still packaged within a thick cardstock box with a pour-out metal spout. That said, this citation seems to indicate that cereal bags are in fact liners, which would suggest the primary reason is one of food safety, if direct contact with the inside of the box would be a problem.

    And this kinda makes sense to me, since nobody would want to eat soggy cereal if a bit of rainwater seeped through the box and contacted the food.


  • Interest rate: the percent increase per compounding period. Almost totally useless unless the compounding period is also known.

    APR: a metric which extrapolates an interest rate and compounding period out to one year, less any unavoidable fees. Because this metric can be computed for any savings instrument or any loan, it can be used to directly compare rates between different savings or lending institutions.

    APR is still computable even for something which won’t last a full year (eg a 6-month Certificate of Deposit), for things with a compounding period longer than 1 year, and for promotional offers, such as a savings account that pays 5% for the first 3 months and then returns to a normal rate of 1% ongoing.
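
    A sketch of that extrapolation in Python; note this covers only the compounding math, and ignores the fee adjustment that a true APR must also fold in:

    ```python
    # annualize a periodic interest rate by compounding it out to one year
    def effective_annual_rate(periodic_rate: float, periods_per_year: float) -> float:
        return (1 + periodic_rate) ** periods_per_year - 1

    # "0.5% per month" and "3% per half-year" sound different, but compare directly once annualized
    print(f"{effective_annual_rate(0.005, 12):.2%}")  # ~6.17%
    print(f"{effective_annual_rate(0.03, 2):.2%}")    # ~6.09%

    # the promotional example above: 5% (compounded monthly) for 3 months, then 1% for 9 months
    promo = (1 + 0.05 / 12) ** 3 * (1 + 0.01 / 12) ** 9 - 1
    print(f"{promo:.2%}")  # ~2.02% effective over the first year
    ```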

    Before APR came into existence, it would have been possible to trick people with a seemingly “high” interest rate paired with a longer compounding period, or with an obligatory “exit fee” that takes a haircut off the interest at the end.

    While the law cannot change the mathematical fact that an interest rate must also have a compounding period to be usable, USA law requires that whenever an APR is given alongside an interest rate, it must be computed accurately, with large penalties if not.


  • In the best possible scenario, a BIOS/UEFI password lock will prevent an adversary from using the computer as-is. If the adversary’s objective is to quickly fence the computer, then this objective is foiled. Note that preventing physical access to the computer would also foil this objective, since that stops the adversary from even reaching the machine.

    But that’s the best case. In a worse scenario, the adversary wants to steal data from the computer. A firmware password is useless if the adversary removes the HDD or SSD from the machine. That threat is instead correctly solved with drive-level encryption, using a password or smart card to unlock the drive.

    The reason why open-source firmware projects (BIOS/UEFI) might be uninterested in implementing a password is because: 1) preventing physical access is more effective, and 2) it’s arguably a form of security theatre: commercial firmware vendors include a password feature because some customer once asked for it, not because security was a well-thought-out objective. Open-source projects have a habit of not implementing pointless features.

    TL;DR: physical access to a machine is fatal to any and all security protections



  • Like with all things, it’s a matter of degree. Democracy and socialism are not inherently incompatible, but can be mixed together at different ratios. For example, a democratic socialist society could follow the Swiss model of direct democracy, meaning everyone has a say in the policy decisions. Such policy decisions include the law but also how to utilize the means of production, which the state owns entirely.

    Whereas another democratic socialist society could realize their democracy through a representative model, where citizens elect a local representative who goes to the capital and votes in a state committee on how to amend the law or utilize the means of production, which the state owns entirely. Here, political power is wielded by a committee but the complete socialist ownership is intact.

    Yet another democratic socialist society could be much softer on the state ownership of all the means of production. The state might own the utilities, roads, schools, and all land, but may permit certain collectives to privately own businesses that generate value and to distribute those earnings equally amongst themselves. This could be considered a transitional step, since it allows for a controlled amount of capitalist-style development to occur, while avoiding huge concentrations of private capital. But it could also be a step backwards if the state already fully owned the means of production but then voted to release some of it to small co-ops.

    While words have to mean something to be useful at all, I wouldn’t spend too much time trying to fit all possibilities into neat categories. Ultimately, socioeconomics are fluid.




  • In California, a U turn is considered a left turn that keeps going. As a result, a U turn is legal anywhere that a left turn is legal, except when signs are posted otherwise. So in a left-turn pocket/lane, it is both reasonable and expected that people will make left turns, some of which will continue into a full 180 degree turn. People who do U turns are doing what is allowed, and they have every right to do so. If this seems like a problem, then talk to your transportation department to restrict U turns.

    I’m not aware of any aspect of a U turn procedure that would be any different than a standard 90 degree turn: use turn signals, look for oncoming traffic, look for pedestrians, turn slowly as required by the radius, and roll out of the turn with careful acceleration.


  • As the other commenters have noted, what sort of adversary are you trying to protect against? There is no such thing as “security for its own sake”; rather, security measures like E2EE exist to protect against specific types of attacks. Do you believe a ticketing system is vulnerable to attacks that E2EE would mitigate?

    As an aside, please do not consider PGP to be the pinnacle of signing or encryption. I’ve opined on another project before about why late-20th-Century PGP isn’t that good in the 21st Century.

    But even with a modern replacement for PGP, how would E2EE even work for a multi-user ticketing system? If everyone on the support side has the same key, then key management becomes (as usual) the most crucial part of the operation, and that remains an unsolved problem at scale. This is no different than physical key management, where every member of the custodial team needs the “super key” that opens every door of a university campus.