• 1 Post
  • 250 Comments
Joined 3 years ago
Cake day: July 2nd, 2023

  • (I’m going to take the question seriously)

    Supposing that you’re asking about a digital clock as a standalone appliance – because doing the 69th second in software would be trivial, and doing it with an analog clock is nigh impossible – I believe it can be done.

    A run-of-the-mill digital clock uses what’s known as a 7-segment display, one for each digit of the time. It’s called 7-segment (or 7-seg) because there are seven distinct lines that can be lit up or darkened, which together can write out any number from 0 to 9.

    In this way, six 7seg displays and some colons are sufficient to build a digital clock. However, we need to carefully consider whether the 7seg displays have all seven segments. In some commercial applications, where it’s known that some numbers will never appear, manufacturers will actually remove some segments to save cost.

    For example, the typical American digital clock displays 12-hour time. This means the left digit of the hour will only ever be blank or 1. So some cheap clocks will actually build that digit using just 2 segments. When the hour is 10 or greater, those 2 segments display the necessary number 1. When the hour is less than 10, they just don’t light up that digit at all. This also makes the clock incapable of 24-hour time.

    Fortunately though, to implement your idea of the 69th second, we don’t have this problem. Although it’s true that the left digit of the seconds only goes from 0 to 5 inclusive, those digits do actually require all 7 segments of a 7seg display. So we can display the digit 6 without issue.

    Now, as for how to modify the digital clock circuitry, that’s a bit harder but not impossible. The classic construction of a digital clock is as follows: the 60 Hz AC line frequency (or 50 Hz outside North America) is passed from the high-voltage circuitry to the low-voltage circuitry using an opto-isolator, which turns it into a square wave that oscillates 60 times per second.

    Specifically, there are 120 transitions per second, with 60 of them being low-to-high transitions and the other 60 being high-to-low. Let’s say we only care about the low-to-high. We now send that signal to a counter circuit, which is very similar to a mechanical odometer. For every transition of the oscillating signal, the counter advances by one. The counter counts in binary, and has six bits, because our goal is to count up to 59, to know when a full second has elapsed. We pair the counter with an AND circuit, which checks for when the counter has the value 111011 (that’s 59 in decimal). If so, the AND forces the next value of the counter to 000000, and so this counter resets every 1 second. This counter will never actually register a value of 60, because it is cut off after 59.

    Drawing from that AND circuit that triggers once per second, this new signal is a 1 Hz signal, also known as 1PPS (pulse per second). We can now feed this into another similar counter that resets at 59, which gives us a signal when a minute (60 seconds) has elapsed. And from that counter, we can feed it into yet another counter, for when 1 hour (60 minutes) has passed. And yet again, we can feed that too into a counter for either 12 hours or 24 hours.
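    The divider chain described above can be sketched in software (a toy simulation of the counters, not real circuitry; the function and state names are mine):

```python
def tick(state):
    """Advance the clock state by one 60 Hz pulse (one low-to-high transition)."""
    pulses, sec, minute, hour = state
    pulses += 1
    if pulses == 60:            # the AND circuit resets the 60 Hz counter after 59
        pulses = 0
        sec += 1                # this reset is the 1PPS signal feeding the seconds counter
        if sec == 60:
            sec = 0
            minute += 1         # seconds rollover feeds the minutes counter
            if minute == 60:
                minute = 0
                hour = (hour + 1) % 24   # and so on, up to the hours counter
    return (pulses, sec, minute, hour)

state = (0, 0, 0, 0)
for _ in range(60 * 90):        # ninety seconds' worth of 60 Hz pulses
    state = tick(state)
# state is now (0, 30, 1, 0), i.e. 00:01:30
```

Each counter is just the previous counter's rollover, exactly like odometer digits.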

    In this way, the final three counters are recording the time in seconds, minutes, and hours, which is the whole point of a clock appliance. But these counters are in binary; how do we turn on the 7seg display to show the numbers? This final aspect is handled using dedicated chips for the task, known as 7seg drivers. Although the simplest chips will drive only a single digit, there are variants that handle two adjacent digits, which we will use. Such a chip accepts a 7-bit binary value and has a lookup table to display the correct pair of digits on the 7seg displays. Suppose the input is 0101010 (42 in decimal); then the driver will illuminate four segments on the left (to make the number 4) and five segments on the right (to make the number 2). Note that our counter is 6 bits but the driver accepts 7 bits; this is tolerable because the left-most bit is usually forced to always be zero (more on this later).
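    As a rough illustration of what the driver's lookup table does with that 42 example (segment counts only; a real driver maps each digit to specific segments a through g):

```python
# Lit-segment count for each digit on a standard 7-seg display
SEGMENTS_LIT = {0: 6, 1: 2, 2: 5, 3: 5, 4: 4, 5: 5, 6: 6, 7: 3, 8: 7, 9: 6}

value = 0b0101010                   # 42 in decimal, as the 7-bit driver input
left, right = divmod(value, 10)     # split into tens digit and ones digit
# digit 4 lights four segments; digit 2 lights five segments
assert (SEGMENTS_LIT[left], SEGMENTS_LIT[right]) == (4, 5)
```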

    So that’s how a simple digital clock works. Now we modify it for 69th second operation. The first issue is that our 6-bit counter for seconds will only go from 0-59 inclusive. We can fix this by replacing it with a 7 bit counter, and then modifying the AND circuit to keep advancing after 59, but only when the hour=04 and minute=20. This way, the clock works as normal for all times except 4:20. And when it’s actually 4:20, the seconds will advance through 59 and get to 60. And 61, 62, and so on.

    But we must make sure to stop it after 69, so we need another AND circuit to detect when the counter reaches 69. And more importantly, we can’t just zero out the counter; we must force the next counter value to be 10, because by the end of second 69, a full 70 seconds have elapsed since 4:20:00, making the true time 4:21:10. Loading 10 (while letting the minute counter advance as usual) keeps the clock correct.

    It’s very easy to zero out a counter, but it takes a bit of extra circuitry to load a specific value into the counter. But it can be done. And if we do that, we finally have counters suitable for 69th second operation. Because numbers 64 and higher require 7 bits to represent in binary, we can provide the 7th bit to the 7seg driver, and it will show the numbers correctly on the 7seg display without any further changes.
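    Putting the modification together, here's a software model of the proposed seconds-rollover logic (my own sketch of the circuit's behavior, not a schematic):

```python
def next_second(hour, minute, sec):
    """One tick of the modified seconds counter, with the 69th-second rules."""
    if hour == 4 and minute == 20:
        if sec < 69:
            return (hour, minute, sec + 1)   # run past 59, all the way to 69
        # after second 69, 70 seconds have elapsed: the true time is 4:21:10,
        # so load 10 into the counter instead of zeroing it
        return (hour, 21, 10)
    if sec < 59:
        return (hour, minute, sec + 1)
    # normal rollover at every other time of day
    minute = (minute + 1) % 60
    if minute == 0:
        hour = (hour + 1) % 24
    return (hour, minute, 0)

assert next_second(4, 20, 59) == (4, 20, 60)   # keeps counting at 4:20
assert next_second(4, 20, 69) == (4, 21, 10)   # load 10, not 0
assert next_second(5, 20, 59) == (5, 21, 0)    # normal everywhere else
```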

    TL;DR: it can absolutely be done, with only some small amount of EE work



  • An indisputable use-case for supercomputers is the computation of next-day and next-week weather models. By definition, a next-day weather prediction is utterly useless if it takes longer than a day to compute. And it becomes progressively more useful if it can be computed even an hour faster, since that’s more time to warn motorists to stay off the road, more time to plan evacuation routes, more time for farmers to adjust crop management, more time for everything. NOAA in the USA draws in sensor data from all of North America, and since weather is locally-affecting but globally-influenced, this still isn’t enough for a perfect weather model. Even today, there is more data that could be consumed by the models, but isn’t, because ingesting it would make the predictions take too long. The only solution there is to raise the bar yet again, expanding the supercomputers used.

    Supercomputers are not super because they’re bigger. They are super because they can do gargantuan tasks within the required deadlines.


  • With 30 years of outlook remaining, I would wonder if it’s even worth doing the Roth contributions. You could claim an income tax deduction by shifting more toward Traditional 401k/403b contributions, so it looks like some tax savings are being left on the table: paying 22% marginal income tax now, when instead you could pay tax on it upon withdrawal in 30 years.

    That said, if whether future tax rates will be higher or lower is a question that keeps you up at night, then keeping the contribution as-is is worthwhile. The Boglehead approach to risk tolerance is to optimize but not to the point that it would cause you consternation. That is to say, you are playing the long-game and must make decisions that are sustainable for the long-term.
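    To make the trade-off concrete, here's a toy calculation (the 7% growth and both tax rates are made-up assumptions for illustration, not advice):

```python
gross = 10_000                    # pre-tax dollars available to contribute
growth = 1.07 ** 30               # assume 7%/yr growth for 30 years
tax_now, tax_later = 0.22, 0.22   # marginal rate today vs. at withdrawal

roth = gross * (1 - tax_now) * growth     # taxed now, then grows tax-free
trad = gross * growth * (1 - tax_later)   # grows untaxed, taxed at withdrawal

# With equal tax rates, the two come out identical (multiplication commutes);
# Traditional wins only if your rate in retirement is below today's 22%.
assert abs(roth - trad) < 1e-9
```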




  • In the spirit of c/nostupidquestion’s Rule 1, asking two unrelated questions does not seem like it would accrue high-quality answers to either. And I see you’ve already added another post focusing on the first question.

    Since it doesn’t cost 50 cents to make an additional post, I would suggest giving each question its own post. It would keep the discussion more focused, and actual answers should result.


  • Consider the following three types of monopolies:

    There are monopolies where a single entity has entrenched their position by having the categorically superior product, so far ahead of any competition that, even with no barriers erected to prevent competitors, there simply is no hope and they will all play second fiddle. This type of monopoly doesn’t really exist, except for a transient moment, for if there initially wasn’t a barrier, there soon will be: as market leader, the monopolist accumulates capital that at best is unavailable to the competitors (ie zero-sum resources, like land or labor), and at worst stands in the way of free competition (eg brand recognition, legally-recognized intellectual property).

    The second type is the steady-state scenario following the first, which is a monopoly that benefits from or actively enforces barriers against their competitors. Intellectual property (eg Disney) can be viewed as akin to the conventional means of production (land, labor, capital), so the monopolist that controls the usable land or can hire the best labor will cement their position as monopolist. In economic terms, we could say that the cost to overturn the monopolist is very high, and so perhaps it’s economically reasonable to be a second-tier manufacturer rather than going up against the giant. The key ingredient for the monopolist is having that structure in place, to keep everyone else at bay.

    The third type is the oddball, for it’s what we might call a “natural” or “practical” monopoly. While land, labor, and capital are indeed limited, what happens when it’s actually so limited that there’s basically only one? It’s a bit hard to conceptualize having just one plot of land (maybe an island?) or having just one Dollar, but consider a single person who has such specialized knowledge that she is the only such person in the world. Do we say she is a monopolist because she can command whatever price she wants for her labor? Is she a monopolist because she does not share her knowledge-capital? What if she physically can’t, for the knowledge is actually experience, honed over a lifetime? If it took her a lifetime to develop, then she may already lack the remaining lifetime to teach someone else for their lifetime.

    I use this example to segue to the more-customary example of a natural monopoly: the local electricity distribution system, not to be confused with the electric grid at-large, which also includes long-distance power lines. The distinction is as follows: the big, long power lines can compete with each other, taking different routes over terrain, under water, or sometimes even partially conducting through the earth itself. But consider that at a local level, on a residential street, there can practically only be a single distributor circuit for the neighborhood.

    I cannot be served by Provider X’s wires while Co-Op Y’s wires serve my neighbor, and Corpo Z’s wires serve the school down the road. Going back to the conventional means of production, we could say there is only one plot of land available to run these distributor circuits. So at most one entity can own and operate those wires.

    Laying all that background, let’s look at your titular question. For monopoly types 1 and 2, it’s entirely feasible to divide and collectivize those monopolies. But it’s the natural monopolies that are problematic: if you divide them up (let’s say geographically) and then collectivize them, there will still only ever be one “owner” of the distribution lines. You cannot have Collective A own a few meters of wire, and then Collective B owns a few meters in between, all while Collective C is connected at the end of the street. The movement of electric power is not amenable to such granular collectivization.

    To that end, the practical result is the same no matter how you examine it: a natural monopoly is one which cannot feasibly be split up, even when there’s the will to do so. Generalizing quite a lot, capitalists would approach a natural monopoly with intent to exploit it for pure profit, while social democrats would seek to regulate natural monopolies (eg US State’s public utilities commissions), and democratic socialists would want to push for state ownership of all natural monopolies, while communists would seek the dissolution of the state and have the natural monopoly serve everyone “according to their need”. But the monopoly still exists in all these scenarios, for it can’t be done any other way.

    Other natural monopolies exist, but even things like radio spectrum are relatively plentiful compared to local power lines, for which there really is just one place to build them. We don’t have wireless power yet.


  • The full-blown solution would be to have your own recursive DNS server on your local network, to block or redirect queries to any other DNS server back to your own, and possibly to block all known DoH servers.

    This would solve the DNS leakage issue, since your recursive server would learn the authoritative NS for your domain, and so would contact that NS directly when processing any queries for any of your subdomains. This cuts out the possibility of any espionage by your ISP/Google/Quad9’s DNS servers, because they’re now uninvolved. That said, your ISP could still spy on the raw traffic to the authoritative NS, but from your experiment, they don’t seem to be doing that.

    Is a recursive DNS server at home a tad extreme? I used to think so, but we now have people running Pi-hole and similar software, which can run in recursive mode (when paired with Unbound, the recursive DNS server software).
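    As for the "redirect any other DNS server" part, an nftables NAT rule along these lines could catch stray plain-DNS traffic and steer it to the local resolver (a sketch only: 192.168.1.2 is a placeholder address, and DoH blocking would need a separate blocklist of known DoH endpoints):

```
table ip nat {
  chain prerouting {
    type nat hook prerouting priority dstnat;
    # Catch plain-DNS queries (port 53) headed anywhere else and
    # forward them to the local recursive resolver instead
    ip daddr != 192.168.1.2 udp dport 53 dnat to 192.168.1.2
    ip daddr != 192.168.1.2 tcp dport 53 dnat to 192.168.1.2
  }
}
```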

    <minor nitpick>

    “It was DNS” typically means that name resolution failed or did not propagate per its specification. Whereas I’m of the opinion that if DNS is working as expected, then it’s hard to pin the blame on DNS. For example, forgetting to renew a domain is not a DNS problem. And setting a bad TTL or a bad record is not a DNS problem (but may be a problem with your DNS software). And so too do I think that DNS leakage is not a DNS problem, because the protocol itself is functioning as documented.

    It’s just that the operators of the upstream servers see dollar-signs by selling their users’ data. Not DNS, but rather a capitalism problem, IMO.

    </minor nitpick>


  • I grew up in a suburban neighborhood that was built to only encourage driving and discouraged everything else, so my parents also took me most places during my teenage years. The cul-de-sacs made it particularly hard to walk to anything interesting, even though such destinations were actually fairly close by, as the crow flies.

    What I would suggest is that if there aren’t many interesting destinations to start with, perhaps the walk itself can be of interest. Unless the walk to the mall is along a surface freeway with no soundwall – an actual occurrence in my hometown – you might start with an out-and-back trek to the mall, observing whatever architecture, people, or activities are visible and audible, and then return home. Think of it like people-watching, but less awkward because you’re just passing by, not stopping to stare.

    As another commenter wrote, getting comfortable with something is a matter of doing it, first in a controlled manner and then gradually broadening your horizons.

    But if this still isn’t a workable plan, then perhaps plan a day out to the 1-hour-away park, taking some time to explore what’s just outside that park. It’s not cheating to use a car to get to a more walkable area. But the walk should be the adventure.

    I wish you the best of luck!

    P.S. One other thought: could you go walking with someone else besides your parents? They may already have their own walking paths that you may also find workable, places that you can then explore on your own.



  • > I loaded True Nas onto the internal SSD and swapped out the HDD drive that came with it for a 10tb drive.

    Do I understand that you currently have a SATA SSD and a 10TB SATA HDD plugged into this machine?

    If so, it seems like a SATA power splitter that shares the SSD’s power connection with the HDD would suffice, in spite of the computer store’s admonition. The reason for splitting off the SSD’s power lead, rather than the HDD’s, is that an SSD draws much less power than spinning rust.

    Can it still go wrong? Yes, but that’s the inherent risk when pushing beyond the design criteria of what this machine was originally built for. That said, “going wrong” typically means “won’t turn on”, not “halt and catch fire”.



  • Steganography is one possible way to store a message “hidden in plain sight”, and video often makes a seemingly innocuous medium for a steganographic payload. But in that endeavor, the point is to have two messages: a video that raises no suspicions whatsoever, and a hidden text document with instructions for the secret agent.

    Encoding only the hidden message as a video would: 1) make it really obvious that there’s an encoded message, and 2) not be compatible with modern video compression, which would destroy the hidden message anyway, if encoded directly as black and white pixels.

    When video compression is being used, the available bandwidth to store steganographic messages is much lower, due to having to be “coarse” enough to survive the compression scheme. And video compression is designed around how human vision works, so shades of color are the least likely to be faithfully reproduced – most people wouldn’t notice if a light green is portrayed slightly darker than it ought to be. The good news is that with today’s high resolution video streams, the raw video bandwidth is huge and so having even just one-thousandth of that available for encoding hidden data is probably sufficient.

    That said, hidden messages != encrypted messages: anyone who notices that there may be a hidden message can try to analyze the suspicious video and retrieve the payload. Encoding, say, English text in a video would still leave patterns, because some English letters (and thus ASCII-encoded bit patterns) will show up more frequently. But fortunately, one can encrypt data and then hide it using steganography. Encrypted data tends to approximate random noise, making it much harder to notice when hidden within the naturally-noisy video data. But bandwidth will be cut some more due to encryption.
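    The simplest flavor of this is LSB (least-significant-bit) steganography, which only survives in uncompressed data. A toy illustration (my own sketch, using a fake list of grayscale pixel values; it would not survive lossy compression):

```python
def hide(pixels, payload):
    """Hide payload bytes in the least-significant bit of each pixel byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the lowest bit
    return out

def recover(pixels, nbytes):
    """Read the payload back out of the low bits, MSB first."""
    out = []
    for b in range(nbytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[b * 8 + i] & 1)
        out.append(byte)
    return bytes(out)

cover = [200, 201, 198, 197, 202, 199, 200, 201] * 4   # fake grayscale pixels
stego = hide(cover, b"hi")
assert recover(stego, 2) == b"hi"
# Each pixel value changes by at most 1, which is invisible to the eye
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

Note the bandwidth cost the comment above mentions: one hidden bit per pixel byte, i.e. one-eighth of the raw data at absolute best, and far less once the encoding must be made coarse enough to survive compression.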

    TL;DR: it’s very real to hide messages in plain sight, in places people wouldn’t even think of looking closely at. Have you thought about the Roman Empire today?



  • For the benefit of non-Google users, here is the unshortened URL for that Bank of England article: https://www.bankofengland.co.uk/-/media/boe/files/quarterly-bulletin/2014/money-creation-in-the-modern-economy

    With that said, while this comment does correctly describe what the USA federal government does with tax revenues, it is mixing up the separate roles of the government (via the US Treasury) and the Federal Reserve.

    The Federal Reserve is the central bank in the USA, and is equivalent to the Bank of England (despite the name, the BoE serves the entire UK). The Federal Reserve is often shortened to “the Fed” by finance people, which only adds to the confusion between the Fed and the federal government. The central bank is responsible for keeping the currency healthy, such as preventing runaway inflation and preventing banking destabilization.

    Whereas the US Treasury is the equivalent to the UK’s HM Treasury, and is the government’s agent that can go to the Federal Reserve to get cash. The Treasury does this by giving the Federal Reserve some bonds, and in turn receives cash that can be spent for employee salaries, capital expenditures, or whatever else Congress has authorized. We have not created any new money yet; this is an equal exchange of bonds for dollars, no different than what you or I can do by going to treasurydirect.gov and buying USA bonds: we give them money, they give us a bond. Such government bonds are an obligation that the government must pay in the future.

    The Federal Reserve is the entity that can create dollars out of thin air, because they control the interest rate of the dollar. But outside of major financial crises, they only permit the dollar to inflate around 2% per year. That’s 2% new money being created from nothing, and that money can be swapped with the Treasury, thus the Federal Reserve ends up holding a large quantity of federal government bonds.

    Drawing the distinction between the Federal Reserve and the government is important, because their goals can sometimes be at odds: in the late 1970s, the Iranian oil crisis caused horrific inflation, approaching 20%. Such unsustainable inflation threatened to spiral out of control, but also disincentivized investment and business opportunities: why start a new risky venture when a savings account would pay 15% interest? Knowing that this would be the fate of the economy if left unchecked, the Federal Reserve began to sell off huge quantities of its government bonds, thus pulling cash out of the economy. This curbed inflation, but also created a recession in 1982, because any new venture needs cash and the Fed had sucked it all up. Meanwhile, the Reagan administration would not have been pleased about this, because no government likes a recession. In the end, the recession subsided, as did inflation and unemployment levels, thus the economy escaped a doom spiral with only minor bruising.

    To be abundantly clear, the Federal Reserve did indeed cause a recession. But the worse alternative was a recession that also came with a collapsed US dollar, unemployment that would run so deep that whole industries lose the workers needed to restart post-recession, and the wholesale emptying of the Federal Reserve and Treasury’s coffers. In that alternate scenario, we would have fired all our guns and have lost anyway.


  • From a biology perspective, it may not be totally advantageous to grow in all three dimensions at once. Certainly, as life forms become larger, they also require more energy to sustain, and also become harder to cool (at least for the warm-blooded ones). Generally speaking, keeping cool is a matter of surface area (aka skin). But doubling each of the three dimensions yields 4x more skin than before, while adding 8x more mass/muscle. That’s now harder to keep cool.

    So growing needs to be done with intention: growing taller nets some survival benefits, such as having longer legs to run. Whereas growing wider or deeper doesn’t do very much.
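    The square-cube arithmetic above checks out; as a quick sanity check, modeling the animal as a simple cube:

```python
# Double each dimension of a unit cube: surface scales by 2**2, volume by 2**3
side = 2.0
surface = 6 * side ** 2    # 24, vs. 6 before doubling: 4x more skin
volume = side ** 3         # 8, vs. 1 before doubling: 8x more mass
assert surface / 6 == 4 and volume / 1 == 8
# Surface area per unit of mass is halved, so shedding heat gets harder
assert surface / volume == 0.5 * (6 / 1)
```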

    But idk mang, I’m in a food coma from holiday dinner, just shooting from the hip lol


  • I recently learned about Obsidian from a friend, but haven’t started using it yet, so perhaps I can offer a perspective that differs from current users of Obsidian or any of the other apps you listed.

    To start, I currently keep a hodge-podge of personal notes, some digitally and some in handwriting, covering different topics, using different formats, and there’s not really much that is common between any of these, except that I am the author. For example, I keep a financial diary, where I intermittently document the thinking behind certain medium/long-term financial decisions, which are retained only as PDFs. I also keep README.md files for each of the code repos that I have for electronics and Kubernetes-related projects. Some of my legacy notes are in plain-text .txt file format, where I free-form record what I’ve been working on, relevant links, and lists of items outstanding or that are TODOs. And then there is the handwritten TODO and receivables list that I keep on my fridge.

    Amongst all of this chaos, what I would really like to have the most is the ability to organize each “entry” in each of their respective domains, and then cross-reference them. That is, I’m not looking to perform processing on this data, but I need to organize this data so that it is more easily referenced. For example, if I outline a plan to buy myself a new server to last 10 years, then that’s a financial diary entry, but it would also manifest itself with TODO list items like “search for cheap DDR5 DIMMs” (heaven help me) and “find 10 GbE NIC in pile”. It may also spawn an entry in my infrastructure-as-code repo for my network, because I track my home network router and switch configurations in Git and will need to add new addresses for this server. What I really need is to be able to refer to each of these separate documents, not unlike how DOIs uniquely identify research papers in academic journals.

    It is precisely because my notes are near-totally unstructured and disparate that I want a powerful organization system to help sort it, even if it cannot process or ingest those notes. I look at Obsidian – based on what little I know of it – like a “super filing cabinet” – or maybe even a “card catalog” but that might be too old of a concept lol – or like a librarian. After all, one asks the librarian for help finding some sort of book or novel. One does not ask the librarian to rehash or summarize or extract quotes from those books; that’s on me.


  • In the English-speaking world, you can always shorten the year from 4 to 2 digits. But whether: 1) this causes confusion, and 2) you or anyone else cares if it does, are the points of contention. The first is context-dependent: if a customer service agent over the phone is trying to confirm your date of birth, there’s no real security issue if you only say the 2 digit year, because other info would have to match as well.

    If instead you are presenting ID as proof of age to buy alcohol, there’s a massive difference between 2010 and 1910. An ID card and equivalent documentation must use a four digit year, when there is no other available indicator of the century.

    For casual use, like signing your name and date on a holiday card, the ambiguity of the century is basically negligible, since a card like that is enjoyed at the time that it’s read, and isn’t typically stashed away as a 100-year old memento.

    That said, I personally find that in spoken and written English, the inconvenience of the 4 digit year is outweighed by the benefit of properly communicating with non-American English users. This is because we Americans speak and write the date in a non-intuitive fashion, which is an avoidable point of confusion.

    Typical Americans might write “7/1/25” and say “July first, twenty five”. British folks might read that as 7 January, or (incorrectly) 25 January 2007. But then for the special holiday of “7/4/25”, Americans optionally might say “fourth of July, twenty five”. This is slightly less confusing, but a plausible mishearing by the British over a scratchy long-distance telephone call would be “before July 25”, which is just wrong.

    The confusion is minimized by a full 4 digit year, which would leave only the whole day/month ordering as ambiguous. That is, “7/1/2025” or “1/7/2025”.

    Though I personally prefer RFC3339 dates, which are strictly YYYY-mm-dd, using 4 digit years, 2 digit months, and 2 digit days. This is always unambiguous, and I sign all paperwork like this, unless it explicitly wants a specific format for the date.
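    For what it's worth, the RFC 3339 ordering is also what most programming languages emit by default; in Python, for instance:

```python
from datetime import date

d = date(2025, 7, 1)
print(d.isoformat())           # 2025-07-01 -- unambiguous year-month-day
print(d.strftime("%m/%d/%y"))  # 07/01/25  -- the ambiguous American style
assert d.isoformat() == "2025-07-01"
```

A nice side effect of YYYY-mm-dd is that plain string sorting puts dates in chronological order.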


  • For the objective of posting photos to an Instagram account while preserving as much privacy as possible, your approach of a separate machine and uploading using its web browser should be sufficient. That said, Instagram for web could also be sandboxed using a private browsing tab on your existing desktop. Certainly, avoiding an installed app – such as the mobile app – will prevent the most obtuse forms of espionage/tracking.

    That said, your titular question was about how to maintain an Instagram account, not just post images. And I would say that as a social media platform, this would include engagement with other accounts and with comments. For that objective, having a separate machine is more unwieldy. But even using a private browsing tab on your existing machine is still subject to the limits that Instagram intentionally imposes on their desktop app: they save all the crucial value-add features for the mobile app, where their privacy invasion is greatest.

    To use Instagram in the least obtuse manner means to play the game by their rules, which isn’t really compatible with privacy preservation. To that end, if you did want the full Instagram experience, I would suggest getting a separate, cheap mobile phone (aka a “YOLO phone”) to dedicate to this task. If IG doesn’t need a mobile number, then you won’t even need a working SIM. Then load your intended images using USB file transfer, and use an app like Imagepipe (available on F-Droid) to strip image metadata. Turn off all location access and permissions on this phone, and when it’s not in use, turn the phone off or put it in airplane mode.