  • You seem to be obsessed with optimising one resource at the expense of others.

    If you want to push it and paint me as obsessed about something, then let it be this: providing this community with on-topic and reasonable advice.

    you’re only going to save a few MB of RAM.

    This is false, and you should read my previous message once again to see why: on a decent “self-host”-friendly machine, the same software may work very well or not at all, depending on whether the user engages with very basic configuration. This goes beyond RAM (memory isn’t the sole shared resource), and I’m adamant that the alternative (which went from “pretending that the problem doesn’t exist” to “throwing money at the problem”) is unreasonable.

    On the other hand, if you’re doing it to learn more about computers, then it might be worthwhile. This is a community of hobbyists, after all…

    Or more importantly: the extent to which you can self-host out of sheer luck and ignorance, as you suggest, is very limited. If you don’t want to engage with a minimum amount of configuration, you might bump into security issues (a much broader and more complex subject) long before any of the above has a material impact.


  • I’m saying this based on real world experience

    And do you think I would spend my time engaging if this didn’t come from my own very “real world experience” of lessons learned the hard way?

    Bringing up “diminishing returns” as if this were an optimisation game doesn’t do it justice either. Take the typical “household FOSS package”, with software names often brought up in here: a Nextcloud instance, a photo-sharing service like Immich, private instant messaging, a software forge, a Subsonic-compatible audio/video streaming server, a couple of PHP websites like wallabag, and RSS aggregators.

    An Intel Atom CPU and 4GB of RAM are plenty for all of that, and will cost you single-digit USD a month, granted you put in the (one-time) effort to tune and balance those services (see the sketch at the end of this comment). Were you to run all of the above from upstream’s Docker files, I can guarantee that you would deem this (otherwise perfectly fine) server underpowered for the task at hand (and would probably go for a 10th-gen or so Intel Core CPU, quadruple the RAM, and 3-6× the energy cost in the process).

    And that’s the point I’m making here: a self-hosting community of tinkerers should (ideally) know better, for the sake of keeping the process environmentally friendly and of not wasting other people’s money.
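    Below is a minimal sketch of the kind of (one-time) tuning I mean, as a docker-compose fragment. The service names and limits are hypothetical; the point is that resources get capped and balanced deliberately instead of being left at upstream defaults:

      services:
        nextcloud:
          image: nextcloud          # upstream image, but with explicit caps
          mem_limit: 512m           # hypothetical ceiling; tune to your workload
          cpus: 0.5
        freshrss:
          image: freshrss/freshrss
          mem_limit: 128m
          cpus: 0.25
        db:
          image: postgres:16
          mem_limit: 1g             # one shared DB for all apps, not one per app
          cpus: 1.0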


  • Do you have the data to back that up?

    I mean, you are the one making the exceptional claim that unnecessarily running multiple instances of programs on a device with finite resources has no practical adverse effect. Of course, the effects can be more or less drastic depending on the many variables at play (hardware, software, memory pressure, thread starvation, cache misses, …) and can indeed be negligible in some lucky circumstances. The point is that you don’t get to call that shot, and especially not by burying your head in the sand and pretending it’s never gonna be a problem.

    Effective use of computing resources requires tuning. Introducing a new service creates imbalance. Ensuring that the server performs nominally and predictably for all intended services is a balancing act and a sysadmin’s job. Services whose deployment settings are set by someone with no prior knowledge of the deployment constraints can’t be trusted to do a good job of it (that’s the nature of the physical world we live in, not my opinion), and promoting this attitude promotes the kind of wasteful and irresponsible computing I was on about.

    Now, I’ll give you the link to this basic helper for tuning a PostgreSQL server: https://pgtune.leopard.in.ua/
    Will you tell me what the correct inputs are for my homelab (I won’t tell you the hardware, the set-up, the other services running on it, the state of the system, etc.)?
    And later, when you distribute your successful container to millions of users, what will you tell the angry ones who complain that your software is slow, through no fault of your coding, because they happen to pile up multiple DBs, web servers, application servers, reverse proxies, … on their banana SoCs?
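    For illustration only, here is the sort of thing pgtune suggests for a hypothetical 4GB, 2-core, SSD-backed box with the “web application” profile (example numbers, not a recommendation for your machine):

      # postgresql.conf excerpt for that hypothetical box
      max_connections = 200
      shared_buffers = 1GB
      effective_cache_size = 3GB
      maintenance_work_mem = 256MB
      checkpoint_completion_target = 0.9
      wal_buffers = 16MB
      random_page_cost = 1.1
      work_mem = 2621kB

    Every one of those values shifts as soon as the answers to the questions above change, which is exactly why an upstream container image cannot guess them for you.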


  • Why would you think so? Can you give examples of specific tools that wouldn’t be available to mail clients? On the other hand, there are many things available in most email clients which are missing on GitHub, like tagging automation from custom and flexible rules, Turing-complete filtering, instant searching, saved searches, managing the lifecycle of issues, linking with the VCS, etc., all in context and in one place.
    The way people generally re-implement those on GitHub is with bots, where you are left at the mercy of what the bot can do (and of what its admin wants you to do), and each project is its own silo that possibly breaks your workflow.

    I’m fine with GitHub because these days I’m mostly a casual contributor, but there’s a lost appreciation for the sheer power and universality of email-based workflows. That the largest projects (including the Linux kernel) run on them should speak for itself.
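    As a hypothetical sketch of what that looks like in practice (assuming a notmuch-indexed mailbox; the tags, queries and domain are made up):

      # rule-based tagging, run after each mail sync (e.g. from a notmuch post-new hook)
      notmuch tag +patch -- tag:new and subject:PATCH
      notmuch tag +needs-review -- tag:patch and not tag:reviewed
      # instant search across every project’s list, in one place
      notmuch search tag:needs-review and from:example.org

    The specific commands matter less than the fact that the rules are yours and apply uniformly across every project you follow, instead of per-repository bot configuration.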


  • I figured they’d be juggling a lot of mails

    Yeah, but organized into as many threads as there are issues/PRs, so it’s exactly as daunting as the same list viewed on GitHub/project/issues (because it is exactly the same content).

    and I guess it is possible for some people to stay on top of that

    It’s the crux of being a maintainer: it’s your job “to stay on top of that”, and on larger projects, ad-hoc tooling and automation are the only way. Email is infinitely more flexible than GitHub’s one-size-fits-all take on that.


  • As someone who’s been using ttrss for decades but would be open to trying something new, what would you say is FreshRSS’ killer feature (and missing killer feature) compared to ttrss?

    (Not trying to start a flame war; ttrss feels like a finished project, which is not a bad thing, but I think it’s healthy to wish for more innovation in this space.)


  • Matrix finances are running dry, and this is just the beginning.
    It’s frowned upon to criticize an open-source project, I know, but in my eyes Matrix is a big technical and organizational failure: it hasn’t managed to stabilize the protocol after a whole decade of unsuccessful explorations, and its leadership has consistently failed to define clear goals and steer the project towards them (just get it done and working well before trying to make it “peer to peer” or “in the metaverse”).

    If this is the electroshock that Matrix needs to reconsider its design and direction, good for them. If it kills them, well, too bad; it’s not like they are the only cool kid in town.


  • I’m selfhosting a Matrix server and have all my chats from other apps also bridged to there.

    Same here, but with XMPP in place of Matrix. For historical context, XMPP was invented about 25 years ago on the premise that people were already tired of having their instant messaging scattered over multiple protocols (rather than Signal, Telegram, WhatsApp, Discord and iMessage now, it was Yahoo, MSN, AIM, ICQ, … then), so bridging is very much front and center in the XMPP world. Over time, people also realized that bridging sucks in general: either you dumb your client down to the lowest common denominator, which sucks for you, or your client isolates itself from the source protocol enough that it sucks for everyone else.
    To add insult to injury, most modern services also forbid, in their ToS, the use of alternative clients (which very much includes bridges), and to the best of my knowledge WhatsApp, Signal and Discord will eventually suspend your account on this basis.
    Matrix is still trying to carve a niche for itself in this space, and is failing IMO (judging by the quality/security of the bridges it has come up with, and the recent libera.chat fiasco). I’d say the situation in XMPP is only marginally better, owing to the decade’s head start XMPP had to fail and try again, and I would not recommend using bridges on either of them if that can be avoided.

    Is XMPP better for group VC?

    I’d say “it depends”. Fun fact: Matrix uses jitsi-meet under the hood (which is XMPP plus a media routing/multicasting component that doubles as a relay), and jitsi-meet is my recommendation for this use-case: as long as the central server has good bandwidth, you can really scale your VC up to many attendees. On top of that, XMPP has support for peer-to-peer group VC, with the benefit that hosting is simpler and doesn’t require any central component/relay; but the bandwidth cost is incurred on all participants (in a full mesh, each client uploads its stream to every other participant, so N−1 times), and you won’t go beyond a handful of attendees that way.