

Almost everything is Debian: my servers, my desktop and laptops, my family members’ computers, the living room media player. The only exceptions are my router (OpenWRT) and my Steam Deck (SteamOS).
hi :)


Sync was a fantastic Reddit client (I started using it back in 2016), and during the API debacle the dev turned it into a Lemmy app that was frankly the best on the market by a wide margin. But then he just vanished, and various things have gradually stopped working as the app fails to keep up with the latest Lemmy updates. When upvotes stopped working a few months ago, I bit the bullet and moved to Summit, which has the closest user experience to Sync of all the Lemmy apps I’ve found (although it doesn’t have anywhere near the same level of polish that Sync did).
This didn’t age well.


I’m on ml and I can see it just fine.


Let b denote the average number of boobs per person. It can be shown that 1 < b < 21.
1 it is known


I want to give you a belated “thank you” for writing this up! I had kinda been feeling like I’d reached a dead end with my whole gender exploration adventure, but you have not only inspired me to do a second (longer) trial run using EV instead of EEn (which I have just started), but I’m also gonna go to a local meetup for trans/enby/questioning people later this week. Something about your comment here really helped me get out of the rut I was in. Thanks for all your support!!! ❤️


Re-reading your original question, it should have been pretty obvious in retrospect that I am not really in the target audience. welp, my bad :P
Unfortunately I didn’t get any blood work done, since my doctor’s office refuses to do it without a specific request from my GP (and the whole reason I wanted to do a trial run DIY was that I can’t realistically do this kind of stuff the legit way at the moment). So I just went with a dose a bit higher than the dosages I’d seen recommended online for “most” people and figured that would almost certainly be enough. Since I saw nipple changes almost immediately, I assumed it was doing the trick, but the other expected effects just never came, and I stopped once my nipples had grown large enough that I was about to need a bra to stop them visibly poking through my shirt.
I didn’t really consider that the longer half-life was super relevant to the “startup delay”; most resources I found online seemed to show it nearly reaching steady-state levels after only one or two doses. If that was actually the problem, that’s a pretty big derp on my part, but I’m already planning to give it another shot once I’m no longer living at home.


I did estrogen monotherapy for about 2 months earlier this year. Quite frankly, the only changes I noticed were an immediate and significant increase in nipple sensitivity and size, and a reduction in nighttime erections. Other than that I didn’t notice any of the early changes which I had been led to expect within the first few weeks: no emotional differences, no reduction in skin oiliness, no changes in body temperature, etc.
For what it’s worth, I was taking 1.4ml/week of 40% estradiol enanthate without any antiandrogens, am in my early 20s and have a very low body mass.


I wish I’d known this was a thing before I spent 15 minutes searching the manpages and manually upgrading my sources…


I will never touch flatpak for this reason, I’d rather deal with compiling software myself and faffing around with dependency issues than have 8 copies of every system library sitting around.


Thinking of a modern GPU as a “graphics processor” is a bit misleading. GPUs haven’t been purely graphics processors for 15 years or so, they’ve morphed into general-purpose parallel compute processors with a few graphics-specific things implemented in hardware as separate components (e.g. rasterization, fragment blending).
Those hardware stages generally take so little time compared to the rest of the graphics pipeline that it normally makes the most sense to dedicate far more silicon to general-purpose shader cores than to the fixed-function graphics hardware. A single rasterizer unit might be able to produce up to 16 shader threads’ worth of fragments per cycle, so even if your fragment shader is very simple and takes only 8 cycles per pixel, one rasterizer can keep 16 × 8 = 128 shader cores busy in this example.
The result is that GPUs are basically just a chip packed full of a staggering number of fully programmable floating-point and integer ALUs, with only a little bit of fixed hardware dedicated to graphics squeezed in between. Any application which doesn’t need the graphics stuff and just wants to run a program on thousands of threads in parallel can simply ignore the graphics hardware and stick to the programmable shader cores, and still be able to leverage nearly all of the chip’s computational power. Heck, a growing number of games are bypassing the fixed-function hardware for some parts of rendering (e.g. compositing with compute shaders instead of drawing screen-sized rectangles, etc.) because it’s faster to simply start a bunch of threads and read+write a bunch of pixels in software.
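To make the arithmetic in that example concrete (the 16-fragments-per-cycle and 8-cycles-per-pixel figures are the illustrative numbers from above, not the specs of any real GPU), here’s a quick sketch:

```python
# How many shader cores can a single rasterizer keep fed?
# If the rasterizer emits F fragments per cycle and each fragment
# occupies a shader core for C cycles, then in steady state
# F * C cores are busy at once.

def cores_kept_busy(frags_per_cycle: int, cycles_per_fragment: int) -> int:
    return frags_per_cycle * cycles_per_fragment

print(cores_kept_busy(16, 8))  # -> 128 cores fed by one rasterizer
```

This is also why making the fragment shader *more* expensive doesn’t starve the rasterizer; it just means even more cores can hide behind one fixed-function unit.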


KDE user here, I still use X11 to play old Minecraft versions. LWJGL2 uses xrandr to read (and sometimes modify? wtf) display configurations on Linux, and the last few times I’ve tried it on Wayland it kept screwing the whole desktop up.


It takes like half a second on my Fairphone 3, and the CPU in this thing is absolute dogshit. I also doubt that the power consumption is particularly significant compared to the overhead of parsing, executing and JIT-compiling the 14MiB of JavaScript frameworks on the actual website.


Nouveau is dead, it’s been replaced with Zink on NVK.


True, but there are also some legitimate applications for hundreds of gigabytes of RAM. I’ve been working on a thing for processing historical OpenStreetMap data, and it is orders of magnitude faster to fill the database by loading the 300 GiB or so of point data into memory, sorting it there, and then partitioning and compressing it into pre-sorted table files which RocksDB can ingest directly without additional processing. I had to get 24 × 16 GiB (384 GiB) of RAM in order to do that, though.
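A minimal sketch of that sort-then-partition step, assuming points are (id, lat, lon) tuples; the compression and the actual RocksDB table-file writing/ingestion are left out, and the function name is just illustrative:

```python
# Sketch of the in-memory sort + partition approach described above.
# Each returned chunk is internally sorted and the chunks are ordered
# end-to-end, so they could be written out as pre-sorted table files
# that a store like RocksDB can ingest without re-sorting.

def partition_sorted(points, num_parts):
    points = sorted(points, key=lambda p: p[0])  # one big in-memory sort
    size = -(-len(points) // num_parts)          # ceiling division
    return [points[i:i + size] for i in range(0, len(points), size)]

parts = partition_sorted([(5, 1.0, 2.0), (1, 3.0, 4.0), (3, 0.0, 0.0)], 2)
# parts[0] holds the lowest ids, parts[1] the rest, each already sorted
```

The whole point of doing the sort up front in RAM is that the database never has to shuffle anything afterwards, which is where the speedup comes from.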


In my experience, nouveau is painfully slow and crashes constantly, to the point of being virtually unusable for anything. The developers seem to agree: over the last couple of months, nouveau has been phased out of Mesa entirely. More recent Mesa versions now implement OpenGL on Nvidia using Zink on NVK, and the result is quite a bit faster and FAR more stable.
If your distribution currently still ships a Mesa version which uses nouveau, I would personally recommend you just stick with the Intel graphics for now.


Aside from checking the kernel log (sudo dmesg) and system log (sudo journalctl -xe) for any interesting messages, I might suggest simply watching for any processes whose resource usage is abnormally high while the system is running slow. My initial approach would be to run htop (disable “Hide Kernel Threads” and enable “Detailed CPU Time”) and see which processes, if any, are eating up your CPU time. The colored core utilization bars at the top show how much CPU time is being spent on what: gray for disk wait, red for kernel, green for regular user processes, etc. That information will be a good starting point.
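If the slowdown is intermittent and you’d like a one-off snapshot to complement watching htop interactively, plain ps (from procps, no root needed) can list the current top consumers; the column selection here is just one reasonable choice:

```shell
# Top 10 CPU consumers right now, with memory usage alongside.
# Note: ps reports %CPU as cumulative since each process started,
# so a briefly spiking process may not stand out like it does in htop.
ps -eo pid,user,comm,%cpu,%mem --sort=-%cpu | head -n 11
```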


Jenkins has fairly solid Gitea/Forgejo integration :)