

Thanks. I’ll keep this in mind in case my new stack causes issues again


Hey, just wanted to let you know that my updated stack has been running perfectly since I changed it based on your setup. Thanks


I guess I missed that.
Anyway, I updated my stack to be similar to what you pasted and so far it seems to be working. I’ll have to check tomorrow if the reboot issue persists.


I know that the port forwarding command can be simplified. In my case it’s this complex because the way it is listed in the gluetun wiki did not work, even though I disabled authentication for my local network. The largest part of the script is authenticating with the username and password before actually sending the port forwarding command.
I’ll definitely try adjusting my stack to your variant though. I’ve also tried the healthcheck option before, but I must have configured it wrong, because it caused my gluetun container to get stuck.
One question regarding your stack though: is there a specific reason for binding /dev/net/tun to gluetun?
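The flow is basically: ask gluetun for the forwarded port, then log in and push it into the torrent client. A rough Python sketch of that, not my actual script; the endpoints and ports are assumptions (gluetun’s control server is usually on :8000 and qBittorrent’s WebUI on :8080, but check your versions):

```python
import json
import urllib.request

# Assumed defaults; adjust to your stack.
GLUETUN = "http://127.0.0.1:8000"   # gluetun control server
QBIT = "http://127.0.0.1:8080"      # qBittorrent WebUI

def extract_port(body: str) -> int:
    # The control server replies with JSON along the lines of {"port": 51820}.
    return int(json.loads(body)["port"])

def push_forwarded_port(user: str, password: str) -> int:
    port = extract_port(
        urllib.request.urlopen(f"{GLUETUN}/v1/openvpn/portforwarded").read().decode()
    )
    # qBittorrent uses cookie auth: log in first, then reuse the session cookie.
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor())
    opener.open(
        f"{QBIT}/api/v2/auth/login",
        data=f"username={user}&password={password}".encode(),
    )
    opener.open(
        f"{QBIT}/api/v2/app/setPreferences",
        data=("json=" + json.dumps({"listen_port": port})).encode(),
    )
    return port
```

The same thing works as a few curl calls in a shell script, which is closer to what my compose stack actually runs.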


As far as I am aware, Mullvad removed port forwarding support a while ago. I’m not sure which VPN providers other than Proton still support it, but I vaguely remember seeing a small list of them some time ago which named Proton as one of the few trustworthy ones left.


Good thing I decided against switching to it, even though my main reason is that my weird book organisation scheme isn’t feasible with anything but calibre or manual organisation currently, as far as I know.


I use a MikroTik router, and while I do love the amount of power it gives me, I very quickly realized that I jumped in at the deep end. Deeper than I can deal with, unfortunately.
I did get everything running after a week or so, but I absolutely had to fight the router to do so.
Sometimes less is more, I guess.


That was my exact setup as well, until I switched to a different router that supported both custom DNS entries and blocklists, thereby making the pi-hole redundant.


Not OP, but a lot of people probably use pi-hole, which doesn’t support wildcards for some inane reason.
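Worth noting: pi-hole’s FTL is built on dnsmasq, so a drop-in dnsmasq rule is a commonly used workaround, since dnsmasq’s address directive does match a domain and all of its subdomains. The filename below is hypothetical and the right drop-in directory depends on your install, so treat this as a sketch:

```
# Hypothetical drop-in, e.g. /etc/dnsmasq.d/99-wildcard.conf
# address= matches the listed domain and every subdomain of it.
address=/ads.example.com/0.0.0.0
```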


Maybe they added new ones since I last looked


Extra hard drive space is good and all, but the last time I looked at their lists, all the decently sized torrents had a good enough number of peers, while the large torrents, which aren’t as manageable, were the issue.


I typically use EndeavourOS because I enjoy how well documented and organized the Arch wiki is.
I tried switching to Fedora on my laptop recently but actually had some issues with software that was apparently only distributed through the AUR or as an AppImage (which I could have used, I know).
When I also had issues setting up my VPN to my home network again, I caved and restored the disk to a backup I took before attempting the switch. The VPN thing almost definitely wasn’t Fedora’s fault, since I remember running into the same issue on EndeavourOS, but after my fix from last time didn’t work I was out of patience.
My servers run on either Debian or Ubuntu LTS though.


Why not skip ahead in time a little and call it farmarr?


I sometimes prefer light mode, for example on my laptop in bright environments because I find that it gives me better contrast and keeps more of the screen viewable


I know you didn’t mention video, but if you think you might want to host Jellyfin in the future, make sure your CPU supports hardware decoding for modern formats.
For example, my Lenovo mini PC with an i5-6500 supports H.265 but not H.265 10-bit or AV1, which makes playing those formats on some devices basically impossible without re-encoding the files.


I can’t think of a great historical photo right now but I’m loving this thread


Mainly kernel-level anticheat, though that is obviously not really Linux’s fault.
My other personal gripe is probably stumbling across a GTK-based app that works for what I want it to do but clashes extremely badly with my Plasma DE.
For example, I wanted to set up automatic file backups to an SFTP server using borg. The two common UIs I found are Vorta and Pika Backup. Vorta only supports SSH and local backup repositories, while Pika allows SFTP through some kind of compatibility layer with GVfs.
Seems like Pika is the right choice for me, but the UI felt incredibly dumbed down and really did not match anything else on my PC. Since both programs were kind of out, I found another backup tool in Kopia.
The reason I was looking for a backup tool at all? I was previously using Synology Active Backup for Business, which is available on all Linux distros except Arch.


The one thing I can’t get set up in Kate is leaving temporary text files open between sessions.
Probably a bad habit of mine, but I sometimes end up pasting some info into a Notepad++ file without saving it and then come back much later to check it out again.


Heh, I guess I was one of those downloads. I wanted to set up an old PC I had lying around for gaming over the holidays at my parents’ place.
In the end I forgot that I might need an internet connection and didn’t have a long enough ethernet cable to actually use it, but I did install the distro at least. No idea how well it works though, since the PC has a GTX 1050 Ti and officially the image only supports RTX cards and the GTX 16xx series.


I remember building something vaguely related in a university course on AI, back before ChatGPT was released and the whole LLM thing took off.
The user had the option to enter a couple of movies (as long as they were present in the weird semantic database our professor told us to use), and we calculated a similarity matrix between them and all other movies in the database, based on their tags and on running the descriptions through a natural language processing pipeline.
The result was the user getting a couple of surprisingly accurate recommendations.
Considering we had to calculate this similarity score for every movie in the database, it was obviously not very efficient, but I wonder how it would stack up against current LLMs, both in terms of accuracy and energy efficiency.
One issue, if you want to call it that, is that our approach was deterministic: enter the same movies, get the same results. I don’t think an LLM is as predictable.
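The tag half of that similarity calculation fits in a few lines. This is a toy reconstruction, not the course code: the movie titles, tags, and Jaccard scoring are stand-ins, and the real project also ran the descriptions through an NLP pipeline on top of this.

```python
# Toy stand-in for the course's movie database: title -> tag set.
MOVIES = {
    "Alien": {"sci-fi", "horror", "space"},
    "Aliens": {"sci-fi", "action", "space"},
    "The Thing": {"horror", "sci-fi", "isolation"},
    "Die Hard": {"action", "thriller"},
}

def jaccard(a: set, b: set) -> float:
    # Tag overlap: |intersection| / |union|.
    return len(a & b) / len(a | b)

def recommend(liked: list, k: int = 2) -> list:
    # Score every other movie against the user's picks: the
    # "compare against the whole database" approach, deterministic
    # but O(picks x database size).
    scores = {
        title: max(jaccard(tags, MOVIES[pick]) for pick in liked)
        for title, tags in MOVIES.items()
        if title not in liked
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(["Alien"]))
```

Same input, same output every time, which is the determinism point above; a sampled LLM gives no such guarantee unless you pin its sampling down.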