

It’s fine, this is healthy discourse we all need to move forward. If we kick out all the vibe coders instead of discussing with them, we will never get them to adhere to any kind of pattern of behaviour.


I wish you’d come into the comments somewhere other than my emotional response to someone else :P
I’m 50 years old now, but I used to react almost the same way you did, so I understand where you’re coming from.
I personally believe LLMs (and AI in general) can be great tools to help with coding and similar tasks; we just don’t have a very good culture of their use yet.


Now, on that last point: there will indeed come a time when the engineering technique of simply “making things bigger” won’t work, if the attacks become sophisticated enough. But at that point networks will have fully become geopolitical tools (even more than they are now).


Nice.
The issues to look for are unnecessary logic (evaluating variables and conditions for no reason) and duplicated sets of variables.
One of the seasoned devs I work with said she encourages coders to transpose their work at major inflection points, which helps all devs gain an understanding of their own code. The technique is simple: manually rewrite/refactor the code in a new project, changing the names of the variables and arrays. The process forces you to identify where and how variables and actions are being used. It’s not very practical for very big projects, but anything under 1,000 lines would benefit from it.
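To make the transposition idea concrete, here is a hypothetical Python sketch (all names invented, not from any real project). The first function shows both issues mentioned above; the second is what a from-scratch rewrite with fresh names tends to end up as.

```python
# Hypothetical "before": a duplicated set of variables and a condition
# that is evaluated for no reason -- the issues described above.
def total_size_before(files):
    sizes = [f["size"] for f in files]
    file_sizes = [f["size"] for f in files]  # duplicate of `sizes`
    total = 0
    for i in range(len(sizes)):
        if sizes[i] == file_sizes[i]:  # always true: unnecessary logic
            total += sizes[i]
    return total

# "After": the same behaviour, transposed into a fresh rewrite with new
# names. Re-deriving each value by hand makes the redundancy obvious.
def total_size_after(entries):
    return sum(entry["size"] for entry in entries)
```

Both return the same result; the point of the exercise is that you only notice `file_sizes` duplicating `sizes` once you are forced to rename and re-trace every variable.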
Good luck.


Again, get off your high horse.
They just came out swinging, for no reason.
You already know how most self-hosted folks feel about vibe coding, or you wouldn’t have taken immediate offence at the initial comment (which is valid, btw. You did not mark the project as vibe-coded or AI-assisted.) MARK YOUR PROJECT AS AI-ASSISTED.
Explain where you expect inefficiency and how I can fix it, and I will.
I’m looking to replace my cron-timed ffmpeg bash and ash scripts for encoding. Three of the four projects I looked at have double- and triple-work loops for work that should be done once. This seems to be a theme in vibe-coded projects.
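As a hypothetical Python sketch of that pattern (names invented, not taken from any of the projects in question): the first version walks the same list once per check, while the second does every check in a single pass.

```python
# "Triple-work" shape: three walks over the same list for checks
# that could be combined into one.
def plan_encodes_slow(videos):
    pending = [v for v in videos if not v["encoded"]]   # walk 1
    pending = [v for v in pending if v["size_mb"] > 0]  # walk 2
    names = []
    for v in videos:                                    # walk 3
        if not v["encoded"] and v["size_mb"] > 0:
            names.append(v["name"])
    return names

# Single pass: each entry is examined exactly once.
def plan_encodes(videos):
    return [v["name"] for v in videos
            if not v["encoded"] and v["size_mb"] > 0]
```

Both produce the same plan; the difference only shows up as wasted work on large queues.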
And incidentally, the fact that this is a personal project I shared in case someone might find it useful is another reason that coming in here and throwing shade is a shitty thing to do.
Once again, I’m interested in the project, but I have my own thresholds of quality and security. If you can’t handle questions about your project, personal or not, then maybe don’t share it.
But why try to make me feel bad about it, because you don’t like the way I built it?
Sir/Madam, your feelings are your responsibility, not mine. I did not utter any pejoratives your way. Grow up.


No one is being a jerk here, stop being defensive.
What fixes did you apply? That’s what we want to know. It’s not a trick question.
If you want to present your project, be prepared to explain it. That is completely above board for us to ask.


My main concern is also bloat.

i just have unpopular opinions
Sure, but you chose a weird place to strut around yelling them loudly.
How is your experience with screen sharing?


I used to do that until about 2015.
Even private trackers don’t come close to the coverage of newsgroups. Plus, nzb has the concept of releases, so you don’t have to guess at the quality.
I don’t have an issue with paying, I have an issue with paying for something I don’t want.


Indexers and downloaders are distinct for newsgroups.
Public indexers are not good for Linux ISOs; you need a paid service now. They’re cheap and well worth it. Easynews and NZBGeek are good ones.
I know this is the preferred way to do it now, but I sometimes worry about abstracting where things are configured in an OS that configures everything in a file.
You used to only have to check two places to change a hostname.
Oldmanyellsatsky.jpg


Cgroups is not really a security feature (from what I understand). It is about controlling process priority, hierarchy, and resource limiting (among other things).
With respect, I think you misunderstand what gVisor does, and containerization in general. cgroups2 is the isolation mechanism used by most modern Linux containers, including Docker and LXC. It is similar to the jail concept in BSD, and loosely to chroot. It limits a child process’s access to files, devices, and memory, and is the basis for how subprocesses are prevented from accessing host resources without permission.
gVisor adds more layers of control on top of this system by adding a syscall control plane, preventing a container from reaching functions in the host’s kernel that might not be protected by cgroups2 policy. This lessens the security risk of the host running a cutting-edge or custom kernel, with more predictable results, but it comes with caveats.
gVisor is not a universally “better” option, especially for a homelab, where workloads vary a lot. It comes with an I/O performance penalty and incompatibility with SELinux, and its very strength can prevent containers from accessing newer syscalls on a cutting-edge host kernel.
My original comment was that ultimately, there is no blanket answer for “how secure is my virtualization stack”, because such a decision should be made on a case-by-case basis. And any choice made by a homelabber or anyone else should involve some understanding of the differences between each type.
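For context on how that case-by-case choice plays out in practice: the usual way to wire gVisor into Docker is to register its runsc runtime in /etc/docker/daemon.json (the binary path below assumes a default install location):

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

Individual containers then opt in with `docker run --runtime=runsc …`; everything else keeps running under the default runc, so the trade-offs above can be applied per workload rather than host-wide.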


That’s subjective to security practice. There are more appropriate factors to weigh than blanket statements about a technology’s inherent “security” when deciding the format and shape of virtual software spaces.
in a memory safe language
Ultimately, the implementation is more important than the underlying code when it comes to containers. cgroups2 works the same for gvisor as it does for LXC.


I’ve tried it. It performs poorly.
For context, I’ve also been using ZFS since Solaris.
I was wrong about compression on datasets vs pools, my apologies.
By “almost no impact” (for compression), I meant well under a 1% penalty for zstd, and almost unmeasurable for lz4 fast, with compression efficiency being roughly the same for both. Here is some data on that.
LZ4 compression on modern (post-Haswell) CPUs is actually so fast that it can beat non-compressed writes in some workloads (see this). And that is from 2015.
Today, there is no reason to turn off compression.
I will definitely look into the NFS integrations for ZFS, I use NFS (exports and mounts) extensively, I wonder what I’ve been missing.
Anyway, thanks for this.
With respect, most of this comment is wrong.
Also remember that many permissions like nfs export settings are done on a per filesystem basis
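For anyone who wants to poke at the properties discussed above, a quick sketch (pool/dataset names and the subnet are placeholders, and these need an actual pool to run against):

```shell
# Per-dataset compression (zstd needs OpenZFS 2.0+; lz4 works everywhere)
zfs set compression=zstd tank/media

# See how well it's compressing after some writes
zfs get compressratio tank/media

# ZFS's NFS integration: export the dataset directly,
# instead of editing /etc/exports by hand
zfs set sharenfs="rw=@192.168.1.0/24" tank/media
```

Because these are dataset properties, child datasets inherit them, which is also why export settings end up being per-filesystem.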
OK, well it’s not harming anything, so if you’re game to learn, by all means.
When you look at traffic on a public interface, besides learning what normal traffic to filter out (probes, crawls, etc. from legit sources), you will also run into badly formed TCP traffic:
Martian packets: https://en.wikipedia.org/wiki/Martian_packet IP spoofing: https://en.wikipedia.org/wiki/IP_address_spoofing (I used to have a better resource for this, I’ll try to find it)
How RPC works: https://pentest.co.uk/labs/research/researching-remote-procedure-call-rpc-vulnerabilities/
That should help clarify a lot of what you’ll see in traffic on your segment.
You may also want to briefly read about how CDNs work, you’ll see a lot of akamai and cloudflare traffic too.
Running Suricata on your WAN interface just generates a ton of noise and will be really confusing if you haven’t reviewed packet-inspection alerts before. There’s not a lot of value in it unless you have many users “phoning home”.
Just run it on the lan interface.
Your approach of deny all until something complains is pretty much the most solid way to get a grip on security.
I assess and recommend security practices for a living, and I would say the most important first step is understanding where your data lives and where it goes. Once you know that, the rest is relatively easy with the tools available to us.
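The “deny all until something complains” stance can be sketched as a minimal nftables ruleset (the SSH port is a placeholder; you add an accept rule each time a legitimate service complains):

```
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;

    ct state established,related accept   # replies to traffic we initiated
    iifname "lo" accept                   # loopback
    tcp dport 22 accept                   # whitelist services one at a time
  }
}
```

Everything not explicitly accepted is dropped, so the first symptom of a missing rule is a service that stops working, which is exactly the feedback loop described above.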
I think this is great. Everyone came to this result better for the exchange.