This is my first real dive into hosting a server beyond a few Docker containers on my NAS. I've been learning a lot over the past 5 days; the first thing I learned is that Proxmox isn't for me:
https://sh.itjust.works/post/49441546 https://sh.itjust.works/post/49272492 https://sh.itjust.works/post/49264890
So now I'm running headless Ubuntu and having a much better time! I migrated all of my Docker stuff to my new server, keeping my media on the NAS. I originally set up an NFS share (NAS->Server) so my Jellyfin container could snag the data. This worked at first, then quickly crumbled without warning, and HWA may or may not be working.
Enter the Jellyfin issue: transcoded playback (and direct, doesn't matter) either gives "fatal player error" or **extremely** slow, stuttery playback (basically unusable). Many Discord exchanges later, I added an SMB share (same source folder, same destination folder) to troubleshoot, to no avail, and Jellyfin-specific problems have been ruled out.
After about 12hrs of ‘sudo nano /etc/fstab’ and ‘dd if=/path/to/nfs_mount/testfile of=/dev/null bs=1M count=4096 status=progress’, I’ve found some weird results from transferring the same 65GB file between different drives:
- NAS's HDD (designated media drive) to NAS's SSD = 160MB/s
- NAS's SSD to Ubuntu's SSD = 160MB/s
- NAS's HDD to Ubuntu's SSD = 0.5MB/s
Both machines are cat7a ethernet straight to the router. I built the cables myself, tested them many times (including yesterday), and my reader says all cables involved are perfectly fine. I've rebooted them probably fifty times by now.
NAS (Synology DS923+):
- 32GB RAM
- Seagate EXOS X24
- Samsung SSD 990 EVO

Ubuntu:
- Intel i5-13500
- Crucial DDR5-4800 2x32GB
- WD SN850X NVMe
If you were tasked with troubleshooting a slow network mount between these two machines, what would you do to improve the transfer speeds? Please note that I cannot SSH into the NAS; I just opened a ticket with Synology about it.
Here's the current /etc/fstab after extensive Q&A with different online communities:
NFS mount: 192.168.0.4:/volume1/data /mnt/hermes nfs4 rw,nosuid,relatime,vers=4.1,rsize=13>
SMB mount: //192.168.0.4/data /mnt/hermes cifs username=_____,password=_______,vers=3.>
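The lines got cut off above; for illustration, full entries along those lines usually look something like this (the rsize/wsize values, credentials file path, and uid/gid here are common examples, not necessarily my exact settings):

```
# /etc/fstab sketch: NFS v4.1 mount of the Synology share (option values are illustrative)
192.168.0.4:/volume1/data  /mnt/hermes  nfs4  rw,nosuid,relatime,vers=4.1,rsize=131072,wsize=131072,hard,timeo=600,_netdev  0  0

# SMB/CIFS equivalent, using a credentials file instead of an inline password
//192.168.0.4/data  /mnt/hermes  cifs  credentials=/etc/smb-credentials,vers=3.0,uid=1000,gid=1000,_netdev  0  0
```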
Run iperf between the client and server. What's the network speed and packet loss? What's the latency?
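Something like this, assuming iperf3 is installed on both ends (the IPs are placeholders for your machines' LAN addresses):

```
# On the Ubuntu box: start an iperf3 listener
iperf3 -s

# On the other machine (any second box on the LAN works if the NAS can't run it):
# point the client at the Ubuntu box's LAN IP
iperf3 -c 192.168.0.x

# Quick latency / packet-loss check
ping -c 20 192.168.0.x
```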

That is horrific, that’s ya problem
Right, but how to solve it?
You’ll have to troubleshoot to see why your local network is worse than avian mail
Know any Linux magic to try out?
Can you run 'sudo ethtool <interface>'? It should tell you what the NIC is physically seeing on your Ubuntu machine. Also, maybe just do a generic speed test from your Ubuntu machine to see if it's everything on the NIC or just lateral traffic being impacted.
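For the ethtool part, something like this (the interface name below is a guess; list yours first):

```
# Find the interface name (something like enp3s0 or eno1)
ip -br link

# See what the NIC actually negotiated; you want "Speed: 1000Mb/s" (or better) and "Duplex: Full"
sudo ethtool enp3s0 | grep -E 'Speed|Duplex|Link detected'
```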
"-bash: syntax error near unexpected token `newline'" I'm not familiar with ethtool, but I looked up some commands related to it. Unfortunately, everything I tried gives me "bad command line argument(s)".
Could be anything from a shit cable, to failing network equipment, to a bad driver. Please tell me it's hardwired and not on wifi
Of course, cat7a. Just tested all the cables too
That doesn't look right. What are the two IPs of the machines on your network?
Edit: you must be using containers or something. Don’t use bridge networking if you’re unsure of the performance issues there.
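If you want to double-check, something like this prints the container's network mode (container name is a guess):

```
# Prints "bridge", "host", or the name of a custom Docker network
docker inspect -f '{{ .HostConfig.NetworkMode }}' jellyfin
```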
192.168.0.4 and 192.168.0.44 for NAS and server, respectively. Currently just an idle Jellyfin container. I’m not sure what bridge networking is without looking it up, so I’m assuming that’s not happening here
Why is your iperf run referencing a local 100.X address then?
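You can see which path the kernel is actually picking with something like:

```
# Which route/interface gets used to reach the NAS?
ip route get 192.168.0.4

# What Tailscale thinks it's doing right now
tailscale status

# Tailscale keeps its routes in its own table (52) on Linux
ip route show table 52
```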
Oh yeah, Tailscale. I’ll run iperf without it to compare, but I’ve never had an issue with my tailnet before

Still not great. And I think 'sudo tailscale up --accept-routes' broke my shit; now SSH is failing. I'm calling it a night, I'll report back tomorrow
Well, a 6-7X improvement is something, but you can still see the Tailnet running there.
Honestly, if you don't know networking and routing, don't mess with Tailscale. It breaks shit like this for people who install it on every local machine without understanding how it's used or its purpose, and it's clearly your problem right here, because both you and your tailnet are confused.
I know for a fact your containers are ALSO running Tailscale or something equally not good, because you’ve definitely got a polluted routing table from local route loops, and you’re confused as to what that is, how to prevent it, and why it’s broken.
- Shut it down EVERYWHERE ON YOUR LOCAL NETWORK.
- Make sure your default routes only point to LOCAL ADDRESSES (commands sketched below)
- Recheck your transfer speeds which should be 100MBytes/s+
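Roughly, on the Ubuntu box (and anything else on the LAN running it); the router IP and test-file path are just examples:

```
# Take Tailscale down on this machine (repeat everywhere on the LAN, containers included)
sudo tailscale down

# Sanity-check the routing table: the default route should go via your router's LAN IP
# (e.g. 192.168.0.1), with no LAN traffic pointed at a 100.x address
ip route show

# Then re-run the read test against the NFS mount
dd if=/mnt/hermes/testfile of=/dev/null bs=1M count=4096 status=progress
```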
Interesting. I’ve been using Tailscale for years, this is the first I’ve heard of it causing LAN networking problems. I thought the purpose of Tailscale was to establish a low maintenance VPN for people who won’t/can’t set up a reverse proxy, especially for beginners like myself. Later today I’ll try to clear it out and report back
This is incredibly confusing and formatted oddly, so let me get some clarification:
- What protocol are you using to mount the NAS to the Ubuntu machine?
- What did you use to transfer this slow file over the network? The disk transfer rate would be much faster than the network in any case, so 160MB/s may just be the network max.
- Have you tried other files and methods of transfer, like SSH, rsync, etc.? Try those and post the speeds (something like the commands sketched below).
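For example (paths and the NAS username are placeholders; the scp line only helps if SSH onto the NAS gets sorted out):

```
# Pull a test file off the existing NFS/SMB mount onto local disk (exercises the mount path)
rsync --progress /mnt/hermes/some/testfile /tmp/

# Raw network copy that bypasses NFS/SMB entirely, for comparison
scp admin@192.168.0.4:/volume1/data/some/testfile /tmp/
```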
- NFS4 at first; I've also tried SMB. I don't care which one I end up with, as long as it works efficiently and consistently
- A few things, but my memory is blurry. I definitely used the command in the post, but I started by trying 'cp', then 'rsync'
- I still have to test speeds, but the real issue here is how this is impacting my containers' performance


