I installed Fedora Server and set up OPNsense in a VM. I told NetworkManager on the host to treat the dual-port NIC as unmanaged, and the guest now grabs that NIC and routes traffic to the WAN and my LAN/VLANs. That part is done.
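
In case it helps, marking the ports unmanaged was just a NetworkManager keyfile drop-in, roughly like this (the interface names are placeholders for the two I226-V ports; check the real ones with `nmcli device status`):

    # Tell NetworkManager to leave both NIC ports alone so the guest can have them.
    sudo tee /etc/NetworkManager/conf.d/99-unmanaged-opnsense.conf >/dev/null <<'EOF'
    [keyfile]
    unmanaged-devices=interface-name:enp2s0;interface-name:enp3s0
    EOF
    # Reload NetworkManager so it drops management of those ports.
    sudo nmcli general reload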

Now I want to add a virtual network between the OPNsense guest and the Fedora host so that traffic from the host can reach the guest without passing through the physical network outside the box. I see no reason to make it leave the box just to come right back in again. This is where I'm stuck.

After talking with friends and arguing with ChatGPT hallucinations, I finally just tried winging it in the Cockpit UI and got it working, but it didn't survive a reboot. From what I can tell, NetworkManager and libvirt had a dispute over who managed the bridge. A lot of what I'm finding on this is vague or meant for traffic flowing in the other direction, and I'm struggling to wrap my head around it.
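
My guess is the root issue is that the bridge Cockpit made wasn't persisted anywhere, so after a reboot NetworkManager and libvirt each had their own idea of it. If I understand right, making NetworkManager own it explicitly would look something like this (bridge name and addressing are just what I was trying, not gospel):

    # Persist the host<->guest bridge as a NetworkManager connection so it
    # comes back after a reboot instead of being recreated ad hoc.
    nmcli connection add type bridge ifname br-lan con-name br-lan \
        ipv4.method manual ipv4.addresses 192.168.10.2/24 ipv4.gateway 192.168.10.1
    nmcli connection up br-lan
    # Quick sanity check of who thinks they own what:
    nmcli device status
    virsh net-list --all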

How do I create a virtual network between the OPNsense guest and the Fedora Server host, so that the host and its containers can reach the internet, LAN, and VLANs through it?

  • non_burglar@lemmy.world

    Then you will have to do it with bridging.

    Create a second bridge and bind only the WAN port and OPNsense's physical interface to it. Then add a second interface to OPNsense, but bind that one to the bridge libvirt uses to connect its guests.
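
    Something along these lines, assuming NetworkManager is allowed to manage the WAN port, and with all names as placeholders (br-wan for the new bridge, enp2s0 for the physical WAN port, opnsense for the libvirt domain):

        # New bridge that carries only WAN traffic; no IP on the host side.
        nmcli connection add type bridge ifname br-wan con-name br-wan \
            ipv4.method disabled ipv6.method disabled
        nmcli connection add type bridge-slave ifname enp2s0 master br-wan
        nmcli connection up br-wan

        # OPNsense's WAN vNIC goes on that bridge...
        virsh attach-interface --domain opnsense --type bridge --source br-wan \
            --model virtio --config
        # ...and a second vNIC goes on the network libvirt already uses for its
        # guests (the "default" NAT network / virbr0), so host <-> guest traffic
        # never leaves the box.
        virsh attach-interface --domain opnsense --type network --source default \
            --model virtio --config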

    • muusemuuse@sh.itjust.works (OP)

      Looking deeper into it, this is roughly what it's been falling back to anyway, since passthrough wasn't working even though Cockpit offered it as an option and threw no errors when I tried it. I put the LAN side in a bridge and tied the host's stuff in that way, then put the WAN on a macvtap to the physical WAN port. That works, but performance isn't great. I ran some tests today against bare metal, and while direct access to the NIC certainly improves things, it's still not keeping up.
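
      For anyone curious, the fallback layout boils down to these two vNICs (interface, bridge, and domain names are placeholders):

          # LAN vNIC plugged into the host bridge the host and containers sit on.
          virsh attach-interface --domain opnsense --type bridge --source br-lan \
              --model virtio --config

          # WAN vNIC as macvtap in bridge mode, directly on the physical WAN port.
          printf '%s\n' \
            '<interface type="direct">' \
            '  <source dev="enp2s0" mode="bridge"/>' \
            '  <model type="virtio"/>' \
            '</interface>' > /tmp/wan-macvtap.xml
          virsh attach-device opnsense /tmp/wan-macvtap.xml --config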

      Direct MacBook to cable modem: 916/40
      OPNsense virtualized (with VLANs and rules): 699/41
      OPNsense bare metal (with VLANs and rules): 816/39
      OPNsense bare metal (with VLANs, rules, and hardware offload fully enabled): 824/40

      The only rules in place were the defaults, the rule blocking the VLANs from talking to each other, and the rule passing traffic to WAN. When virtualized, I can't get PCI passthrough, so I was using macvtap interfaces and virtio drivers with 4 queues and 4 pinned vCPU threads.
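
      The 4 queues go on the vNIC's driver element; it's the same macvtap sketch as above with multiqueue added (names still placeholders):

          # Macvtap WAN interface with 4 vhost queues so packet processing can
          # spread across the pinned vCPUs instead of hammering one thread.
          printf '%s\n' \
            '<interface type="direct">' \
            '  <source dev="enp2s0" mode="bridge"/>' \
            '  <model type="virtio"/>' \
            '  <driver name="vhost" queues="4"/>' \
            '</interface>' > /tmp/wan-macvtap-mq.xml
          virsh attach-device opnsense /tmp/wan-macvtap-mq.xml --config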

      The CPU is a Ryzen 5800XT and the NIC is a dual-port Intel I226-V. When virtualized, it was running under Fedora Server with QEMU/KVM (q35), given 8 GB of RAM backed by hugepages, and tested with both 2- and 4-vCPU allocations (all confirmed to be pinned to the same 1 or 2 physical cores as their threads), eventually even giving 4 queues to the virtio driver (it was only claiming 1 before).
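
      The hugepage and pinning side looked roughly like this (host CPU numbers are placeholders; check the SMT sibling layout with `lscpu -e`):

          # Reserve 2 MiB hugepages on the host for the guest's 8 GB of RAM
          # (8 GiB / 2 MiB = 4096 pages); the domain XML also needs
          # <memoryBacking><hugepages/></memoryBacking> for the guest to use them.
          echo 4096 | sudo tee /proc/sys/vm/nr_hugepages

          # Pin the guest's 4 vCPUs to the two threads of two physical cores.
          virsh vcpupin opnsense 0 2 --config
          virsh vcpupin opnsense 1 10 --config
          virsh vcpupin opnsense 2 3 --config
          virsh vcpupin opnsense 3 11 --config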

      Bare metal IS definitely helping, so it looks like I need to swap in a motherboard that can do proper PCI passthrough of the NIC (now that I understand the limitations of IOMMU groups, which the board specs don't tell you about, I hate them all the more). But it still can't hit line rate. There's no IDS, Suricata, or any of the other fanciness turned on yet, so I just don't understand why it's this slow even on bare metal.
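
      For checking whether a board's IOMMU grouping is actually usable, this is the usual loop I've been running:

          # List every IOMMU group and the PCI devices in it; the NIC ports need to
          # sit in their own group(s) for clean VFIO passthrough.
          for group in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${group##*/}:"
            for dev in "$group"/devices/*; do
              echo "  $(lspci -nns "${dev##*/}")"
            done
          done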