I’m currently in the long process of rebuilding my declarative homelab using k3s, ArgoCD and NixOS.

I had previously used Keycloak, but that always seemed massively overkill and way too complex for my purposes. With this rebuild I saw my chance to try out Authentik, which appears to be in good standing with the homelab community.
They have tons of documentation for pretty much anything, which was encouraging. Well, except for the documentation for their Helm charts, maybe…

I started off with version 2025.12.x, am now on 2026.02.x, and have spent most of the weekends in between getting Authentik to even just deploy to the cluster.
It’s partially my fault for initially attempting to use Secrets, but even now, with hardcoded keys in my git repo, the default example chart doesn’t work:

values.yaml
authentik:
  existingSecret:
    secretName: authentik-secret

  postgresql: # None of this gets applied at all so I do it manually below...
    password: "somepasswd"

server:
  replicas: 1

  env: # Manually apply all the configuration values. Why am I using Helm charts again?
    - name: AUTHENTIK_POSTGRESQL__HOST
      value: authentik-postgresql
    - name: AUTHENTIK_POSTGRESQL__USER
      value: authentik
    - name: AUTHENTIK_POSTGRESQL__PASSWORD
      value: "somepasswd"
    - name: AUTHENTIK_POSTGRESQL__NAME
      value: authentik

  route:
    main:
      # ...

postgresql:
  enabled: true

  auth: # And set everything here once again
    username: authentik
    password: "somepasswd"
    postgresPassword: "somepasswd"
    usePasswordFiles: false
    database: authentik

  primary:
    persistence:
      size: 4Gi

I started off with the official example and after all these undocumented changes it still only deploys-ish:

With the defaults, authentik-server would always try to reach the DB at localhost, which doesn’t work in the context of this chart/k8s.
After a while I figured out that the authentik: configuration block doesn’t actually do anything, so I set all the values the chart should have set by hand.

Now the DB connects, but the liveness probe on the authentik-server pod fails. It logs the incoming probe requests but apparently doesn’t answer them (correctly), leading to k8s killing the pod.
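One workaround I’m eyeing is relaxing the probe so the server gets time to finish its migrations before k8s gives up on it. A rough sketch, assuming the chart exposes a server.livenessProbe block (I haven’t verified these key names against my chart version, so check its values.yaml):

```yaml
server:
  livenessProbe:
    # Hypothetical keys; verify against the chart's values.yaml.
    initialDelaySeconds: 60  # give migrations time before the first probe
    periodSeconds: 10
    failureThreshold: 6      # ~1 min of failing probes before the pod is killed
```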

Sorry for the ramble but I’ve hit my motivational breaking point with Authentik.
Since the community seems to like it, I’m left wondering what I’m doing wrong to be having this many issues with it.

Did you people have this much trouble with Authentik and what have you switched to instead?

  • jrgd@lemmy.zip · 2 days ago

    In my case I’m running an external Postgres DB and an external cache, plus a handful of other settings, so I have a decently sized values file. All of the env vars I was looking for are provided by the chart, so I didn’t need to set any directly, only their counterparts in the values file.
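    If it helps, the shape of my values file is roughly this (a sketch with hypothetical hostnames; the authentik.postgresql/authentik.redis keys are what recent chart versions seem to use, so verify against your chart’s values.yaml):

    ```yaml
    authentik:
      postgresql:
        host: postgres.db.svc.cluster.local  # hypothetical external DB host
        name: authentik
        user: authentik
        password: "somepasswd"
      redis:
        host: redis.cache.svc.cluster.local  # hypothetical external cache host

    # Disable the bundled dependencies since I run them externally
    postgresql:
      enabled: false
    redis:
      enabled: false
    ```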

    I don’t use ArgoCD in my case, so I couldn’t really say if it would affect your deployment strategy in any way.

    • Starfighter@discuss.tchncs.de (OP) · edited · 18 hours ago

      Got it working now thanks to your troubleshooting tips. I also found a very neat way to handle secrets from another comment.

      I tend to run a DB instance per service, as that makes backup restoration much easier for me. An idle Postgres sits at around 50MB, which is a cost I’m willing to pay.

      Thank you again for your help :)