Admiral Patrick

I’m surprisingly level-headed for being a walking knot of anxiety.

Ask me anything.

Special skills include: Knowing all the “na na na nah nah nah na” parts of the Three’s Company theme.

I also develop Tesseract UI for Lemmy/Sublinks

Avatar by @SatyrSack@feddit.org

  • 26 Posts
  • 296 Comments
Joined 3 years ago
Cake day: June 6th, 2023


  • Just about anything, as long as you don’t need to serve it to hundreds of people simultaneously. Hell, I once hosted Jellyfin over a 3G hotspot and it managed.

    Pretty much any web-based app will work fine. Streaming servers (Emby, Plex, Jellyfin, etc) work fine for a few simultaneous people as long as you’re not trying to push 4K or something. 1080p can work fine at 4 Mbps or less (transcoding is your friend here). Chat servers (Matrix, XMPP, etc) are also a good candidate.

    I hosted everything I wanted with 30 Mbps upload before I got symmetric fiber.
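
    (Back-of-the-envelope: at roughly 4 Mbps per 1080p transcode, a 30 Mbps upload covers about seven simultaneous streams and still leaves a little headroom for everything else.)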





  • Thanks!

    Mostly, there are three steps involved:

    1. Set up Nepenthes to receive the traffic
    2. Perform bot detection on inbound requests (I use a regex list and one is provided below)
    3. Configure traffic rules in your load balancer / reverse proxy to send the detected bot traffic to Nepenthes instead of the actual backend for the service(s) you run.

    Here’s a rough guide I commented a while back: https://dubvee.org/comment/5198738

    Here’s the post link at lemmy.world which should have that comment visible: https://lemmy.world/post/40374746

    You’ll have to resolve my comment link on your instance since my instance is set to private now, but in case that doesn’t work, here’s the text of it:

    So, I set this up recently and agree with all of your points about the actual integration being glossed over.

    I already had bot detection set up in my Nginx config, so adding Nepenthes was just a matter of changing the behavior of that. Previously, I had just returned either 404 or 444 to those requests, but now it redirects them to Nepenthes.

    Rather than trying to do rewrites and pretend the Nepenthes content is under my app’s URL namespace, I just do a redirect which the bot crawlers tend to follow just fine.

    There are several parts to this, to keep my config sane. Each of them is in an include file.

    • An include file that looks at the user agent, compares it to a list of bot UA regexes, and sets a variable to either 0 or 1. By itself, that include file doesn’t do anything more than set that variable. This allows me to have it as a global config without having it apply to every virtual host.

    • An include file that performs the action if a variable is set to true. This has to be included in the server portion of each virtual host where I want the bot traffic to go to Nepenthes. If this isn’t included in a virtual host’s server block, then bot traffic is allowed.

    • A virtual host where the Nepenthes content is presented. I run a subdomain (content.mydomain.xyz). You could also do this as a path off of your protected domain, but this works for me and keeps my already complex config from getting any worse. Plus, it was easier to integrate into my existing bot config. Had I not already had that, I would have run it off of a path (and may go back and do that when I have time to mess with it again).
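
    For reference, here’s a minimal sketch of what that Nepenthes virtual host could look like. The subdomain, certificate paths, and upstream port below are placeholders - Nepenthes just needs to be reachable on whatever local port you’ve configured it to listen on:

    # Sketch only: serve the Nepenthes tarpit on its own subdomain.
    # Hostname, certificate paths, and the upstream port are placeholders.
    server {
        listen 443 ssl;
        server_name content.mydomain.xyz;

        ssl_certificate     /etc/ssl/content.mydomain.xyz/fullchain.pem;
        ssl_certificate_key /etc/ssl/content.mydomain.xyz/privkey.pem;

        location / {
            # Hand everything to the locally-running Nepenthes instance
            proxy_pass http://127.0.0.1:8893;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }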

    The map-bot-user-agents.conf is included in the http section of Nginx and applies to all virtual hosts. You can either include this in the main nginx.conf or at the top (above the server section) in your individual virtual host config file(s).

    The deny-disallowed.conf is included individually in each virtual host’s server section. Even though the bot detection is global, if a virtual host’s server section does not include the action file, then nothing is done.
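
    To make the wiring concrete, it ends up looking roughly like this (the file paths, server name, and backend port are placeholders for your own layout):

    # nginx.conf - the map lives at the http level and only sets $ua_disallowed
    http {
        include /etc/nginx/conf.d/map-bot-user-agents.conf;
        # ... the rest of your http-level config, including your vhost includes ...
    }

    # A virtual host (in its own include file) that opts in to sending bot traffic to Nepenthes
    server {
        listen 443 ssl;
        server_name myapp.mydomain.xyz;

        # Acts on $ua_disallowed; leave this include out and bot traffic is allowed
        include /etc/nginx/snippets/deny-disallowed.conf;

        location / {
            proxy_pass http://127.0.0.1:8080;  # placeholder backend
        }
    }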

    Files

    map-bot-user-agents.conf

    Note that I’m treating Google’s crawler the same as an AI bot because…well, it is. They’re abusing their search position by double-dipping on the crawler so you can’t opt out of being crawled for AI training without also preventing it from crawling you for search engine indexing. Depending on your needs, you may need to comment that out. I’ve also commented out the Python requests user agent. And forgive the mess at the bottom of the file. I inherited the seed list of user agents and haven’t cleaned up that massive regex one-liner.

    # Map bot user agents
    ## Sets the $ua_disallowed variable to 0 or 1 depending on the user agent. Non-bot UAs are 0, bots are 1
    
    map $http_user_agent $ua_disallowed {
        default 		0;
        "~PerplexityBot"	1;
        "~PetalBot"		1;
        "~applebot"		1;
        "~compatible; zot"	1;
        "~Meta"		1;
        "~SurdotlyBot"	1;
        "~zgrab"		1;
        "~OAI-SearchBot"	1;
        "~Protopage"	1;
        "~Google-Test"	1;
        "~BacklinksExtendedBot" 1;
        "~microsoft-for-startups" 1;
        "~CCBot"		1;
        "~ClaudeBot"	1;
        "~VelenPublicWebCrawler"	1;
        "~WellKnownBot"	1;
        #"~python-requests"	1;
        "~bitdiscovery"	1;
        "~bingbot"		1;
        "~SemrushBot" 	1;
        "~Bytespider" 	1;
        "~AhrefsBot" 	1;
        "~AwarioBot"	1;
    #    "~Poduptime" 	1;
        "~GPTBot" 		1;
        "~DotBot"	 	1;
        "~ImagesiftBot"	1;
        "~Amazonbot"	1;
        "~GuzzleHttp" 	1;
        "~DataForSeoBot" 	1;
        "~StractBot"	1;
        "~Googlebot"	1;
        "~Barkrowler"	1;
        "~SeznamBot"	1;
        "~FriendlyCrawler"	1;
        "~facebookexternalhit" 1;
        "~*(?i)(80legs|360Spider|Aboundex|Abonti|Acunetix|^AIBOT|^Alexibot|Alligator|AllSubmitter|Apexoo|^asterias|^attach|^BackDoorBot|^BackStreet|^BackWeb|Badass|Bandit|Baid|Baiduspider|^BatchFTP|^Bigfoot|^Black.Hole|^BlackWidow|BlackWidow|^BlowFish|Blow|^BotALot|Buddy|^BuiltBotTough|
    ^Bullseye|^BunnySlippers|BBBike|^Cegbfeieh|^CheeseBot|^CherryPicker|^ChinaClaw|^Cogentbot|CPython|Collector|cognitiveseo|Copier|^CopyRightCheck|^cosmos|^Crescent|CSHttp|^Custo|^Demon|^Devil|^DISCo|^DIIbot|discobot|^DittoSpyder|Download.Demon|Download.Devil|Download.Wonder|^dragonfl
    y|^Drip|^eCatch|^EasyDL|^ebingbong|^EirGrabber|^EmailCollector|^EmailSiphon|^EmailWolf|^EroCrawler|^Exabot|^Express|Extractor|^EyeNetIE|FHscan|^FHscan|^flunky|^Foobot|^FrontPage|GalaxyBot|^gotit|Grabber|^GrabNet|^Grafula|^Harvest|^HEADMasterSEO|^hloader|^HMView|^HTTrack|httrack|HTT
    rack|htmlparser|^humanlinks|^IlseBot|Image.Stripper|Image.Sucker|imagefetch|^InfoNaviRobot|^InfoTekies|^Intelliseek|^InterGET|^Iria|^Jakarta|^JennyBot|^JetCar|JikeSpider|^JOC|^JustView|^Jyxobot|^Kenjin.Spider|^Keyword.Density|libwww|^larbin|LeechFTP|LeechGet|^LexiBot|^lftp|^libWeb|
    ^likse|^LinkextractorPro|^LinkScan|^LNSpiderguy|^LinkWalker|msnbot|MSIECrawler|MJ12bot|MegaIndex|^Magnet|^Mag-Net|^MarkWatch|Mass.Downloader|masscan|^Mata.Hari|^Memo|^MIIxpc|^NAMEPROTECT|^Navroad|^NearSite|^NetAnts|^Netcraft|^NetMechanic|^NetSpider|^NetZIP|^NextGenSearchBot|^NICErs
    PRO|^niki-bot|^NimbleCrawler|^Nimbostratus-Bot|^Ninja|^Nmap|nmap|^NPbot|Offline.Explorer|Offline.Navigator|OpenLinkProfiler|^Octopus|^Openfind|^OutfoxBot|Pixray|probethenet|proximic|^PageGrabber|^pavuk|^pcBrowser|^Pockey|^ProPowerBot|^ProWebWalker|^psbot|^Pump|python-requests\/|^Qu
    eryN.Metasearch|^RealDownload|Reaper|^Reaper|^Ripper|Ripper|Recorder|^ReGet|^RepoMonkey|^RMA|scanbot|SEOkicks-Robot|seoscanners|^Stripper|^Sucker|Siphon|Siteimprove|^SiteSnagger|SiteSucker|^SlySearch|^SmartDownload|^Snake|^Snapbot|^Snoopy|Sosospider|^sogou|spbot|^SpaceBison|^spanne
    r|^SpankBot|Spinn4r|^Sqworm|Sqworm|Stripper|Sucker|^SuperBot|SuperHTTP|^SuperHTTP|^Surfbot|^suzuran|^Szukacz|^tAkeOut|^Teleport|^Telesoft|^TurnitinBot|^The.Intraformant|^TheNomad|^TightTwatBot|^Titan|^True_Robot|^turingos|^TurnitinBot|^URLy.Warning|^Vacuum|^VCI|VidibleScraper|^Void
    EYE|^WebAuto|^WebBandit|^WebCopier|^WebEnhancer|^WebFetch|^Web.Image.Collector|^WebLeacher|^WebmasterWorldForumBot|WebPix|^WebReaper|^WebSauger|Website.eXtractor|^Webster|WebShag|^WebStripper|WebSucker|^WebWhacker|^WebZIP|Whack|Whacker|^Widow|Widow|WinHTTrack|^WISENutbot|WWWOFFLE|^
    WWWOFFLE|^WWW-Collector-E|^Xaldon|^Xenu|^Zade|^Zeus|ZmEu|^Zyborg|SemrushBot|^WebFuck|^MJ12bot|^majestic12|^WallpapersHD)" 1;
    
    }
    
    
    deny-disallowed.conf

    # Deny disallowed user agents
    if ($ua_disallowed) { 
        # This redirects them to the Nepenthes domain. So far, pretty much all the bot crawlers have been happy to accept the redirect and crawl the tarpit continuously 
    	return 301 https://content.mydomain.xyz/;
    }
    



  • Yep, been there. Thankfully that particular boondoggle fizzled out at the “Expect me to fix everything for them (despite not being involved in any of the above)” step because I refused. Normally, me refusing wouldn’t fly, but the vendor’s instructions required configuring Remote Desktop Services in a way that clearly and blatantly violated Microsoft’s licensing terms, the non-IT group did not want to pay for the requisite number of license seats, and the vendor insisted you did not need RDS licenses for this scenario (spoiler: you totally do).

    I think the non-IT group still has a contract with [shitty robotic process automation vendor], but we just washed our hands of it and they use the vendor’s cloud version.

    I’ve debated reporting that company to Microsoft for license violations because I just hate them (I have their deployment instructions and emails as proof) but I’ve just stopped caring and am not quite petty enough to do so lol.


  • Most of the requirements are going to be for the database, and that depends on:

    1. How many active users you expect
    2. How many large rooms you or your users join

    I left many of the large Matrix spaces I was in, and mine is now mostly just 1:1 chats or a group chat with a handful of friends. Given that low-usage case, I can run my server on a Pi 3 with 4 GB of RAM quite comfortably. I don’t do that in practice, but I do have that set up as a backup server - it periodically syncs the database from my main server - and it works fine. The bottleneck there, really, is the SD card storage since I didn’t want an external SSD hanging off of it.

    Even when I was active in several large Matrix spaces/rooms, a USFF Optiplex with a quad core i5, 8 GB of RAM, and a 500GB SSD was more than enough to run it comfortably alongside some other services like LibreTranslate.



  • I disabled local thumbnail generation almost a year ago, and things mostly work the same.

    Instead of a local thumbnail image URL for things like news articles that get posted, it will be the direct URL value from the og:image metadata from the source. Usually those load fine, but sometimes they don’t due to CORS settings on their side. Probably only 1-2% of posts have issues, though.

    For image posts that come in via federation (memes, pics, etc), the thumbnail image URL is the same as the post URL. In other words, you’re loading the full-res version in the feed. Since I use a web client that has “card view”, this actually works out better, visually. YMMV whether that’s a drawback for you.

    The only pitfall is that you will lose thumbnails for image posts if an instance goes offline or shuts down.

    I’m sure that does increase load slightly on other instances, but no more than if the remote instance had image proxying turned on. And the full-res version always has to load from the remote instance (even if you have local thumbnail generation enabled). All in all, I’d say the additional load is acceptable given the benefits of disabling local thumbnail generation.

    To mitigate that, in my case anyway, I have my own image proxy/cache in place. My default UI is Tesseract and it’s configured with the image proxy/cache on by default… (I think I saw that Photon is also working on something similar). In this configuration, the first person to scroll past a remote image fetches it directly (via the proxy/cache) and it’s then available locally in the cache for everyone else (unless they’re connecting with a different client that doesn’t use Tesseract’s proxy). Granted, I shut down my instance last year and it’s now just a private testbed for development, but when I did have daily active users (plural), the proxy cache helped.

    Now the only images my instance stores are ones that are uploaded locally.

    Why did I disable local thumbnails?

    • I closed up my instance and didn’t want potentially problematic thumbnails being generated while I wasn’t actively modding it
    • Generated thumbnails go in, but they don’t go out. There’s no way to clean them up later other than the ephemerally generated ones (if someone requests a version in a custom size, for example)
    • Increasing storage costs. Like, I’d be scrolling the feed and see some of the dumbest shitposts while constantly thinking “Ugh, this is costing me money to store a copy”.



  • Basically, the only things you want to present with a challenge are the paths/virtual hosts for the web frontends.

    Anything under /api/v3/ is the client-to-server API (i.e. how your clients talk to your instance) and needs to be obstruction-free. Otherwise, clients/apps won’t be able to use the API. Same for /pictrs, since that proxies through Lemmy and is a de-facto API endpoint (even though it’s a separate component).

    Federation traffic also needs to be exempt, but it’s identified not by routes but by the HTTP Accept request header and request method.

    Looking at the Nginx proxy config, there’s this mapping which tells Nginx how to route inbound requests:

    nginx_internal.conf: https://raw.githubusercontent.com/LemmyNet/lemmy-ansible/main/templates/nginx_internal.conf

        map "$request_method:$http_accept" $proxpass {
            # If no explicit matches exists below, send traffic to lemmy-ui
            default "http://lemmy-ui:1234/";
    
            # GET/HEAD requests that accepts ActivityPub or Linked Data JSON should go to lemmy.
            #
            # These requests are used by Mastodon and other fediverse instances to look up profile information,
            # discover site information and so on.
            "~^(?:GET|HEAD):.*?application\/(?:activity|ld)\+json" "http://lemmy:8536/";
    
            # All non-GET/HEAD requests should go to lemmy
            #
            # Rather than calling out POST, PUT, DELETE, PATCH, CONNECT and all the verbs manually
            # we simply negate the GET|HEAD pattern from above and accept all possibly $http_accept values
            "~^(?!(GET|HEAD)).*:" "http://lemmy:8536/";
    



  • Granted, I don’t think the instance level URL filters were meant to be used for the domains of other instances like I was doing here. They’re more for blocking spam domains, etc.

    e.g. I also have those spam sites you see in c/News every so often in that block list (e.g. dvdfab [dot] cn, digital-escape-tools [dot] phi [dot] vercel [dot] app, etc), so I never see/report them because they’re rejected immediately.

    During one of the many, many spam storms here, admins wanted those filters to stop anything that matched them from federating in, instead of just changing the text to “removed” on the frontend. So it is a good feature to have; it’s just maybe applied too widely.

    Though I think if a user edited their own description to include a widely-blocked URL (no URLs are blocked by default), they’d just be soft-banning themselves from everywhere that has that domain blocked.

    If a malicious community mod edited their communities’ descriptions to include a widely-blocked URL, then yeah, that could cut off new posts coming in to any instance that has that domain blocked (old posts and the community itself would still be available).

    All of those would require instances to have certain URLs blocked. The list of blocked URLs for an instance is publicly available in the getSite API response, so it wouldn’t be hard to game if someone really wanted to. Fortunately, most people are too busy gaming the “delete account” feature right now 🙄.


  • Admiral Patrick@dubvee.org to Fediverse@lemmy.world: Ghost of Lemm.ee?

    The person who cross-posted it was almost certainly from your local instance.

    You only ever interact with your local instance’s copy of any community, even remote ones. If the community belongs to a remote instance that is either offline or has since been de-federated, there’s nothing that prohibits you from interacting with it*. Because lemm.ee is no longer there to federate out the post/comments to any of the community’s subscribers, only people local to your instance will see it.

    *Admins can remove the community and, prior to it going offline, mods can lock it. But if an instance just disappears, you can still locally interact with any of its communities on your instance; the content just won’t federate outside your instance.