Got a warning for my blog going over 100GB in bandwidth this month… which sounded incredibly unusual. My blog is text and a couple images and I haven’t posted anything to it in ages… like how would that even be possible?

Turns out it’s possible when you have crawlers going apeshit on your server. Am I even reading this right? 12,181 with 181 zeros at the end for ‘Unknown robot’? This is actually bonkers.

Edit: As Thunraz points out below, there’s a footnote that reads “Numbers after + are successful hits on ‘robots.txt’ files” — so it’s a hit count, not scientific notation.

Edit 2: After doing more digging, the culprit is a post where I shared a few wallpapers for download. The bots have been downloading these wallpapers over and over, burning through 100GB of bandwidth in the first 12 days of November. That’s when my account was suspended for exceeding bandwidth (an artificial limit I put on there a while back and forgot about…), which is also why the ‘last visit’ for all the bots is November 12th.

  • Scrubbles@poptalk.scrubbles.tech · +28/-1 · 1 day ago

    Check out Anubis. If you have a reverse proxy it is very easy to add, and the bots stopped spamming after I added it to mine.
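
    A minimal sketch of the wiring, in case it helps: the reverse proxy sends everything to Anubis, and Anubis forwards traffic on to the real site once the challenge is passed. The hostname and ports below are placeholders, not anything from my actual setup:

    ```
    # nginx: route all traffic through Anubis first (placeholder values)
    server {
        listen 80;
        server_name blog.example.com;

        location / {
            # Anubis listening locally; it challenges clients and
            # proxies the ones that pass on to the actual web server.
            proxy_pass http://127.0.0.1:8923;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
    ```

    You then point Anubis itself at your real backend in its own config, so only traffic that solves the challenge ever reaches the site.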

    • K3CAN@lemmy.radio · +3 · 11 hours ago

      I recently added Anubis and its validation rate is under 40%. In other words, more than 60% of the incoming requests are likely bots and are now getting blocked. Definitely recommend.

      • Scrubbles@poptalk.scrubbles.tech · +2 · 11 hours ago

        I run a single server with only me and 2 others or so on it, and I was seeing thousands of requests per minute at times! Absolutely nuts! My cloud bill was way higher because of it. After adding Anubis, traffic dropped down to just our requests, and the bills dropped too. Very, very strong proponent now.

      • Scrubbles@poptalk.scrubbles.tech · +13 · 17 hours ago

        > This dance to get access is just a minor annoyance for me, but I question how it proves I’m not a bot. These steps can be trivially and cheaply automated.

        I don’t think the author understands the point of Anubis. The point isn’t to block bots from your site completely; bots can still get in. The point is to put up a problem at the door to the site. This problem, as the author states, is relatively trivial for the average device to solve; it’s meant to be solvable by a phone or any consumer device.

        The actual protection mechanism is scale: solving the challenge once is trivial, but solving it at scale is costly. Bot farms aren’t one single host or machine; they’re thousands, even tens of thousands, of VMs running in clusters, constantly trying to scrape sites. To them, calculating something that trivial is simple once, but very costly at scale. Say calculating the hash once takes about 5 seconds: easy for a phone. Multiply that by 1,000 scrapes of your site and it’s now 5,000 seconds of compute, roughly an hour and a half. Now we’re talking about real dollars and cents lost. Scraping does have a cost, and having worked at a company that professionally scrapes content, I know they’re aware of it. Most companies will back off from a page that takes too long to load or is too intensive, and that is why we see the dropoff in bot attacks: it’s simply not worth it for them to scrape the site anymore.
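
        To put rough numbers on that back-of-the-envelope math (both figures are the assumptions from above, not measurements):

        ```python
        # Rough cost of the proof-of-work for a scraper, using the
        # assumed numbers from above (not measured values).
        SECONDS_PER_CHALLENGE = 5   # assumed time to solve one challenge
        SCRAPES = 1_000             # assumed page loads of the site

        total_seconds = SECONDS_PER_CHALLENGE * SCRAPES
        print(f"{total_seconds} s = {total_seconds / 3600:.1f} hours of compute")
        # -> 5000 s = 1.4 hours, the "roughly an hour and a half" above
        ```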

        So Anubis is “judging your value” by asking, “Are you willing to put your money where your mouth is to access this site?” For a consumer it’s a fraction of a fraction of a penny in electricity spent for that one page load, barely noticeable. For large bot farms it’s real dollars wasted on my little lemmy instance/blog, and thankfully they’ve stopped caring.

        • deffard@lemmy.world · +3 · 6 hours ago

          The author demonstrated, however, that the challenge can be solved in 17 ms, and that it only needs solving once every 7 days per site. That’s less than a second of compute time per site to send unlimited requests 365 days a year.

          The deterrent might work temporarily until the challenge pattern is recognised, but there’s no actual protection here, just obscurity. The downside is real, however, for the user on an old phone who must wait 30 seconds, or, like the blogger, for a user of a text browser not running JavaScript. The very need to support an old phone is what defeats a compute-based approach: whatever an old phone can solve is always a trivial amount of work for a data center.
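
          Back-of-the-envelope again with the blog post’s 17 ms figure (the site count here is a made-up assumption, just to show the scale):

          ```python
          # Cost to a data center if one 17 ms solve grants 7 days of access
          # per site. SITES is a made-up number for illustration.
          MS_PER_CHALLENGE = 17
          SITES = 100_000

          weekly_seconds = MS_PER_CHALLENGE * SITES / 1000
          print(f"{weekly_seconds:.0f} s/week of compute for {SITES:,} sites")
          # -> 1700 s (~28 minutes) per week for unlimited access to 100,000 sites
          ```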

          • [object Object]@lemmy.world · +1 · edited · 3 hours ago

            > The deterrent might work temporarily until the challenge pattern is recognised, but there’s no actual protection here, just obscurity.

            Anubis uses a proof-of-work challenge to ensure that clients are using a modern browser and are able to calculate SHA-256 checksums. Anubis has a customizable difficulty for this proof-of-work challenge, but defaults to 5 leading zeroes.

            Please tell me how you’re gonna un-obscure a proof-of-work challenge requiring calculation of hashes.

            And since the difficulty is adjustable, you can make the challenge take as long as you want.
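
            For anyone curious, the general shape of such a challenge is easy to sketch. This is the generic SHA-256 leading-zeroes idea, not Anubis’s actual wire format:

            ```python
            import hashlib
            import itertools

            def solve(challenge: str, difficulty: int = 5) -> int:
                """Find a nonce such that sha256(challenge + nonce) starts
                with `difficulty` leading hex zeroes (generic scheme, not
                Anubis's exact protocol)."""
                target = "0" * difficulty
                for nonce in itertools.count():
                    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
                    if digest.startswith(target):
                        return nonce

            # Each extra leading zero multiplies the expected work by 16,
            # which is exactly the knob that makes it take as long as you want.
            print(solve("example-challenge", difficulty=4))  # 4 keeps the demo fast
            ```

            Recognising the pattern doesn’t help a scraper; the only way to produce a valid answer is to grind through the hashes.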