• jagged_circle
    3 hours ago

    This is fine. I support archiving the Internet.

    It kinda drives me crazy how normalized anti-scraping rhetoric is. There is nothing wrong with (rate-limited) scraping.

    The only bots we need to worry about are the ones that POST, not the ones that GET.

    • Max-P@lemmy.max-p.me
      1 hour ago

      I had to block ByteSpider at work because it can’t even parse HTML correctly and just hammers the same pages, sometimes accounting for 80% of the traffic hitting a customer’s site and taking it down.
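      Blocking a crawler by its User-Agent string can be sketched as WSGI middleware in Python. This is a minimal illustration, not Max-P’s actual setup; the `bytespider` substring match is an assumption about how the bot identifies itself.

```python
# Hypothetical sketch: reject requests from a misbehaving crawler by
# matching its User-Agent header. The "bytespider" substring is an
# assumption about how this bot identifies itself.
BLOCKED_UA_SUBSTRINGS = ("bytespider",)

def block_bad_bots(app):
    """Wrap a WSGI app so blocked User-Agents get a 403 response."""
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "").lower()
        if any(bad in ua for bad in BLOCKED_UA_SUBSTRINGS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden\n"]
        return app(environ, start_response)
    return middleware
```

      In practice the same match is usually done at the reverse proxy rather than in the application, but the logic is the same.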

      The big problem with AI scrapers is that, unlike Google and traditional search engines, they scrape so aggressively. Even if it’s all GETs, they hit years-old content that’s not cached and use up the majority of the CPU time on the web servers.

      Scraping is okay; using up a whole 8 vCPU instance for days to feed AI models is not. They even actively use dozens of IPs to bypass the rate limits, so they’re basically DDoS’ing whoever they scrape with no fucks given. I’ve been woken up by the pager way too often because of ByteSpider.
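      Rate limiting per subnet rather than per IP is one way to blunt that kind of IP rotation, since addresses drawn from the same block all share one bucket. A minimal sketch in Python, where the /24 grouping, window length, and request cap are all illustrative assumptions:

```python
import ipaddress
import time
from collections import defaultdict

# Assumed policy values for illustration only.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

# Maps a /24 network to the timestamps of its recent requests, so a
# crawler rotating through IPs in one subnet still shares one budget.
_buckets = defaultdict(list)

def allow_request(client_ip, now=None):
    """Return True if this request fits within its subnet's rate budget."""
    now = time.monotonic() if now is None else now
    net = ipaddress.ip_network(f"{client_ip}/24", strict=False)
    window = _buckets[net]
    # Drop timestamps that have aged out of the sliding window.
    window[:] = [t for t in window if now - t < WINDOW_SECONDS]
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False
    window.append(now)
    return True
```

      Determined scrapers also rotate across unrelated networks, so this only raises the cost of evasion rather than eliminating it.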

      My next step is rewriting all the content with GPT-2 and serving it to bots so their models collapse.

    • zod000@lemmy.ml
      1 hour ago

      Bullshit. This bot doesn’t identify itself as a bot and doesn’t rate limit itself to anything like an appropriate level. We were seeing more traffic from this thing than all other crawlers combined.