Cloudflare, the publicly traded cloud service provider, has launched a new, free tool to prevent bots from scraping websites hosted on its platform for data to train AI models.

Some AI vendors, including Google, OpenAI and Apple, allow website owners to block the bots they use for data scraping and model training by amending their site’s robots.txt, the text file that tells bots which pages they can access on a website. But, as Cloudflare points out in a post announcing its bot-combating tool, not all AI scrapers respect this.
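For site owners who do want to rely on robots.txt, the opt-out amounts to a few extra directives. The user-agent tokens below are the ones these vendors document for their training crawlers (GPTBot for OpenAI, Google-Extended for Google, Applebot-Extended for Apple):

```text
# robots.txt — opt out of AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Applebot-Extended
Disallow: /
```

As the article notes, though, compliance is voluntary: a crawler that chooses to ignore robots.txt sees none of this.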

  • schizo@forum.uncomfortable.business · 4 months ago

    I know we hate Cloudflare, but that’s a good feature addition.

    Went to turn it on for the domain covering some of my stuff, and they also directed me to their Radar site, which shows which bots are making the most noise and how much traffic they generate. Not the least bit shockingly, it’s AI bots all the way down.

    If nothing breaks I’m totally leaving this on and Amazon, Google, and OpenAI can all go screw themselves.

    • MSids@lemmy.world · 4 months ago

      Can you educate me on the negatives of Cloudflare?

      My company is on Akamai, who has a pretty solid combined offering of WAF, DNS, and CDN, and yet I still feel like their platform is antiquated and well overdue for a refresh.

      Thinking back to log4j, it was Cloudflare who had the automatic protections in place well ahead of Akamai, whom we had to ask for custom filters. Cloudflare also puts out many articles on Internet events and pushes adoption of emerging best practices, sometimes through heavy shaming.

      • MigratingtoLemmy@lemmy.world · 4 months ago

        Cloudflare’s free CDN offering is a MitM (you use their certificates ONLY to be able to go through their network). On top of that, they control a lot of Internet infrastructure, comparable to Microsoft and Google. I hate all of these companies and specifically use Quad9 until I get my own DNS server running. It probably doesn’t matter to the end user, but I’m happy to see a technical crowd on Lemmy that shares my views on big tech.

        • 𝕸𝖔𝖘𝖘@infosec.pub · 4 months ago

          It can matter to the end user. I had to spoof my user agent, because I was using a beta version of Firefox and Cloudflare thought I was a bot. Sites still don’t load sometimes at work (they just keep cycling through the “checking to make sure you’re a human” bullshit), regardless of browser. It’s a single point of failure for much of the web. Not that long ago (last year, I think), Cloudflare had some bad config files pushed to prod, and about half the web broke. Cloudflare can also arbitrarily block websites (and has done so), since they’re the ones serving the content. In theory, CF is a great service. In practice, they’ve abused it enough that we really shouldn’t trust them again.

  • Malcolm@lemmy.world · 4 months ago

    I’m not much of a programmer and I don’t host any public sites, but how feasible would it be to build an equivalent of Nightshade, but for LLMs, that site operators could run?

    I’m thinking strategies akin to embedding loads of unrendered links to pages full of junk text. Possibly have the junk text generated by LLMs and worsened via creative scripting.

    It would certainly cost more bandwidth but might also reveal more bad actors. Are modern scrapers sophisticated enough to not be fooled into pulling in that sort of junk data? Are there any existing projects doing this sort of thing?

    • GBU_28@lemm.ee · 4 months ago

      Sure, but it adds cost: an OCR scrape, then matching it against the HTML parse.

      As for ideas like IP banning, scrapers already lean heavily on proxies.

      This is an ugly fight

    • Wirlocke@lemmy.blahaj.zone · 4 months ago

      To get more directly to the point, you could use those unrendered dummy links to ban whatever IPs follow them.

      With the vast amounts of training data involved and how curated those datasets are becoming (Llama and Claude are going in that direction), it’s infeasible to actually poison a large model to this degree.
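The trap-link-then-ban idea discussed in this thread can be sketched in a few lines: serve links that no human-facing page ever renders, then treat any client that requests one as a scraper ignoring robots.txt and ban its IP. Everything here (paths, function names, the log format) is illustrative, not taken from any existing project:

```python
# Minimal sketch of the "hidden trap link" idea. Any client that requests
# a trap path is assumed to be a scraper, and its IP goes on a ban list.
# The trap paths themselves are hypothetical examples.
TRAP_PATHS = {"/trap/archive-2013", "/trap/sitemap-old"}

def update_bans(access_log, banned=None):
    """access_log: iterable of (ip, path) pairs from the web server log.
    Returns the set of IPs that requested any trap path."""
    banned = set() if banned is None else set(banned)
    for ip, path in access_log:
        if path in TRAP_PATHS:
            banned.add(ip)
    return banned

# The trap links would be emitted in pages as anchors a real browser
# never displays, e.g.:
#   <a href="/trap/archive-2013" style="display:none" rel="nofollow">archive</a>

log = [
    ("203.0.113.7", "/index.html"),           # normal visitor
    ("198.51.100.9", "/trap/archive-2013"),   # followed a hidden link
]
print(update_bans(log))  # only the scraper's IP ends up banned
```

As GBU_28 notes above, this only raises costs for scrapers rather than stopping them, since banned IPs are cheaply rotated through proxies.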