
Cloudflare and GoDaddy partner to help websites fight back against AI crawlers
Cloudflare and GoDaddy have announced a partnership to address how AI crawlers from Big Tech companies are affecting website traffic and publisher revenue. The move focuses on the growing problem of AI answer engines reusing web content without sending users back to the original source. As part of the deal, Cloudflare’s AI Crawl Control will be integrated into GoDaddy’s hosting platform, letting site owners allow, block, or require payment from AI bots.
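The article doesn't detail how AI Crawl Control's allow/block rules work under the hood, but the per-bot permission idea has a long-standing, voluntary ancestor in robots.txt. As a rough analogy only (the bot names are real crawler user-agents, but the policy itself is illustrative, not Cloudflare's), here is how per-agent allow/block decisions can be expressed and checked with Python's standard library:

```python
from urllib import robotparser

# Illustrative robots.txt with per-bot rules: block two known AI
# crawlers, allow everyone else. AI Crawl Control enforces a similar
# policy at the network edge, with the added option to charge for access.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False: blocked
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True: allowed
```

The key difference, and the reason for edge-level enforcement, is that robots.txt is purely advisory: a crawler can simply ignore it, whereas a hosting platform can refuse the request outright.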
The partnership is a response to the rise of AI answer engines like Google's AI Overviews, which increasingly give users direct answers instead of linking back to websites. That shift has heightened tensions between content creators and AI companies: publishers see less traffic and significantly weaker returns while their content continues to be used, often without credit.
Cloudflare and GoDaddy are also backing standards such as Agent Name Service and Web Bot Auth, which verify an AI agent's identity through cryptographically signed credentials. Alongside audit logs that give site owners more visibility into bot activity, the broader goal is a permission-based system in which identity, access control, and compensation help protect human-created content online, which, in my view, feels more necessary than ever.
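Web Bot Auth as proposed builds on HTTP message signatures with asymmetric keys, so a site can verify a bot's identity without trusting the spoofable User-Agent string. The real scheme uses public-key cryptography; the sketch below substitutes a symmetric HMAC (so it runs with only the standard library) to show the verify-before-serve flow. All function and agent names here are hypothetical, not Cloudflare's API:

```python
import hashlib
import hmac

# SIMPLIFIED STAND-IN: Web Bot Auth uses asymmetric HTTP message
# signatures (the bot publishes a public key); an HMAC over a shared
# key demonstrates the same verification flow with stdlib only.
SHARED_KEY = b"demo-key-for-illustration"

def sign_request(agent_name: str, path: str) -> str:
    """Bot side: sign its claimed identity plus the requested path."""
    message = f"{agent_name}:{path}".encode()
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify_request(agent_name: str, path: str, signature: str) -> bool:
    """Site side: recompute and compare in constant time."""
    expected = sign_request(agent_name, path)
    return hmac.compare_digest(expected, signature)

sig = sign_request("ExampleAIBot", "/article")
print(verify_request("ExampleAIBot", "/article", sig))  # True: identity checks out
print(verify_request("SpoofedBot", "/article", sig))    # False: claimed name doesn't match
```

A verified identity is what makes the rest of the system workable: allow/block/charge decisions and audit logs are only meaningful if a bot can't trivially impersonate another agent.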

Comments
It's basically the end of the free internet, and the ubiquitous Cloudflare captcha is already a nail in the coffin.
But putting the internet behind a paywall is not news, and ever since the inception of Google's search engine, websites have complained about crawlers. Google has even been sued over Google News because it listed articles from different sources in a single feed (and made money from it, even if publishers were also getting money from the visibility Google News provided).
Now some publishers are suing the Internet Archive to prevent their content from being used to train AI, which also prevents websites' edit histories from being accessible for analysis, for journalists, or in case the website disappears.
Since the largest current use of AI is as a natural-language search engine, crawling websites for up-to-date content is obviously important. Half the problem would be fixed if AI could consistently cite its sources, even if AI is just a probabilistic black box that doesn't understand what it says.
But there will always be companies extracting enormous profit from others' work (app stores, video game mods, ads, trackers), and we no longer live in the golden age of the internet, when anyone could get rich by creating a simple website (e.g. Facebook). That doesn't mean we have to give up on a decentralized, free-to-access internet. In the current era of age verification, we'll soon be forced to reveal our true identity (through, at best, cryptographic challenges) to prove we are not bots, not minors, not foreigners, and not using a VPN.
cautiously optimistic about this
(UserPower's comment above is actually more thoughtful ^)
Seems great, because AI is supposed to assist, not replace everything.