> CloudFlare is a very helpful service if you are a website owner and don’t want to deal with separate services for CDN, DNS, *basic DDOS protection* and other (superficial) security needs
A lot of people try to downplay Cloudflare's DDoS protection, but it has saved me so many times over the years. I honestly don't think there exists a good solution that someone without a lot of money and resources can leverage besides Cloudflare.
It's almost dismissed in the first sentence as "basic DDOS protection", as if there were any other company that provides an ironclad solution besides Cloudflare, especially for free for a tiny niche community. There is none that I am aware of.
This article is from 2016, and it should be noted that things have significantly changed since then. I no longer have to click traffic lights and motorcycles, I just have to wait a few seconds. I'm in Vietnam, so my IP gets flagged for checks a lot, but they all pass automatically in a few seconds.
The only time I still get asked to click motorcycles is not Cloudflare, it's Google. They absolutely hate when you try to do a search in an incognito window and will give you an unpassable captcha until you give up and use DDG instead.
Today I looked up a word definition on tfd.com (usually a lightning-fast website), and it took 3 Cloudflare screens of 10 seconds each to load. Why? Because I'm in South East Asia? I aborted the mission, pasted the word into a search engine, and had a definition in <5 seconds. Sad to see some of my favourite sites becoming unusable.
The shorter the average interaction with the site, the worse the burden cloudflare becomes. E.g. looking up a definition is usually a 5-10 second job, Cloudflare can make it take almost an order of magnitude longer.
In defence of motorcycles and traffic lights: those captchas are annoying too, but they do help humanity in a tiny, tiny way. By contrast, watching a Cloudflare loading spinner is stupefyingly useless.
(apologies for ranting. I find it disproportionately irritating even though it's only a few minutes per day, possibly due to the sheer repetition involved).
From Germany: Two redirects, one for tfd.com, one for the redirected www.thefreedictionary.com. That's the choice -- and fault -- of the domain and webserver owners to have this full redirect instead of serving from the short domain directly.
>That's the choice -- and fault -- of the domain and webserver owners to have this full redirect instead of serving from the short domain directly.
You should choose one so that things like caching will work properly; search engines also really want you to keep a single domain hostname for the same content.
Yup, the CloudFlare CAPTCHAs are a nuisance, but they're so much better than the Google reCAPTCHAs they replaced. Those things must be designed to make you give up.
I didn't realize the article was from 2016. I still have to click motorcycles and traffic lights sometimes, so nothing has changed at all! I didn't know they had been using those motorcycles and traffic lights for almost 10 years already.
Blame the devs as well: lots of useless junk code and libraries for things that could've been a couple of lines of code end up bloating a site and making it slow -- and then they need a CDN and caching solution like Cloudflare. 4/5 of the web wouldn't need any of it (if their sites were optimized from the start).
DDOS attacks do happen, but considering the size of the web, the chance one will affect you is very low. AI bots are more of a stress test than a DDOS; figure out your bottlenecks and fix them.
E.g. for a frontend, give yourself a budget of 1MB for a static site and 2MB for a dynamic one and go from there.
All of these problems would go away if we had micropayments. So that the user could pay for the resources they use.
The user would know that each pageview is $0.001.
The website owner would know each pageview pays for itself.
We probably could get there with some type of crypto approach. Probably one that is already invented, but not popular yet. I don't know too much about crypto payments, but maybe the Bitcoin Lightning network or a similar technology.
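As a toy sketch of that arithmetic (deliberately ignoring the hard parts: settlement, fraud, and the payment rail itself; the `Wallet` class and all names here are my own illustration, only the $0.001 price comes from the idea above):

```python
# Each pageview draws its price from a prepaid balance. Amounts are integer
# micro-dollars so repeated tiny charges don't accumulate float drift.
PRICE_PER_VIEW = 1_000  # $0.001 expressed in micro-dollars

class Wallet:
    def __init__(self, micro_dollars):
        self.balance = micro_dollars

def serve_page(wallet, render_page):
    """Charge one pageview and render it; refuse once the balance runs out."""
    if wallet.balance < PRICE_PER_VIEW:
        return "402 Payment Required"  # the long-reserved HTTP status code
    wallet.balance -= PRICE_PER_VIEW   # settled locally; batched to the rail later
    return render_page()
```

A real system would batch these local debits and settle them periodically, since settling every request individually would cost more than serving the page.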
Processing a micropayment takes more resources than serving a simple Web page, even if the payment fails. You would have to put in similar filters, or you would get DoSed using the micropayment service.
The incentive to send http requests is that data comes back. That's why the storm of scrapers hurts website owners. They gather the data and give nothing back.
What would be the incentive to send failing payment requests?
To break the site. But you're right that a lot fewer people will probably want to break it than to scrape it, and that stuff like CAPTCHAs is usually more about the "scraping" case. So basically a mistake on my part.
Genuinely curious, is there a way of tuning how this protection is triggered? If there is, perhaps filter out those ISPs/countries from which most attacks originate? Not a Cloudflare user myself -- for me it's mostly been a nuisance.
Cloudflare offers more options to paid customers than to free customers, but with basic Cloudflare Rules you can usually configure your firewall rules to your liking.
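For example (a sketch from memory, not authoritative -- check Cloudflare's current Rules-language docs for the exact field names; the country codes are placeholders), a custom rule with its action set to "Managed Challenge" might use an expression like:

```
(ip.geoip.country in {"XX" "YY"}) and not cf.client.bot
```

That would challenge only traffic from the listed countries while exempting verified crawlers.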
So, guilty by default, right?
We developed human rights and laws through centuries of debate, pain, suffering, revolutions, and fights, just to end up with mega-corps doing whatever they feel is right, externalizing the trade-offs onto innocent people.
So what's to be done? Some people are being adversely (and probably unfairly, maybe even unjustly) affected by the actions of people that they share a geographic location with. I'm not convinced that many people desire that, but nobody much wants systematic disruption to a shared resource that has (perhaps regrettably) become economically and socially important, either. And, at its basic level, the net is geography: wires and data centres and peering points, as well as national laws, companies, agreements, etc.
What's the better solution? I certainly don't know.
I can't use DigiKey without getting silent CF reCAPTCHA challenges for their CDN that break images in a non-obvious manner. Also, I get reCAPTCHA and CF ray challenges almost constantly for every website.
It is not CloudFlare that is ruining the Internet, but the spammers and attackers. On a second level, there is the fact that catching and punishing them is impractical or even impossible, depending on their location.
Businesses were perfectly fine accepting the low security of 1990s email, webservers, and all the other configurations and software. They did not suddenly, out of nowhere, ask for more restrictions (such as email sending being restricted to the server "officially responsible" for that domain -- it used to be that you could do the same as with physical mail, where you can drop letters into any mailbox with a "From" address that is not in the same city as the mailbox). They certainly did not volunteer to make everything much more difficult -- and expensive -- to set up and use. It also means a lot more work for their IT staff and a lot more user problems to respond to.
All these annoying restrictions were forced to be implemented by attacks of all kinds.
Because it is so difficult, compromises needed to be made. CF's methods are of course full of them, such as taking countries and IP ranges into account. Feel free to make practical, implementable, and affordable suggestions for alternative solutions. You may even get a reward from CF if you can come up with something good that allows them to cut back on restrictive policies while at least maintaining the current level of security. It is in the interest of CFs customers to be as accessible as possible, after all.
I run a little wiki and, holy shit, the bots have gotten advanced. Like, it used to be no big deal, then like a year and a half ago, it exploded. It's basically quadrupled the costs for me to run the site. It's not even worth it to ban IPs anymore, because they are routing their traffic through different IPs -- in different countries -- just to scan my site, repeatedly, every damn day. It's an obscure golf course wiki, but no... better check every other day to see if anything has changed!
At the same time, the bots are dumb as hell. I have honey pots that basically are as simple as "if you visit this obscure, hidden URL you're banned" or "if the same obscure page for a course is visited for four different courses in a row, then ban all the IPs that were part of that." But they keep coming... like, an infinite number of IPs. I genuinely don't want to use Cloudflare, but I understand why people do. It's absolutely crazy out there.
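In sketch form, that honeypot logic fits in a few lines of middleware. Everything here is illustrative (the paths, the threshold, the function names), and the "four different courses in a row" rule is simplified to per-IP tracking -- which, as noted, is exactly what a fresh IP per visit defeats:

```python
from collections import defaultdict

HONEYPOT_PATH = "/internal/do-not-crawl"  # hidden URL no human ever sees
OBSCURE_SUFFIX = "/scorecard-history"     # an obscure per-course page
BAN_THRESHOLD = 4                         # distinct courses before banning

banned = set()
obscure_hits = defaultdict(set)           # ip -> set of course slugs visited

def handle_request(ip, path):
    """Return True if the request should be served, False if the IP is banned."""
    if ip in banned:
        return False
    if path == HONEYPOT_PATH:
        banned.add(ip)                    # instant ban: only a crawler finds this
        return False
    if path.endswith(OBSCURE_SUFFIX):
        course = path.removesuffix(OBSCURE_SUFFIX)
        obscure_hits[ip].add(course)
        if len(obscure_hits[ip]) >= BAN_THRESHOLD:
            banned.add(ip)                # nobody browses 4 obscure pages in a row
            return False
    return True
```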
I have heard great things about Anubis, although I have not needed to use it myself:
https://github.com/TecharoHQ/anubis
Thank you for this. I will definitely look into it. Anything that will help stem the fire hose of bullshit is a good thing.
> It is not CloudFlare that is ruining the Internet, but the spammers and attackers.
Spammers have been around since forever and it used to be the webmaster/sysadmin's responsibility to deal with spam in a way that would not hinder user experience. With Cloudflare all that responsibility is aggressively passed on to the user, cumulatively wasting _years_.
As for attackers, I wonder if Cloudflare publishes data showing how many of the billions of websites it "protects" have experienced a significant attack. They don't offer free protection to save the internet, but rather for control -- and no single company should have this much control.
> Spammers have been around since forever
Is the fallacy here not obvious? Yes, spammers have been around since forever, but it's not the same amount of spammers. Whether it's two spammers or two million spammers does make a difference.
I think we're long past peak spam. A lot of them seem to have given up due to the rise of SPF and DKIM, and also because people don't really use email so much anymore as a serious form of communication.
I remember some clients in the mid 2000s. They got several spam emails per minute on some accounts. Not kidding. I haven't seen anything like that in recent years.
I've been on the Internet since 1992, directly connected at home (student dorms at the time) via Ethernet cable and university ATM backbone since 1994.
At that time I was an admin of said student network, and at the same time built TCP/IP based network and email infrastructure at a subsidiary of a large German company as a side job.
So I was an admin of routers, switches, various services (email, Usenet server, webservers, fax server).
Funny enough -- we only added a firewall in front of the student network to protect against our own students' experiments rather than against outside intrusions, at least initially (for example, one person setting up their own Usenet server brought down DNS by flooding it with queries)!
We never had any problems with spam or attackers. "you just didn't notice the attacks!" - NO. When you go online today you get an eternal stream of automated intrusion attempts, visible in all your log files.
Today does not even remotely compare with the easy-going Internet of the 1990s.
Usenet, forums, email -- they were all very much usable with minimal or zero spam, and very basic user management. Today, with a basic setup like we used to have, you would be chock-full of spam shortly after putting such a server online.
The responsibility is passed to Cloudflare, and that's the point. Not every site can make a capable solution by themselves.
The responsibility now lies with the user, who has to click through confirmations to prove they are human, making their experience a lot worse. That has been my experience over the last ten years.
> It is in the interest of CFs customers to be as accessible as possible, after all.
But since in reality there is friction, there is no magic mechanism by which that interest forces CF to implement a better system. For example, the customers might not have enough knowledge or tech expertise to understand that they're losing 1% of visitors to crude CF filters, let alone to ask for a fix.
It is, but do they even know about the problems?
The data Cloudflare shows people is the number of requests it "protected" you from and the number of requests it thought legit. There is no indication of the number of false positives and, IIRC, of the number of people asked to pass a captcha. The wording implies zero false positives, and I think many people simply assume it's negligible.
> It is, but do they even know about the problems?
No, that's what I said - they may lack knowledge.
> It is not CloudFlare that is ruining the Internet, but the spammers and attackers.
"Your face ran into my fist!"
> It is in the interest of CFs customers to be as accessible as possible, after all.
Well this is where your argument goes a little wrong IMO. When you're on something more niche (eg Firefox on Linux) they just don't care as much about making it work for you because there's so few of us blocked in the process.
And this problem should really be solved with a proper solution, not this fiddly black-magic ruleset stuff. The email thing you mention is a good example: DKIM and SPF are good things that make email more secure in an understandable way. Specifying your legit mail handlers is not a workaround, it's good security. In some ways Altman has a good idea with his WorldCoin eyeballs, but I don't support it for obvious reasons: I don't want my internet identity tied to a single tech bro and some crypto. If we do this kind of thing it has to be a proper government or NGO effort with proper oversight and an appeals process.
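For reference, "specifying your legit mail handlers" is literally a few DNS TXT records. These are illustrative values only -- example.com, the selector, the include target, and the truncated key are all placeholders:

```
example.com.                       TXT "v=spf1 mx include:_spf.mailhost.example -all"
selector1._domainkey.example.com.  TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB..."
_dmarc.example.com.                TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

SPF names which servers may send for the domain, DKIM publishes the signing key, and DMARC tells receivers what to do when neither checks out.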
I've tried to make my Linux Firefox identify as Edge on Windows, and that makes it a lot better on some sites (Microsoft especially breaks a lot of M365 functions on purpose if you're not using the "invented here" browser). Many sites don't give me captchas then. But in some cases Cloudflare gets even nastier and blocks me outright, which is really annoying. If I identify as Linux, a lot more sites break, but Cloudflare sticks with captchas.
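For anyone wanting to try the same: the pref involved is Firefox's `general.useragent.override` (a real pref), and putting it in a `user.js` in the profile directory makes it persistent. The UA string below is just an example Edge-on-Windows value -- adjust the version numbers to something current:

```js
// user.js -- loaded at every Firefox start; overrides the User-Agent header globally
user_pref("general.useragent.override",
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36 Edg/124.0.0.0");
```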
Anyway I think the age of the captcha is soon over anyway. AI will make it unviable.
> All these annoying restrictions were forced to be implemented by attacks of all kinds.
Ps it's not always attacks but also to block things that are good for consumers but bad for the sites' business model. Like preventing screen scraping which can legit help price comparison sites.
>It is not CloudFlare that is ruining the Internet, but the spammers and attackers
That's unaccountability thinking. If I have pests in my rosegarden and as a reaction I napalm the backyard of everyone in my neighbourhood, that is not the bugs' fault.
I have almost the same experience. I'm not running my own ISP and I'm not in a country known for originating DDoS attacks (Sweden), yet just using Firefox on Linux seems to be enough to be forced to click on traffic lights many times an hour. If I'm using Mullvad VPN, that accelerates to almost once a minute. CloudFlare claims to support Privacy Pass, but their extension implementing it seems to do absolutely nothing.
> I'm not running my own ISP and I'm not in a country known for originating DDoS attacks (Sweden), yet just using Firefox on Linux seems to be enough to be forced to click on traffic lights many times an hour.
I'm in the same situation. Linux, Firefox, Sweden, with a residential IP that has been mine for weeks/months. Who's massively DDoS'ing with residential Telia IPs?!
You know, after reading your comment I decided to install Chromium and try it for a few minutes, and you're absolutely right. It did not ask for a captcha once, on the same websites where Cloudflare always asks me for one on Firefox. I had thought this was common to all browsers; after finding this out, I am annoyed.
While Chrome users should feel a shiver going down their spine.
Why would they? They're obviously on the 'right side of history'. \hj
Chrome browsers don't send some specific handshake, but while browsing other sites they help gather enough evidence that this is a human-operated piece of software.
Same situation except for Linux: in Sweden, macOS, and Firefox.
The difference is: I can't get past Cloudflare's captcha for the past 2-3 years (on Firefox), have to use Chrome for the few sites I do need/want to see behind this stupid wall.
By now I've sent hundreds of feedback through their captcha feedback link, I keep doing it in the hopes at some point someone will see those...
Agreed. Same with Firefox on FreeBSD. Constant captchas. It identifies as Linux, by the way (it seems to be compiled that way by the maintainers), which is probably for the best (a 2% desktop-marketshare OS likely fares better here than a 0.01% one).
I fully agree. It's not only the waste of time when you have to confirm you're a human (that adds up to multiple hours per month).
It's also the complete blockage of older or less mainstream systems, which sometimes can no longer access critical websites at all because the Cloudflare check blocks things entirely when the "browser is out of date" or not on their whitelist. This causes excessive discrimination against poorer folks who can't afford upgrading to newer systems that are still eligible to pass Cloudflare's "grace".
Moreover, the most hilarious thing here is that Turnstile is easily bypassed by "patchright" (a patched Playwright runtime) + xvfb + a good residential IP pool. So it's hurting real users while not protecting against bots.
That is still more of an ask than what most IoT volumetric attacks can do. It’s like saying Turnstile is bypassed by paying a human to do the captcha.
(Note I share your sentiment, however)
Is there any data that supports this suggestion that users with older devices are actually being discriminated against? (What % of users are actually on older devices incapable of upgrading to browser versions supported by Cloudflare?)
I just find it hard to believe users are actually getting denied access because their devices are old. Surely you can still run new versions of Chrome and Firefox on most things [1].
——————
[1] Don't get me wrong, I use Safari and I find it inflammatory when a site tells me to use a modern browser because it doesn't support Safari (the language more so). But I wouldn't call it discrimination, seeing as I have the option to run Firefox/Chrome from time to time.
What are the symptoms of being shadowbanned? I see an awful lot of "click here to prove you are human" boxes; I click them, the page reloads, and I'm left with the captcha again. It's been very, very frustrating.
I wish there were more information and guides being spread about free and open source systems worldwide; there is tremendous potential in upgrading "end of life" systems to a Linux-based operating system. That way we could avoid unfathomable amounts of e-waste being dumped for no other reason than not being commercially viable anymore, and poor people could keep using their computers.
Without a market CloudFlare wouldn't be able to ruin the internet. You can thank all of the incessant AI bots for that. I can't even browse GitHub anymore without logging in.
Cloudflare doesn't need AI to survive as a business. There are more than enough DDoS attacks to protect from.
Yet it wasn't until AI grifters started aggressively scraping everything that sites started turning to CloudFlare (and the like) en masse.
Don’t forget that the main issue isn’t the initial data scrape, but rather the fact that prompting an LLM agent like Claude Code can amplify into 10 requests as it tries to answer your question.
Cloudflare was huge before AI bots became a problem.
OP didn’t put this in the title but the article is from 2016. Turns out a lot has changed in the last decade and I think it’s likely that the article should be updated on what it’s like right now.
Oh this is still very true! I am from Bangalore, India. There are sites that outright block me. And in a day, I encounter at least 20-25 cases where I need to click the "human checkbox" due to my region or IP. On mobile it's worse. All sites that have "strict" mode on will either block me or show the "human checkbox".
Even on sites that I manage with Cloudflare, I see the same. Even with relaxed mode on, if I visit the site via mobile, it can trigger the Cloudflare human validation.
> Turns out a lot has changed in the last decade and I think it’s likely that the article should be updated on what it’s like right now.
Yes, every one-pager running on Vercel/Netlify sits behind Cloudflare now because no one wants to risk an insane cloud bill in case of an attack. People have become hopelessly dependent on managed cloud services.
However, to be fair, there's less captcha solving nowadays, since they introduced their one-click challenge (but that's not always the case).
s/is ruining/has ruined/ ?
g
More recently: https://news.ycombinator.com/item?id=42953508
> CloudFlare is a very helpful service if you are a website owner and don’t want to deal with separate services for CDN, DNS, *basic DDOS protection* and other (superficial) security needs
A lot of people try to downplay Cloudflare's DDoS protection, but it saved me so many times over the years. I honestly don't think there exists a good solution that someone without a lot of money and resources can leverage besides Cloudflare.
It's almost dismissed in the first sentence as "basic DDOS protection" as if there is any other company that provides an ironclad solution besides cloudflare, especially free for a tiny niche community. There is none that I am aware of.
One of the worst issues is when Cloudflare starts asking an RSS reader to verify that it is human.
This article is from 2016, and it should be noted that things have significantly improved since then. I no longer have to click traffic lights and motorcycles, I just have to wait a few seconds. I'm in Vietnam, so my IP gets flagged for checks a lot, but they all pass automatically in a few seconds.
The only time I still get asked to click motorcycles is not Cloudflare, it's Google. They absolutely hate when you try to do a search in an incognito window and will give you an unpassable captcha until you give up and use DDG instead.
Today I looked up a word definition on tfd.com (usually a lightning-fast website), and it took 3 Cloudflare screens of 10 seconds each to load. Why? Because I'm in South East Asia? I aborted mission, pasted the word into a search engine, and had a definition in <5 seconds. Sad to see some of my favourite sites becoming unusable.
The shorter the average interaction with the site, the worse the burden cloudflare becomes. E.g. looking up a definition is usually a 5-10 second job, Cloudflare can make it take almost an order of magnitude longer.
In defence of motorcycles and traffic lights: those captchas are annoying too, but they do help humanity in a tiny, tiny way. By contrast, watching a Cloudflare loading spinner is stupefyingly useless.
(apologies for ranting. I find it disproportionately irritating even though it's only a few minutes per day, possibly due to the sheer repetition involved).
Archive.org seems to work well as proxy for tfd.
add tfd = http://web.archive.org/web/2026if_/https://www.thefreedictio... as a search engine. (This is our internet now.)
Woah, yeah just tried tfd.com and got 10 CF redirects, still not loading.
From Germany: Two redirects, one for tfd.com, one for the redirected www.thefreedictionary.com. That's the choice -- and fault -- of the domain and webserver owners to have this full redirect instead of serving from the short domain directly.
>That's the choice -- and fault -- of the domain and webserver owners to have this full redirect instead of serving from the short domain directly.
You should choose one so things like caching will work properly, also search engines really want you to keep to a single domain hostname for the same content.
Yup, the CloudFlare CAPTCHAs are a nuisance, but they're so much better than the Google reCAPTCHAs they replaced. Those things must be designed to make you give up.
I didn't realize the article was from 2016. I still have to click motorcycles and traffic lights sometimes. So nothing has changed at all! I didn't know they were using those motorcycles and traffic lights for almost 10 years already
Blame the devs as well: lots of useless junk code and libraries for things that could have been a couple of lines of code end up bloating a site and making it slow -- then they need a CDN and caching solution like Cloudflare. 4/5 of the web wouldn't need any of it (if their sites were optimized from the start). DDoS attacks do happen, but considering the size of the web, the chance one will affect you is very low. AI bots are more of a stress test than a DDoS; figure out your bottlenecks and fix them.
E.g. for a frontend, give yourself a budget of 1MB for a static site and 2MB for a dynamic one, and go from there.
All of these problems would go away if we had micropayments. So that the user could pay for the resources they use.
The user would know that each pageview is $0.001.
The website owner would know each pageview pays for itself.
We probably could get there with some type of crypto approach. Probably one that is already invented, but not popular yet. I don't know too much about crypto payments, but maybe the Bitcoin Lightning network or a similar technology.
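To make the economics concrete, here is a back-of-the-envelope sketch of the pageview-pricing idea above. All figures are hypothetical illustrations (the $0.001 price comes from the comment itself; the traffic numbers are made up), not real payment-network pricing:

```python
# Hypothetical micropayment economics: every pageview pays for itself.
# Price per pageview taken from the suggestion above; traffic figures
# are illustrative assumptions.

PRICE_PER_PAGEVIEW = 0.001  # dollars per pageview, as suggested above

def monthly_revenue(pageviews_per_month: int) -> float:
    """Gross monthly revenue if every pageview paid the flat micro-price."""
    return round(pageviews_per_month * PRICE_PER_PAGEVIEW, 2)

# A small wiki doing ~500k pageviews/month would gross $500 --
# and a scraper hammering it a million times would owe $1,000,
# which is exactly the incentive flip the comment is describing.
print(monthly_revenue(500_000))    # 500.0
print(monthly_revenue(1_000_000))  # 1000.0
```

The point of the sketch is the incentive reversal: under flat per-request pricing, heavy scraping stops being free for the scraper and starts funding the site it burdens.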
Processing a micropayment takes more resources than serving a simple Web page, even if the payment fails. You would have to put in similar filters, or you would get DoSed using the micropayment service.
The incentive to send http requests is that data comes back. That's why the storm of scrapers hurts website owners. They gather the data and give nothing back.
What would be the incentive to send failing payment requests?
To break the site. But you're right that a lot fewer people will probably want to break it than to scrape it, and that stuff like CAPTCHAs is usually more about the "scraping" case. So basically a mistake on my part.
I believe the problem is not really Cloudflare or another cloud provider.
The fact is internet was built around the idea that everything should be decentralized in order to be resilient.
Resilient to attacks, resilient to outages or any form of censorship.
So, each time Amazon, Cloudflare (...) fails, it reminds us that nobody likes SPOFs.
What is a better option for website owners? We don't want to keep people out, we want to keep attackers out.
Genuinely curious, is there a way of tuning how this protection is triggered? If there is, perhaps filter out those ISPs/countries from which most attack originate? Not a cloudflare user myself -- for me it's mostly been a nuisance.
I use cloudflare and I block whole countries because the bots and spam are so bad from places like Singapore.
Cloudflare makes region blocking very easy.
At least to me, cloudflare has never shown any fine tuning options.
Cloudflare offers more options to paid customers than to free customers, but with basic Cloudflare Rules you can usually configure your firewall rules and presets to your liking.
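For anyone wondering what that tuning looks like in practice, Cloudflare's custom rules are written in a filter-expression language. The snippets below are hypothetical examples of that style; field names and available actions have changed over time and differ by plan, so check the current dashboard/docs before copying anything:

```
# Challenge (rather than hard-block) traffic from a country you see
# heavy bot traffic from -- e.g. the Singapore example above:
(ip.src.country eq "SG")
#   -> action: Managed Challenge

# Exempt an RSS feed path from challenges so feed readers aren't
# asked to prove they're human:
(http.request.uri.path eq "/feed.xml")
#   -> action: Skip (security checks)
```

The general pattern is: an expression matching the request, paired with an action (Block, Managed Challenge, Skip, etc.), evaluated in order.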
Should probably have (2016)
(2016)
Some previous discussions:
2019: https://news.ycombinator.com/item?id=21169798
2016: https://news.ycombinator.com/item?id=11711995
(The author runs a small ISP in an unspecified country in SEA and complains about many CF captchas.)
This may be one reason:
https://blog.cloudflare.com/ddos-threat-report-2025-q3/
Top 10 largest sources of DDoS attacks: 2025 Q3
1. Indonesia
2. Thailand
3. Bangladesh
Vietnam and Singapore also make it into the top 10. The latter is a bit of an outlier being rich and having a small population.
So, guilty by default, right? We developed human rights and laws through centuries of debate, pain, suffering, revolutions, fights, etc., just to get mega-corps doing whatever they feel is right, externalizing the trade-offs onto innocent people.
No, just pointing out that it's relatively likely that their ISP's IP ranges have been flagged as DDoS sources.
I know, and I was pointing out that CF outsources the trade-offs of their solutions to innocent users.
So what's to be done? Some people are being adversely (and probably unfairly, maybe even unjustly) affected by the actions of people that they share a geographic location with. I'm not convinced that many people desire that, but nobody much wants systematic disruption to a shared resource that has (perhaps regrettably) become economically and socially important, either. And, at its basic level, the net is geography: wires and data centres and peering points, as well as national laws, companies, agreements, etc.
What's the better solution? I certainly don't know.
I can't use DigiKey without getting silent CF reCAPTCHA challenges for their CDN that break images in a non-obvious manner. Also, I get reCAPTCHA and CF ray challenges almost constantly for every website.
it's ruining the internet for everybody else
What, because it keeps breaking?