
Tell HN: Impassable Cloudflare challenges are ruining my browsing experience

> The “unsubscribe” button in Indeed’s job notification emails leads me to an impassable Cloudflare challenge.

That’s a CAN-SPAM Act violation.

FTC: “Tell recipients how to opt out of receiving future marketing email from you. Your message must include a clear and conspicuous explanation of how the recipient can opt out of getting marketing email from you in the future. Craft the notice in a way that’s easy for an ordinary person to recognize, read, and understand. Creative use of type size, color, and location can improve clarity. Give a return email address or another easy Internet-based way to allow people to communicate their choice to you. You may create a menu to allow a recipient to opt out of certain types of messages, but you must include the option to stop all marketing messages from you. Make sure your spam filter doesn’t block these opt-out requests.”[1]

Experian was recently fined for making it hard to opt out of their marketing emails.

The actual regulation text:

§ 316.5 Prohibition on charging a fee or imposing other requirements on recipients who wish to opt out.

Neither a sender nor any person acting on behalf of a sender may require that any recipient pay any fee, provide any information other than the recipient’s electronic mail address and opt-out preferences, or take any other steps except sending a reply electronic mail message or visiting a single Internet Web page, in order to:

(a) Use a return electronic mail address or other Internet-based mechanism, required by 15 U.S.C. 7704(a)(3), to submit a request not to receive future commercial electronic mail messages from a sender; or

(b) Have such a request honored as required by 15 U.S.C. 7704(a)(3)(B) and (a)(4).

That seems to cover it. File a CAN-SPAM Act complaint ([email protected]). Send a copy to the legal department of the sender.

[1] https://www.ftc.gov/business-guidance/resources/can-spam-act…

“Visiting a single Internet Web page” is considerably more involved than that. In practice, it means making requests to DNS servers and running JavaScript injected by the CDN/proxy, which “verifies” (runs some heuristics) that you’re allowed to load that page.

It’s like a restaurant that complies with a local food access requirement to be open at a certain time… but only by having a drive-through that requires you to not just be a human being, but also to drive a car to get to the restaurant.

You’re collateral damage in the web’s war against bots 🙁

Unfortunately, I think the Cloudflare challenges are designed to filter out users with profiles like yours… once you stray far enough from the norm, you just look like a bot / suspicious traffic to them. Statistically there aren’t enough users like you (privacy-conscious Linux users on nonstandard browsers) for them to care enough to do anything about it. Site owners don’t care either, since you’re usually 1-2% of users at most, and typically also the same ones who block ads, etc., so they don’t mind blocking you… it’s sad, but I don’t think there’s really anything you can do about it except conform. It’s an ongoing arms race and you’re caught in the middle.

The sad part is that it’s trivial to get around CF’s bot protection if you’re writing a bot (just use curl-impersonate and buy residential IPs), but it’s pretty much impossible to bypass as a human if their magical black box doesn’t like your browser and/or IP address.

> it’s pretty much impossible to bypass as a human if their magical black box doesn’t like your browser and/or IP address

There are residential-IP-backed VPN services that you can use just like commercial VPN services — but they’re mostly built on the backs of botnets, so it’s ethically questionable to use them.

Surprisingly, it still works as intended. Yes, it won’t keep professionals and dedicated bot-fabricators out, but that’s like 5% of the botters out there; the rest are the bot equivalent of script kiddies who can’t be bothered, and it filters them great. Meanwhile, the script kiddies have a process that still works on non-CF sites, so they don’t need to improve their process.

If they don’t think you’re suspicious they don’t make you do the captchas, and as others have mentioned you can always outsource it to captcha farms. There are also AI models which do a fairly decent job, and since most captchas let you repeat attempts with new patterns, you can tolerate a pretty high error rate and still get past them. Then there’s the ADA, which requires accessibility: many captchas have an audio component as a backup, and those are easy for models to interpret.

Cloudflare Turnstile isn’t even a captcha. The user just has to tick a box. Behind the scenes there’s a JavaScript challenge to make sure you’re vaguely a browser and not some script making a bazillion requests per minute.

curl-impersonate doesn’t solve CAPTCHAs, but the goal is to look enough like a human that Cloudflare doesn’t present a CAPTCHA in the first place.

You pay contract workers in a third world country a tiny amount of money per day, to spend all day clicking boxes.

While you hit the nail on the head, I am still surprised that so many tools targeted at people like me (web hosting, developer tools, etc.) are protected that way.

It’s not only about protection; most web developers use Cloudflare because it’s a free CDN and improves app load time considerably.

Because if such hosting and developer tools are not protected against bots, the tools end up used for phishing, spamming, etc.

I’m convinced that’s mostly incompetence on the side of the companies that implement that protection.

“We have a problem with bots” – “Just create a firewall rule, whatever”

What other way would you suggest to protect a free service from bots? Cloudflare is often the easiest to implement and has a generous limit on their free plan.

Oh, they absolutely are, I don’t disagree — I use them too.

But the immediate response to bots shouldn’t be “make everyone go through a captcha”. There’s lots of nuance you can tune to your particular situation, but the first things I’d do are: block known bots or ASNs, set up a rate limit to trigger on (real users rarely exceed one document request per minute, bots usually do), set higher limits for users who (seem to) have a valid cookie indicating they’re logged in, set different thresholds for riskier countries, etc.
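To make that concrete, here’s a minimal sketch (TypeScript; the thresholds and names are made up, not a drop-in solution) of the “rate limit first, captcha later” idea: count document requests per IP per minute and give sessions with a valid cookie a higher budget.

// Sketch only: illustrative thresholds, in-memory state, single process.
const hits = new Map<string, { count: number; windowStart: number }>();
const WINDOW_MS = 60_000;     // one-minute window
const LIMIT_ANON = 30;        // generous budget for anonymous browsing
const LIMIT_LOGGED_IN = 120;  // higher budget for a valid session cookie

function allowRequest(ip: string, hasValidSession: boolean): boolean {
  const now = Date.now();
  const entry = hits.get(ip);
  // Start a fresh window if this IP is new or its window expired.
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= (hasValidSession ? LIMIT_LOGGED_IN : LIMIT_ANON);
}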

What you need to protect your service depends on your situation; it’s not a one-size-fits-all solution. E.g. I find I have no automated contact form spam once I add a simple bit of JS that appends some non-standard data to the form, but I’m sure that wouldn’t hold up if there were enough incentive to get past it.
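For illustration, a minimal sketch of that contact-form trick (the field name “js_token” and the form id are hypothetical): a tiny script fills in a value that a plain form-POST bot never computes, and the server rejects submissions missing it.

// Sketch only: "#contact-form" and "js_token" are made-up names.
document.addEventListener("DOMContentLoaded", () => {
  const form = document.querySelector<HTMLFormElement>("#contact-form");
  if (!form) return;
  const token = document.createElement("input");
  token.type = "hidden";
  token.name = "js_token"; // the server checks for this field
  // Anything non-standard works; here, a trivially computed value that a
  // dumb bot posting the raw form fields will not include.
  token.value = String(Date.now() % 100000);
  form.appendChild(token);
});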

But the OP mentioned not just free services, but e.g. web hosting logins. That’s just sad, as is Cloudflare’s own community forum sitting behind an aggressive captcha. I’m a user, I’m logged in, I’ve posted before, I’m in good standing, yet when I go there, I need to solve a captcha. When I go there again an hour later, guess what, another captcha.

Either there’s another reason I’m not seeing, or it’s just laziness, as in “we need to have a forum but we really don’t want to spend any resources on it, so just put up an aggressive captcha that’ll filter out most bots and everyone but the determined users”.

Fwiw, Cloudflare does do a multivariate confidence check which is why it has multiple tiers: no captcha, a one-click captcha, the annoying puzzle captcha once, the annoying puzzle captcha six times in a row.

> I’m a user, I’m logged in, I’ve posted before, I’m in good standing, yet when I go there, I need to solve a captcha.

Though consider that taking over someone’s account shouldn’t give you (a spammer) unlimited access either. The spambots you see on Twitter are mostly credential-stuffed accounts. It’s a hard problem. Existing accounts are more dangerous than fresh accounts.

Imo, “write your own password” should be a thing of the past. Services should auto-generate a password, or there should be a way to require the OS (or a password manager) to generate one, to prevent credential stuffing. We’re letting the average person down by making them come up with unique passwords for every service instead of just helping them. Though I’m way off topic.

Most developers I’ve met were actually similarly lazy… we just use Chrome on Mac, and don’t really want to deal with VPNs unless our employers force us to. The last few Firefox holdouts also switched after running into various WebGL/Canvas/etc issues. The same attitude that leads us to focus on “happy path” users and ignore edge cases often also causes us to sheeple into that same basic dev group. Long gone are the days where most devs custom build Linux boxen from scratch and compile custom kernels to our liking…

Anyway, I know the “Cloudflare’s monopoly gating is killing web openness!” meme is common online, especially on HN, but in real life I’ve never actually heard anyone else complain about it (either a fellow dev or a customer or a manager). Instead, it’s been universal praise for the actual issues Cloudflare exists to solve (CDN, bot protection, serverless, etc)… they are a godsend for small businesses that otherwise get immediately flooded by spam requests, especially from China, Russia, and India.

And if you think Cloudflare is bad, it was even worse before they became dominant, with terrible services like Incapsula/Imperva charging way more but providing both worse bot protection AND more false positives, or the really hard early reCAPTCHAs (that Cloudflare was largely able to replace, for users who DO fit within the “norm”). That, or you’d have to fight every random sysadmin with their own lazy rules, like firewall rules that blacklisted entire regional ISPs and took weeks or months to resolve, if they ever even checked their emails.

As inconvenient as Cloudflare is for users who take privacy seriously and try to be less trackable, for the other 90% of us who don’t care as much and easily fit into their “norm” model, it’s much nicer than what came before. Site downtime and slowness are also much less common now, in no small part because of their easy CDN and caching.

From the implementation side, I’ve set up a few Cloudflare accounts in my career, but do take the time to try to configure it to balance security vs accessibility for any given target audience. Sometimes we’d block entire countries, other times we’d minimize security to ensure maximum reach, but usually we’d customize rulesets in the middle for any given company & audience. I never got a complaint about it (our emails were still available and not blocked).

This was always a direct response to some business need, usually spambots or DDoS attempts that fail2ban etc. couldn’t catch well enough. For the business, it was usually a “shit, our website is down again, what is it this time”, and the choice between “for free or $20 we can get it back up again and not have this issue anymore” or “we can spend thousands of dollars and weeks of labor building our own security solution” is pretty easy. “What about that one guy who is proxied behind TOR and three VPNs with a random user agent using a text-only browser he wrote himself?” never really factors into that process =/ There’s just not enough users like that out in the wild vs the very real constant threat of bots and malware.

It’s a shitty situation that the web is like this today, and I wish it weren’t the case, but it really is an arms race, and these imperfect weapons are just what most of us have access to…

> spam requests, especially from China, Russia, and India.

On my small website, bot traffic is almost entirely from DigitalOcean VPSs.

> If you look like a bot, how are they going to distinguish?

Some non-existent system of attesting that I’m person X (possibly through an e-ID card) who has issued a client certificate Y (a cert chain, using my e-ID cert to sign) to be used with my device Z (presumably with a device fingerprint or IP range attached to the cert). Of course, this would mean no privacy, but that’s not that different from being signed in through Google as an identity provider; we’d just shift the mechanism to be universal (like client certs already are). It’s one of those options that would take more coordination than will probably ever happen (though it’s very similar to some e-signature solutions in the EU, which we already use), but I could see something like that being used for a variety of professional/service sites, since signing in with the e-ID card directly is already a thing on some sites here (government, banking, and utilities sites).
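A rough sketch of the client-certificate half of that idea, using Node’s TLS stack (the PEM paths and the “eID root CA” are imagined; no such universal trust root exists today): the server demands a certificate chaining to the trusted root and identifies the person from it, with no captcha involved.

// Sketch only: assumes a hypothetical eID root CA and local PEM files.
import { createServer } from "node:https";
import type { TLSSocket } from "node:tls";
import { readFileSync } from "node:fs";

const server = createServer(
  {
    key: readFileSync("server-key.pem"),
    cert: readFileSync("server-cert.pem"),
    ca: readFileSync("eid-root-ca.pem"), // the imagined universal trust root
    requestCert: true,                   // ask every client for a cert
    rejectUnauthorized: true,            // drop clients without a valid chain
  },
  (req, res) => {
    // The TLS layer already verified the chain; just read the identity.
    const peer = (req.socket as TLSSocket).getPeerCertificate();
    res.end(`Hello, ${peer.subject?.CN ?? "verified client"}\n`);
  }
);

server.listen(8443);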

Okay. Do that globally. And solve the ddos problem as you’re on it. If you add transparent tls termination, edge, caching, dns… maybe I’ll have a look!

I had a guy like that working with me. He blocked every possible tracker, disabled JavaScript, used some niche browser and Proton Mail, and then complained that Google wouldn’t let him sign in. I get it, privacy and whatnot. But the guy was an outlier.

Some random blogs and product pages aren’t government sites, most likely have no way to opt in to a government eID scheme (maybe they aren’t even based in the EU), and they only care that their service is available fast globally and that they get DDoS protection for free (plus some other convenience features).

> Do that globally.

We already do a simpler version of that with TLS and HTTPS; there are globally trusted root certs that ship with most OSes and browsers. It’s just that we haven’t extended the same approach to client certs and identity verification; instead we have a bunch of walled gardens, and governments running legacy methods of figuring out who someone is, as opposed to various eID mechanisms.

It’s a problem that’s technically solvable (say, in 20-50 years), but won’t get done because good luck getting a bunch of governments to collaborate on that across the world. It’s actually a surprise that we have TLS in the first place.

Between what you described and having to run a vaguely standard browser config, I’ll take the latter, thanks.

If I have a process that works for 95% of the people, why should I care about outliers who use Linux behind a VPN on a heavily customized version of Firefox?

Maybe you should try to care about something other than just your bottom line. I’m sorry if this sounds mean, but this attitude just turns the web into a giant monoculture because you can’t be bothered to care. It actually ends up hurting everybody in the long run. Look how long we were trapped with IE6. Amazing how people forget history so quickly.

Because they are standards compliant and you aren’t, and you are legally required to provide an unsubscribe service or whatever without undue barriers around it.

For unsubscribe – yes.

Everything else – no.

But if I am using standards and they have an ad blocker that blocks some of the functioning of my site, am I also required to test my site against that?

It’ll be interesting to see what happens if someone takes that argument to court.

One side of the argument is that Cloudflare places an undue burden. The other side of the argument is that without the CF protections, the service provider doesn’t even have reason to believe the request is coming from a human being the law protects.

I honestly don’t see what’s so hard about a bot simulating “the norm” within the margin of error. This cat-and-mouse game is just like a GAN, the end result is indistinguishable even by a bot.

It depends on the defences. It starts trivial: just make an HTTP request. Then there’s HTTP version, User-Agent header, other headers, header ordering, cookies, TLS ciphers, screen resolution, timing, behaviour for page resources, … and so many other things. It takes time, even if you reach for headless Chrome.
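As one concrete example from that list, here’s a sketch of checking raw header ordering server-side (Node.js; the “expected” prefix here is made up, real detectors keep per-browser profiles). Browsers emit headers in a stable order that simple HTTP libraries don’t reproduce.

// Sketch only: CHROME_PREFIX is illustrative, not a real Chrome profile.
import { createServer } from "node:http";

const CHROME_PREFIX = ["host", "connection", "sec-ch-ua"]; // assumed ordering

createServer((req, res) => {
  // rawHeaders alternates name/value pairs; take the names, lowercased.
  const names = req.rawHeaders
    .filter((_, i) => i % 2 === 0)
    .map((h) => h.toLowerCase());
  const claimsChrome = (req.headers["user-agent"] ?? "").includes("Chrome");
  const orderMatches = CHROME_PREFIX.every((h, i) => names[i] === h);
  if (claimsChrome && !orderMatches) {
    res.writeHead(403);
    res.end("suspicious client\n");
    return;
  }
  res.end("ok\n");
}).listen(8080);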

Bot authors are lazy and won’t bother until they have to… and once they do get through, you can pretend they aren’t bots and include them in the engagement numbers you feed prospective shareholders.

Agreed. From my past experience though, a good chunk of them will give up once there’s any resistance. Basically, you want your bot protection to be just a little better than your competitor’s. Then the bot author will target them instead, following the path of least resistance.

It’s ironic, but I was having terrible problems accessing archive.today while using Cloudflare DNS (1.1.1.1) that cleared up when I switched to either my ISP’s resolver or Google’s 8.8.8.8. I was not the only one:

Ask HN: What’s Been Going on with Archive.today? | Hacker News

It’s been months and I’m still confused about what’s going on with it. It often used to redirect to another TLD (.ph, .is, …). More recently it seems unhappy with iCloud Relay connections, and it either cannot be reached or gets into a reCAPTCHA loop.

What’s funny about it is that as a human I get tormented by those things all the time, yet I have been writing bots since 1999 and have yet to have CAPTCHAs affect a webcrawling project in a big way. For instance, I have a bot that has collected 800,000 images from 4 web sites since last April; at times I thought they had anti-bot countermeasures, but I realized that when they were having problems it was because the wheels were coming off their web site (don’t blame me: that’s 0.03 requests/second, and they aren’t pipelined like the requests from a web browser). I’m also prototyping one that can look at an article like

https://phys.org/news/2025-01-diversifying-dna-origami-gener…

see if there are links to journal articles in there, determine if the articles are Open Access and pick out an image for social… so far no problems. But if I want to pay my electric bill there’s a CAPTCHA — I mean, what kind of bot wants to pay my electric bill? (Kinda seems like it is asking for a lawsuit in this day and age if it prevents anyone ‘differently abled’ from accessing essential services…)

I deal with this fairly commonly, presumably because I use Linux, and we all know only botnets use Linux. Occasionally with Cloudflare I’ll just get summary rejection and a supposed blocking of my IP; for me it’s either summary rejection or a pass without challenge.

Recently I had to deal with this on Alibaba (which I usually visit with Tor Browser) just to look at something, and I finally gave up as I couldn’t pass the challenge. I suppose I shouldn’t be surprised at that, though; they trust me as much as I trust them.

The worst are usually Adobe and CookieLaw with all their related tracking crap, where I can’t even get the captcha to render because it’s buried so many layers deep in scripting that I can’t enable enough sites between uBlock, NoScript, Privacy Badger, and Firefox’s strict mode. I treat Adobe like malware, but unfortunately mega companies like albertsons.com for groceries love to use it, and their sites literally do not work without allowing all their heavy scripting/tracking.

There are other, usually smaller, captcha players that I haven’t been human enough to pass; I forget their names, so I can’t shame the stupid, but a few of them I recognize on sight and just close the window, forgetting about whatever it was I was looking for there (like twitter/x).

Hooray commerce!

>…when I see them I recognize to just close the window and forget about whatever it was I was looking for there

This is the way.

I’m really afraid of what kind of internet we’ll have when these kinds of un-diagnosable un-appealable false-positives are not just transient blips, but become metadata companies use to blindly and permanently kill off accounts on other services.

I think that may be what happened to my since-2010 Reddit account, which was mysteriously killed a couple of years ago; literally the only cause I can think of is that I might’ve used the wrong public wifi for an evening.

Yes, I run into it from time to time. I just move on. If someone is going to make their website inaccessible to me, I’m not going to bend over backwards to try to work around that.

Incidentally, since I configured DNS over HTTPS in Firefox, using Cloudflare’s DNS, it seems I see this much less often.

I can’t use any of the kerbalspaceprogram.com domains because of improper discrimination against IPv6 clients, triggered by Cloudflare.

Error 1015 Ray ID: …. • xxxx-xx-xx xx:xx:xx UTC
You are being rate limited
What happened?
The owner of this website (wiki.kerbalspaceprogram.com) has banned you temporarily from accessing this website.

This sort of monoculture creates an Orwellian SPoF.

I don’t think it’s an IPv6 problem. IPv6 clients are more static than IPv4, which is usually shared amongst many clients (at home) or at the network level (CGNAT).

It could be the address is being reused – is it home, cloud or corporate? Have you tried different browsers? Incognito mode?

I have an IPv6 block at home and have no problem accessing that site.

> – The “unsubscribe” button in Indeed’s job notification emails leads me to an impassable Cloudflare challenge.

Maybe Indeed could be held liable here? From the CAN-SPAM Act (if you’re in the US):

> You can’t charge a fee, require the recipient to give you any personally identifying information beyond an email address, or make the recipient take any step other than sending a reply email or visiting a single page on an Internet website as a condition for honoring an opt-out request.

https://www.ftc.gov/business-guidance/resources/can-spam-act…

this nevertheless happens all the time. i have an old linkedin account i haven’t logged into in years and can’t be bothered to dig up the credentials so one of my e-mails gets stupid “network updates”. one must log in to disable these and navigate to some obscure settings page in one of the most heinously overcrowded UIs on the web.

so i just flagged it all as spam and hoped it hurts their deliverability a little.

Honestly I’ll click an unsubscribe link, but if it requires me to complete a survey or fill out a form, I just nix the tab and spam-filter the email. I’m nobody’s fucking admin assistant and my time is valuable: you know my fucking email and could easily add it to the link, or at most, ask me to type it into a box if you MUST. Anything more than that, if I have to manually opt out of “types” of messages or whatever, nah. Fuck you.

I didn’t ask for your fucking emails and I sure as shit am not going to do the homework you’re assigning me to make them stop.

Yep, I just spam filter the E-mails now. If that act adds 0.0001% to that sender having future E-mail deliverability problems, then all the better. If it’s commercial or political and I didn’t explicitly ask for the sender to E-mail, then it’s spam.

If you can’t pass the captcha you have to ask yourself, are you really a human being or have you just been programmed to believe that you are?

>I use a heavily customized Firefox config on Linux.

This is probably the cause, especially if you’re doing stuff like spoofing your user agent. It’s not Cloudflare “cracking down on privacy” or whatever, either. An unmodified Tor Browser passes Turnstile challenges just fine.

And it’s up to site owners and website security vendors to choose which user agents to admit.

I had similar issues as an (also heavily customized) Firefox user, but was able to fix it by installing Cloudflare’s Privacy Pass browser extension.

It seems ironic that as a human I can’t reliably prove I am a human with a realistic amount of effort via these systems, but installing a specific automated browser extension does it?

I am not a fan of Cloudflare and don’t like the idea of running their software on my computer, but it seemed like the only option to continue using the internet at all.

I have experience bypassing these.

The primary cause of this is most likely any kind of ‘optimizations’ you have in your browser (or missing fingerprints).

If you want to ‘bypass’ these I recommend removing any use of Proxy[1] (via extensions). You should also look into disabling any kind of forced backgrounding. Make sure service workers are working.

1: They catch Proxy usage by using exceptions and analyzing the stack trace. I assume you know what a JavaScript Proxy is, but in case you don’t: it’s something that lets you intercept and override access to any object property or function, such as navigator.hardwareConcurrency.
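To illustrate that footnote (an assumed reconstruction of the approach, not Cloudflare’s actual code): native getters perform internal-slot checks that a JS Proxy can’t forward, so invoking one against the proxy throws, and the exception (plus its stack trace) gives the spoofing away.

// Sketch only: a naive navigator spoof, and one way a detector can catch it.
const spoofed = new Proxy(navigator, {
  get: (target, key) =>
    key === "hardwareConcurrency" ? 8 : Reflect.get(target, key),
});

function isProxied(nav: Navigator): boolean {
  const desc = Object.getOwnPropertyDescriptor(
    Navigator.prototype,
    "hardwareConcurrency"
  );
  try {
    // The native getter requires a real Navigator as its receiver; a Proxy
    // has no Navigator internal slots, so this throws "Illegal invocation".
    desc?.get?.call(nav);
    return false;
  } catch (err) {
    // Detectors also inspect (err as Error).stack here for frames that
    // native code would never produce.
    return true;
  }
}

isProxied(navigator); // false
isProxied(spoofed);   // true, even though spoofed.userAgent etc. read fine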

I’m experiencing the same issue, which is definitely exacerbated by straying from a ‘default’ configuration, e.g. using a custom browser, a screen reader, browsing from Brazil, using a VPN, using Firefox. I think eventually I’ll be completely locked out of the ‘mainstream’ web.

What I don’t understand is why you have to protect areas that require login so harshly?

If I can log in, especially with 2-factor, you can safely assume I am not a bot, or you have a larger problem.

If I have entered bad credentials 5+ times, okay, you can start backing me off or challenging me.

What am I missing? Fail2ban has been around a long time.

Problem is that a significant chunk of the technology industry still relies on “engagement” as its business model. The objective of slapping an overzealous bot protection system isn’t to protect high-risk endpoints like logins/etc, it’s to ensure a human is “engaging” and human time is being wasted by making even legitimate automated usage impossible.

From their perspective, the blocking of power users with unusual setups is actually a happy coincidence, as those are unlikely to “engage” with the product in the desired way (they run ad & spyware blockers, don’t fall for dark patterns, and are more likely to fight back if they get defrauded by the corporation).

40% of the internet’s traffic now is bots, with about half of those being malicious. Fail2ban is decent for a very small DDoS, but useless for one with any substance, and also useless against bots scraping data or probing for weaknesses.

Also remember, especially on AWS, bandwidth is expensive. A CDN cache + blocking bots = big savings.

I’d expect this to increase with the proliferation of AI Crawlers and scraping becoming easier with AI.

Same here, but Cloudflare’s captchas in particular are actually the easiest to pass in my experience. Google’s are the killers. But yeah, everything has a captcha if you’re using a VPN or Firefox.

Can’t you keep a normal Firefox profile for such cases? Do you have any JavaScript filters? I bet the issue is related to configs messing with the JS runtime.

The issue is scummy companies like cloudflare which are causing these issues. If your software is blocking legitimate users then your software is shit at its job. It’s not the users fault.

>The issue is scummy companies like cloudflare which are causing these issues. If your software is blocking legitimate users then your software is shit at its job. It’s not the users fault.

But if you’re going out of your way to look suspicious (i.e. “I use a heavily customized Firefox config on Linux”), surely you’d agree that at some point it goes from “your software is shit at its job” to “it’s your fault for looking suspicious”? If you walk into a bank wearing a balaclava and get stopped by security, it’s not really “security is shit at its job”.

Yeah we could start blaming victims.

Maybe we should not be allowed to use software we want to use. Everyone should only be allowed to use windows and a chrome browser variant with no ad blocking. Cloudflare 100% should be allowed to arbitrarily block anyone not using this set up because they are suspicious.

>Everyone should only be allowed to use windows and a chrome browser variant with no ad blocking. Cloudflare 100% should be allowed to arbitrarily block anyone not using this set up because they are suspicious.

Seems like a slippery slope argument, but isn’t reflective of reality. They still allow Tor browser to pass, of all things.

It wasn’t meant to be taken seriously, I was using it to show the ridiculousness of blaming a user for the shortcomings of cloudflare.

But if you like: the arbitrarily blocked user is not at fault; Cloudflare is at fault.

If it is triggered by the customizations you did in Firefox, then running a fresh Firefox in a container might help:

docker run -it --rm -e DISPLAY --net=host -v $XAUTHORITY:/root/.Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix debian:12-slim

Then inside the container, run:

apt update
apt install firefox-esr
firefox

The suggestion that you should have to bend over backwards for shitty software like Cloudflare is bad enough; but if you were going to, surely creating a new browser profile is far easier than spinning up a Debian Docker image, updating it, installing Firefox, and then running it?

what is the advantage here over just running ‘firefox -ProfileManager’ and making a clean profile?

All host info not accessible via the X11 protocol is hidden; the font list, for example, is replaced with a generic one.

For even more protection, run a VNC server with a common resolution in the container and connect to it using a VNC viewer. In this case Firefox presents a super generic profile (latest Debian with a Mesa GPU), making the browser very hard to distinguish from others. This has some downsides, however: first, you cannot resize the window; second, a lot of actual bots use the same config, so it might be blocked.

The OP mentioned that they run a heavily modified browser; I take that to mean compiled with changes, whereas Docker gives you stock Firefox.

Cloudflare’s stance here (and that of most similar services) comes from these VPNs funnelling not just people like you, but also attackers. It’s untrustworthy traffic from their perspective.

Use a VPN, but use a normal network: VPN back to your home or your office. Your traffic will probably take a throughput and latency hit, but it looks like real residential traffic, and that’s a lot less sus.

CrimeFlare is not interested in solving these problems for users. If you have access to the hosting side, you can adjust the bot score for specific connections/clients. But consumers don’t matter to CF, so apart from jumping through their hoops, there’s nothing better you can do.

Unless you accept the racket, of course: start paying them and proxy your traffic through CF Workers (https://github.com/pellaeon/cloudflare-worker-proxy) and magically most barriers will disappear.
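The core of that worker-proxy trick is tiny. A minimal sketch of the idea (module-syntax Cloudflare Worker; “example.com” is a stand-in target, and this is not the linked project’s code): requests to the worker get re-issued from Cloudflare’s own edge, so they arrive at the destination with Cloudflare-grade reputation.

// Sketch only: forwards every request to a hypothetical target host.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    url.hostname = "example.com"; // stand-in for the site you want to reach
    // Re-issue the request from Cloudflare's edge, preserving method/body.
    return fetch(new Request(url.toString(), request));
  },
};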

Sounds like all it does is make your IP reputation slightly better than tor, which is a pretty low bar to cross. You’d likely get the same effect from using any other VPN service, so it’s not exactly evidence that cloudflare is running a “racket” with its worker product. The linked blog post even touts the fact it’s free as an advantage. Rackets typically aren’t free.

I’ve honestly only experienced the opposite; their captcha is reasonably easy to bypass, and I’ve successfully automated access to a few sites “protected” by the Cloudflare captcha (behind a VPN, no less).

> I use a heavily customized Firefox config on Linux.

If you really care about privacy, you should blend in to look like everyone else. Avoiding being tracked raises alarm bells. You have to let them track something; but no one ever said it had to be you.

Unfortunately, your setup makes you look like a scraper: no history for Cloudflare to identify, the sort of browser / OS config someone would use to homebrew an automated “I sure am not a bot, look at how authentic my user-agent is!” bot, and so on. If you also have JavaScript disabled and clear your cookies frequently, Cloudflare can’t fingerprint your machine to know you passed a trust-check in the past.

Maybe keeping a heavily-sandboxed Chrome in a VM for situations where Cloudflare is getting in your way might help?

(In the large: this has been an issue a long time coming. Quite a bit of cyberpunk predicts the future where the web bifurcates into the “regular” web that is sanitized, corporate, controlled, and used by most people… And the “everyone else” web that is not, with all the pros and cons that entails. The tech has evolved to the point that companies that want a service provider “keeping the bad guys away” for them can pay to have that done, at the cost of false-positives… But at their scale, the false-positives may not matter to them).

I wish we could popularize some extension that pays a penny per page load or something, using some shitcoin, both as a means to support our favorite sites and to validate that I’m not a bot, or at least that if I am, I’m willing to spend a lot of money on a DDoS that goes directly into your pocket.

My local TV station’s website refuses to allow me to view their page and instead presents a modal that cannot be blocked, accusing me of using an ad blocker. The funny thing is that this only happens on a mobile device using the default browser with no extensions. When I visit the same site on my laptop with uBO, the site is viewable with no blocking modals.

Sometimes you miss what you were aiming for I guess

The challenge is a small JavaScript program that checks that the execution environment is consistent with a real browser. For instance, if your user agent says it’s Chrome, but it’s missing features that would normally be supported by Chrome, it’ll fail you. The OP mentioned a “heavily customized Firefox config”, so he might be doing stuff like this that makes his browser look suspicious.
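A toy version of that consistency check (assumed and illustrative; real challenge scripts probe far more signals than this): compare what the User-Agent string claims against APIs the real engine would actually expose.

// Sketch only: two cheap signals; real scripts collect hundreds.
function environmentLooksConsistent(): boolean {
  const ua = navigator.userAgent;

  if (ua.includes("Chrome")) {
    // Real Chrome exposes window.chrome and UA Client Hints; a Firefox
    // build spoofing a Chrome UA string has neither.
    const hasChromeObject = typeof (window as any).chrome === "object";
    const hasClientHints = "userAgentData" in navigator;
    if (!hasChromeObject || !hasClientHints) return false;
  }

  if (ua.includes("Firefox")) {
    // Gecko-only property; absent in Chromium even with a spoofed UA.
    if (!("mozInnerScreenX" in window)) return false;
  }

  return true;
}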

Sadly, we probably all are LLMs/bots on the internet at this point, just talking to one another. The real humans have all become fed up and are now mostly off fishing by a lake.

It seems that if you use Firefox with an ad blocker, then Cloudflare spam is all you see. Though I have experienced this in plain Firefox too.

Cloudflare are a scummy company trying to force you to use one browser and view all ads.

It can’t be just that. I use Firefox on Linux with ublock origin, strict tracking protection, and clear cookies on exit, and I’ve never ever seen a cloudflare challenge. Not even on sites with that “verifying your browser” page enabled.

Maybe you’re right; I see it all the time. I assume Cloudflare does other dumb stuff too, then, like blocking IP ranges, and is just generally crap at its job.

> Cloudflare challenges have made large portions of the web unusable for me.

I guess the best web experience is when one filters Cloudflare, Google, and Microsoft at the firewall.

I ran into this, or something similar, recently when our main connection went down (solar powered) and we switched to Starlink. Due to Starlink NAT issues I had tunneled our traffic to a box colocated in a data center. This broke a number of web sites in weird ways. It became so annoying that I ended up bringing up a tunnel to our office in town to get back to the regular IP we used. The weird problems went away.
