these features should be part of a browser extension
You mean like Bypass Paywall Clean?
https://gitlab.com/magnolia1234/bypass-paywalls-chrome-clean
Is there a Firefox version?
You don't even need to install it, just add it as an import line to UO, Google how. Game changing.
What's UO? This would've been a great comment with a little more info :)
After installing, go to "Filter Lists" > Import... and add the URL of the "list"... which is actually from a different repo: https://gitlab.com/magnolia1234/bypass-paywalls-clean-filter...
Note: this apparently works for fewer sites than the linked extension.
Still, this is great to know because it can then be used on Firefox mobile.
Thank you for the insight. I usually hesitate to install add-ons, but now I can avoid that step entirely based on your advice.
https://gitlab.com/magnolia1234/bypass-paywalls-firefox-clea...
It used to be in Mozilla's add-on store, but they removed it, so you have to install it via dev mode.
No need for dev mode - signed XPIs are available from releases: https://gitlab.com/magnolia1234/bypass-paywalls-firefox-clea....
Ah you're right, I confused it with Chrome
Or you can load this uBO filterlist, which should basically do the same thing as the extension
https://gitlab.com/magnolia1234/bypass-paywalls-clean-filters/-/raw/main/bpc-paywall-filter.txt
That's why I like reading HN comments. Thanks for that filter link
It doesn't work so well anymore. Better to use a bookmarklet:
javascript:location.href='https://archive.is/?run=1&url=%27+encodeURIComponent(documen...
Not sure what happened here but the end of that is chopped off. Trying to fix it:
javascript:location.href='https://archive.is/?run=1&url='+encodeURIComponent(document.location.href)
This works quite well and probably covers 90% of my needs. For the other 10% I still use archive.today or 12ft (RIP).
It's a shame Google won't let this addon be in the store.
I don't know for sure, but I would imagine there are more severe actions taken against circumventing access to paid material (content behind a paywall) than against blocking ads on free, ad-supported content.
Edit: The Digital Millennium Copyright Act (DMCA) prohibits circumventing an effective technological means of control that restricts access to a copyrighted work. I guess that would apply here.
Given how liberally the DMCA is applied, you definitely don't want to be on the wrong side of that.
I remember some guy who wrote a WoW bot and got sued under the DMCA, with the argument that his bot was circumventing the anti-cheat, and the anti-cheat could be seen as a 'mechanism protecting copyrighted material': it was safeguarding access to the game servers, the servers were generating parts of the game world (such as sounds) dynamically, and those were under copyright... Wild stuff.
As far as I know, section 1201 has never been prosecuted. Distribution of the copyrighted material is what's focused on.
This seems a good summary of the case I was talking about:
https://massivelyop.com/2020/02/28/lawful-neutral-cheating-c...
It happened to Honorbuddy, a very advanced bot for World of Warcraft made by a German company. The argument in relation to the DMCA was that the bot was circumventing Warden, the game's anti-cheat system. The legal battle was long, and they ultimately had to strip many features from the bot, until the company went under.
The Digital Millennium Copyright Act (DMCA) prohibits circumventing an effective technological means of control that restricts access to a copyrighted work. I guess that would apply here.
It doesn't if you're not in the US.
Kim Dotcom believed so too; that didn't fare too well for him.
Good old section 1201. The EFF has been fighting it for a while, but hasn't had much success unfortunately.
Isn't anything that can be circumvented ineffective?
Or, looking at it the other way, if you put a small sticker that says "do not do X" and even one person follows that, isn't that therefore an "effective" method?
Here's one extension for this purpose that I know of; there are probably many more if you search for them.
Chrome and Firefox extension for removing paywalls: https://github.com/iamadamdev/bypass-paywalls-chrome
This extension asks for a lot of permissions it shouldn't need.
If you want an alternative that only requests permissions for sites with paywalls, this one is better: https://gitlab.com/magnolia1234/bypass-paywalls-firefox-clea...
Relevant: 12ft.io was banned by Vercel, taking down the developer's entire account with multiple other hosted projects & domains: https://twitter.com/thmsmlr/status/1718663563353755982
Edit: Access to other projects & domains was apparently restored some time after: https://twitter.com/thmsmlr/status/1719480558932148272
Lovely, the classic Google "ban the world" approach -- I've been desperately trying to move my client off of Vercel; this might just be the gasoline.
One might consider Cloudflare as a very nice competitor to Vercel in terms of DX, although I suspect all companies use a ban the world approach, even banks.
Not the best time to recommend Cloudflare.
They just had a big outage. What’s the probability of having another so soon?
If outages don't depend on each other, the probability is the same.
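In symbols, that independence assumption says the earlier outage tells you nothing:

    P(\text{outage soon} \mid \text{outage just happened}) = P(\text{outage soon})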
Why would you assume independence? I'd expect an outage to put people "on edge" for a period of time following the outage, during which changes are scrutinized to a higher degree, and/or a greater engineering focus/budget is dedicated to reliability to reflect the changed business/image requirements.
Maybe, but they've kept customers informed throughout the entire outage. https://www.cloudflarestatus.com/incidents/hm7491k53ppg
Aside from this (which is already very shitty and would cause the same response in me) what are the issues you’re running into with Vercel?
- Support is failing us - I want my team to use you for Vercel support, but it isn't there.
- Support is failing our customers - when you fail, I end up reverse-engineering your repo to tell us why it's failing -- just give us a clear answer and we all move away happy; bullshit us and I go to Lambda, where I just accept it.
- EOD: Vercel makes engineers happy to bullshit, but gives operations teams nothing acceptable - I want a deliverable product.
Would love to dig into your support issues. Let me know: rauchg@vercel.com
And if the support team had done so, I'd have nothing to converse about :)
After digging upwards, additional support seems like an option delivered too late, and too far outside of 'proper' channels - if you want a sanitized rant I can probably deliver it tomorrow, but too little, too late is where Vercel has landed with the operations team.
FYI you have to use two line breaks to start a new line with (HN's) Markdown.
I followed this drama on Twitter. The author was breaking the terms of service and creating a DMCA support burden for Vercel. They had proactively been in touch with him a few times to reach a solution.
I think it’s quite reasonable that they blocked the account rather than the project. You wouldn’t have got that level of service from big tech.
Care to provide any links? The Twitter claim above is that there was no communication and the ban occurred on a Friday.
I think it’s quite reasonable that they blocked the account rather than the project.
I’m just responding to your last sentence: why would you go out of your way to say it is reasonable to block the account rather than the project?
I can understand locking the account just as the “lazy default” but I would not call it in any way reasonable - but you did, so I’m curious.
If that is reasonable, what would you consider unreasonable?
(Because to me, the obviously reasonable thing to do would be to block the project and not his entire account.)
Very disappointing that this was the path Vercel chose to take. This is something I would expect from Google or Amazon, but not a developer darling like Vercel. Seems like all companies shed their values in service of growth and capitalism at some point or another. A shame.
Why desperately?
Taking down all his projects (not just 12ft) is heavy-handed, but otherwise Guillermo’s response in that thread seems pretty reasonable to me:
Hey Thomas. Your paywall-bypassing site broke our ToS and created hundreds of hours of support time spent on all the outreach from the impacted businesses.
Our support team reached out to you on Oct 14th to let you know this was unsustainable and to try to work with you.
The 12ft guy doesn’t look so great in that thread. He admits to ignoring the email (gosh I was busy mmkay?) and then argues that Vercel is lying about the extra work they had to do.
gosh I was busy mmkay?
Mischaracterization much? He was on vacation.
How many of us read every email for personal projects that comes in when we're halfway across the world and supposed to be relaxing?
Sure, but if you go on vacation and don't check your email for two weeks, you can't really claim “no warning”. If two weeks isn’t sufficient notice because of vacation it’s fine to say so, but it’s not the same thing as “no warning” just because you’re not checking email.
To take it even further, if you're a one man operation, not checking emails regarding your operation for two weeks is pure negligence, vacation or not.
I might ignore personal project emails while I'm on vacation, but I also won't complain if one of those emails says my billing method is out of date and I come home and it's been turned off.
I don't know why anyone trusted Vercel in the first place. The vibes of VC money funding an unsustainable offering for a relatively niche market are so strong, it doesn't make any sense.
Shows the importance of controlling your own critical infrastructure, or at least not being dependent on others for critical functions. Other examples include GitHub and Discord, both of which have shown a tendency to arbitrarily ban users with little recourse.
Is it just me or has 12ft become less and less effective? I rarely get through with it these days.
Their policies have apparently… changed. They accept donations to not have your website bypassed. Archive.org is much better.
Edit: apparently it is down now.
    402: PAYMENT_REQUIRED
    Code: DEPLOYMENT_DISABLED
    ID: fra1::8wkv2-1699275385535-39dedae23d6a
Archive.today never fails, compared to Archive.org or various browser extensions.
For removing paywalls, 12ft < Archive.org < Archive.today, in my opinion.
For some reason all the alternative "archive.XYZXDHWIQHDQ" type sites always give me a captcha page, and I am never able to proceed. I'm assuming it's to do with the Cloudflare DNS; well, if they don't care to fix it on their end, I don't care to use their service.
There has been a bit of a 'tussle' going on between the two of them for quite a few years, as you pointed out.
https://x.com/archiveis/status/1018691421182791680?s=20
https://news.ycombinator.com/item?id=36971650
IIRC:
It's kind of an "everybody sucks" situation, and there are no real winners.
Archive.[whatever] set up a server system to give you access from a country not your own, so that abusers have a harder time archiving illegal content and then instantly reporting it to get the entire archive taken down. He uses EDNS to do this, but CF doesn't provide EDNS, since it's a privacy issue to them.
So archive.[whatever] doesn't work with CF DNS, because he doesn't want to risk bad actors being able to take down the archive.
Sensible reasons on both sides, especially for a service like archive.[whatever], and the real losers in this situation are the users.
Copying my previous comment over because I found a fix that works for me:
There's some issue with DNS over HTTPS, so you have to whitelist their sites in your settings, or turn off DNS over HTTPS (which I don't recommend).
To whitelist, on Firefox: Hamburger menu > settings > privacy and security > DNS over HTTPS > Manage exceptions > Add "archive.is", "archive.ph", and "archive.today"
For those on mobile it's the opposite, since the archive sites only show you the desktop site.
Does reader mode work on archive sites?
Is it donations they accept or legal threats?
Yes.
Before they went down, it seemed that many big publishers had gotten the owner to disable it for their sites. Either that, or the sites learned to actually not send their articles unless the user is logged in (and stopped caring about Googlebot not scanning them).
It was just an effective way to get through Substack/Medium, in my experience.
I’ve rarely found it to be able to skip a paywall, I gave up after trying a few times.
Really dummy question: how do services like this work? As in, how do they bypass these paywalls?
The obvious thing is to spoof Googlebot, but site owners can check that the request isn't coming from a Google-published IP and see that it's a fake, right?
Some possible clues:
https://github.com/kubero-dev/ladder#environment-variables
    USER_AGENT       User agent to emulate   Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
    X_FORWARDED_FOR  IP forwarder address    66.249.66.1
    RULESET          URL to a ruleset file   https://raw.githubusercontent.com/kubero-dev/ladder/main/rul... or /path/to/my/rules.yaml
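A minimal sketch of what those two variables amount to in practice (my own illustration, not Ladder's actual code): fetch a page while presenting Googlebot's User-Agent and a Google crawler address in X-Forwarded-For.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        req, err := http.NewRequest("GET", os.Args[1], nil)
        if err != nil {
            panic(err)
        }
        // Header values mirror the defaults quoted above.
        req.Header.Set("User-Agent",
            "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")
        req.Header.Set("X-Forwarded-For", "66.249.66.1")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status)
        fmt.Println(string(body))
    }

Whether this works depends entirely on the site trusting those headers, which is exactly the point debated below.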
Oh wow... I'm surprised that's enough. When I was researching scraping-protection bypasses, you had to do some real crazy stuff with the browser instance, plus use residential IPs, at a minimum...
That's not the full story. It works on many sites, but some (ft.com, for example) have more severe countermeasures against paywall bypassing, so Ladder modifies the HTML served from the origin to remove them.
Those rules still need to be built up (by me or the open-source community).
I don’t know of any off-the-shelf product that respects X_FORWARDED_FOR unless the current request ip originates from a whitelisted (or lan) address.
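To illustrate that point, a sketch under my own assumptions (the whitelist address is made up; this is not from any specific product):

    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    // Hypothetical whitelist of reverse proxies under our control.
    var trustedProxies = map[string]bool{"10.0.0.1": true}

    // clientIP honors X-Forwarded-For only when the direct peer is a
    // trusted proxy; a client-supplied header is otherwise ignored.
    func clientIP(r *http.Request) string {
        peer, _, err := net.SplitHostPort(r.RemoteAddr)
        if err != nil {
            return r.RemoteAddr
        }
        if trustedProxies[peer] {
            if xff := r.Header.Get("X-Forwarded-For"); xff != "" {
                return xff // simplified: a real setup parses the hop list
            }
        }
        return peer
    }

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintf(w, "you are %s\n", clientIP(r))
        })
        http.ListenAndServe(":8080", nil)
    }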
Related: if this is how they work, why doesn't Google offer a private service that lets publishers have content indexed while keeping it protected?
It used to be against guidelines to serve different content to Google vs. what users would see. Not sure if that's still the case, but I don't think it's in Google's interest to return a result the user can't actually access.
I'm not aware that this policy has changed. What has changed is that Google will rank results it can't (officially) index without showing their content. I'm guessing they do shadow-index them but use the whole "if you outwardly can't tell they did, then it's as if they didn't" reasoning that C++ compilers use to get away with insane optimizations.
site owners can check that the request isn't coming from a Google-published IP and see that it's a fake, right?
Just because they can doesn't mean they will... also, most "site owners" are by this point completely different people from the "site operators" (who I take to be the engineers who can indeed check the IP).
I use services like this, as I often skip news-site paywalls: I just can't afford so many subscriptions, nor is it practical.
That said, I work in news media (and have been involved in building paywalls at different orgs - NYT and The New Yorker). I know how the money from these directly supports journalism - salaries and the costs associated with any story.
If you are skipping paywalls a lot, I would encourage you to pay for a subscription to at least one or two news sites you respect - bonus points if its a small or medium local newsroom that benefits!
For me that has been: NYTimes, New Yorker, Wired, Teen Vogue, and my wife's hometown paper in Illinois.
There's a huge need for subscription bundles. I'd gladly pay $20/mo for access to a bunch of big names, even if I'm limited to like 60 articles per month combined across those sources.
Instead I just don't pay anyone, turn back when I encounter a paywall and look for someone's summary if I'm really interested.
Isn’t that the value-prop of Apple News?
There used to be an app called scroll (https://twitter.com/tryscroll?lang=en), which got bought by Twitter, which is now part of subscription, but only for the top articles. Informed.so is doing something similar but different: https://www.informed.so/
The problem with creating such a service is that most media houses believe their content is the best thing since sliced bread, so they often don't want to partner - even though most of their content isn't that unique. Of course, some publications do have unique content, e.g. NYT, Bloomberg.
I could see Artifact being an interesting company to tackle this, though (https://artifact.news/). They are already sending traffic to news sites and only serving what the user wants. If they now let me bypass paywalls for $20, that would be nice.
My personal experience with this has been that paying for a subscription still gets me inundated with ads and marketing (often more now that I'm on their official mailing list), is still inconvenient since I may not be logged in to every news site on every device where I may follow an article link, and leaves me to fight through dark patterns to unsubscribe, since a button to allow you to cancel online is clearly dark magic that has not yet been invented.
I do wish there was a better way for me to share an account across multiple news sites that let me properly pay for good journalism without these issues. I do subscribe to a very local news source that seems to handle this a lot better, but they also don't paywall (most) of their primary content.
In the meantime, I do find it strange that so many sites wish to gain the advertising advantage of having put an article on the web without actually providing that article. I have no issue with paid content, but when that content gets listed in search-engine results and social-media links like a web page, yet clicking on it does not behave like a web page, it feels like something has broken in the idea of the linkable World Wide Web.
You mention 13ft as another open source inspiration. How is Ladder improving on what 13ft does?
I did try 13ft. But it misses several points.
Ladder applies custom rules to inject code. It basically modifies the origin website to remove the paywall, and it rewrites (most of) the links and assets in the origin's HTML to avoid CORS errors by routing them through the local proxy - a sketch of that idea follows below.
Ladder uses Golang's fiber/fasthttp, which is significantly faster than Python (biased opinion).
Plus several small features like basic auth...
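A toy sketch of that rewriting idea (illustrative only - real HTML should go through a parser, and this is not Ladder's actual code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches href/src attributes that hold absolute http(s) URLs.
    var attrURL = regexp.MustCompile(`(href|src)="(https?://[^"]+)"`)

    // rewrite points absolute URLs back at the local proxy so the browser
    // loads every asset same-origin and CORS never comes into play.
    func rewrite(html, proxyBase string) string {
        return attrURL.ReplaceAllStringFunc(html, func(m string) string {
            parts := attrURL.FindStringSubmatch(m)
            return fmt.Sprintf(`%s="%s/%s"`, parts[1], proxyBase, parts[2])
        })
    }

    func main() {
        page := `<a href="https://example.com/a">read</a> <img src="https://example.com/i.png">`
        fmt.Println(rewrite(page, "http://localhost:8080"))
    }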
Ladder uses Golang's fiber/fasthttp, which is significantly faster than Python
I have a feeling that this performance difference is practically imperceptible to regular humans. It's like optimizing CPU performance when the bottleneck is the database.
Not for any publicly hosted instance, it isn't. We're not talking about the time it takes to perform one request, but about scalability: how many requests in parallel a small VM can handle when it's being used by the general public.
If the paywall is implemented in client code, then usually just disabling javascript for the site is enough to let you view it. If it is implemented server side, then there usually isn't a way around it without an account.
Open source makes it easy for the cat in the cat-and-mouse game, right?
There's no real cat & mouse game here (yet*) - sites don't do anything to mitigate this. Sites deliberately make their content available to robots to gain SEO traction: they're left with the choice of allowing this kind of bypass or hurting their own SEO.
* I say "yet" because there could conceivably be ways to mitigate this, but afaik most would involve individual deals/contracts between every search engine & every subscription website - Google's monopoly simplifies this somewhat, but there's not much of an incentive from Google's perspective to facilitate this at any scale.
Google publishes IP ranges for GoogleBot. You can also reverse-lookup the request IP address - the resolved domain should in turn resolve to the original address.
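Google documents that two-step check; a sketch of it (code mine, purely illustrative):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // isGooglebot reverse-resolves the IP, checks the domain, then
    // forward-resolves the name and confirms it maps back to the same IP.
    func isGooglebot(ip string) bool {
        names, err := net.LookupAddr(ip)
        if err != nil {
            return false
        }
        for _, name := range names {
            host := strings.TrimSuffix(name, ".")
            if !strings.HasSuffix(host, ".googlebot.com") &&
                !strings.HasSuffix(host, ".google.com") {
                continue
            }
            addrs, err := net.LookupHost(host)
            if err != nil {
                continue
            }
            for _, a := range addrs {
                if a == ip {
                    return true
                }
            }
        }
        return false
    }

    func main() {
        fmt.Println(isGooglebot("66.249.66.1")) // a known Googlebot address
    }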
Does anyone else remember 10 years ago when Google would penalize sites for serving different content to GoogleBot than to normal users? Those were the days.
It seemed to me like 12ft.io was useful for a couple of months, but then stopped being useful as they agreed to blacklist more and more URLs. I thought everybody switched to archive.is, which (so far) works 100% of the time, even if it is sometimes a pain in the butt.
Is there an open source version of archive.is?
Just webrecorder + magnolia and you'll get something similar.
Maybe even better: magnolia outperforms archive.is on paywalls
The operator of archive.is must constantly re-up on hacked credentials for wsj and nyt. Given this is a critical aspect of the service, it is not really feasible/useful to open source it.
In the README there is a WHY paragraph:
Freedom of information is an essential pillar of democracy and informed decision-making. While media organizations have legitimate financial interests, it is crucial to strike a balance between profitability and the public's right to access information. The proliferation of paywalls raises concerns about the erosion of this fundamental freedom, and it is imperative for society to find innovative ways to preserve access to vital information without compromising the sustainability of journalism.
For me this is grotesque. Democracy is in despair, and so is journalism. What exactly is this software doing to support journalism or democracy?
We live in a world where we have more misinformation and poor journalism every day, and less money in people's pockets to pay for good journalism. So this might start a more open discussion on how to finance journalism. And while those discussions go on, people can inform themselves with good journalism, which supports democracy.
For folks like me who have no idea what 12ft.io or 1ft.io are, they appear to be services for bypassing paywalls on websites.
Previous discussions of the service on HN:
Those were paywall-bypassing tools. 12ft.io was shut down one week ago; 1ft.io still works.
But I feel a bit uneasy letting someone inject code into the sites I view.
First of all congrats on the project and thank you for open sourcing it.
Freedom of information is an essential pillar of democracy
However, this reads as if this tool saves democracy by letting you bypass a crappy paywall on a site you visit once a year, and as if whoever wants to get paid for their published content online is an enemy of democracy.
Ironically, the rest of the paragraph you quoted from gives their reasoning why they believe this tool is needed beyond "whoever wants to get paid for their published content online is an enemy of democracy". Double-plus democracy and all that...
12ft was really good!
Indeed it was. Sad it's gone.
One downside was the lack of transparency: it was not clear which code was added or removed on the site you were looking at.
The README says "The author does not endorse or encourage any unethical or illegal activity."
Is it actually illegal anywhere to bypass a paywall?
Not sure about the paywalls, but it might be used for drive-by attacks or phishing.
I'm very new to this kind of service, but do you have to write your own rulesets for each site you want to bypass? The repo doesn't seem to include much...
Yes, the one I provide is still pretty empty. I plan to build one that can be used as a starting point or as a default.
This reminds me of the thread when 12ft was taken down.
Does anyone have any insight into how it would take Vercel hundreds of hours of support time? https://twitter.com/rauchg/status/1718680650067460138
My assumption here is that affected websites sent multiple, persistent support tickets and engaged in back and forth communication, as well as updates to the client, support team contacting engineering/legal/management/meetings on how to deal with 12ft.
Not relevant to the project, but I usually check for earlier versions of paywalled pages in the Wayback Machine (~75% success). I feel bad using these services (paywall removers), and a bit better checking archive.org.
Given a very different paywall model for Substack, what exactly would work for bypassing their paywalls?
Wouldn't we always require a paid account to fetch and cache the HTML (the Sci-Hub model)?
I tried the Docker image, and on the upside it's fairly easy to get running. But on the downside, I'm zero for two actually using it.
I tried a Bloomberg article which gave me a "suspicious activity from your IP, please fill out this captcha" page, only the captcha was broken and didn't load.
Then I tried a WSJ article which loaded basically the same couple of paragraphs that I could get for free, but did not load any of the rest of the content.
Really great and easy to use. I was trying to read an article that was on the front page of HN and couldn't due to paywall. Downloaded the binary and was reading it within 30 seconds. Awesome and very useful tool, thanks!
Slightly edited "Why":
Access to private property is an essential pillar of democracy and the safe proliferation of ideas. While property owners have legitimate financial interests, it is crucial to strike a balance between property and the public's right to access property. The proliferation of locks on doors raises concerns about the erosion of this fundamental freedom, and it is imperative for society to find innovative ways to preserve access to people's homes and workspaces without compromising the sustainability of property ownership. In a world where property should be shared and not commodified, locks should be critically examined to ensure that they do not undermine the principles of an open and informed society.
Sounds great, not just for paywalls, but for removing CORS as well:
Remove CORS headers from responses, assets, and images ...
I have noticed that on a lot of websites, if you stop the page loading at just the right moment (you have to be quick), the whole content will display without the paywall. And that's without any external tools. These kinds of tools seem, of course, much more convenient.
Create a browser bookmark and set this as the URL of the bookmark:
javascript:window.location.href="https://archive.is/latest/"+location.href
It will usually open up the archived version of article without the paywall.
I still miss outline.com
I use txtify.it
I get the feeling that these features should be part of a browser extension, the same way there are ad-block extensions. I guess the reason they're not is the author's "personal preference", or is there some technical reason?