
Show HN: Ladder, open source alternative to 12ft.io and 1ft.io

ktpsns
26 replies
1d4h

I get the feeling that these features should be part of a browser extension, the same way there are ad-blocking extensions. I guess the reason it is not is the author's "personal preference", or is there some technical reason?

sva_
14 replies
1d3h

these features should be part of a browser extension

You mean like Bypass Paywall Clean?

https://gitlab.com/magnolia1234/bypass-paywalls-chrome-clean

johnmaguire
10 replies
1d3h

Is there a Firefox version?

xipho
4 replies
1d3h

You don't even need to install it; just add it as an import line to UO (Google how). Game changing.

xaellison
3 replies
1d2h

What's UO? This would've been a great comment with a little more info :)

rustyminnow
2 replies
1d2h

https://ublockorigin.com/

After installing, go to "Filter Lists" > "Import..." and add the URL of the "list", which is actually from a different repo: https://gitlab.com/magnolia1234/bypass-paywalls-clean-filter...

Note: this apparently works for fewer sites than the linked extension.

sva_
0 replies
1d

Note: this apparently works for fewer sites than the linked extension.

Still, this is great to know because it can then be used on Firefox mobile.

gzer0
0 replies
1d2h

Thank you for the insight. I usually hesitate to install add-ons, but now I can avoid that step entirely based on your advice.

sva_
4 replies
1d3h

https://gitlab.com/magnolia1234/bypass-paywalls-firefox-clea...

It used to be in Mozilla's add-on store, but they removed it, so you have to install it via dev mode.

penguin_booze
1 replies
1d1h

No need for dev mode - signed XPIs are available from the releases: https://gitlab.com/magnolia1234/bypass-paywalls-firefox-clea....

sva_
0 replies
1d

Ah you're right, I confused it with Chrome

m-p-3
1 replies
1d1h

Or you can load this uBO filterlist, which should basically do the same thing as the extension

    https://gitlab.com/magnolia1234/bypass-paywalls-clean-filters/-/raw/main/bpc-paywall-filter.txt
bluish29
0 replies
1d

That's why I like reading HN comments. Thanks for that filter link

Beijinger
1 replies
1d

It doesn't work so well anymore. Better to use a bookmarklet:

javascript:location.href='https://archive.is/?run=1&url=%27+encodeURIComponent(documen...

yubiox
0 replies
20h10m

Not sure what happened here but the end of that is chopped off. Trying to fix it:

  javascript:location.href='https://archive.is/?run=1&url='+encodeURIComponent(document.location.href)
NelsonMinar
0 replies
1d

This works quite well and probably covers 90% of my needs. For the other 10% I still use archive.today or 12ft (RIP).

It's a shame Google won't let this add-on be in the store.

bilekas
8 replies
1d4h

I don't know for sure, but I would imagine there are more severe actions taken against circumventing paid material (content behind a paywall) than against free content supplemented by advertisements.

Edit: The Digital Millennium Copyright Act (DMCA) prohibits circumventing an effective technological means of control that restricts access to a copyrighted work. I guess that would apply here.

mckirk
3 replies
1d3h

Given how liberally the DMCA is applied, you definitely don't want to be on the wrong side of that.

I remember some guy who wrote a WoW bot and got sued under the DMCA, with the argument that his bot was circumventing the anti-cheat, and the anti-cheat could be seen as a "mechanism protecting copyrighted material" because it was safeguarding access to the game servers; the servers were generating parts of the game world (such as sounds) dynamically, and those were under copyright... Wild stuff.

judge2020
1 replies
1d2h

As far as I know, Section 1201 has never been prosecuted. Distribution of the copyrighted material is what gets focused on.

mckirk
0 replies
1d

This seems a good summary of the case I was talking about:

https://massivelyop.com/2020/02/28/lawful-neutral-cheating-c...

kkzz99
0 replies
1d

It happened to Honorbuddy, a very advanced bot for World of Warcraft made by a German company. The argument in relation to the DMCA was that the bot was circumventing Warden, the game's anti-cheat system. The legal battle was long, and they ultimately had to strip many features from the bot, until the company went under.

Aaargh20318
1 replies
1d2h

The Digital Millennium Copyright Act (DMCA) prohibits circumventing an effective technological means of control that restricts access to a copyrighted work. I guess that would apply here.

It doesn't if you're not in the US.

zeusk
0 replies
1d

Kim Dotcom believed so too; it didn't fare too well for him.

nottheengineer
0 replies
1d3h

Good old section 1201. The EFF has been fighting it for a while, but hasn't had much success unfortunately.

nerdbert
0 replies
1d2h

Isn't anything that can be circumvented ineffective?

Or, looking at it the other way, if you put a small sticker that says "do not do X" and even one person follows that, isn't that therefore an "effective" method?

overtomanu
1 replies
1d4h

There is the below extension for this purpose that I know of; I think there could be many more if we search for them.

Chrome and Firefox extension for removing paywalls: https://github.com/iamadamdev/bypass-paywalls-chrome

user764743
0 replies
1d1h

This extension is asking for a lot of permissions it shouldn't ask for

If you want an alternative that only requests permissions for sites with paywalls, this one is better: https://gitlab.com/magnolia1234/bypass-paywalls-firefox-clea...

some1else
25 replies
1d4h

Relevant: 12ft.io was banned by Vercel, taking down the developer's entire account with multiple other hosted projects & domains: https://twitter.com/thmsmlr/status/1718663563353755982

Edit: Access to other projects & domains was apparently restored some time after: https://twitter.com/thmsmlr/status/1719480558932148272

abofh
16 replies
1d3h

Lovely, the classic Google "ban the world" approach -- I've been desperately trying to move my client off of Vercel, and this might just be the gasoline.

threatofrain
5 replies
1d2h

One might consider Cloudflare as a very nice competitor to Vercel in terms of DX, although I suspect all companies use a ban the world approach, even banks.

MaKey
4 replies
1d2h

Not the best time to recommend Cloudflare.

JCharante
2 replies
1d1h

They just had a big outage. What’s the probability of having another so soon?

orphea
1 replies
1d1h

If outages don't depend on each other, the probability is the same.

explaininjs
0 replies
1d1h

Why would you assume independence? I'd expect an outage to put people "on edge" for a period of time following the outage, during which changes are scrutinized to a higher degree, and/or a greater engineering focus/budget is dedicated to reliability to reflect the changed business/image requirements.

judge2020
0 replies
1d2h

Maybe, but they've kept customers informed throughout the entire outage. https://www.cloudflarestatus.com/incidents/hm7491k53ppg

rgrieselhuber
4 replies
1d3h

Aside from this (which is already very shitty and would cause the same response in me) what are the issues you’re running into with Vercel?

abofh
3 replies
1d3h

- Support is failing us: I want my team to use you for Vercel support, but it isn't there.

- Support is failing our customers: when you fail, I end up reverse-depending your repo to tell us why it's failing. Just give us a clear answer and we all move away happy; bullshit and I go to Lambda, where I just accept it.

- EOD: Vercel makes engineers happy to bullshit, but gives operations teams nothing acceptable. I want a deliverable product.

Rauchg
1 replies
1d

Would love to dig into your support issues. Let me know: rauchg@vercel.com

abofh
0 replies
23h35m

And if the support team had done so, I'd have nothing to converse about :)

After digging upwards, additional support seems like an option delivered too late, and too far outside of "proper" channels - if you want a sanitized rant I can probably deliver it tomorrow, but too little, too late is where Vercel has landed with the operations team.

judge2020
0 replies
1d2h

FYI you have to use two line breaks to start a new line with (HN's) Markdown.

benjaminwootton
2 replies
1d1h

I followed this drama on Twitter. The author was breaking the terms of service and creating DMCA support burden for Vercel. They had proactively been in touch with him a few times to reach a solution.

I think it’s quite reasonable that they blocked the account rather than the project. You wouldn’t have got that level of service from big tech.

ensignavenger
0 replies
23h0m

Care to provide any links? The Twitter claim above is that there was no communication and the ban occurred on a Friday.

ComputerGuru
0 replies
23h36m

I think it’s quite reasonable that they blocked the account rather than the project.

I’m just responding to your last sentence: why would you go out of your way to say it is reasonable to block the account rather than the project?

I can understand locking the account just as the “lazy default” but I would not call it in any way reasonable - but you did, so I’m curious.

If that is reasonable, what would you consider unreasonable?

(Because to me, the obviously reasonable thing to do would be to block the project and not his entire account.)

lxe
0 replies
1d1h

Very disappointing that this was the path Vercel chose to take. This is something I would expect from Google or Amazon, but not a developer darling like Vercel. Seems like all companies shed their values in service of growth and capitalism at some point or another. A shame.

canadianfella
0 replies
1d3h

Why desperately?

paulgb
5 replies
1d3h

Taking down all his projects (not just 12ft) is heavy-handed, but otherwise Guillermo’s response in that thread seems pretty reasonable to me:

Hey Thomas. Your paywall-bypassing site broke our ToS and created hundreds of hours of support time spent on all the outreach from the impacted businesses.

Our support team reached out to you on Oct 14th to let you know this was unsustainable and to try to work with you.
hombre_fatal
4 replies
1d2h

The 12ft guy doesn’t look so great in that thread. He admits to ignoring the email (gosh I was busy mmkay?) and then argues that Vercel is lying about the extra work they had to do.

flutas
3 replies
1d2h

gosh I was busy mmkay?

Mischaracterization much? He was on vacation.

How many of us read every email that comes in for a personal project when we're halfway across the world and supposed to be relaxing?

paulgb
1 replies
1d1h

Sure, but if you go on vacation and don't check your email for two weeks, you can't really claim “no warning”. If two weeks isn’t sufficient notice because of vacation it’s fine to say so, but it’s not the same thing as “no warning” just because you’re not checking email.

stronglikedan
0 replies
1d

To take it even further, if you're a one man operation, not checking emails regarding your operation for two weeks is pure negligence, vacation or not.

jdminhbg
0 replies
23h34m

I might ignore personal project emails while I'm on vacation, but I also won't complain if one of those emails says my billing method is out of date and I come home and it's been turned off.

treyd
0 replies
1d1h

I don't know why anyone trusted Vercel in the first place. The vibes of VC money funding an unsustainable offering for a relatively niche market are so strong, it doesn't make any sense.

alwayslikethis
0 replies
1d3h

Shows the importance of controlling your own critical infrastructure, or at least not being dependent on it for critical functions. Other examples include GitHub and Discord, both having shown the tendency to arbitrarily ban users with little recourse.

roydivision
12 replies
1d4h

Is it just me or has 12ft become less and less effective? I rarely get through with it these days.

user_7832
9 replies
1d4h

Their policies have apparently… changed. They accept donations to not have your website bypassed. Archive.org is much better.

Edit: apparently it is down now.

402: PAYMENT_REQUIRED Code: DEPLOYMENT_DISABLED ID: fra1::8wkv2-1699275385535-39dedae23d6a

i67vw3
6 replies
1d3h

Archive.today never fails, compared to Archive.org or various browser extensions.

For removing paywalls, my ranking is 12ft < Archive.org < Archive.today.

giancarlostoro
3 replies
1d3h

For some reason all the alternative "archive.XYZXDHWIQHDQ" type of sites always give me a captcha page, and I am never able to proceed. I'm assuming it's to do with the Cloudflare DNS. Well, if they don't care to fix it on their end, I don't care to use their service.

i67vw3
1 replies
1d3h
flutas
0 replies
1d2h

IIRC:

It's kind of an "everybody sucks" situation and there are no real winners.

Archive.[whatever] set up a server system to give you access from a country not your own, so that abusers have a harder time archiving illegal content and then instantly reporting it to get the entire archive taken down. He uses EDNS to do this, but CF doesn't provide EDNS since it's a privacy issue to them.

So archive.[whatever] doesn't work with CF DNS because he doesn't want to risk bad actors being able to take down the archive.

Sensible reasons on both sides, especially for a service like archive.[whatever], and the real losers in this situation are the users.

PawgerZ
0 replies
1d2h

Copying my previous comment over because I found a fix that works for me:

There's some issue with DNS over HTTPS, so you have to whitelist their sites in your settings, or turn off DNS over HTTPS (which I don't recommend).

To whitelist, on Firefox: Hamburger menu > settings > privacy and security > DNS over HTTPS > Manage exceptions > Add "archive.is", "archive.ph", and "archive.today"

ryeights
1 replies
20h47m

For those on mobile it was the opposite, since the archive sites only show you the desktop sites

gnicholas
0 replies
20h40m

Does reader mode work on archive sites?

jdiff
1 replies
1d3h

Is it donations they accept or legal threats?

ProllyInfamous
0 replies
1d3h

Yes.

snarkyturtle
0 replies
1d3h

Before they went down, it seemed that many big publishers had gotten the owner to disable it for their sites. Either that, or the sites learned to actually not send their articles unless the user is logged in (and didn't care about Googlebot not scanning them).

In my experience it was mostly just an effective way to get through Substack/Medium.

ams92
0 replies
1d4h

I've rarely found it able to skip a paywall; I gave up after trying a few times.

janejeon
8 replies
1d3h

Really dummy question: how do services like this work? As in, how do they bypass these paywalls?

The obvious thing is to spoof Googlebot, but site owners can check that the request isn't coming from a Google-published IP and see that it's a fake, right?

Fnoord
3 replies
1d1h

Some possible clues:

https://github.com/kubero-dev/ladder#environment-variables

USER_AGENT - User agent to emulate: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

X_FORWARDED_FOR - IP forwarder address: 66.249.66.1

RULESET - URL to a ruleset file: https://raw.githubusercontent.com/kubero-dev/ladder/main/rul... or /path/to/my/rules.yaml
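
For illustration, a minimal Go sketch (not Ladder's actual code) of what the first two settings amount to - fetching a page while presenting the Googlebot User-Agent and a Google crawler address in X-Forwarded-For; the target URL is a placeholder:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Hypothetical target URL, used only for illustration.
        req, err := http.NewRequest("GET", "https://example.com/article", nil)
        if err != nil {
            panic(err)
        }
        // Mirror the USER_AGENT and X_FORWARDED_FOR values listed above.
        req.Header.Set("User-Agent", "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")
        req.Header.Set("X-Forwarded-For", "66.249.66.1")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }
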
janejeon
1 replies
1d1h

Oh wow... I'm surprised that's enough. When I was researching scraping protection bypass, you had to do some real crazy stuff with the browser instance + using residential IPs at a minimum...

2cpu1container
0 replies
1d

That's not the full story. It works on many sites, but some (ft.com, for example) have more severe countermeasures against bypassing the paywall. Therefore Ladder modifies the HTML served from the origin to remove them.

Those rules still need to be built up (by me or the open-source community).

ComputerGuru
0 replies
23h30m

I don't know of any off-the-shelf product that respects X_FORWARDED_FOR unless the current request IP originates from a whitelisted (or LAN) address.
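
For context, a generic Go sketch of the kind of trust check being described - honoring X-Forwarded-For only when the direct peer is a loopback/private address or an explicitly whitelisted proxy. This is an illustration, not any particular product's logic:

    package main

    import (
        "fmt"
        "net"
        "net/http"
        "net/http/httptest"
        "strings"
    )

    // clientIP honors X-Forwarded-For only when the direct TCP peer is trusted.
    func clientIP(r *http.Request, trustedProxies []string) string {
        host, _, _ := net.SplitHostPort(r.RemoteAddr)
        peer := net.ParseIP(host)
        trusted := peer != nil && (peer.IsLoopback() || peer.IsPrivate())
        for _, p := range trustedProxies {
            if peer != nil && peer.Equal(net.ParseIP(p)) {
                trusted = true
            }
        }
        if xff := r.Header.Get("X-Forwarded-For"); trusted && xff != "" {
            // The left-most entry is conventionally the original client.
            return strings.TrimSpace(strings.Split(xff, ",")[0])
        }
        return host // otherwise fall back to the real peer address
    }

    func main() {
        req := httptest.NewRequest("GET", "/", nil)
        req.RemoteAddr = "203.0.113.7:54321" // untrusted public peer
        req.Header.Set("X-Forwarded-For", "66.249.66.1")
        fmt.Println(clientIP(req, nil)) // prints 203.0.113.7, ignoring the spoofed header
    }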

calflegal
2 replies
1d2h

Related: if this is how they work, why doesn't Google offer a private service to allow publishers to have content indexed while still protected?

matsemann
1 replies
1d2h

It used to be against the guidelines to serve different content to Google vs. what users would see. Not sure if that's still the case, but I don't think it's in Google's interest to return a result that the user actually can't access.

ComputerGuru
0 replies
23h27m

I'm not aware that this policy has changed. What has changed is that Google will rank results it can't (officially) index without showing their content. I'm guessing they do shadow-index them, but use the whole "if you outwardly can't tell they did, then it's as if they didn't" trick that C++ compilers use to get away with insane optimizations.

narinxas
0 replies
1d3h

site owners can check that the request isn't coming from a Google-published IP and see that it's a fake, right?

Just because they can doesn't mean they will... also, most "site owners" are (by this point) completely different people from the "site operators" (whom I take to be the engineers who indeed can check these IP things).

donohoe
4 replies
1d1h

I use services like this as I often skip news site paywalls because I just can't afford, nor is it practical, to have so many subscriptions.

That said, I work in news media (and have been involved in building paywalls at different orgs - NYT and New Yorker). I know how the money from these directly supports journalism - salaries and the costs associated with any story.

If you are skipping paywalls a lot, I would encourage you to pay for a subscription to at least one or two news sites you respect - bonus points if it's a small or medium local newsroom that benefits!

For me that has been: NYTimes, New Yorker, Wired, Teen Vogue, and my wife's hometown paper in Illinois.

mejthemage
2 replies
1d1h

There's a huge need for subscription bundles. I'd gladly pay $20/mo for access to a bunch of big names, even if I'm limited to like 60 articles per month combined across those sources.

Instead I just don't pay anyone, turn back when I encounter a paywall and look for someone's summary if I'm really interested.

orpheansodality
0 replies
23h57m

Isn’t that the value-prop of Apple News?

QkPrsMizkYvt
0 replies
23h26m

There used to be an app called Scroll (https://twitter.com/tryscroll?lang=en), which got bought by Twitter; it's now part of the Twitter subscription, but only for top articles. Informed.so is doing something similar but different: https://www.informed.so/

The problem with creating such a service is that most media houses believe their content is the best thing since sliced bread, and thus they often don't want to partner - even though most of their content isn't that unique. Of course, some publications do have unique content, e.g. NYT and Bloomberg.

I could see Artifact being an interesting company to tackle this, though (https://artifact.news/). They are already sending traffic to news sites and only serving what the user wants. If they now let me bypass paywalls for $20, that would be nice.

stetrain
0 replies
22h37m

My personal experience with this has been that paying for a subscription still gets me inundated with ads and marketing (often more now that I'm on their official mailing list), is still inconvenient since I may not be logged in to every news site on every device where I may follow an article link, and leaves me to fight through dark patterns to unsubscribe, since a button to allow you to cancel online is clearly dark magic that has not yet been invented.

I do wish there was a better way for me to share an account across multiple news sites that let me properly pay for good journalism without these issues. I do subscribe to a very local news source that seems to handle this a lot better, but they also don't paywall (most) of their primary content.

In the meantime I do find it strange that so many sites wish to gain the advantage of advertising that they have put up an article on the web, without actually providing that article. I have no issue with paid content, but when that content gets listed in search engine results and social media links like a web page, yet clicking on it does not behave like a web page, it feels like something has broken in the idea of the linkable World Wide Web.

SigmundurM
4 replies
1d4h

You mention 13ft as another open source inspiration. How is Ladder improving on what 13ft does?

2cpu1container
3 replies
1d4h

I did try 13ft, but it misses several points.

Ladder applies custom rules to inject code. It basically modifies the origin website to remove the paywall. It rewrites (most of) the links and assets in the origin's HTML to avoid CORS errors by routing them through the local proxy.

Ladder uses Go's fiber/fasthttp, which is significantly faster than Python (biased opinion).

Plus several small features like basic auth...
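
To illustrate the link and asset rewriting mentioned above - a rough sketch only, not Ladder's actual implementation - absolute URLs in the fetched HTML can be rewritten so the browser requests everything back through the local proxy, which is what sidesteps the CORS errors:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Rewrite absolute href/src URLs so they are fetched via the local proxy.
    var absURL = regexp.MustCompile(`(href|src)="(https?://[^"]+)"`)

    func rewriteHTML(page []byte, proxyBase string) []byte {
        return absURL.ReplaceAll(page, []byte(`${1}="`+proxyBase+`/${2}"`))
    }

    func main() {
        in := []byte(`<a href="https://example.com/story"><img src="https://cdn.example.com/x.png"></a>`)
        fmt.Println(string(rewriteHTML(in, "http://localhost:8080")))
        // <a href="http://localhost:8080/https://example.com/story">...
    }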

withinboredom
1 replies
1d3h

Ladder uses Go's fiber/fasthttp, which is significantly faster than Python

I have a feeling that this performance difference is practically imperceptible to regular humans. It's like optimizing CPU performance when the bottleneck is the database.

ComputerGuru
0 replies
23h32m

Not for any publicly hosted instance, it's not. We're not talking about the time it takes to perform one request, but the scalability it affords a small VM in handling many requests in parallel when it is being used by the general public.

oh_sigh
0 replies
22h44m

If the paywall is implemented in client code, then usually just disabling JavaScript for the site is enough to let you view it. If it is implemented server-side, then there usually isn't a way around it without an account.

pacifika
3 replies
1d3h

Open source makes it easy for the cat in the cat-and-mouse game, right?

lucideer
2 replies
1d3h

There's no real cat & mouse game here (yet*) - sites don't do anything to mitigate this. Sites deliberately make their content available to robots to gain SEO traction: they're left with the choice of allowing this kind of bypass or hurting their own SEO.

* I say "yet" because there could conceivably be ways to mitigate this, but afaik most would involve individual deals/contracts between every search engine & every subscription website - Google's monopoly simplifies this somewhat, but there's not much of an incentive from Google's perspective to facilitate this at any scale.

tiagod
1 replies
1d2h

Google publishes IP ranges for Googlebot. You can also do a reverse lookup on the request IP address; the resolved domain should in turn resolve back to the original address.
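
For anyone curious, a minimal Go sketch of that forward-confirmed reverse DNS check (assuming the usual googlebot.com/google.com reverse names; real code should also consult Google's published IP ranges):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // isGooglebot reverse-resolves the peer IP, requires a googlebot.com or
    // google.com hostname, then resolves that hostname forward and confirms
    // it maps back to the same IP.
    func isGooglebot(ip string) bool {
        names, err := net.LookupAddr(ip)
        if err != nil {
            return false
        }
        for _, name := range names {
            host := strings.TrimSuffix(name, ".")
            if !strings.HasSuffix(host, ".googlebot.com") && !strings.HasSuffix(host, ".google.com") {
                continue
            }
            addrs, err := net.LookupHost(host)
            if err != nil {
                continue
            }
            for _, a := range addrs {
                if a == ip {
                    return true
                }
            }
        }
        return false
    }

    func main() {
        fmt.Println(isGooglebot("66.249.66.1")) // one of Google's published crawler addresses
    }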

ForkMeOnTinder
0 replies
1d2h

Does anyone else remember 10 years ago when Google would penalize sites for serving different content to GoogleBot than to normal users? Those were the days.

karaterobot
3 replies
19h41m

It seemed to me like 12ft.io was useful for a couple of months, but then stopped being useful as they agreed to blacklist more and more URLs. I thought everybody switched to archive.is, which (so far) works 100% of the time, even if it is sometimes a pain in the butt.

Axsuul
2 replies
19h30m

Is there an open source version of archive.is?

raxi
0 replies
1h30m

Just webrecorder + magnolia and you'll get something similar.

Maybe even better: magnolia outperforms archive.is on paywalls

metadat
0 replies
18h44m

The operator of archive.is must constantly re-up on hacked credentials for WSJ and NYT. Given this is a critical aspect of the service, it is not really feasible/useful to open-source it.

j-a-a-p
2 replies
1d

In the README there is a WHY paragraph:

Freedom of information is an essential pillar of democracy and informed decision-making. While media organizations have legitimate financial interests, it is crucial to strike a balance between profitability and the public's right to access information. The proliferation of paywalls raises concerns about the erosion of this fundamental freedom, and it is imperative for society to find innovative ways to preserve access to vital information without compromising the sustainability of journalism.
j-a-a-p
1 replies
1d

For me this is grotesque. Democracy is in despair, and so is journalism. What exactly is this software doing to support journalism or democracy?

2cpu1container
0 replies
1d

We live in a world where we have more misinformation and poor journalism every day, and less money in people's pockets to pay for good journalism. So this might start a more open discussion on how to finance journalism. And while those discussions are still going on, people can inform themselves with good journalism, which supports democracy.

fader
2 replies
1d4h

For folks like me who have no idea what 12ft.io or 1ft.io are, they appear to be services for bypassing paywalls on websites.

alberto_ol
0 replies
1d2h

Previous discussions of the service on HN:

https://hn.algolia.com/?q=12ft.io

2cpu1container
0 replies
1d1h

Those were paywall-bypassing tools. 12ft.io was shut down one week ago; 1ft.io still works.

But I feel a bit uneasy letting someone inject code into the sites I view.

serial_dev
1 replies
20h0m

First of all congrats on the project and thank you for open sourcing it.

Freedom of information is an essential pillar of democracy

However, this reads as if this tool saves democracy by letting you bypass a crappy paywall on a site you visit once a year, and as if whoever wants to get paid for their published content online is an enemy of democracy.

UncleEntity
0 replies
18h41m

Ironically, the rest of the paragraph you quoted from gives their reasoning why they believe this tool is needed beyond "whoever wants to get paid for their published content online is an enemy of democracy". Double-plus democracy and all that...

jwmoz
1 replies
1d2h

12ft was really good!

2cpu1container
0 replies
1d1h

Indeed it was. Sad it's gone.

One downside was the lack of transparency: it was not clear which code was added or removed on the site you were looking at.

gumby
1 replies
1d1h

The README says "The author does not endorse or encourage any unethical or illegal activity."

Is it actually illegal anywhere to bypass a paywall?

2cpu1container
0 replies
1d

Not sure about the paywalls, but it might be used for "drive-by attacks" or phishing.

fyzix
1 replies
1d3h

I'm very new to this kind of service, but do you have to write your own rulesets for each site you want to bypass? The repo doesn't seem to include much...

2cpu1container
0 replies
1d1h

Yes, the one I provide is still pretty empty. I plan to build one that can be used as a starting point or as a default.

benatkin
1 replies
21h17m

This reminds me of the thread when 12ft was taken down.

Does anyone have any insight into how it would take Vercel hundreds of hours of support time? https://twitter.com/rauchg/status/1718680650067460138

someotherperson
0 replies
21h12m

My assumption here is that affected websites sent multiple, persistent support tickets and engaged in back-and-forth communication, as well as updates to the client, with the support team contacting engineering/legal/management and holding meetings on how to deal with 12ft.

szaboat
0 replies
1d3h

Not relevant to the project, but I usually check for earlier versions of paywalled pages in the Wayback Machine (~75% success). I felt bad using these services (paywall removers), and feel a bit better checking archive.org.

rounakdatta
0 replies
1d1h

Given Substack's very different paywall model, what exactly would work for bypassing their paywalls?

Wouldn't we always need a paid account to cache the HTML through (the Sci-Hub model)?

nfriedly
0 replies
14h3m

The Docker image, on the upside, is fairly easy to get running. But on the downside, I'm zero for two actually using it.

I tried a Bloomberg article which gave me a "suspicious activity from your IP, please fill out this captcha" page, only the captcha was broken and didn't load.

Then I tried a WSJ article which loaded basically the same couple of paragraphs that I could get for free, but did not load any of the rest of the content.

cooper_ganglia
0 replies
19h40m

Really great and easy to use. I was trying to read an article that was on the front page of HN and couldn't due to a paywall. Downloaded the binary and was reading it within 30 seconds. Awesome and very useful tool, thanks!

boplicity
0 replies
23h16m

Slightly edited "Why":

Access to private property is an essential pillar of democracy and the safe proliferation of ideas. While property owners have legitimate financial interests, it is crucial to strike a balance between property and the public's right to access property. The proliferation of locks on doors raises concerns about the erosion of this fundamental freedom, and it is imperative for society to find innovative ways to preserve access to people's homes and workspaces without compromising the sustainability of property ownership. In a world where property should be shared and not commodified, locks should be critically examined to ensure that they do not undermine the principles of an open and informed society.

arendtio
0 replies
1d

Sounds great, not just for paywalls, but for removing CORS as well:

Remove CORS headers from responses, assets, and images ...
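
As a rough illustration of that feature (a generic reverse-proxy sketch, not Ladder's code; the origin URL is a placeholder), the proxy can strip the response headers that restrict cross-origin use and add a permissive one:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Placeholder origin for illustration.
        origin, _ := url.Parse("https://example.com")
        proxy := httputil.NewSingleHostReverseProxy(origin)
        proxy.ModifyResponse = func(resp *http.Response) error {
            // Drop headers that restrict cross-origin use of the proxied content.
            for _, h := range []string{
                "Content-Security-Policy",
                "Cross-Origin-Resource-Policy",
                "X-Frame-Options",
            } {
                resp.Header.Del(h)
            }
            resp.Header.Set("Access-Control-Allow-Origin", "*")
            return nil
        }
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", proxy))
    }
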
TanguyN
0 replies
20h56m

I have noticed that on a lot of websites, if you stop the page loading at just the right moment (you have to be quick), the whole content will display without the paywall. And that's without any external tools. These kinds of tools seem, of course, much more convenient.

KoftaBob
0 replies
1d1h

Create a browser bookmark and set this as the URL of the bookmark:

javascript:window.location.href="https://archive.is/latest/"+location.href

It will usually open up the archived version of the article without the paywall.

JustinGoldberg9
0 replies
1d1h

I still miss outline.com

I use txtify.it