

It works well because they use paid accounts to scrape a bunch of paywalled sites, which is why publishers are trying to figure out who runs it.
It’s completely untrustworthy now that they’ve shown that they can (and do) edit archived pages.


Why do you need an archive of Wikipedia, though? Each page retains its entire history, so you can easily go back to old versions without using a third-party site (especially one that DDoSes people).
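Old revisions are also exposed through the public MediaWiki API. A minimal sketch in Python (the page title and revision count here are just illustrative):

import requests

# List the ten most recent revisions of a page via the MediaWiki API.
resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "prop": "revisions",
        "titles": "Python (programming language)",  # illustrative page title
        "rvprop": "ids|timestamp|user",
        "rvlimit": 10,
        "format": "json",
    },
    headers={"User-Agent": "revision-history-sketch/0.1"},
)
resp.raise_for_status()
page = next(iter(resp.json()["query"]["pages"].values()))
for rev in page["revisions"]:
    # Any revision ID can be viewed at https://en.wikipedia.org/w/index.php?oldid=<revid>
    print(rev["revid"], rev["timestamp"], rev["user"])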
Wikimedia also provide downloads of the whole of Wikipedia, including page history. You can easily have your own copy of the entirety of Wikipedia if you want to, as long as you’ve got enough disk space and patience to download it.
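For instance, here is a minimal sketch of streaming one file from dumps.wikimedia.org to disk. This grabs the current-revisions articles dump; the full-history dumps follow the same URL layout but are split across many much larger files:

import requests

# Stream one file from the public Wikimedia dumps to disk.
url = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"
with requests.get(url, stream=True, headers={"User-Agent": "dump-fetch-sketch/0.1"}) as r:
    r.raise_for_status()
    with open("enwiki-latest-pages-articles.xml.bz2", "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):  # 1 MiB at a time
            f.write(chunk)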
Edit: I’m an idiot but I’m leaving this comment here. I didn’t realise you meant dead links on Wikipedia, not links to Wikipedia.
I understand now. I completely missed the point.