
Should Archive.org Ignore Robots.txt Directives And Cache Everything?

Archive.org argues that robots.txt files are geared toward search engines, and it now plans instead to represent the web “as it really was, and is, from a user’s perspective.”
From the announcement: “We have also seen an upsurge of the use of robots.txt files to remove entire domains from search engines when they transition from a live website into a parked domain, which has historically also removed the entire domain from view in the Wayback Machine… We receive inquiries and complaints on these ‘disappeared’ sites almost daily.”
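To see why a parked domain erases itself from the Wayback Machine, consider the kind of blanket robots.txt such domains typically serve. The following minimal Python sketch (the domain, user-agent string, and file contents are illustrative assumptions, not taken from the announcement) uses the standard urllib.robotparser module to show that a site-wide disallow shuts out any crawler that honors the file, archive crawlers included:

from urllib.robotparser import RobotFileParser

# Illustrative robots.txt of the kind a parked domain often serves:
# a blanket disallow addressed to every crawler, not just search engines.
PARKED_ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

rp = RobotFileParser()
rp.parse(PARKED_ROBOTS_TXT.splitlines())

# "ia_archiver" is a user-agent string long associated with the Internet
# Archive's crawler; under "User-agent: *" any crawler gets the same answer.
print(rp.can_fetch("ia_archiver", "https://example.com/"))          # False
print(rp.can_fetch("ia_archiver", "https://example.com/any/page"))  # False

Because the rule covers every path, honoring it retroactively hides the domain’s entire archived history, which is the behavior Archive.org says prompts daily complaints.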
In response, Slashdot reader Lauren Weinstein writes:
We can stipulate at the outset that the venerable Internet Archive and its associated systems like the Wayback Machine have done a lot of good for many years, for example by providing chronological archives of websites that have chosen to participate in their efforts. But now, it appears that the Internet Archive has joined the dark side of the Internet, by announcing [that it will no longer honor robots.txt directives].