From the earliest days of the web, users have been aware of the fickleness of linking to content. In some ways, 1998 was a simpler time for the Internet. In other ways, like basic website design principles, everything old is new again. Jakob Nielsen, writing "Fighting Linkrot" in 1998, reported on a then-recent survey suggesting that 6% of links on the web were broken. The advice has not changed since: run a link validator on your site regularly, update or remove broken links, and set up redirects for links that do change. Nielsen's mantra was "you are not allowed to break any old links."

Several years later, partly in response to Nielsen, John S. Rhodes wrote a very interesting piece called "Web Sites That Heal." Rhodes was interested in the causes of link rot and listed several technological and habitual causes. These included the growing use of Content Management Systems (CMSs), which relied on back-end databases and server-side scripting that generated unreliable URLs, and the growing complexity of websites, which was leading to sloppy information architecture. On the behavioral side, website owners were satisfied to tell their users to "update their bookmarks," websites were not tested for usability, content was seen as temporary, and many website owners were simply apathetic about link rot. Rhodes also noted the issue of government censorship and filtering, though he did not foresee the major way in which government would obfuscate old web pages, which will be discussed below. Rhodes made a pitch for a web server tool that would rely on the Semantic Web and allow websites to talk to each other automatically to resolve broken links on their own. Although that approach has not taken off, other solutions to the problem of link rot are gaining traction.
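The regular link validation Nielsen recommended is straightforward to automate. The sketch below, in Python's standard library, shows the core of such a check: issue a HEAD request for each URL and report those that fail. The function names and the idea of crawling a fixed URL list are illustrative assumptions, not a description of any particular validator tool.

```python
# Minimal link-validator sketch, assuming a known list of URLs to check.
# A production tool would crawl the site to discover links and retry
# transient failures before reporting a link as broken.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def check_link(url: str, timeout: float = 5.0) -> tuple[str, str]:
    """Return (url, status): 'ok', an HTTP error code, or 'unreachable'."""
    req = Request(url, method="HEAD",
                  headers={"User-Agent": "link-checker-sketch/0.1"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return (url, "ok" if resp.status < 400 else str(resp.status))
    except HTTPError as e:
        return (url, str(e.code))       # e.g. '404' for a rotted link
    except URLError:
        return (url, "unreachable")     # DNS failure, timeout, refused, etc.

def report_broken(urls):
    """Yield only the links that should be updated, redirected, or removed."""
    for url, status in map(check_link, urls):
        if status != "ok":
            yield url, status
```

Run on a schedule (e.g. via cron), a report like this surfaces exactly the links Nielsen says must be fixed rather than abandoned, ideally with a server-side redirect from the old URL to the new one.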
White, Justin. "Link Rot, Reference Rot, and Link Resolvers" (2019). University Library Publications and Presentations. 1.
Creative Commons License
This work is licensed under a Creative Commons Attribution-Share Alike 4.0 License.