Some time ago, I took the following actions on my business website:

  1. Renamed the article directory to articles.
  2. Copied all the *.htm files to *.aspx files, in preparation for migrating the content of the statically written HTML files to dynamically generated pages.
  3. Set up redirection from each *.htm file to its corresponding *.aspx file (see the sketch after this list).
  4. Updated all the files containing references to article, changing them to articles, and changed the .htm links to .aspx.
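
Since the pages were moving onto ASP.NET anyway, the per-file redirection can be expressed in code. Here is a minimal sketch, assuming the *.htm requests are mapped to the ASP.NET pipeline in IIS; the module name is hypothetical, and in practice the same thing can be configured directly in the IIS admin console:

```csharp
using System;
using System.Web;

// Hypothetical IHttpModule: permanently redirects any *.htm URL
// to its *.aspx twin, e.g. /articles/foo.htm -> /articles/foo.aspx.
public class HtmToAspxRedirectModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        string path = app.Request.Path;

        if (path.EndsWith(".htm", StringComparison.OrdinalIgnoreCase))
        {
            // Swap the extension and issue a 301, so well-behaved
            // robots update their index instead of re-crawling .htm.
            string target = path.Substring(0, path.Length - 4) + ".aspx";
            app.Response.StatusCode = 301;
            app.Response.RedirectLocation = target;
            app.Response.End();
        }
    }

    public void Dispose() { }
}
```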

In an ideal world,

  1. a web browser browsing to the article directory receives a 404 (no such resource).
  2. a web browser browsing to the main website sees the updated content, so clicking on any articles link goes to the articles directory and retrieves the appropriate content.

In our world, robots insist on caching outdated content, even though my entire business website has an expiry of one day. What this means is that if a particular page is accessed today, then at any time before the same time tomorrow the browser can use its cache to display the content; if the page is accessed after that, the browser has to re-fetch (or refresh) the page from the website.
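
For reference, here is roughly how a one-day expiry like this might be emitted from an ASP.NET page. This is a sketch of the policy just described, not necessarily how my site sets it (IIS can also apply an expiry server-wide, and the page name here is hypothetical):

```csharp
using System;
using System.Web;
using System.Web.UI;

// Sketch: emit "Expires: <now + 1 day>", so a browser may serve
// its cached copy until then and must re-fetch afterwards.
public partial class ArticlePage : Page   // hypothetical page name
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.Cache.SetExpires(DateTime.UtcNow.AddDays(1));
        Response.Cache.SetCacheability(HttpCacheability.Public);
    }
}
```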

Worse still, a particular web robot named ZyBorg insists on checking the article directory more than a month after I migrated its contents. In order to redirect this particular web robot, I had to

  1. create a directory named article, and
  2. set URL redirection on article to point to articles (a sketch follows this list).
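
The redirection itself was configured in IIS, but for illustration, here is roughly what the equivalent directory-level redirect looks like as a hypothetical Global.asax handler, assuming ASP.NET sees the request:

```csharp
using System;
using System.Web;

// Hypothetical Global.asax code-behind: permanently redirects
// anything under /article/ to the same path under /articles/.
public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        string path = Request.Path;
        if (path.StartsWith("/article/", StringComparison.OrdinalIgnoreCase))
        {
            string target = "/articles/" + path.Substring("/article/".Length);
            Response.StatusCode = 301;
            Response.RedirectLocation = target;
            Response.End();
        }
    }
}
```

Note that this hop still leaves the old .htm extension in place, which is why the .htm-to-.aspx redirect then fires as a second hop, as described next.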

I then tested the URL redirection by requesting http://mysite/article/somefile.htm, and IIS performed two redirections: first to http://mysite/articles/somefile.htm, then to http://mysite/articles/somefile.aspx.
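
An easy way to watch each hop of a redirect chain like this is to make the requests by hand with automatic redirection turned off. A small sketch, reusing the test URL from above:

```csharp
using System;
using System.Net;

// Follows a redirect chain one hop at a time, printing each
// status code, by disabling the client's automatic redirects.
class RedirectProbe
{
    static void Main()
    {
        var url = new Uri("http://mysite/article/somefile.htm");
        while (true)
        {
            var req = (HttpWebRequest)WebRequest.Create(url);
            req.AllowAutoRedirect = false;

            using (var resp = (HttpWebResponse)req.GetResponse())
            {
                Console.WriteLine("{0} -> {1}", url, (int)resp.StatusCode);
                string location = resp.Headers["Location"];
                if (location == null) break;    // final response, no more hops
                url = new Uri(url, location);   // resolve a relative Location
            }
        }
    }
}
```

Run against the chain above, this should print a redirect status (301 or 302, depending on how IIS is configured) for each of the two hops, then finish on the final 200 for the .aspx page.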

So now, ZyBorg should be pretty happy.