I was just wondering how big the internet really is. Billions of web pages scattered across the world, each served from its own web server. I was pretty excited when I noticed that Google keeps a cached copy of the pages it crawls while indexing them.
While I was Googling to learn how Google manages this (and wondering how it gets away with keeping copyrighted content on its servers), I found that http://www.archive.org does it too. If a site's latest update removed something you needed, that's the place to look for the old version of the site :). Their homepage even says they've archived 250 billion pages!
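Archive.org actually exposes a public "availability" API for looking up the closest archived snapshot of a URL. A minimal sketch in Python, assuming that endpoint; the helper names here are my own:

```python
import json
from urllib.parse import urlencode

# archive.org's Wayback Machine availability endpoint
WAYBACK_API = "https://archive.org/wayback/available"

def availability_url(page_url, timestamp=None):
    """Build a query URL for the Wayback availability API (helper name is mine)."""
    params = {"url": page_url}
    if timestamp:
        # optional YYYYMMDDhhmmss; the API returns the snapshot closest to it
        params["timestamp"] = timestamp
    return WAYBACK_API + "?" + urlencode(params)

def closest_snapshot(api_response_json):
    """Extract the closest snapshot URL from the API's JSON reply, or None."""
    snap = json.loads(api_response_json).get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None
```

Fetching `availability_url("example.com")` with something like `urllib.request.urlopen` returns a small JSON blob whose `archived_snapshots.closest.url` field points at the archived copy you can load in a browser.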
I used Google's cache to check my college results on release day. My university's site never loads when results are announced. :-| All I did was open the cached version of the results page and look up my result there. It works pretty well. :D