Building the crawler was the easiest part of this project.
All this crawler does is take a seed blog URL (my blog), run through all the links on its front page, and store the ones that look like blog-post URLs. I assume that every blog in the world is linked to from at least one other blog, so all of them will eventually get indexed if the spider is given enough time and memory.
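To make the idea concrete, here's a minimal sketch of the link-scanning step, using only the Python standard library. The `looks_like_post` heuristic (matching a date or a `/blog/`-style path segment) is my own assumption about what a blog-post URL looks like, not the only way to do it:

```python
import re
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical heuristic: URLs containing a YYYY/MM date or a
# /blog/ or /post(s)/ path segment are treated as blog posts.
POST_PATTERN = re.compile(r"/(\d{4}/\d{2}|blog|posts?)/", re.IGNORECASE)

def looks_like_post(url):
    return bool(POST_PATTERN.search(url))

def crawl_front_page(html, base_url):
    """Extract absolute links from a page and keep the post-like ones."""
    parser = LinkExtractor()
    parser.feed(html)
    absolute = [urljoin(base_url, href) for href in parser.links]
    return [u for u in absolute if looks_like_post(u)]

# Demo on an inlined sample page instead of a live fetch.
sample = """
<html><body>
  <a href="/about">About</a>
  <a href="/blog/my-first-post">My first post</a>
  <a href="https://example.com/2023/05/hello-world">Hello world</a>
</body></html>
"""
print(crawl_front_page(sample, "https://example.com"))
# → ['https://example.com/blog/my-first-post',
#    'https://example.com/2023/05/hello-world']
```

A real spider would fetch each page with `urllib.request` (or `requests`), queue the discovered links, and track visited URLs to avoid loops.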
This is the code for the crawler. It's in Python and quite straightforward. Please run through it and let me know if there is any way to optimise it further: