RezwanAhmed & His Team || Software Engineer

Basic Search Engine Technologies

Component I: Crawler/Spider/Robot

• Building a “toy crawler” is easy
   –Start with a set of “seed pages” in a priority queue
   –Fetch pages from the web
   –Parse fetched pages for hyperlinks; add them to the queue
   –Follow the hyperlinks in the queue
• A real crawler is much more complicated…
   –Robustness (server failures, spider traps, etc.)
   –Crawling courtesy (server load balancing, robot exclusion, etc.)
   –Handling file types (images, PDF files, etc.)
   –URL extensions (CGI scripts, internal references, etc.)
   –Recognizing redundant pages (identical copies and near-duplicates)
   –Discovering “hidden” URLs (e.g., truncated)
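The toy-crawler loop above can be sketched as follows. This is a minimal, illustrative version: the in-memory `WEB` dictionary and the `fetch_links` callback stand in for real HTTP fetching and HTML parsing, and a plain FIFO queue is used for the frontier (a real crawler would rank it with a priority queue, as noted above).

```python
from collections import deque

def toy_crawl(seed_urls, fetch_links, max_pages=100):
    """Toy crawler: seed the queue, fetch, parse for links, enqueue, repeat.

    fetch_links(url) -> list of hyperlinks on that page
    (a stand-in for real fetching and parsing).
    """
    frontier = deque(seed_urls)  # the URL queue ("frontier")
    visited = set()
    order = []                   # pages in the order they were crawled
    while frontier and len(order) < max_pages:
        url = frontier.popleft()
        if url in visited:       # skip already-crawled pages
            continue
        visited.add(url)
        order.append(url)
        for link in fetch_links(url):   # parse fetched page for hyperlinks
            if link not in visited:
                frontier.append(link)   # follow them later via the queue
    return order

# A tiny hypothetical in-memory "web": URL -> outgoing links
WEB = {"a": ["b", "c"], "b": ["c", "d"], "c": ["a"], "d": []}
print(toy_crawl(["a"], lambda u: WEB.get(u, [])))  # → ['a', 'b', 'c', 'd']
```

Because the frontier is FIFO, this sketch crawls in breadth-first order, which is also the most common strategy in practice (see below).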

Crawling strategy is a major research topic 

Major Crawling Strategies

• Breadth-first (most common; balances server load)
• Parallel crawling
• Focused crawling
   –Targeting a subset of pages (e.g., all pages about “automobiles”)
   –Typically given a query
• Incremental/repeated crawling
   –Can learn from past experience
   –Probabilistic models are possible
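Focused crawling can be illustrated by replacing the FIFO frontier with a priority queue ordered by topical relevance. The sketch below is an assumption-laden toy: the `relevance` callback is hypothetical (a real system would score anchor text or page content against the given query), and `fetch_links` again stands in for fetching and parsing.

```python
import heapq

def focused_crawl(seeds, fetch_links, relevance, max_pages=50):
    """Focused-crawling sketch: the frontier is a priority queue ordered
    by relevance(url), so pages likely about the target topic are
    fetched before off-topic ones."""
    frontier = [(-relevance(u), u) for u in seeds]  # max-heap via negation
    heapq.heapify(frontier)
    visited, order = set(), []
    while frontier and len(order) < max_pages:
        _, url = heapq.heappop(frontier)  # most relevant URL first
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in fetch_links(url):
            if link not in visited:
                heapq.heappush(frontier, (-relevance(link), link))
    return order

# Hypothetical mini-web: an "automobiles"-focused crawl visits the
# car-related pages before the off-topic "news" page.
WEB = {"home": ["cars", "news"], "cars": ["cars/reviews"],
       "news": [], "cars/reviews": []}
score = lambda u: 1.0 if "cars" in u else 0.0
print(focused_crawl(["home"], lambda u: WEB.get(u, []), score))
# → ['home', 'cars', 'cars/reviews', 'news']
```

The only change from the breadth-first toy crawler is the frontier data structure, which is why crawling strategy is largely a question of how the frontier is ordered.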