
Web Crawling or Web Crawler

 

Web Crawling

A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. This process is called Web crawling or spidering. 

Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches.

A Web crawler is one type of bot or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies. 
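To make the seed-and-frontier loop concrete, here is a minimal sketch of such a crawler in Python, using only the standard library. The seed URL, the page cap, and the helper class name are illustrative assumptions rather than anything specified in the text.

```python
# Minimal sketch of the seed/frontier crawl loop described above.
# Seed URL, page cap, and class names are illustrative assumptions.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags on a downloaded page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seeds, max_pages=50):
    frontier = deque(seeds)   # crawl frontier: URLs waiting to be visited
    visited = set()           # URLs already downloaded
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except Exception:
            continue          # skip pages that fail to download
        visited.add(url)
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)        # resolve relative links
            if absolute.startswith("http") and absolute not in visited:
                frontier.append(absolute)        # grow the crawl frontier
    return visited


if __name__ == "__main__":
    pages = crawl(["https://example.com/"])
    print(f"Downloaded {len(pages)} pages")
```

A production crawler would also, among other things, respect robots.txt, throttle requests per host, and apply crawling policies such as those discussed in the next section.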

Crawling Policies 

There are three important characteristics of the Web that make crawling it very difficult: 

1) Its Large Volume: The large volume implies that the crawler can only download a fraction of the Web pages within a given time, so it needs to prioritize its downloads.

2) Its Fast Rate of Change: The high rate of change implies that by the time the crawler is downloading the last pages from a site, it is very likely that new pages have been added to the site or that pages have already been updated or even deleted.