What is a web spider?

A spider is a program that visits Web sites and reads their pages and other information in order to create entries for a search engine index. The major search engines on the Web all have such a program, which is also known as a "crawler" or a "bot." Spiders are typically programmed to visit sites that have been submitted by their owners as new or updated. Entire sites or specific pages can be selectively visited and indexed. Spiders are called spiders because they usually visit many sites in parallel, their "legs" spanning a large area of the "web." Spiders can crawl through a site's pages in several ways. One way is to follow all the hypertext links in each page until all the pages have been read.
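To make that link-following strategy concrete, here is a minimal sketch in Python using only the standard library. The seed URL, the page limit, and the LinkParser helper are illustrative assumptions for the example, not part of any particular search engine's crawler.

```python
# A minimal sketch of a link-following crawler (standard library only).
# The seed URL and page limit below are illustrative assumptions.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collect the href targets of <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed, max_pages=10):
    """Breadth-first crawl: follow the hyperlinks on each page
    until max_pages pages have been read."""
    queue = [seed]
    visited = set()
    while queue and len(visited) < max_pages:
        url = queue.pop(0)
        if url in visited:
            continue
        visited.add(url)
        try:
            with urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable pages
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            # Resolve relative links against the current page's URL.
            absolute = urljoin(url, link)
            if absolute.startswith("http"):
                queue.append(absolute)
    return visited


if __name__ == "__main__":
    for page in crawl("https://example.com"):
        print(page)
```

A real crawler would add politeness delays and persistence, but the core loop is exactly this: fetch a page, extract its links, and queue the ones not yet visited.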
 
A Web crawler is an Internet bot which systematically browses the World Wide Web. By the time a Web crawler has finished its crawl, many events could have happened, including creations, updates, and deletions.
 
Hello,

A web crawler (also known by other names such as ant, automatic indexer, bot, web spider, or web robot) is an automated program, or script, that methodically scans or "crawls" through web pages to create an index of the data it is set to look for. This process is called Web crawling or spidering.

A search engine program for locating information on the WWW: it indexes all the words in a document, adds them to a database, then follows all the hyperlinks and indexes and adds that information to the database as well. Also called a web spider, or simply a spider.
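As a rough illustration of that indexing step, the sketch below uses an in-memory Python dictionary in place of a real search-engine database; the function names and the naive tokenization are assumptions made for the example.

```python
# A minimal sketch of indexing: every word in a fetched document is
# added to an inverted index (word -> set of URLs), standing in for
# the search engine's database.
import re
from collections import defaultdict

index = defaultdict(set)


def index_document(url, text):
    """Add every word in the document to the index under its URL."""
    for word in re.findall(r"[a-z0-9]+", text.lower()):
        index[word].add(url)


def search(word):
    """Look up the URLs whose documents contain the word."""
    return index.get(word.lower(), set())


index_document("https://example.com/a", "Spiders crawl the web")
index_document("https://example.com/b", "The web is wide")
print(search("web"))   # both URLs
print(search("wide"))  # only the second URL
```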
 
A web crawler also known as a web spider or web robot is a program or automated script which browses the World Wide Web in a methodical, automated manner. This process is called Web crawling or spidering. Many legitimate sites, in particular search engines, use spidering as a means of providing up-to-date data.
 