Spider
A spider is a software program that travels the Web (hence the name "spider"), locating and indexing websites for search engines. All the major search engines, such as Google and Yahoo!, use spiders to build and update their indexes. These programs constantly browse the Web, moving from page to page by following hyperlinks.
For example, when a spider visits a website's home page, there may be 30 links on the page. The spider will follow each of the links, adding all the pages it finds to the search engine's index. Of course, the new pages that the spider finds may also have links, which the spider continues to follow. Some of these links may point to pages within the same website (internal links), while others may lead to different sites (external links). The external links will cause the spider to jump to new sites, indexing even more pages.
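The follow-every-link process described above can be sketched in a few lines of Python. This is only an illustration using the standard library; the function names (crawl, LinkParser), the 50-page cap, and the breadth-first order are assumptions, and a real spider would also respect robots.txt, throttle its requests, and parse HTML far more robustly.

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        """Collects the href value of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=50):
        """Breadth-first crawl: index each page, then follow its links."""
        index = {}                      # url -> list of outgoing links
        queue = deque([start_url])
        while queue and len(index) < max_pages:
            url = queue.popleft()
            if url in index:
                continue                # page was already indexed
            try:
                html = urlopen(url).read().decode("utf-8", errors="replace")
            except Exception:
                continue                # skip unreachable pages
            parser = LinkParser()
            parser.feed(html)
            # Resolve relative links; internal and external links are both followed,
            # so external links carry the crawl onward to new sites.
            links = [urljoin(url, href) for href in parser.links]
            index[url] = links
            queue.extend(links)
        return index

Calling crawl("https://example.com/") would return a small map of each visited page to the links found on it, which is the raw material a search engine's index is built from.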
Because of the interwoven nature of website links, spiders often return to websites that have already been indexed. This allows search engines to keep track of how many external pages link to each page. Usually, the more incoming links a page has, the higher it will be ranked in search engine results. Spiders not only find new pages and keep track of links; they also monitor changes to each page, helping search engine indexes stay up to date.
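The incoming-link signal mentioned above can be approximated by tallying cross-site links in the index produced by the crawl sketch. This is a deliberately crude count, not how search engines actually rank pages; real algorithms such as PageRank weight links rather than simply counting them.

    from collections import Counter
    from urllib.parse import urlparse

    def count_incoming_links(index):
        """Tally how many external pages link to each URL (one crude ranking signal)."""
        incoming = Counter()
        for page, links in index.items():
            source_site = urlparse(page).netloc
            for link in links:
                if urlparse(link).netloc != source_site:
                    incoming[link] += 1   # count only external (cross-site) links
        return incoming

    # Pages with the most external incoming links would tend to rank higher:
    # ranking = count_incoming_links(crawl("https://example.com/"))
    # print(ranking.most_common(10))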
Spiders are also called robots and crawlers, which may be preferable for those who are not fond of arachnids. The word "spider" can also be used as a verb, such as "That search engine finally spidered my website last week."
Published: 2006