15 November 2011

Working Structure of Search Engine

The structure of a search engine relies on spiders. A spider, or robot, is a browser-like program that follows links from different sources and indexes the pages it finds. When we submit a web page to a search engine, the spider reads that page and the page's information is stored in the search engine's database.

Components of Search Engines
1- The Spiders
2- The Indexer
3- The Database
4- The Search Software
5- The Interface

Spider – 
A spider, also known as a robot, is a browser-like program that retrieves web pages from websites. The spider's main job is to read a page, send it to the indexer, follow a link to the next page, and so on. It is important to remember that the spider doesn't see the rendered page; it looks at the page's source. (Example: open a website – File menu – View – Source)
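A minimal sketch of this idea in Python (the sample page source is made up): a spider works only with the raw HTML, pulling out the links it will follow next.

```python
from html.parser import HTMLParser

# A spider never "sees" the rendered page, only its HTML source,
# from which it extracts the links to visit next.
class LinkSpider(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href of every anchor tag found in the source.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical page source, as seen via View > Source in a browser.
source = '<html><body><a href="/about.html">About</a> <a href="/contact.html">Contact</a></body></html>'
spider = LinkSpider()
spider.feed(source)
print(spider.links)  # the URLs the spider would crawl next
```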

Indexer – 
It is the indexer's job to analyze the data received from the spider before storing it in the database. It analyzes the various elements of each page, looking at things like the title, headings, body text, links, etc.
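A toy sketch of that analysis (the field names and weights are illustrative assumptions): the indexer takes the elements of each page and builds an inverted index mapping each word to the pages containing it, giving title words more weight than body words.

```python
# Hypothetical indexer: weight words by the page element they came from.
def index_page(url, fields, inverted_index):
    weights = {"title": 3, "headings": 2, "body": 1}  # assumed weights
    for field, text in fields.items():
        for word in text.lower().split():
            inverted_index.setdefault(word, {}).setdefault(url, 0)
            inverted_index[word][url] += weights.get(field, 1)

index = {}
index_page("example.com/a", {"title": "search engines", "body": "how spiders crawl"}, index)
index_page("example.com/b", {"title": "spiders", "body": "search engine basics"}, index)
print(index["search"])  # which pages mention "search", and how strongly
```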

Database – 
A search engine's database is a massive “copy” of the web. It doesn't contain replicas of web pages, but information on each page the indexer analyzed. Most search engines store only key information on each page; only full-text search engines store every single word.
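A sketch of what a database record for one page might hold (the field names are assumptions): not a replica of the page, just the key information extracted from it.

```python
# Hypothetical per-page record: key terms and a short excerpt,
# not the full page text.
def make_record(url, title, body):
    words = body.lower().split()
    return {
        "url": url,
        "title": title,
        "keywords": sorted(set(words)),  # key terms, not the full text
        "snippet": body[:60],            # short excerpt for result pages
    }

record = make_record("example.com", "Welcome", "How search engines index the web")
print(record["keywords"])
```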

Search Software – 
This part is based on the search engine's algorithm: it decides which pages in the database match a query and how those matches are ranked.
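A minimal sketch of that job, assuming an inverted index like the one above: score every page against the query terms, then sort the results with the most relevant page first.

```python
# Score pages by summing the index weights of each query term,
# then return the URLs sorted by score, highest first.
def search(query, inverted_index):
    scores = {}
    for word in query.lower().split():
        for url, weight in inverted_index.get(word, {}).items():
            scores[url] = scores.get(url, 0) + weight
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical inverted index (word -> {page: weight}).
index = {
    "search":  {"a.com": 3, "b.com": 1},
    "spiders": {"b.com": 3},
}
print(search("search spiders", index))  # most relevant page first
```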

Interface – 
This is the part that you and I see: the web page, search box, advertisements, etc. This is where a search starts. The text entered in the search box (the query) is matched against the relevant documents; the search software sorts them by relevance and sends them back to the user in the form of search results.
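The whole flow can be sketched end to end (all page data below is made up): the query from the search box is matched against the stored pages, the matches are scored and sorted, and the result URLs go back to the user.

```python
# Hypothetical stored page records.
pages = [
    {"url": "a.com", "title": "Search Engine Basics", "text": "how search engines work"},
    {"url": "b.com", "title": "Spider Diets", "text": "what garden spiders eat"},
]

def run_query(query):
    terms = query.lower().split()
    hits = []
    for page in pages:
        # Naive relevance: count query-term occurrences in title and text.
        score = sum(page["text"].count(t) + page["title"].lower().count(t) for t in terms)
        if score:
            hits.append((score, page["url"]))
    hits.sort(reverse=True)  # most relevant first
    return [url for score, url in hits]

print(run_query("search engines"))
```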