How Search Engines Work

Crawler-based engines such as Google, Yahoo! and MSN send out crawlers, or spiders, to look at websites on the net. A crawler visits a web site, reads the information on the page it lands on, and follows the hyperlinks on that page until it has found the site’s other pages, as well as any external sites those pages link to.

The crawler returns all of that information to a central data bank, where it is indexed. The crawler also revisits sites regularly to check for any information that has changed.
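Stripped to its essentials, that process is: fetch a page, pull out its hyperlinks, and queue each new link to be fetched in turn, usually with a depth limit of the kind discussed below. The Python sketch here is purely illustrative – it uses only the standard library and is not how any particular engine is built.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag -- the only links a basic spider can see."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_depth=3):
    """Breadth-first crawl from start_url, stopping max_depth levels below the home page."""
    seen = {start_url}
    queue = deque([(start_url, 1)])   # the home page counts as level 1
    fetched = {}                      # url -> raw page text, handed to the indexer later

    while queue:
        url, depth = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except OSError:
            continue                  # unreachable pages are simply skipped
        fetched[url] = html

        if depth >= max_depth:        # pages below the depth limit are never visited
            continue
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)   # resolve relative links against the current page
            if urlparse(absolute).scheme in ("http", "https") and absolute not in seen:
                seen.add(absolute)
                queue.append((absolute, depth + 1))
    return fetched
```

A real spider adds robots.txt handling, politeness delays and duplicate detection, but the link-following loop above is the essence of what is described here.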

If your site is not constructed with search engine spiders in mind, it will fail at this early stage. The primary reasons for sites not being crawled fully are:

· JavaScript navigation – search engine spiders read text and follow standard hypertext links; they can’t decipher JavaScript code (a comparison is sketched after this list).

· Flash – although a web browser can navigate links coded into Flash animations, search engine spiders can’t.

· Text and links embedded in graphics – visible to humans, but not to search spiders.

· Deep site levels – search engine spiders are typically programmed to spider only down to level 3, with the home page counted as level 1. Many sites have pages six or more levels down; only major sites with high PageRank and thousands of backlinks are going to get crawled deeper than three levels.

· Any site menu should therefore be constructed with as many pages as possible at level 1, and pages should link to one another in a logical sequence based on how relevant each page is to the next.
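All of these problems come down to the same thing: a spider only follows links it can find as plain HTML anchor markup in the page text. The sketch below contrasts a crawlable text menu with a JavaScript-and-image menu, using the same kind of link extractor as the crawler sketch above; the markup and paths are invented for illustration.

```python
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [value for name, value in attrs if name == "href"]


# A plain text menu: every destination is a normal <a href> the spider can read.
crawlable_menu = """
<a href="/products.html">Products</a>
<a href="/services.html">Services</a>
"""

# The same menu built with JavaScript and an image: nothing here looks like a link to a spider.
opaque_menu = """
<script>function go(page) { window.location = page; }</script>
<span onclick="go('/products.html')">Products</span>
<img src="menu-services.gif" alt="">
"""

for name, markup in [("crawlable", crawlable_menu), ("opaque", opaque_menu)]:
    parser = LinkExtractor()
    parser.feed(markup)
    print(name, "->", parser.links)
# crawlable -> ['/products.html', '/services.html']
# opaque -> []
```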

The Search Index

When you enter a phrase into a search engine’s search box, you are not actually searching the Web – you are only searching the engine’s index, which was built weeks or months earlier; you are simply looking up entries that match your search phrase. In effect, search engine indices are data banks of information that is collected, stored and searched.
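At its simplest, such an index is an inverted index: a map from each word to the pages that contain it, built ahead of time and merely consulted at query time. The toy Python version below is only to illustrate the lookup – the pages and URLs are invented, and real engine indices store far more, such as word positions and weights.

```python
import re
from collections import defaultdict

# Pages the crawler has already fetched (url -> page text).
pages = {
    "http://example.com/": "garden furniture and garden tools",
    "http://example.com/tools": "hand tools for the workshop",
}

# Build the inverted index once, long before any search is run.
index = defaultdict(set)
for url, text in pages.items():
    for word in re.findall(r"[a-z]+", text.lower()):
        index[word].add(url)


def search(phrase):
    """Return pages containing every word of the phrase -- a lookup, not a crawl of the Web."""
    words = re.findall(r"[a-z]+", phrase.lower())
    if not words:
        return set()
    results = index[words[0]].copy()
    for word in words[1:]:
        results &= index[word]
    return results


print(search("garden tools"))   # {'http://example.com/'}
```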

Although the search algorithm (the computer program) varies between the different search engines, in principle they all work the same way.

Getting Your Site Noticed

Recently, search engines (especially Google) have made major changes to their search criteria to deter unscrupulous website operators from employing manipulative techniques to get their sites noticed.
Of course, the exact way search engine algorithms rank web pages is a closely guarded secret. However, we do know in broad terms what they are looking for, and there are three main criteria used when ranking a site:

Site Construction

A site needs to be constructed in a way that a search engine will find easy to crawl and to collect relevant information from. Many sites have good content but are not built properly, and this hinders the search engine spiders in their quest to find relevant information.

Content and Keywords

The keywords, how they appear on the site’s web pages, and the page names themselves are all considered, as is the placement of keywords in the domain name. In terms of a site’s content, the old saying “content is king” is very true: the more relevant content you have, the better your site is likely to rank in the search engines.

The search engines also like fresh, updated content, and this will have an impact on your position in the rankings. A frequently updated site will tend to outrank a similar site that is never updated.
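Nobody outside the engines knows the exact weighting, but the general idea of checking where a keyword appears – domain name, page name, title, body text – can be sketched. The function below is a rough, hypothetical on-page checklist, not any engine’s formula, and the URL and text are made up.

```python
import re
from urllib.parse import urlparse


def keyword_placement(keyword, url, title, body):
    """Report where a keyword shows up -- a rough on-page checklist, not a ranking formula."""
    kw = keyword.lower()
    return {
        "in_domain":    kw.replace(" ", "") in urlparse(url).netloc.lower(),
        "in_page_name": kw.replace(" ", "-") in urlparse(url).path.lower(),
        "in_title":     kw in title.lower(),
        "body_count":   len(re.findall(re.escape(kw), body.lower())),
    }


print(keyword_placement(
    "garden tools",
    "http://www.gardentools-example.com/garden-tools-guide.html",
    "Garden Tools Guide",
    "Our garden tools range covers every garden tools job in the shed.",
))
# {'in_domain': True, 'in_page_name': True, 'in_title': True, 'body_count': 2}
```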

Linking – The Key to Getting High Rankings for Your Site

This used to be the most important part of the way search engines ranked sites, accounting for 80% of the “points” needed to get the top listings on the search engine results pages. Google has moved on and now ranks sites according to their “authority”; Yahoo! still weighs links heavily, while MSN currently discounts links that don’t come from important sites.

Linking covers both the way pages on the same site link to each other (“internal” linking) and the way pages on a site link to other web sites (“external” linking).

By analysing how web pages link to each other, the search engines determine what a page is about. It used to be possible to get top rankings for a web page simply by placing the desired keyword in enough hypertext links pointing to the “target” page. Enough keyword-rich anchor text links pointing at a page would earn top rankings even if the “target” page had no merit – it didn’t even have to contain the target keywords on the page or in its meta tags.
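That old anchor-text effect is easy to picture: for each target page, tally how many inbound links carry the keyword in their visible link text. The toy tally below uses invented link data purely for illustration, and, as noted above, the engines now heavily discount this kind of signal.

```python
from collections import Counter

# Hypothetical inbound links: (anchor text, target page).
inbound_links = [
    ("cheap garden tools", "http://shop.example.com/tools"),
    ("garden tools",       "http://shop.example.com/tools"),
    ("click here",         "http://shop.example.com/tools"),
    ("garden tools",       "http://other.example.org/"),
]


def anchor_score(keyword, links):
    """Count keyword-bearing anchor texts per target page -- the old-style signal only."""
    scores = Counter()
    for anchor, target in links:
        if keyword.lower() in anchor.lower():
            scores[target] += 1
    return scores


print(anchor_score("garden tools", inbound_links))
# Counter({'http://shop.example.com/tools': 2, 'http://other.example.org/': 1})
```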

The search engines now look at a site as an entity, not as individual pages, so you now have to try to get rankings for a whole section of your site – or the whole site – rather than for individual pages. LSA-based websites take this to the extreme: each section of the site is isolated from the other sections, so the search engines must evaluate and rank the pages on a section-by-section basis. This is extremely effective, and it is causing new LSA sites to outrank established web sites with high PageRank and thousands of backlinks.

Size of Site and Content Updates

As a rule, if a site has good content, then adding more good content every few days will cause pages of the site to rank highly and fight off the ever-growing number of competing web sites.

How Any Site Can Get More Traffic

The web is now massive, and you are competing for visitors against millions of other web sites. The web also continues to grow at a rapid rate, making it more and more difficult to get enough traffic each month.

The good news is that ANY website can get more traffic. You only need to decide whether to compete for the traffic on the search engine results pages, or to let somebody else get the rankings and simply siphon off some of their traffic using proven viral marketing techniques such as placing information on social bookmarking or article sites.


 
