The World Wide Web is a vast repository of data and innovative concepts. To a user, searching for a particular webpage on the internet can prove to be a daunting chore.
This is where the search engine comes in. This useful internet program helps the user explore the web by keeping an index of keywords that enables it to locate websites matching any particular keyword. With this obvious utility, search engines have become an indispensable tool of the internet.
In the early days of search engine history, indexes comprised merely a few thousand pages. After several years of technological advances, web page compatibility with search engines improved so much that nowadays these engines can list virtually any website through varying keyword combinations, handling huge volumes of queries daily.
The Workings of a Search Engine
A search engine works by searching a database using a list, or index, of stored single words and phrases, called keywords, along with the websites that match them.
A program known as a spider is responsible for adding websites and their keywords to the search engine's index. As the spider encounters more links on the internet, it follows them to the websites they point to and adds those sites' keywords to the index.
Any website reached by the spider can be found through the search engine. Upon entry of a keyword, a list of websites retrieved from the index appears, and clicking a link opens the actual page.
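The index described above can be sketched as a simple inverted index: a mapping from each keyword to the set of pages that contain it. The following is a toy illustration, not any real engine's implementation; the page URLs and text are invented.

```python
# Hypothetical crawled pages: URL -> text the spider extracted.
pages = {
    "https://example.com/astronomy": "telescope stars planets",
    "https://example.com/cooking": "recipes pasta stars",
}

# Build the inverted index: keyword -> set of pages containing it.
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

def search(keyword):
    """Return the pages whose indexed text contains the keyword."""
    return sorted(index.get(keyword.lower(), set()))

print(search("stars"))  # both sample pages mention "stars"
```

A real index also stores word positions and frequencies so results can be ranked, but the lookup itself is essentially this dictionary access.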
The spider program is designed to ignore the article content of a page and instead looks for terms present in titles, subtitles and meta tags.
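That kind of title and meta-tag extraction might look like the following sketch, which uses Python's standard-library HTMLParser; the sample HTML is invented for illustration.

```python
from html.parser import HTMLParser

class KeywordExtractor(HTMLParser):
    """Collects the page title and any 'keywords' meta tag, skipping body text."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta_keywords = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name") == "keywords":
            # Split the comma-separated keyword list in the meta tag.
            self.meta_keywords = [k.strip() for k in attrs.get("content", "").split(",")]

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        # Only text inside <title> is kept; article body text is ignored.
        if self.in_title:
            self.title += data

html = """<html><head><title>Star Atlas</title>
<meta name="keywords" content="stars, planets, telescope"></head>
<body>Long article text the spider skips.</body></html>"""

parser = KeywordExtractor()
parser.feed(html)
print(parser.title)          # Star Atlas
print(parser.meta_keywords)  # ['stars', 'planets', 'telescope']
```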
An Evolutionary Timeline of Search Engines
In 1993, a program called Aliweb was introduced. Its developers built its web directory manually, so the program was severely limited in function. Another program known as JumpStation, invented the same year using spider technology, earned the distinction of being the first modern search engine. It let internet users find keywords in web page headers and titles.
At first this was hugely successful. Eventually, however, the engine grew inefficient as the volume of websites being added to the internet increased, and it closed down entirely in 1994.
By then another engine with improved features, WebCrawler, had taken its place. For example, it could already search a web page's entire content. Later on, its owners sold it to Excite, an internet portal.
Next down the line was Lycos. It boasted features superior to those of conventional search engines of the time, such as improved keyword matching and relevance-ordered result lists. Two years later MetaCrawler, the world's first metasearch engine, was created. It provided a very useful service in that it could search other search engines and index the results.
By 1997, Yahoo had entered the scene, developed by two Stanford University students named David Filo and Jerry Yang. Its contemporaries included AltaVista, which used spider technology and was capable of handling millions of web pages daily.
Before that same year ended, the well-known search engine called Google made its first appearance. This was considered an important event in search engine history, as Google employed an effective system known as page ranking. The system ranked pages based on how many other pages linked to them.
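The ranking idea can be sketched by counting inbound links, as below. This is a simplification: Google's actual PageRank algorithm also weights each link by the rank of the page it comes from. The link graph here is invented.

```python
# Hypothetical link graph: each page lists the pages it links to.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}

# Count inbound links for each page.
inbound = {page: 0 for page in links}
for targets in links.values():
    for target in targets:
        inbound[target] = inbound.get(target, 0) + 1

# Rank pages by inbound-link count, highest first.
ranking = sorted(inbound, key=inbound.get, reverse=True)
print(ranking)  # c.com ranks first: it has the most inbound links
```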
Today, Google has become hugely popular with users because of its vast index. The term googling now refers to the practice of searching for information through Google. Microsoft has also launched its own search engine, called Bing, which operates using categorized searches. This lets users search for images and videos more effectively and allows previewing of search results.
Popular Search Engines Today
Some popular search engines today include Bing, the Chinese-based Baidu, Sogou and Sohu search engines, DuckDuckGo, the Russian-language Yandex, Rediff.com, Guruji.com, and the now-defunct Cuil.
Search Engines in the Future
Today the majority of the internet's search engines use exact keyword matches, which is somewhat unreliable since individual words can have various meanings. In the future, search engine development will enable searches based on concepts and not just keywords, and engines will be able to process queries phrased the way a question would be posed to a person, a process called natural language querying.