How the Searchking Search Engine Worked, 1997 - 200?

Searchking.com had two divisions: 1. a search engine, and 2. remotely hosted web directories (think WordPress.com, only for directories) run by different individuals.  This article will deal with the search engine alone.  The Searchking search engine I am going to talk about is not the Searchking directory that exists today, even though that directory uses a similar concept in presentation.

This is all from memory, having used it quite a lot.  I have no insider knowledge of how it worked or was administered.

What was Searchking?

Searchking was a minor search engine born at a time when there were 7 - 8 major search engines, many dozens of minor search engines, and many hundreds of directories.  It would seem very crude by today’s standards, but back in 1997 it was on par with many of the majors and almost all of the minors in search quality.  It was designed by Searchking CEO Bob Massa and coded by Sargeant Hatch.

The reality was that the SearchKing search engine was not really a search engine.  It had no crawler, other than maybe a meta tag grabber; my memory is a little cloudy on that.  It was really a flat directory.  That is, it had no hierarchy of categories.  All it searched was the Title, Description and Keywords submitted for each page.  I’m sure there was a rudimentary algo, for example giving more weight to a keyword in the title than in the description, but that was it.
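
To make that concrete, here is a minimal sketch of what such a flat, field-weighted lookup might have looked like.  The field weights, record layout and scoring below are my own assumptions for illustration; SearchKing never published its algo.

```python
# A guess at SearchKing-style flat matching: no crawler, no link graph,
# just string matching over the three fields submitted for each page.
# The weights below are assumptions, not SearchKing's actual values.
FIELD_WEIGHTS = {"title": 3.0, "keywords": 2.0, "description": 1.0}

def score(record: dict, query: str) -> float:
    """Score one submitted page against a short keyword query."""
    terms = query.lower().split()
    total = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        text = record.get(field, "").lower()
        total += sum(weight for term in terms if term in text)
    return total

def search(index: list[dict], query: str, page: int = 1) -> list[dict]:
    """Rank the whole flat index and return one 10-result SERP page."""
    hits = [(score(r, query), r) for r in index]
    hits = [(s, r) for s, r in hits if s > 0]
    hits.sort(key=lambda h: h[0], reverse=True)
    start = (page - 1) * 10
    return [r for _, r in hits[start:start + 10]]
```

The point is how little machinery is involved: with one- or two-word queries and a flat index of hand-submitted records, a linear scan like this was perfectly serviceable in 1997.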

How Did It Work?

For the person searching, it appeared to be a search engine.  There was just a searchbox.  When you searched, you got a SERP with 10 results on page one and ten more on page two.  When you clicked through to a site, it was shown to you with a frame at either the top or bottom.  In the frame you could vote on the quality of the site.  You could also report spam or dismiss the frame.  The quality votes also affected ranking.  The reality was that very few searchers ever voted, which means a key feature of the algo was rarely used.  The spam report was used a bit.  Mostly, people dismissed the frame as quickly as they could.
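
Purely as illustration, quality votes might have been folded into the base keyword score with something like the function below.  The blend and the damping constant are my guesses; nothing like this was ever documented.

```python
def vote_adjusted(base_score: float, up: int, down: int) -> float:
    """Fold user quality votes into a page's keyword-match score.
    The +10 damping keeps a handful of votes from swinging rank too
    hard; both the formula and the constant are assumptions."""
    signal = (up - down) / (up + down + 10)   # stays within (-1, 1)
    return base_score * (1.0 + 0.5 * signal)  # at most a +/-50% nudge
```

Because so few searchers ever voted, a signal like this would have sat near zero for most pages, which matches what we saw in practice.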

There was some human review.  I think the Admins running the search engine kept a weather eye on the submissions and did some random checks.  They did act on reports.  But remember, the Web was a wide-open, wild frontier when this was built.  Nobody knew what worked and what didn’t.  Even some of the major search engines, like Infoseek, mainly indexed only the meta tags and maybe a little on-page text for that one page.  Hardly a deep crawl.

Submitting your website to Searchking really meant submitting each page by hand.  Again, keep in mind there were no CMSs yet, nor blog platforms in general use, so most websites were hand-coded in HTML.  Plus, everyone was on dialup, which was slow.  So websites tended to be 5 - 25 static pages.  Because of the slowness of dialup internet, we all tended to keep Titles, Descriptions and Keywords short.  Search, too, was rudimentary everywhere: keyword searches were one or two words on all search engines.  In 1997, when the Searchking search engine was built, it was built to work within the limitations of the day.

By the year 2000, Searchking was starting to show its age, but it remained viable.  One feature the Searchking search engine had was instant listing: when you added a page, it went live instantly.  This would play a key role in 2001.

By 2001, “Mighty” Yahoo was still the king of search, and Google was rapidly gaining popularity for its deep spidering and better search results.  The other major search engines were falling behind rapidly or had disappeared.  In those days, it took Google about a month to start listing a new site that had been submitted to it.  Getting a new site to rank in Google was yet another matter: you might be listed, but you might be on page 20 of Google’s SERPs.  Also, Google was still just a web search engine, not a news search engine.  News stories may have gotten in and ranked faster, but it still took a week or two before a story would appear on Google.

9-11 and the Need for Instant News and Information

Then the 9-11 terrorist attacks on the Twin Towers in New York City occurred on September 11, 2001.  Everyone was stunned.  Over on the Searchking directory hosting side, we had a forum community of directory owners.  I think we all spent most of the 11th glued to our televisions and radios, but we were also hitting all the news websites for updates.  We started sharing the URLs to news stories on the forums.  Google had been caught flat-footed; for days there was little useful or current information on it, which they acknowledged and tried to correct, but that was slow.

Over at Searchking, CEO Bob Massa told us in the forums that they were monitoring people desperately searching Searchking for any kind of information about the attacks, survivors and relief efforts.  Because Searchking could list new pages instantly, he asked us to help: to dig out and add the URLs of news articles, new web pages of survivors, relief news, defense news and background news to Searchking, to help those looking for information.  And the SK community responded.  We were searching for any news we could find ourselves anyway; TV and radio announcers were reading out makeshift URLs of survivor lists, and we could jot those down and share them too.  This was most important in those first couple of days after the attack, but people from the UK, Canada, the US, even a person in Greenland, cranked out listings of information for about 2 weeks straight.  Eventually Google reprogrammed their crawlers and started catching up, as did the mainstream media.  All the fancy high-tech crawlers failed, but little low-tech Searchking actually delivered.

It was probably the Searchking search engine’s last best moment.

Why Is This Important Today?

There are lessons to be learned from this example that would help make a new directory viable today.
  1. The idea of a flat directory, without drilling down through a hierarchy of categories, fits the way people use search in 2018.  Make a directory look and act like a search engine to the searcher.
  2. Even if you have a hierarchy, you can hide it on another page so you don’t scare people off.  Just present the searchbox front and center to the user.  (Personally, I would still have a hierarchy of categories as “spider food” for Google, Bing and Yandex, but just make the searchbox so prominent users won’t ignore it.)
  3. One might combine a directory with one of the many available open source web search engine scripts, blending human-reviewed results with crawler results (see the sketch after this list).
  4. Instant listing after a human review is still fast.
  5. Don’t rely on user voting for rankings. People want their information in as few clicks as possible.
  6. Allow for longer Titles, Descriptions and Keywords.  You can better capture what a site is about that way.  We kept these short because of dialup slowness, but that isn’t a problem anymore.
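
As a rough illustration of point 3, here is one way human-reviewed directory hits might be blended with crawler hits.  The interleave ratio and the de-duplication by URL are assumptions made for the sketch, not a recommendation from any particular script.

```python
from itertools import islice

def blend(directory_hits: list[dict], crawler_hits: list[dict],
          ratio: int = 3) -> list[dict]:
    """Interleave one human-reviewed directory hit with `ratio`
    crawler hits, skipping duplicate URLs.  The 1:3 ratio is an
    arbitrary assumption and would need tuning in practice."""
    merged, seen = [], set()
    d_iter, c_iter = iter(directory_hits), iter(crawler_hits)
    while True:
        batch = list(islice(d_iter, 1)) + list(islice(c_iter, ratio))
        if not batch:          # both sources exhausted
            break
        for hit in batch:
            if hit["url"] not in seen:
                seen.add(hit["url"])
                merged.append(hit)
    return merged
```

This keeps the human-reviewed entries visible near the top of every SERP page while the crawler results fill in the depth.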
 

Interested in the directory hosting side of the old SK?  I will have more on that in a later post.

 

This was also posted to /en/linking.

 

 
