Lucky Tackle Box is the most popular monthly fishing subscription box that introduces fishermen to new baits and fishing tackle every month for every species! Get the best new baits from the top brands, SAVE MONEY and CATCH MORE FISH! No mystery in the tackle box you’ll get… perfect for yourself or as a gift!

Bookmark: Lucky Tackle Box | Monthly Fishing Bait & Tackle Subscriptions Boxes – LUCKY TACKLE BOX

I just gave a subscription as a birthday gift and it was a hit.  Good gift for an angler.


The “best” feed reader is largely a matter of individual preference. There are many good ones. Most of them, including the best, are free like browsers. The one that matches the way you want to work is best for you. 🙂 No matter which reader you choose, it should give you some way to back up your feeds, preferably as an OPML file. You may also be able to use your OPML file to move to another reader, although the formats may not be compatible.

Like: Best Free RSS Reader-Aggregator | Gizmo’s Freeware

Wow, what a great article.  It’s much more comprehensive than most of its kind.  One thing I’ve learned: you really, really need a feed reader in the IndieWeb space, and eventually on Micro.blog.

In the IndieWeb you are really going to want to follow all those neat blogs you discover.  On Micro.blog the timeline is purposely fleeting.  There will be people whose posts you don’t want to miss, or you may simply find you are following so many interesting people that the timeline moves past too quickly.  The solution is to subscribe to their Micro.blog blogs in the feed reader.  That way you capture it all.

It’s just an essential tool.  I use Inoreader, which is listed in the article.
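
A side note on that OPML advice: an OPML backup is just a small XML file, which is why it is usually portable between readers.  Here is a throwaway sketch, with made-up feed URLs, that reads a minimal OPML backup using Python’s standard library:

```python
# Toy example: an OPML feed backup is plain XML, which is why you can
# usually carry it from one reader to another.  The feeds are made up.
import xml.etree.ElementTree as ET

OPML = """<?xml version="1.0"?>
<opml version="2.0">
  <head><title>My feed backup</title></head>
  <body>
    <outline type="rss" text="Example Blog"
             xmlUrl="https://example.com/feed.xml"/>
    <outline type="rss" text="Someone on Micro.blog"
             xmlUrl="https://someone.micro.blog/feed.xml"/>
  </body>
</opml>"""

root = ET.fromstring(OPML)
for outline in root.iter("outline"):
    # Each outline element carries a feed title and its URL.
    print(outline.get("text"), "->", outline.get("xmlUrl"))
```

If your reader can export something like that, you are never locked in.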


If I were building a search engine…

  1. You need to start building your own index of the web.  That means you need a crawler (robot) and it needs to be good.  It takes time to build an index and it is not cheap.
  2. I would pull in info from other providers: Wikipedia, Wolfram Alpha, whatever I could cobble together.
  3. Until your own index is ready, you need to buy search results from a big search engine.  In English there are only two left: Google and Bing.
  4. What happens if Google or Bing refuse to sell you a search feed, or refuse to renew because you are getting too big?  You could probably hammer together a good blended backfill feed by combining Mojeek, Gigablast and Yandex using your own algorithm.  You either lead with your own results and use the other three as backfill, or you blend your own results in with the three others in a sort of seamless metasearch; a rough sketch of that blending appears after this list.  (You do need to plan for “what if” scenarios.)
  5. All the above is to buy you time while you learn and refine how to crawl the web on your own.
  6. Eventually you roll out your own search index as the backbone of the search results with the others maybe on standby for queries you are weak on.
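
To make point 4 concrete, here is a rough sketch of that blending logic.  Everything in it is hypothetical: the fetcher functions stand in for whatever feed each provider actually sells, and the merge strategy is deliberately simple.

```python
# Hypothetical sketch of a blended backfill feed: lead with your own
# index, then round-robin the other engines, de-duplicating by URL.
# Each fetcher is a placeholder for a provider's real search API and
# returns an ordered list of {"url": ..., "title": ...} dicts.

def blend_results(query, fetch_own, backfill_fetchers, limit=20):
    """Own-index results first, then interleaved backfill."""
    seen, blended = set(), []

    def take(result):
        if result["url"] not in seen:
            seen.add(result["url"])
            blended.append(result)

    # Lead with your own results.
    for result in fetch_own(query):
        take(result)
        if len(blended) >= limit:
            return blended

    # Round-robin the backfill providers so no single engine dominates.
    feeds = [iter(f(query)) for f in backfill_fetchers]
    while feeds and len(blended) < limit:
        for feed in list(feeds):
            try:
                take(next(feed))
            except StopIteration:
                feeds.remove(feed)
            if len(blended) >= limit:
                break
    return blended
```

A real blend would also weigh rank positions, source quality and query type, but the shape of the problem is the same: merge, de-duplicate, and keep your own index in front.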

Is DuckDuckGo using Bing as its backbone search provider while it builds its own index?  We know DDG has its own crawler, but we don’t know if it is building its own index or just spam hunting.  I certainly do not know.  If it were me, I would not build up a successful search engine user base and then expect somebody else to provide the search feed forever.  Not when there are only two big indexes.  If I were DDG I would have a Big Plan, plus contingency plans three or four layers deep.  DDG is pretty quiet about all this, which is probably wise.

Qwant seems to be the other player.  Their strategy is more open: they are actively building an index by crawling the web, with Bing providing backfill.  It is not obvious where Qwant’s index ends and Bing’s begins; the results are good and seamless.  With a little luck, marketing and money, Qwant will eventually need Bing less and less.  This too is a good plan.

Fortunately, Bing is willing to sell its search feed to just about anyone who can pay the fee.  For now.  I do not think Microsoft really knows what to do with Bing, except to milk it for all the dollars they can until they decide how it fits in with the company strategy.

Search engines are important but they are not as important as they were before the social media silos of Facebook and Twitter.  You don’t need Google to find a company’s Facebook page.

If I were trying to build a search engine today those are some of the things I would try. All this could change in a year.


The finding of the Coroner’s Court is that 1990s-style webrings are officially dead.

Evidence of Demise

  • Two of the three remaining Ring Hosts are broken.  Both Webring.org and Ringsurf.com are broken in such a way that nobody can sign up as a new member, and it has been that way for some time.
  • The third remaining ring host, Webringo.com, is functioning; it just appears to have no traffic.  But give them points for keeping things in working order.
  • Newly created 1990s-style rings have had zero take-up.  That is too small a sample to really tell, but it is a small indicator.
  • A newer Indie-tech style webring has little useful traffic despite a user base.

What Killed the Webring?

  1. Generation shift.  Web 1.0 users “surfed the Web,” so they liked the idea of a curated grouping of websites they could surf to.  Modern web users are used to helicoptering into a single web page via a search engine.  They only care about that page and its information, not websites or surfing.
  2. Rings are passive.  They sit there and wait to be discovered.  They are passive in recruiting members and they are passive in finding users.  Passive cannot break through the noise of the modern web.
  3. Search engines used to suck.  That was one reason for webrings: you couldn’t find anything otherwise.
  4. Geocities, Tripod and Homestead.  Webmasters on these free hosts wanted to be found, and joining a webring got you traffic.  Those free-host webmasters were also familiar with HTML, so they were not intimidated by having to put a ring code on their sites.  Modern webmasters use CMSes and are more intimidated by messing with HTML code.
  5. Young webmasters may have heard of webrings in passing but have never seen one in the wild.  They don’t know what they are; ditto the visiting public.
  6. Commerce.  The web in the 1990s was little used for commerce.  It was a place to explore, have fun, find neat things, and exchange information and ideas.  Rings were good for explorers but not for daily commuters.  Today commerce has taken over the web; efficiency rules so we can maximize sales, revenue and consumption.  Webrings were never good for that.
  7. Lack of traffic.  Webring hosts had hubs: directories of webrings organized by subject.  Many tens of thousands of visitors went to these ring hosts to find rings to surf, because search engines sucked.  So a webring gained traffic from both the ring host and the ring codes on individual sites.  The biggest reason you joined a ring, or started your own, was to tap into the hundreds of thousands of eyeballs at those hubs; the code on other websites was icing on the cake.

The notable exception to this today might be the Bomis-style ring.  It had enough differences that it might be a sleeper.  I’ve searched for any old Perl or PHP scripts that would create a Bomis clone; there are hints that one may have existed at one time, but it is long gone.
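
For anyone who has never seen one in the wild, the core of those old ring scripts was tiny.  Here is a hypothetical sketch, in Python rather than Perl or PHP, of the plain (non-Bomis) prev/next/random logic a ring host ran; the member URLs are made up:

```python
# Hypothetical sketch of a classic webring host's core logic: the ring
# is an ordered list of member sites, and the "ring code" on each site
# just links to the host, which resolves prev/next/random and redirects.
import random

RING = [
    "https://site-a.example/",
    "https://site-b.example/",
    "https://site-c.example/",
]

def ring_neighbor(ring, current_url, direction):
    """Return the next or previous member, wrapping around the ring."""
    i = ring.index(current_url)
    step = 1 if direction == "next" else -1
    return ring[(i + step) % len(ring)]

def ring_random(ring, current_url):
    """Return a random member other than the current site."""
    return random.choice([url for url in ring if url != current_url])

# A visitor on site-b clicking "next" in the ring code lands on site-c:
print(ring_neighbor(RING, "https://site-b.example/", "next"))
```

That is the whole trick; the hard part was never the code, it was the traffic.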

There may still be some life in old-style webrings: it seems to me Neocities.org is a perfect match for them, just as the free hosts like Geocities were.  But it would take effort; a ring host would need to get listed on Neocities webmaster resource pages, and then it might catch on.

The demise of the webring does not make me sad.  Its time has passed and there are better ways to find websites.  It would have been nice to have it as a tool in the fight against the Google search monopoly silo, but it’s a bit like fighting Delta Force with a sword.

This is part of a series: See Part II Here.

This was also posted to /en/linking.


To Do’s

Internal ads:  I need to look into ad or banner system plugins for WordPress.  I don’t need one right now, but I will.  I am not talking about AdSense or outside ads; I need to drive some traffic to my own content.  The big example is the Blog Directory.  I need to let people know it is there.

Graphical banner ads are a pain, mainly because I’m terrible at making graphics.  They can also screw up the view on mobile.  The good thing is you set up the campaigns and forget them.

Text ads? Maybe.  Worth trying.

Placement?  I hate ads between paragraphs in articles, but it is effective.

I need to think this through and see what free plugins are available.

More directory listings:  I need to add more.  I could easily add the famous blogs, but that really isn’t the focus of the blog directory, which is meant for good new or unfound blogs – the underappreciated.  Will work on this.


UK-based Mojeek.com is the next privacy search engine I am going to make my default and use for a few weeks.  (There is a UK-specialized version at Mojeek.co.uk, plus German and French versions.)

I gave it a quick look over before, but this test will be daily use as my default search engine.  I will try my best to use it exclusively, without resorting to a backup search engine.

I really don’t know what to expect.  Mojeek has its own unique database that it is building by crawling the web with a robot.  But, unlike DuckDuckGo or Qwant, it has no backfill from a massively larger search engine to back up the Mojeek index.  They are out there with no training wheels.  That takes guts.

So we will just have to see how I get on.

I don’t have to do this alone.  You can join me!  Just make Mojeek your default search engine for a few weeks and use it as your daily driver.  I would love to hear how you do and what you think.


My experiment with the Qwant search engine is ending early.  On July 20th, 2018 I started using it as my default search engine for a month-long test.  Today, August 7th, I am terminating further testing.

The test ended when I got yet another captcha because Qwant detected unusual activity from me and thought I was a robot.  That never happens to me on other search engines; I just don’t type that fast.  When I’m trying to find something I keep refining my search query.  I guess that is too much for Qwant.  They apparently only want dilettantes doing an occasional search.  This is not a robust search engine for somebody who seriously works and searches.

The truth is, I was already getting irritated with Qwant.  Not for the quality of its search results, but because the search results page was taking too long to load.  The page is pretty but slow, and I want lean, fast results.  The actual search results were pretty darn good.

But I can’t use a search engine that craps out on me right in the middle of a project.

Pros:

  • Trying to build their own unique index.
  • Decent search results.
  • Privacy.

Cons:

  • Slow.
  • Not robust: can’t handle multiple searches typed by a slow human typist.
