Like: The mysterious case of missing URLs and Google's AMP | sonniesedge.co.uk
At least I’m not alone in thinking this.
Source: Oatmeal
Read: Google AMP Can Go To Hell | Polemic Digital
IMHO Google is like a virus; it has wormed its way in and taken control of phones, browsers, search, online advertising, HTTP vs. HTTPS, how you publish your news, and how you build your website. An antitrust breakup is the solution.
My random thought for the day. These can be dangerous. Hold my beer.
What would happen if you combined a standard web directory script with Indieweb.org features like webmentions and such? I think you could end up with a very powerful tool for a directory. I have not the slightest idea how one would actually do it.
Presuming that both parties have webmentions, here is how I see it working:
There are two kinds of directories: 1. the rare legitimate directories that are trying to be a navigation aid on the web, and 2. link popularity directories that are basically there to sell links to websites for SEO purposes. A mention from the former should be welcome to any webmaster, and a directory that is trying to be a legit web navigation aid needs to attract searchers to use it. Win-win. With the latter, I could see spam directories trying to abuse this. The moderating factor is that, unlike a decade ago, very few new directories of any type are being started these days.
Why?
The web has changed in 20 years. Webmasters no longer run out and submit their sites to directories; they are used to social media, and the search engines just find them eventually. So a very high percentage of listings in a new directory are going to have to be added by an editor rather than by a submission to the Add URL form. Because of this it is even more important for the directory to let webmasters who have not submitted to the directory know they have been listed. It's a point of contact.
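For the technically curious, here is roughly what that point of contact could look like. This is only a minimal sketch in Python, assuming the requests library; the function names and URLs are invented for illustration, and a real directory script would follow the full Webmention spec for endpoint discovery rather than my quick regexes.

```python
# A minimal sketch (not any real plugin's code) of a directory script
# notifying a newly listed site via Webmention. Assumes the `requests`
# library; endpoint discovery here is deliberately simplified.
import re
import requests
from urllib.parse import urljoin

def discover_webmention_endpoint(target_url):
    """Return the target site's Webmention endpoint URL, or None."""
    resp = requests.get(target_url, timeout=10)
    # Check the HTTP Link header first...
    match = re.search(r'<([^>]+)>\s*;\s*rel="?webmention"?',
                      resp.headers.get("Link", ""))
    if not match:
        # ...then fall back to a <link> or <a> element in the HTML.
        # (A real implementation would parse the HTML properly.)
        match = re.search(
            r'<(?:link|a)[^>]+rel="webmention"[^>]+href="([^"]+)"',
            resp.text)
    return urljoin(target_url, match.group(1)) if match else None

def notify_listed_site(directory_page_url, listed_site_url):
    """Tell a site that the directory page now links to it."""
    endpoint = discover_webmention_endpoint(listed_site_url)
    if endpoint is None:
        return False  # Site doesn't accept webmentions.
    resp = requests.post(endpoint, data={
        "source": directory_page_url,  # the directory category page
        "target": listed_site_url,     # the site that was just listed
    }, timeout=10)
    return resp.status_code in (200, 201, 202)

# Hypothetical usage when an editor approves a new listing:
# notify_listed_site("https://example-directory.test/outdoors/",
#                    "https://some-listed-site.example/")
```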
This was also posted to /en/linking.
"Mojeek is a crawler based search engine with its very own index of web pages, so we are not reliant on any other engine to produce our results."
Like: Independent and Unbiased Search Results
See the last paragraph for more. This is why search engines like Mojeek and Gigablast and even the directory Curlie.org are important - they have their own index.
I have been using the Mojeek.com privacy search engine for about 3 weeks as my daily driver on all my laptops. Most of my initial thoughts have turned out to be right.
I used Mojeek just as hard as I did Qwant and all the other search engines. There were many times when I was adjusting my search terminology to refine my results. Unlike Qwant, I was never mistaken for a robot in the middle of important work and challenged to prove I was human. Mojeek gave me what it had every time: no timeouts, no challenges to my humanity.
Mojeek has an optional feature for Emotional Search. I did not use this, and I really do not care about it.
The search engine has one of the most uncluttered SERPs of any search engine: no ads, no trying to lure you to onsite portal features, no product placement, just straight-up search results with relevant Wikipedia articles linked in the right-hand column.
The Mojeek algorithm seems pretty darn good. I do know that the algo uses both on-page and off-page (linking) factors for ranking which is exactly what you would expect for any modern crawling search engine. Only Mojeek knows exactly how these are applied.
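Only Mojeek knows its actual formula, so purely as an illustration of what "on-page plus off-page factors" means, here is a toy scoring function in Python. Every page, weight and number in it is invented for the example; it is not Mojeek's algorithm.

```python
# Purely illustrative: NOT Mojeek's algorithm, just a toy example of blending
# on-page and off-page (link) factors into one ranking score.
def score_page(page, query_terms):
    # On-page factor: keyword frequency, with extra weight for title matches.
    body_hits = sum(page["body"].lower().count(t) for t in query_terms)
    title_hits = sum(page["title"].lower().count(t) for t in query_terms)
    on_page = body_hits + 3 * title_hits
    # Off-page factor: inbound link count as a crude stand-in for link authority.
    off_page = page["inbound_links"]
    # How a real engine blends the two is the secret sauce.
    return on_page * (1 + off_page ** 0.5)

# Invented example pages, nothing to do with Mojeek's index.
pages = [
    {"title": "Ford Motor Company", "body": "Official site of Ford.", "inbound_links": 9000},
    {"title": "My Ford fan page",   "body": "All about Ford cars.",   "inbound_links": 12},
]
ranked = sorted(pages, key=lambda p: score_page(p, ["ford"]), reverse=True)
print([p["title"] for p in ranked])
```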
I do a lot of two types of searches: navigational searches and review/comparison searches. Navigational searches are where I know the site I want but I'm unsure of the domain name (e.g. was it FOMOCO, Ford Motors, Ford Cars, ford.com?). Review/comparison searches are things like "best free email client for Windows 10" or "best notes software for linux 2018."
However, the size of the index does come into play. Sometimes the topic was so obscure that Mojeek just didn't have many results, or not enough good ones. This is when I would try to phrase my search a different way. Mostly that didn't work.
Long complex multi keyword searches often came up with very few results. Again this is just index size. Mojeek understood the complex search, it just didn’t have much that fit all the keywords.
I would say about 70 percent of the time, in daily default use, Mojeek gave me something useful. Sometimes it dug up gems that Google- or Bing-fed search engines would have buried on page 4 or deeper. And it was because of those gems that I really didn't mind using Mojeek as my primary search engine. If it failed, I could easily run the same search on another engine as backup. All said, this test with Mojeek was more fun than annoying.
Actually, the more I used Mojeek the more I came to respect it. Working with only their own index and algo is like performing a high wire act without a net. You don’t have that feed from Google or Bing as a safety net to back you up. I kind of looked forward to seeing what it would bring up, but that is just me, YMMV.
I don't think most mainstream people will use Mojeek as their primary search engine yet, but I do think that, right now, it is a good second search engine. When you are tired of seeing the same domains dominating your search results, you can pop over to Mojeek and find other voices. I still wish I could code a parallel search form with DDG and Mojeek on it; I would use that every day and have the best of both worlds.
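Until someone builds that form, here is a little Python stand-in for the same idea: send one query to both engines at once, in separate browser tabs. It assumes the plain ?q= query strings both sites currently accept.

```python
# Not the HTML form itself, just a tiny stand-in for the idea: fire the same
# query at both DuckDuckGo and Mojeek in two browser tabs.
import webbrowser
from urllib.parse import quote_plus

def parallel_search(query):
    q = quote_plus(query)
    webbrowser.open(f"https://duckduckgo.com/?q={q}")
    webbrowser.open_new_tab(f"https://www.mojeek.com/search?q={q}")

parallel_search("best notes software for linux 2018")
```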
UPDATE: 22 March 2019
Per Mojeek: The current index is 2.3 billion pages. They hope to double that in 2019 and double that again in 2020. That will help to address many of the weaknesses I noted in the above review. Good news. Source
Noted: Jacqueline Pearce obituary | Television & radio | The Guardian
I really only know her from the wonderful Blake's 7 (a show which followed the old BBC rule of "the cheaper and more wobbly the sets, the better the script and the dialogue"), where she did a fantastic acting job. Physically Jacqueline was striking, but often not the most beautiful woman on the set; yet she could emit - Presence - with just enough camp, and project it through the lens and onto the screen. That takes some acting.
RIP Jacqueline Pearce.
I’ll be the first to admit, I’d like to run either a search engine or a directory of my own. It’s one of those itches that isn’t going away.
I almost started a real directory a couple of months ago but better judgment prevailed.
Then today I discovered this: the WSN Links Wordpress plugin. Suddenly many of my objections from my previous post were solved. Attaching a full directory to a blog lets you build traffic on a budget, while a standalone directory would be hard to promote without spending lots of cash.
See, the other problem is that I'm on managed Wordpress hosting, so I can't just install other standalone scripts on the server. But a plugin, yes, that I could install.
I had a ton of other questions: general vs. niche, and if niche, what niche? How to seed it with listings? And more. The only way to answer many of these questions would be to install the plugin and see exactly what its capabilities are.
So I tried, and the installation failed. The Wordpress plugin uploader wants .zip files, NOT .php, and without FTP access to the server, uploading that file just is not going to happen.
Saved, I think.
Maybe someday, maybe soon, maybe not.
I’m going to start using Gigablast.com as my default daily search engine for the next 3 weeks or so. Gigablast does not bill itself as a privacy search engine so I figure they are tracking some behavior. However, they are not in the advertising business so I don’t think it matters too much for me.
Once again, I invite you to do the same with me and let me know what you think of Gigablast. Just set Gigablast as your default search engine and use it first every time you search.
Searchking.com had two divisions: 1. a search engine, and 2. remotely hosted web directories (think Wordpress.com, only for directories) run by different individuals. This article will deal with the search engine alone. The Searchking search engine I am going to talk about is not the Searchking directory that exists today, even though the latter uses a similar concept in presentation.
This is all from memory having used it quite a lot. I have no insider knowledge of how it worked or was administrated.
The reality was that the SearchKing search engine was not really a search engine. It had no crawler, other than maybe a meta tag grabber; my memory is a little cloudy on that. It was really a flat directory, that is, it had no hierarchy of categories. All it searched was the Title, Description and Keywords submitted for each page. I'm sure there was a rudimentary algo, for example giving more weight to a keyword in the title than in the description, but that was it.
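Working purely from memory, my guess at what that kind of flat, meta-tag-only search boils down to is something like this toy Python sketch. It is not SearchKing's actual code; the listings and the weights are invented, just to show the concept of searching nothing but submitted titles, descriptions and keywords.

```python
# A guess at a flat, meta-tag-only search: no crawl, no categories, just the
# submitted title, description and keywords, with title matches weighted most.
listings = [
    {"title": "Joe's Classic Fords", "description": "Restoring classic Ford cars",
     "keywords": "ford classic cars restoration"},
    {"title": "Family Recipes",      "description": "Grandma's cookbook online",
     "keywords": "recipes cooking"},
]

def score(listing, terms):
    points = 0
    for term in terms:
        if term in listing["title"].lower():
            points += 3   # a keyword in the title counts most
        if term in listing["description"].lower():
            points += 2
        if term in listing["keywords"].lower():
            points += 1
    return points

def search(query):
    terms = query.lower().split()
    scored = [(score(l, terms), l) for l in listings]
    return [l["title"] for points, l in
            sorted(scored, key=lambda pair: pair[0], reverse=True) if points > 0]

print(search("ford"))  # -> ["Joe's Classic Fords"]
```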
There was some human review. I think the admins running the search engine kept a weather eye on the submissions and did some random checks. They did act on reports. But remember, the Web was a wide-open, wild frontier when this was built. Nobody knew what worked and what didn't. Even some of the major search engines, like Infoseek, mainly indexed only the meta tags and maybe a little on-page text for that one page. Hardly a deep crawl.
Submitting your website to Searchking really meant submitting each page by hand, manually. Again, keep in mind there were no CMSs yet, nor blog platforms in general use, so most websites were hand-coded in HTML. Plus everyone was on dialup, which was slow, so websites tended to be 5-25 static pages. Because of the slowness of dialup internet, we all tended to keep Titles, Descriptions and Keywords short. Search, too, was rudimentary everywhere; keyword searches were one or two words on all search engines. When the Searchking search engine was built in 1997, it was built to work within the limitations of the day.
By the year 2000, Searchking was starting to show its age, but it remained viable. One feature the Searchking search engine had was instant listing: when you added a page, it went live instantly. This would play a key role in 2001.
By 2001, "Mighty" Yahoo was still the king of search, and Google was rapidly gaining popularity for its deep spidering and better search results. The other major search engines were falling behind rapidly or had disappeared. In those days, it took Google about a month to start listing a new site that had been submitted to it. Getting a new site to rank in Google was yet another matter: you might be listed, but you might be on page 20 of Google's SERPs. Also, Google was still just a web search engine, not a news search engine. News stories maybe got in and ranked faster, but it still took a week or two before a story would appear on Google.
Then came the September 11 attacks. Over at Searchking, CEO Bob Massa told us in the forums that they were monitoring people desperately searching Searchking for any kind of information about the attacks, survivors and relief efforts. Because Searchking could list new pages instantly, he asked us to help: to dig out the URLs of news articles, new web pages of survivors, relief news, defense news and background news, and add them to Searchking for those looking for information. And the SK community responded. We were searching for any news we could find ourselves anyway; TV and radio announcers were reading out the makeshift URLs of survivor lists, and we could jot those down and share them too. This was most important in those first couple of days after the attack, but people from the UK, Canada, the US, even a person in Greenland, cranked out listings of information for about two weeks straight. Eventually Google reprogrammed their crawlers and started catching up, as did the mainstream media. All the fancy high-tech crawlers failed, but little low-tech Searchking actually delivered.
It was probably Searchking search engine’s last best moment.
Interested in the directory hosting side of the old SK? I will have more on that in a later post.
This was also posted to /en/linking.
Like: How to Sanction Google for their Aggressive Behavior | Michael Martinez
An excellent list above. Yandex Metrica is a good free replacement for Google Analytics.
A couple ideas of my own:
I used Keurig single serve coffee machines for many years. I went through a lot of machines, some made by Keurig and some by others. I liked my Keurig machines. Then I lost two machines just a couple of months apart. It wasn’t the Keurigs’ fault, I had two weird electric power company outages that must have had a power spike. Both machines bricked themselves and no amount of unplugging or cures found online would bring them back to working order.
I decided to try a simpler way to make coffee. Something low tech. Something less expensive.
I still liked the single serve aspect for making coffee. I’d used pod coffeemakers and then Keurig machines for a long time and didn’t want to give that up. I also wanted low maintenance and easy cleanup plus durability. I wanted good coffee.
One thing the Keurig was - it was fast, but I was willing to sacrifice some speed for all of the above.
Pour Over Cone Filter: Next was a stainless steel pour-over cone filter. I looked at other systems like the French press, but the glass is breakable and I didn't know how easy cleaning the parts would be. With this I just stick it in the dishwasher every night and it comes out fine. I also liked that it was eco-friendly and didn't require paper filters.
Coffee Storage: I use pre ground coffee. It’s just easier and faster for me. I store it in this stainless steel canister.
Coffees I Like
I like dark roasted coffees.
Favorite: Seattle’s Best Post Alley
Cheap Everyday Good: Aldi Dark French Roast
What happens when you have company? I bought the Melitta 10 cup coffee maker. I have not used it, but I got it just in case.
My response to Nick Montfort's recent article asserting that the era of the open web as the main platform for digital writing has forever passed.
Like: Fogknife : Rejecting the "Post-web era" while embracing The Future (5 minute read)
The cool part of the web is that you can carve out your own little chunk of it. We can build whatever web we want.
Bookmark: via 10 Tab Management Tricks | Vivaldi Browser
These come straight from the users. If you always have a lot of tabs open, Vivaldi is your browser.
This blog is just a couple of months old, and the same goes for the domain. I was looking at my Comments admin panel, and I have just over 200 approved "comments"; this includes both written comments and mentions, which appear on site as "facepiles". I'm thinking only about 3 or 4 of those comments came from the traditional comment forms at the bottom of each post. The rest come from Indieweb-style webmentions from other Indieweb blogs, Micro.blog, Twitter and G+.
I'm not telling you this to brag. This is still just an insignificant, dumpy, tiny, newish blog. But I have blogged before, and while I have had participation, I have never had this level of good, thoughtful, helpful engagement. It just does not happen on a new blog by a nobody. Ever.
Part of this is that I stumbled upon the Indieweb, something we bloggers never had before, and its people responded. Part is due to the great Micro.blog community.
I can also tell you this much: people do click on the links I leave when commenting on other people's blogs, and equally, people are clicking on the links other people leave on this blog.
Yup, these Indieweb folks are definitely on to something.
Bookmark via WSJ: Yahoo plans to scan users’ messages for data to sell to advertisers / Boing Boing
Good link to the privacy settings in the Boing Boing article.
One thing I liked about Microsoft Outlook (Hotmail) is they said they don’t scan your emails for advertising.
#search engines #social networks #silos #indieweb
I hear a lot of people wanting the social network silos (mainly Facebook and Twitter) to go away. I too want them to go. Eventually. But before they do, I want to examine some things in this little essay.
Moreover, that hole in Google (plus Google's bad record on privacy) gave smaller search engines just enough breathing room to try to become established (e.g. DuckDuckGo, Qwant, Mojeek).
Web Advertising: Again, before Facebook and Twitter, Google had a lock on both search advertising and display advertising. Facebook in particular opened that up. Suddenly, sellers had an alternative place for ad campaigns besides something owned by Google. If you are not selling stuff this means nothing to you, but if you are in business, large or small, it means a lot.
Traffic: Posting on Facebook and Twitter can drive a lot of traffic to your website or blog. Syndication (crossposting) is just another way of posting. I’m convinced that a whole new generation has grown up that really does not remember the times before Facebook, Twitter and the other social network silos. I can see it by their actions and inactions. They don’t know how to get traffic besides syndicating to Facebook and Twitter. What happens if those two cut off syndication? What happens if everybody leaves FB and Twitter so nobody reads your posts?
See, right now as a blogger, I don’t really need Google traffic. I have Indieweb webmentions, Twitter and other social networks for traffic. But if Twitter goes down or walls itself off, it is going to be lean pickings for visitors.
My biggest fear, is that if Facebook and Twitter suddenly crumble, we will go right back to having Google control everything. By that I mean Google will control both traffic and discovery on the Web.
Yes, it won't be quite as all-pervasive as it was before, at least as long as Bing sticks around and does not jump the shark. Indieweb stuff is good but still a tiny niche (heck, blogs are a small niche). Smart things are being worked on and experimented with: new kinds of automated directories, new innovative webrings, all discovery tools, but they are not ready yet, and nobody among the public knows how to use them. Things like RSS, which is a good source of repeat traffic, are experiencing a revival, but again this is just a small segment. Given time I think RSS will be big, but it ain't there yet.
Google is a silo too. And I can tell you Google is part of what sucked all the fun out of Web 1.0. Facebook and Twitter were not even around. It was Google. And living under Google dominance is no fun. Right now the Facebooks and the Twitters are still around so word can spread without Google. It’s a rare opportunity but you better hurry.
Seriously, if FB and Twitter unravel quickly, how do we counter the Google silo? Ideas?
via GitHub - dylanbeattie/rockstar: The Rockstar programming language specification
But why?
Mainly because if we make Rockstar a real (and completely pointless) programming language, then recruiters and hiring managers won't be able to talk about 'rockstar developers' any more.
Also 'cos it's kinda fun and any language based on the idea of compiling Meatloaf lyrics has to be worth a look, right?
This was also posted to /en/code.
via LG G7 One announced: Android One flagship with solid specs and ‘exceptional’ price - The Verge
I hope this comes to the US. We need more Android One phones with some specs.
A few days ago I bought a Windows laptop, my first Windows 10 machine and my first Windows computer in decades. But this is mainly a hardware review; Windows is Windows no matter what it is running on, so I will save most of my adventures with Win10 for a separate post.
I searched Amazon high and low trying to find something suitable, new, at my price point, that I didn't have to wait a week to get. And I was pretty well stymied; with every laptop there was always something missing. Finally, out of desperation, I clicked on Amazon's Choice recommended lineup for laptops. And that is where I found my Asus Vivobook Thin and Light 14" laptop. (How's that for a mouthful?) Frankly, it was a little more than I wanted to spend, but it was about the only laptop that had all the features I really wanted and wasn't on back order or something. It was in stock and ready to ship.
The computer arrived on time. The box contained the laptop, two small booklets and the AC power adapter. That's it, and frankly that is all you need. One booklet was a Getting Started guide: a fast read, easy to understand. The second, much thicker, booklet was written by lawyers and bean counters, so I put it aside without looking at it. I liked that there were not a lot of disks or loose papers - clean and neat.
Everything on the Asus seems to be well made.
Silenced Cortana: This is just my personal preference. Cortana is decent; if you like it, keep it on. I just don't like things talking to me.
Games: I’m not a gamer so Candy Crush could go. Minecraft and more went too.
MS Office Trial: I don’t need it and I’m not going to pay for it. (See below.)
Tiles: Those things in the Windows Start menu left over from Windows 8. They take up a lot of memory, so I got rid of them.
Google Chrome: Google has this loaded with ways to track you. I got rid of it.
Anti ransomware: Cybereason RansomFree (free). Always defend in depth.
Office: WPS Office (free) also came on the laptop. I already have WPS on my Android phone and it seems decent for the little I use it, so I kept it. A free heavy-duty office suite is LibreOffice, which is great if you want more.
Browser: I installed Vivaldi, which is my favorite and takes care of my privacy concerns. Normally I would also install Firefox to have a backup browser, but Win10 comes with the Edge browser, which you cannot uninstall, so I will use that as a backup for now.
eMail: Win10 comes with a program “Mail” which looks like a modern version of Outlook. It’s attractive. Some people have reported problems with it, so I took some advice and installed Mailbird, which I like so far.
That's it. You have to root around Win10 to change defaults, but that is a Windows thing. None of this software swapping in any way reflects badly on Asus; I would do the same with any Windows 10 computer.
One final thing: under Windows Settings >> Privacy (I think), I shut down all the tracking and reporting Win10 sends to Microsoft. If you use Cortana, you might need some of it, but I don't, so I shut it all down.
Bookmark: via NewsGuard Browser Extension Aims to Alert You to Fake News Sites
If this works it’s a great find.
Bookmark via How to Set Up Sync in Vivaldi and Synchronize Your Browsing Data
It’s still experimental but fairly stable. From MakeUseof.
I added a Guestbook page (under About) using a Wordpress plugin I found: Gwolle Guestbook.
What I like about this guestbook plugin is that it has multi-layer protection against spam, plus a final human review before anything goes live. Also, it does not try to use the Comments part of Wordpress. Somehow it just seems tidier to keep comments, which in my mind belong to posts, separate from guestbook entries.
Heck, it's all part of an experiment: can you have old things like guestbooks in 2018? Will anybody know what they are? Will they be useful? The only way to tell is to try it.
And it might lead to discovery of new websites for viewers.
The idea came from @vega's Weird Indieweb Idea of the Day: Guestbooks. Many thanks, Vega.
I updated my On Blog Design post to include many links giving examples, as they exist on this blog, of what I'm talking about. I think it works better, or at least is less muddled.
In reply to: Ticker Tape Parade.
What, no black background with neon green text? 😀
Seriously, I really like this. At first I was disoriented for a second because I was expecting rigid and flat. But then it just all clicks and makes so much sense. The threading and the different sub styles for different post types all work really well together. Colors are good. Most important, it’s highly readable. Very cool.