Regardless of whether you think that search is a popularity contest, for years Google categorized the web in terms of “haves” and “have-nots” by maintaining two indexes: the normal, regularly crawled results, and the supplemental results, those slightly off, unpopular, ugly performers who, try as they might to hang with the cool kids, still got put in the corner.
Of course, if this happened to your website, it was a pretty big bummer. And for larger sites, some pages almost inevitably sank into the dusty corners of Google’s index, with the demotion blamed on everything from duplicate meta tags to limited content to low page authority.
That was all abolished a couple of months back when Google announced the end of the supplemental results, though SEOs have speculated about what exactly HAS happened to all of those pages… after all, Google can’t possibly search the whole web on every query, can they?
Well, last night’s post on the fate of supplemental results certainly suggests they can. It’s a mind-bogglingly massive operation, one that requires, as they say, “some truly amazing technical feats,” but it is promising news for all of those poor pages that didn’t quite make the A-list cut (which makes me wonder who it’s a victory for: small businesses with poor SEO skills or the legions of web spammers).
Many were happy to see the supplemental index go (after all, it was the kiss of death for web rankings), but with it went an important indicator for diagnosing both a site’s quality in Google and its adherence to SEO best practices on sprawling, content-managed sites. With the new system, there is renewed life for all of those pages… and the double responsibility to ensure your content is keyword-rich, unique, and readable.