I’m writing this after John Mueller caused a minor stir on Twitter on Monday, with this post:
The concept of toxic links is something that's made up by SEO tools -- I'd just ignore it, and perhaps move on to more serious tools.
— johnmu.xml (personal) (@JohnMu) June 6, 2022
Now, at Moz we don't actually use this “toxic” language in our tools or accompanying guides, so this probably isn’t aimed at us. That said, I do think there’s an interesting discussion to be had here, and our competitor Ahrefs drew an interesting conclusion about how this applies to third-party metrics like “Spam Score”, which of course is a term we coined:
Good thing we managed to resist this very popular feature request for quite a few years. https://t.co/Obci3Gx4xs pic.twitter.com/0Uc1E1uCzi
— Tim Soulo (@timsoulo) June 7, 2022
At the risk of getting eviscerated by John Mueller, and perhaps the entire SEO industry on Twitter, I want to push back slightly on this. To be clear, I don’t think he’s wrong, or acting in bad faith. However, there is sometimes a gap between how Google talks about these issues and how SEOs experience them.
Google has suggested for a while now that, essentially, bad (“toxic”) links won’t have a negative impact on your site — at least in the overwhelming majority of cases, or perhaps even all cases. Instead, the algorithm will supposedly be smart enough to simply not apply any positive benefit from such a link.
If this is true now, it definitely wasn’t always true. Even today, though, many SEOs will say this description is not consistent with their own recent experience. That could be confirmation bias on their part. Alternatively, it could be a case where the Google algorithm has an emergent characteristic, or indirect effect: it can be true both that something is (or isn’t) a ranking factor and that it nonetheless pushes rankings in one direction or another. (My former colleague Will Critchlow has talked about this pattern in SEO a bunch, and I have written about the distinction between something affecting rankings and it being a ranking factor.)
Either way, whether links like these are actively negative or merely not beneficial, it’s surely useful to have some clues as to which links those are. That way you can at least prioritize or contextualize your efforts, or indeed your competitor’s efforts, or your potential acquisition’s efforts, accordingly.
This is the purpose of Moz’s Spam Score metric, and other metrics like it that now exist in the industry. True, it isn’t perfect — nothing can be in this space — as Google’s algorithm is a black box. It’s also, like almost all SEO metrics, very frequently misunderstood or misapplied. Spam Score works by quantifying common characteristics between sites that have been penalized by Google. As such, it’s not magic, and it’s perfectly possible for a site to have some of these characteristics and not get penalized, or even remotely deserve to be penalized.
We would, therefore, encourage you not to monitor or attempt to optimize your own site’s Spam Score, as this is likely to result in you investing in things which, although correlated, have no causal link with search performance or penalties. Similarly, this is not a useful metric for questions that don’t relate to correlations with Google penalties — for example, a site’s user experience, its reputation, its editorial rigor, or its overall ability to rank.
Nonetheless, Spam Score is a better clue than SEOs would otherwise have as to which links might be less valuable than they initially appear. That is why we offer it, and why we will continue to do so.