I couldn’t believe the response to my last post about coming up with content ideas in the B2C space during COVID-19. Thank you to all who read and commented — I truly hope it was helpful.
One piece of feedback we received was a request for some B2B content ideas, which, frankly, is an excellent subject. At first I was stumped about how to approach it, but then I decided that a different tool could do the trick.
Exploding Topics, the new tool by Brian Dean (Backlinko) and Josh Howarth, explores topics that are surging in popularity but haven’t hit their peak.
This time around, rather than focusing on specific keywords, I focused on overall trends so we can identify which categories might be of interest to your target businesses and their audiences. Then, you can examine whether these trends make sense for your niche and draw inspiration from them for your content.
All things remote
This trend obviously applies to B2C as well, but it’s an important consideration for B2B. Nearly everything has been either canceled, paused, or moved into the world of the virtual. For many companies and industries, this is uncharted territory, and they need guidance.
There is another category I could have included here that focuses on website and app development, programming, and the open source tools that help people build those types of assets as they lean more into digital.
If you’re not one of these B2B providers, there are still ways to gain inspiration from this data. Consider if your brand can provide:
The logistics of how to set up remote platforms
Best practices on how to make anything remote more successful and engaging
Comparison guides for different tools and solutions
A platform for people to offer the help and support they're hoping to provide (as in the case of virtual tip jars)
Communication tips and solutions to help people stay productively connected
Shipping and delivery
Consumers are interested in having things shipped directly to them, but not everyone has the infrastructure to deal with shipping to begin with, let alone an increased order volume with the (understandable) safety limitations now in place.
Consumers and businesses alike are curious about how to make the shipping and delivery process more effective.
Consider if your brand can provide:
Guides for small businesses who’ve never had to ship product before
Tips on how companies can message shipping updates and delays to consumers
Advice on how to improve the delivery component of a business
UX or language tips for updating delivery messaging in apps or on websites
Transactions and payment
As we’re all staying six feet away from each other, we’re also trying not to hand off credit cards (let alone cash). Companies used to brick-and-mortar business models are also needing to adapt to fully digital payment systems.
Not all of these searches apply to business (like Venmo), but they do point to a concern everyone’s having: How do we pay for things now?
Consider if your brand can provide:
Answers about privacy or security questions people have regarding digital payments
A detailed list of all the payment options available
Advice on how to optimize storefronts and purchasing processes
Explanations of how payment processes can impact sales, and how to optimize them
Design tools
This section speaks to an overall trend I touched on before: Professionals now build their own assets if they can’t afford to hire web developers, designers, etc. More and more people are trying to figure out how to keep their businesses going when they can’t keep on as much staff or hire as many contractors.
Perhaps you can identify what your target audience might be struggling with and suggest free or inexpensive online tools to help.
Consider if your brand can provide:
A list of tools that can assist your target audience in communicating, organizing, creating, etc.
Design advice to help them get up to speed as quickly as possible
Resources on how to complete tasks with a smaller team
Recommendations for what should be prioritized when money is tight
Ethical trends
This is perhaps the most fascinating trend I saw arise. The four brands below have something in common: they all have to do with either sustainability or a transparent, mission-driven approach.
My theory is that, now that people don't have as much disposable income, they're becoming more mindful in their shopping choices, selecting items they believe match their own values.
Consider if your brand can provide:
A greater level of analysis on this potential trend
Research into how the consumer perspective has shifted during COVID-19
Advice on how to potentially shift marketing, branding, and advertising messaging
Tips on how your target audience can better understand their marketing during this tumultuous time
And finally (*sigh of relief*), marketing
Yes, as I was doing my research, my instinct that marketing would remain crucial during this time was confirmed.
That doesn’t mean you won’t lose business. We’ve had clients pull back because even though they’d like to keep marketing, keeping the company afloat by fulfilling their product orders and services and paying their employees will always (and very understandably) come first by a long shot.
But for businesses that can still afford marketing, they’ll likely need it, and they’re looking for the tools and insight they need to thrive.
Consider if your brand can provide:
Marketing 101 tips for smaller businesses
Specific how-to guides for different aspects of inbound or outbound marketing
Tool recommendations to help people get marketing tasks done quickly and cheaply
Advice on the kind of marketing that’s most successful during an economic downturn
Conclusion
Remember: This is only for inspiration. What matters most is what your target audience needs and wants. Put yourself in their shoes to be able to best address their challenges and concerns.
But hopefully some of these concepts spark some ideas for how your B2B brand can provide value to your target audiences. Companies around the world are looking for guidance and support now more than ever, and if you’re in a position to provide it to them, your content can go a long way in building trust.
When it boils down to it, every idea in SEO can be understood as a set of measurements we use to rank one page over another. And that means that when it comes to measuring a concept like the authoritativeness of your content, there are almost certainly factors that you can analyze and tweak to improve it.
But if Google were to use a measure of content authority, what might go into it? Against what yardstick should SEOs be measuring their content's E-A-T? In this episode of Whiteboard Friday, Russ Jones walks us through a thought experiment as to what exactly might constitute a "content authority" score and how you can begin to understand your content's expertise like Google.
Click on the whiteboard image above to open a high-resolution version in a new tab!
Video Transcription
Hey, folks, this is Russ Jones here with another Whiteboard Friday, and today we're going to have fun. Well, at least fun for me, because this is completely speculative. We're going to be talking about this concept of content authority and just some ideas around ways in which we might be able to measure it.
Maybe Google uses these ways to measure it, maybe not. But at the same time, hopefully what we'll be able to do is come up with a better concept of metrics we can use to get at content authority.
Now, we know there's a lot of controversy around this. Google has said quite clearly that expertise, authority, and trustworthiness are very important parts of their Quality Rater Guidelines, but the information has been pretty flimsy on exactly what part of the algorithm helps identify this type of content.
We do know that they aren't using the quality rater data to train the algorithm, but they are using it to reject algorithm changes that don't actually meet these standards.
How do we measure the authoritativeness of content?
So how can we go about measuring content authority? Ultimately, any kind of idea that we talk about in search engine optimization has to boil down, in some way, shape, or form, to a set of measurements that are being made and then used to rank one page over another.
Now sometimes it makes sense just to kind of feel it, like if you're writing for humans, be a human. But authoritative content is a little bit more difficult than that. It's a little harder to just off the top of your head know that this content is authoritative and this isn't. In fact, the Quality Rater Guidelines are really clear in some of the examples of what would be considered really highly authoritative content, like, for example, in the News section they mention that it's written by a Pulitzer Prize winning author.
Well, I don't know how many of you have Pulitzer Prize winning authors on your staff or whose clients have Pulitzer Prize winning authors. So I don't exactly see how that's particularly helpful to individuals like ourselves who are trying to produce authoritative content from a position of not being an award-winning writer.
So today I want to go through a whole bunch of ideas that have been running through my head, with the help of people from the community who've given me ideas and bounced things around with me, that we might be able to use to do a better job of understanding authoritative content. All right.
1. ALBERT
So these are what I would consider some of the potential measures of authoritative content. The first one, and this is just going to open up a whole rat's nest I'm sure, but okay, ALBERT. We've talked about Google's use of BERT for understanding language. Well, ALBERT, which stands for "A Lite BERT," is a similar model used by Google, and it's actually been trained in specific circumstances for the goal of answering questions.
Now that might not seem like a particularly big deal. We've been doing question answering for a whole long time. Featured snippets are exactly that. But ALBERT has jumped on the scene in such a dominant fashion as to have eclipsed anything we've really seen in this kind of NLP problem.
Take the SQuAD competition, Stanford's Question Answering Dataset: they've got a giant set of questions and a giant set of documents, humans went in and found the answers in the documents (and marked which documents have answers and which don't), and then all sorts of organizations have produced models to try to find the answers automatically.
Well, this competition has just been going back and forth and back and forth for a really long time between a bunch of heavy hitters, like Google, Baidu, multiple Microsoft teams. We're talking the smartest people in the world, the Allen Institute, all fighting back and forth.
Well, right now, ALBERT or variations thereof have the top 5 positions and 9 of the top 10 positions, and all of them perform better than humans. That is dominance. So we've got right here this incredible technology for answering questions.
Well, what does this have to do with content authority? Why in the world would this matter? Well, if you think about a document, any kind of piece of content that we produce, the intention is that we're going to be answering the questions that our customers want answered. So any topic we start with, let's say the topic we started with was data science, well, there are probably a lot of questions people want to know about that topic.
They might want to know: What is a data scientist? How much money do they make? What kind of things do you need to know to be a data scientist? Well, this is where something like ALBERT could come in and be extremely valuable for measuring the authoritativeness of the content. You see, what if one of the measures of the authoritative content is how well that content answers all of the related questions to the topic?
So you could imagine Google looking at all of the pages that rank for data science, and they know the top 10 questions that are asked about it, and then seeing which piece of content answers those 10 questions best. If they were able to do that, that would be a pretty awesome metric for determining how thorough and how significant and valuable and useful and authoritative that content is.
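To make that idea concrete, here's a minimal sketch of a "question coverage" check, assuming you have the Hugging Face transformers library installed. The checkpoint name and the 0.5 score cutoff are my own illustrative choices, not anything named in this post or confirmed by Google.

```python
# A minimal sketch of "question coverage" scoring with a SQuAD-style QA model.
# The checkpoint below is an example ALBERT variant fine-tuned on SQuAD 2.0 (an
# assumption, not a model named in this post), and the 0.5 cutoff is arbitrary.
from transformers import pipeline

qa = pipeline("question-answering", model="twmkn9/albert-base-v2-squad2")

def question_coverage(page_text, questions, min_score=0.5):
    """Return the share of topic questions the page appears to answer."""
    answered = 0
    for question in questions:
        result = qa(question=question, context=page_text)
        if result["score"] >= min_score and result["answer"].strip():
            answered += 1
    return answered / len(questions)

questions = [
    "What is a data scientist?",
    "How much money does a data scientist make?",
    "What skills do you need to become a data scientist?",
]
print(question_coverage(open("my-data-science-article.txt").read(), questions))
```

Under a hypothetical metric like this, a page that answers eight of the ten questions people actually ask about a topic looks a lot more authoritative than one that answers two.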
So I think this one, the ALBERT algorithm really has a lot of potential. But let's move on from that. There are all sorts of other things that might have to do with content authority.
2. Information density
One that I really like is this idea of information density. So a lot of times when we're writing content, especially when we're not familiar with the topic, we end up writing a lot of fluff.
We kind of are just putting words in there to meet the word length that is expected by the contract, even though we know deep down that the number of words on the page really doesn't determine whether or not it's going to rank. So one of the ways that you can get at whether a piece of content is actually valuable or not or at least is providing important information is using natural language programs to extract information.
ReVerb + OpenIE
Well, probably the most popular open source, or at least openly available, NLP technology for this started as a project called ReVerb and has now merged into the Open IE project. But essentially, you can give it a piece of content, and it will extract all of the factual claims made by that content.
So if I gave it a paragraph that said tennis is a sport that's played with a racket and a ball and today I'm having a lot of fun, something of that sort, it would be able to identify the factual claim, what tennis is, that it's a sport played with a racket and a ball.
But it would ignore the claim that I'm having a lot of fun today, because that's not really a piece of information, a factual claim that we're making. So the concept of information density would be the number of facts that can be extracted from a document versus the total number of words. All right.
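As a rough stand-in for a full ReVerb/OpenIE pipeline, here's a sketch of that ratio using spaCy's dependency parse. The naive subject-verb-object count is my own simplification for illustration, not how those projects actually work.

```python
# A rough proxy for "information density": factual-looking triples per 100 words.
# The naive subject-verb-object pass below is a simplification standing in for a
# real open information extraction system such as ReVerb or OpenIE.
import spacy

nlp = spacy.load("en_core_web_sm")

def information_density(text):
    doc = nlp(text)
    triples = 0
    for sentence in doc.sents:
        for token in sentence:
            if token.pos_ in ("VERB", "AUX"):
                has_subject = any(c.dep_ in ("nsubj", "nsubjpass") for c in token.children)
                has_object = any(c.dep_ in ("dobj", "attr", "obj") for c in token.children)
                if has_subject and has_object:
                    triples += 1
    words = sum(1 for t in doc if t.is_alpha)
    return 100.0 * triples / max(words, 1)

print(information_density("Tennis is a sport played with a racket and a ball."))
```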
If we had that measurement, then we could pretty easily sift through content that is just written for length versus content that is really information rich. Just imagine a Wikipedia article, how dense the information is in there relative to the type of content that most of us produce. So what are some other things?
3. Content style
Let's talk about content style.
This would be a really easy metric. We could talk about the use of in-line citations, which Wikipedia does, in which after stating a fact they then link to the bottom of the page where it shows you the citation, just like you would do if you were writing a paper in college or a thesis, something that would be authoritative. Or the use of fact lists or tables of contents, like Wikipedia does, or using datelines accurately or AP style formatting.
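These presentation signals are easy to audit on your own pages. Below is a small sketch that counts Wikipedia-style in-line citation links and checks for a table of contents; the CSS selectors are assumptions about common markup, not a known Google signal.

```python
# Count in-line citation links and check for a table of contents in a page's
# HTML. The selectors are assumptions about Wikipedia-style markup, nothing more.
import requests
from bs4 import BeautifulSoup

def content_style_signals(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    inline_citations = len(soup.select("sup a[href^='#']"))  # footnote-style refs
    has_toc = soup.select_one("#toc, .toc, nav.table-of-contents") is not None
    return {"inline_citations": inline_citations, "has_table_of_contents": has_toc}

print(content_style_signals("https://en.wikipedia.org/wiki/Tennis"))
```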
These are all really simple metrics that, if you think about it, more trustworthy sites tend to use more often. If that's the case, then they might be hints to Google that the content you're producing is authoritative. So those aren't the only easy ones that we could look at.
4. Writing quality
There are a lot of other ones that are pretty straightforward, like dealing with writing quality.
How easy is it to make sure you are using correct spelling and correct grammar? But have you ever looked at the reading level? Has it ever occurred to you to make sure that the content that you're writing isn't written at a level so difficult that no one can understand it, or is written at a level so low as to be certainly not thorough and not authoritative? If your content is written at a third-grade level and the page is about some health issue, I imagine Google could use that metric pretty quickly to exclude your site.
There are also things like sentence length, which deals with readability, the uniqueness of the content, and also the word usage. This is a pretty straightforward one. Imagine that once again we're looking at data science, and Google looks at the words you use on your page. Then maybe instead of looking at all sites that mention data science, Google only looks at edu sites or Google only looks at published papers and then compares the language usage there.
That would be a pretty easy way for Google to identify a piece of content that's meant for consumers that is authoritative versus one that's meant for consumers and isn't.
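If you want to put numbers on those writing-quality checks yourself, readability scores are easy to compute. Here's a minimal sketch using the textstat package, which is just one convenient option; the post doesn't name a specific tool.

```python
# A quick writing-quality report: reading grade level, reading ease, and average
# sentence length. textstat is one convenient choice, not a tool named above.
import textstat

def writing_quality_report(text):
    sentences = textstat.sentence_count(text)
    words = textstat.lexicon_count(text)
    return {
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "avg_sentence_length": round(words / max(sentences, 1), 1),
    }

print(writing_quality_report(open("my-health-article.txt").read()))
```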
5. Media styles
Another thing we can look at is media styles. It's a little more difficult to see how Google might actually take advantage of these.
But at the same time, I think that these are measurable and easy for search engine optimizers, like ourselves, to use.
Annotated graphs
One would be annotated graphs. I think we should move away from graph images and move more towards using open source graphing libraries. That way, the actual factual information, the numbers, can be provided to Google in the source code.
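For example, here's one way to do that with Plotly (my choice of library for illustration): the chart's underlying numbers end up embedded in the page's HTML rather than flattened into a static image.

```python
# Render a chart with an open source graphing library so the underlying numbers
# ship in the page source instead of a flat image. Plotly is just one option.
import plotly.graph_objects as go

fig = go.Figure(go.Bar(
    x=["2018", "2019", "2020"],
    y=[120, 150, 210],               # illustrative numbers only
    name="Organic visits (thousands)",
))
fig.update_layout(title="Organic traffic by year")
fig.write_html("chart.html", include_plotlyjs="cdn")  # data embedded as JSON in the HTML
```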
Unique imagery
Unique imagery is obviously something that we would care about. In fact, it's actually listed in the Quality Rater Guidelines.
Accessibility
Then finally, accessibility matters. I know that accessibility doesn't make content authoritative, but it does say something about the degree to which a person has cared about the details of the site and of the page. There's a really famous story about a band (I can't remember the name) that wrote into their contracts that, for every concert, they needed to have a bowl of M&Ms, with all of the brown M&Ms removed, waiting for them in the room.
Now it wasn't because they had a problem with the brown M&Ms or they really liked M&Ms or anything of that sort. It was just to make sure that they read the contract. Accessibility is kind of one of those things of where they can tell if you sweat the details or not.
6. Clickbait titles, author quality, and Google Scholar
Now finally, there are a couple of others that I think are interesting and really have to be talked about. The first is clickbait titles.
Clickbait titles
This is explicitly identified as something that Google, or at least the quality raters, looks at in order to determine that content is not authoritative. Make your titles say what they mean; don't exaggerate just to get a click.
Author quality
Another thing they say specifically is do you mention your author qualifications. Sure, you don't have a Pulitzer Prize writer, but your writer has some sort of qualifications, at least hopefully, and those qualifications are going to be important for Google in assessing whether or not the author actually knows what they're talking about.
Google Scholar
Another thing that I think we really ought to start looking at is Google Scholar. How much money do you think Google makes off of Google Scholar? Probably not very much. What's the point of having a giant database of academic information when you don't run ads on any of the pages? Well, maybe that academic information can be mined in a way that lets them judge whether content made for consumers is in line, whether we're talking about facts, language, or authoritativeness, with what academia is saying about that same topic.
Now, of course, all of these ideas are just ideas. We've got a giant question mark sitting out there about exactly how Google gets at content authority. That doesn't mean we should ignore it. So hopefully these ideas will help you come up with some ways to improve your own content, and maybe you can give me some more ideas in the comment section.
That would be great and we could talk more about how those might be measured. I'm looking forward to it. Thanks again.
Often in SEO, we get so preoccupied with technical SEO (pagination, site speed, the latest Python course, etc.) that we forget the basis of winning SEO begins and ends with keywords.
Not choosing keywords before you start with SEO means shooting in the dark — a gamble, and likely a losing one, on whether your content will succeed.
Choosing the wrong keywords means wasting your time and budget on content that will never gain visibility in search results.
Conversely, choosing smart, targeted keywords can help carve out and dominate a traffic niche that raises you above the competition.
No doubt, the difference between good SEOs and mediocre SEOs is often their keyword research strategy.
Here at Moz, a question we often hear after people finish reading the famous Beginner's Guide to SEO is: What do I read next?
To give people a practical place to start, we wanted to provide you with concrete keyword research workflows. It's as if you're looking over our shoulder as we do strategic keyword research.
We also included a few intermediate-to-advanced concepts, such as keyword grouping, understanding keyword priority, and on-page keyword optimization.
And finally, we wanted to make sure it was free.
If you want, feel free to jump to the guide now, or read below about what the guide covers and how it differs from any other guide on keyword research.
We call your starting keywords "seed" keywords because all your other keywords grow out of them. Finding the right seed keywords will absolutely make or break your entire keyword research strategy.
Finding the right seed keywords is about asking and answering three key questions:
What do you think you want to rank for?
What do you already rank for?
What do your competitors rank for?
After this, you validate your answers with data to find the absolute best seeds.
We also show you the exact process and tools we use to extract these seeds, such as Google Search Console (shown below).
The cool thing about seed keywords is this: they grow more seeds! Once you find the right seeds, you can repeat the process again and again to grow a complete keyword strategy for an entire site, even one with thousands of pages.
This is where the rubber hits the road. Here you expand your seed keywords into complete lists. These lists support multiple pages and topics, and can even grow more seeds.
This is also the place you want to be as comprehensive as possible, in order to uncover the opportunities your competition probably missed.
Nearly any old keyword tool can give you lists of hundreds or thousands of keywords. The secret to success is knowing which keywords to prioritize and pursue.
Which keywords will actually prove profitable? Which keywords can you actually rank for?
To answer these questions, we do a deep dive into the keyword metrics that help us to prioritize our keyword lists:
Relevance
Monthly volume
Keyword difficulty
Organic click-through rate (CTR)
Priority
Understanding how to use these metrics goes a long way in choosing the exact right keywords to invest in.
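To make the idea of blending these metrics concrete, here's a toy scoring function. It is not Moz's Priority formula, just an illustration of the general trade-off: reward volume, relevance, and expected organic CTR, and penalize difficulty.

```python
# A toy keyword priority score, NOT Moz's actual Priority metric.
# Inputs: monthly volume, plus difficulty, organic CTR, and relevance as 0-1 values.
import math

def keyword_priority(volume, difficulty, organic_ctr, relevance):
    volume_score = min(math.log10(volume + 1) / 6, 1.0)  # cap around 1M searches/month
    return round(100 * volume_score * organic_ctr * relevance * (1 - difficulty), 1)

print(keyword_priority(volume=5400, difficulty=0.45, organic_ctr=0.65, relevance=0.9))
```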
Keywords never exist in a vacuum. Instead, they almost always appear with other keywords.
Adding related keywords to a page is a smart strategy for increasing topical relevance. At the same time, trying to target too many keywords on the same page may dilute their relevance and make it more difficult to rank.
Here, we show you techniques to address both of these problems:
When to create separate pages for each keyword
How to group related keywords together
We'll also show you some grouping tips to help set you up for your next task: on-page keyword optimization.
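As a simple illustration of grouping (not the method from the guide), you can cluster phrases around a shared head term and then decide which groups deserve their own page:

```python
# Cluster keyword phrases around a crude "head term" (the longest word in the
# phrase). This is only an illustrative heuristic, not the guide's grouping method.
from collections import defaultdict

def group_keywords(keywords):
    groups = defaultdict(list)
    for keyword in keywords:
        head = max(keyword.lower().split(), key=len)
        groups[head].append(keyword)
    return dict(groups)

print(group_keywords([
    "best running shoes",
    "running shoes for flat feet",
    "trail running shoes",
    "marathon training plan",
]))
```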
Very few keyword research guides ever even mention on-page keyword optimization.
We wanted to do better.
Because keyword research uncovers intent, this is a great starting point for on-page optimization. If you understand not only what your users are searching for, but also what they expect to find, you can better create your content to satisfy their expectations.
We've also included a brief overview of where and how to incorporate keywords on the page. While this section is mostly beginner level, more experienced SEOs should find the refresher useful.
Whether you're a consultant, an agency, an in-house SEO, or simply working for yourself, you want to know how your keywords perform in search engines.
Traditionally, keyword tracking was synonymous with "ranking" — but times have changed. Today, with personalization, localization, and shifting competitive environments, keyword tracking has grown much more sophisticated.
In this chapter, we'll cover:
Traditional keyword ranking
Local rank tracking
Rank indexes
Share of Voice (SOV) and visibility
By the end of this chapter, you'll understand which type of keyword tracking is right for you, and how to report these numbers to the people who matter.
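As a rough illustration of the Share of Voice idea, you can weight each keyword's volume by an assumed click-through rate for your ranking position and compare that against the total volume available. The CTR curve below is illustrative only, not a Moz dataset.

```python
# A minimal share-of-voice estimate. The position-to-CTR curve is illustrative,
# not real Moz data, and real SOV models are more sophisticated than this.
CTR_BY_POSITION = {1: 0.28, 2: 0.15, 3: 0.11, 4: 0.08, 5: 0.07,
                   6: 0.05, 7: 0.04, 8: 0.03, 9: 0.03, 10: 0.02}

def share_of_voice(rankings):
    """rankings: list of (monthly_volume, your_position or None if not in the top 10)."""
    captured = sum(volume * CTR_BY_POSITION.get(position, 0.0)
                   for volume, position in rankings if position is not None)
    total = sum(volume for volume, _ in rankings)
    return captured / total if total else 0.0

print(f"{share_of_voice([(12000, 3), (880, 1), (5400, None)]):.1%}")
```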
We couldn't squeeze everything into the previous chapters, so we added all our extra resources here. The crème de la crème is the Keyword Research Cheat Sheet. You can download it, print it, share it with your team, or pin it to your wall.
We've also made a handy list of our favorite keyword research tools, along with a few other useful resources on keyword research.
We hope you enjoy! Let us know what you think in the comments below.
There are several studies (and lots of data) out there about how people use Google SERPs, what they ignore, and what they focus on. An example is Moz’s recent experiment testing whether SEOs should continue optimizing for featured snippets or not (especially now that Google has announced that if you have a featured snippet, you no longer appear elsewhere in the search results).
Two things I have never seen tested, though, are users' actual reactions to and behavior on SERPs. My team and I set out to test these ourselves, and this is where biometric technology comes into play.
What is biometric technology and how can marketers use it?
Biometric technology measures physical and behavioral characteristics. By combining the data from eye tracking devices, galvanic skin response monitors (which measure your sweat levels, allowing us to measure subconscious reactions), and facial recognition software, we can gain useful insight into behavioral patterns.
We’re learning that biometrics can be used in a broad range of settings, from UX testing for websites, to evaluating consumer engagement with brand collateral, and even to measuring emotional responses to TV advertisements. In this test, we also wanted to see if it could be used to help give us an understanding of how people actually interact with Google SERPs, and provide insight into searching behavior more generally.
The plan
The goal of the research was to assess the impact that SERP layouts and design have on user searching behavior and information retrieval in Google.
To simulate natural searching behavior, our UX and biometrics expert Tom Pretty carried out a small user testing experiment. Users were asked to perform a number of Google searches with the purpose of researching and buying a new mobile phone. One of the goals was to capture data from every point of a customer journey.
Participants were given tasks with specific search terms at various stages of purchasing intent. While prescribing search terms limited natural searching behavior, it was a sacrifice made to give the study the best chance of achieving consistency in the SERPs presented, so that results could be aggregated.
The tests were run on desktop, although in the future we have plans to expand the study on mobile.
Users began each task on the Google homepage. From there, they informed the moderator when they found the information they were looking for. At that point they proceeded to the next task.
Data inputs
Eye tracking
Facial expression analysis
Galvanic skin response (GSR)
Data sample
20 participants
Key objectives
Understand gaze behavior on SERPs (where people look when searching)
Understand engagement behavior on SERPs (where people click when searching)
Identify any emotional responses to SERPs (what happens when users are presented with ads?)
Interaction analysis with different types of results (e.g. ads, shopping results, map packs, Knowledge Graph, rich snippets, PAAs, etc.).
Research scenario and tasks
We told participants they were looking to buy a new phone and were particularly interested in an iPhone XS. They were then provided with a list of tasks to complete, each focused on searches someone might make when buying a new phone. Using the suggested search terms for each task was a stipulation of participation.
Tasks
Find out the screen size and resolution of the iPhone XS
Search term: iPhone XS size and resolution
Find out the talk time battery life of the iPhone XS
Search term: iPhone XS talk time
Find reviews for the iPhone XS that give a quick list of pros and cons
Search term: iPhone XS reviews
Find the address and phone number of a phone shop in the town center that may be able to sell you an iPhone XS
Search term: Phone shops near me
Find what you feel is the cheapest price for a new iPhone XS (handset only)
Search term: Cheapest iPhone XS deals
Find and go on to buy a used iPhone XS online (stop at point of data entry)
Search term: Buy used iPhone XS
We chose all of the search terms ourselves, first for ease of correlating data (if everyone had searched for whatever they wanted, we might not have seen certain SERP designs displayed), and second so we could make sure that everyone who took part got exactly the same results within Google. We needed the searches to return a featured snippet, the Google Knowledge Graph, Google's “People also ask” feature, as well as shopping feeds and PPC ads.
On the whole, this was successful, although in a few cases there were small variations in the SERP presented (even when the same search term had been used from the same location with a clear cache).
“When designing a study, a key concern is balancing natural behaviors and giving participants freedom to interact naturally, with ensuring we have assets at the end that can be effectively reported on and give us the insights we require.” — Tom Pretty, UX Consultant, Coast Digital
The results
Featured Snippets
This was the finding that our in-house SEOs were most interested in. According to a study by Ahrefs, featured snippets get 8.6% of clicks while 19.6% go to the first natural result below them, but when no featured snippet is present, 26% of clicks go to the first result. At the time, this meant that having a featured snippet wasn't terrible, especially if you could gain one while not ranking first for a term. Who doesn't want real estate above a competitor?
However, with Danny Sullivan of Google announcing that if you appear in a featured snippet, you will no longer appear anywhere else in the search engine results page, we started to wonder how this would change what SEOs thought about them. Maybe we would see a mass exodus of SEOs de-optimising pages for featured snippets so they could keep their organic ranking instead. Moz’s recent experiment estimated a 12% drop in traffic to pages that lose their featured snippet, but what does this mean about user behavior?
What did we find out?
In the information-based searches, we found that featured snippets actually attracted the most fixations. They were consistently the first element viewed by users and were where users spent the most time gazing. These tasks were also some of the fastest to be completed, indicating that featured snippets are successful in giving users their desired answer quickly and effectively.
All of this indicates that featured snippets are hugely important real estate within a SERP (especially if you are targeting question-based keywords and more informational search intent).
In both information-based tasks, the featured snippet was the first element to be viewed (within two seconds). It was viewed by the highest number of respondents (96% fixated in the area on average), and was also clicked most (66% of users clicked on average).
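For readers curious how figures like “96% fixated” and “66% clicked” might be derived, here is a minimal sketch assuming a per-participant event log. The column names and values are illustrative only, not the study's actual data.

```python
import pandas as pd

# Hypothetical event log: one row per participant per SERP element, recording
# whether they fixated on it and whether they clicked it. Column names and
# values are illustrative; the study's real data format was not published.
events = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3],
    "element":     ["featured_snippet", "organic_1"] * 3,
    "fixated":     [True, True, True, False, True, True],
    "clicked":     [True, False, False, True, True, False],
})

# Share of participants who fixated on / clicked each element, as percentages
summary = events.groupby("element")[["fixated", "clicked"]].mean() * 100
print(summary.round(1))
```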
People also ask
The “People also ask” (PAA) element is an ideal place to find answers to question-based search terms that people are actively looking for, but do users interact with them?
What did we find out?
The results showed that, after looking at a featured snippet, searchers skipped over the PAA element and moved on to the standard organic results. Participants did gaze back at PAAs, but clicks in those areas were extremely low, showing limited engagement. This behavior indicates that they are not distracting users or impacting how they journey through the SERP in any significant way.
Knowledge Graph
One task involved participants searching using a keyword that would return the Google Knowledge Graph. The goal was to find out the interaction rate, as well as where the main interaction happened and where the gaze went.
What did we find out?
Our findings indicate that when a search with purchase intent is made (e.g. “deals”), then the Knowledge Graph attracts attention sooner, potentially because it includes visible prices.
By also introducing heat map data, we can see that the pricing area on the Knowledge Graph picked up significant engagement, but there was still a lot of attention focused on the organic results.
Essentially, this shows that while the Knowledge Graph is useful space, it does not wholly detract from the main SERP column. Users still turn to paid ads and organic listings to find what they are looking for.
Location searches
We have all seen “near me” queries appear under certain keywords in Google Search Console, and there is an ongoing discussion of why, or how, to optimise for them. From a pay-per-click (PPC) point of view, should you even bother trying to appear for them? By introducing such a search term in the study, we were hoping to answer some of these questions.
What did we find out?
From the fixation data, we found that most attention was dedicated to the local listings rather than the map or organic listings. This would indicate that the greater amount of detail in the local listings was more engaging.
However, in a different SERP variant, the addition of the product row led to users spending a longer time reviewing the SERP and expressing more negative emotions. This product row addition also changed gaze patterns, causing users to progress through each element in turn, rather than skipping straight to the local results (which appeared to be more useful in the previous search).
Results that searchers deemed irrelevant or less important could be the main cause of the negative emotion and, more broadly, could indicate general frustration at having obstacles placed between the searcher and the answer.
Purchase intent searching
For this element of the study, participants were given queries that indicate someone is actively looking to buy. At this point, they have carried out the educational search, maybe even the review search, and now they are intent on purchasing.
What did we find out?
For “buy” based searches, the horizontal product bar operates effectively, picking up good engagement and clicks. Users still focused on organic listings first, however, before returning to the shopping bar.
The addition of Knowledge Graph results for this type of search wasn't very effective, picking up little engagement in the overall picture.
These results indicate that the shopping results presented at the top of the page play a useful role when searching with purchasing intent. However, in both variations, the first result was the most-clicked element in the SERP, showing that a traditional PPC or organic listing remains highly effective at this point in the customer journey.
Galvanic skin response
Looking at GSR while participants were on the various SERPs, there is some correlation between the tasks self-reported as “most difficult” and a higher-than-normal GSR.
For the “talk time” task in particular, the featured snippet presented information for the iPhone XS Max rather than the iPhone XS. This was likely the cause of both the negative reaction and the high difficulty rating, as participants had to spend longer digging through multiple sources of information.
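Here is a sketch of how that relationship could be tested, assuming per-task averages for GSR and self-reported difficulty. The numbers are invented purely to show the shape of the analysis; Spearman's rank correlation is used because difficulty ratings are ordinal.

```python
from scipy.stats import spearmanr

# Hypothetical per-task averages: mean peak GSR (microsiemens) and mean
# self-reported difficulty (1 = easy, 5 = hard), one value per task.
# These figures are made up and are not the study's actual measurements.
mean_gsr =        [0.41, 0.38, 0.55, 0.47, 0.62, 0.58]
mean_difficulty = [1.2,  1.4,  3.1,  2.0,  3.8,  3.5]

rho, p_value = spearmanr(mean_gsr, mean_difficulty)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```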
What does it all mean?
Unfortunately, this wasn't the largest study in the world, but it was a start. Running the study again with greater numbers would be ideal and would help firm up some of the findings (and I, for one, would love to see a much larger group take part).
That being said, there are some solid conclusions that we can take away:
The nature of the search greatly changes engagement behavior, even when similar SERP layouts are displayed (which is probably why they are so heavily split-tested).
Featured snippets are highly effective for information-based searching; while roughly a third of users chose not to follow through to the site after finding their answer, two-thirds still clicked through to the website (which is very different from the data we have seen in previous studies).
Local listings (especially when served without a shopping bar) are engaging and give users essential information in an effective format.
Even with the addition of Knowledge Graph, “People also ask”, and featured snippets, more traditional PPC ads and SEO listings still play a big role in searching behavior.
Featured snippets are not the worst thing in the world (contrary to the popular knee-jerk reaction from the SEO industry after Google's announcement). All that has changed is that you now have to work out which featured snippets are worth winning for your business, instead of trying to claim all of them. On purely informational or educational searches, they performed really well: people stayed fixated on them for a fairly lengthy period of time, and 66% clicked through. However, we also saw people react badly to a featured snippet when it contained irrelevant or incorrect information.
The findings also give some weight to the idea that a lot of SEO is now about context. What do users expect to see when they search a certain way? They generally expect shopping feeds for a purchase-intent keyword, but they wouldn't expect to see them in an educational search.
What now?
Hopefully, you found this study useful and learned something new about search behavior. Our next goal is to increase the number of people in the study to see if a bigger data pool confirms our findings or shows us something completely unexpected.