Google Search Console image search reporting bug June 5-7
Google posted a notice that between June 5 and June 7 it was unable to capture image search traffic data. This is just a reporting bug and did not affect actual search traffic, but the Search Console performance report may show drops in image search traffic for that date range.
The notice. The notice read, “June 5-7: Some image search statistics were not captured during this period due to an internal issue. Because of this, you may see a drop in your image search statistics during this period. The change did not affect user Search results, only the data reporting.”
How do I see this? If you log in to Google Search Console, open your performance report and click the “search type” filter. You can then select Image from the filter options.
Here is a screenshot of this filter:
Why we should care. If your site gets a lot of Google Image search traffic, you may notice a dip in your traffic reporting within Google Search Console, while seeing no similar dip in your other analytics tools. That being said, Google said this is only a reporting glitch within Google Search Console and did not impact actual traffic to your website.
via Search Engine Land https://selnd.com/1BDlNnc
June 17, 2019 at 02:17PM
Pinterest broadens e-commerce capabilities with ‘Complete the Look’ visual search feature
Pinterest has launched a new “Complete the Look” visual search tool that recommends relevant products in the home decor and fashion categories based on the context of a scene. For example, if a user searches for a beach scene Pin, the platform will recommend products found in similar images such as hats, sandals and sunglasses.
Why we should care
Pinterest has been focusing on its e-commerce capabilities for some time now, and this new visual search tool is another step in that direction. The technology makes it possible for the platform to recommend fashion and home decor products based on the context and attributes of all objects within an image a user searches for or saves. As a result, brands will potentially gain more exposure on Pinterest as more of their Pins are surfaced via visual search.
“Complete the Look takes context like an outfit, body type, season, indoors vs. outdoors, various pieces of furniture, and the overall aesthetics of a room to power taste-based recommendations across visual search technology,” write Eric Kim and Eileen Li, who are part of Pinterest’s visual search team.
A GfK “Path to Purchase” report from last November found that 78% of Pinterest users who engaged with home decor Pins made a purchase based on content shared by brands on the platform; that number rose to 83% among users who engaged with fashion Pins on a weekly basis.
About The Author
Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Search Engine Land, Marketing Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy's articles.
Decode the science of SEO: Live webinar tomorrow!
Since it first debuted in 2011, Search Engine Land’s Periodic Table of SEO has become a globally recognized tool that search professionals have relied on to help them understand the elements essential to a winning SEO strategy. And while much of the foundation of search engine optimization has either stayed the same or has become further entrenched, much has also changed as the web has become more mobile, instantly accessible and aligned to new Internet-connected devices.
Join the expert editorial team from Search Engine Land as they break down the elements that are either essential, are emerging or to be avoided at all costs in a modern SEO strategy. Register today for “The Elements of SEO — Exploring The 2019 Periodic Table of SEO Factors.”
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.
About The Author
Search Engine Land is a daily publication and information site covering search engine industry news, plus SEO, PPC and search engine marketing tips, tactics and strategies. Special content features, site announcements, notices about our SMX events, and occasional sponsor messages are posted by Search Engine Land.
Google allegedly caught scraping lyrics from Genius
Song lyrics website Genius said it has caught Google stealing lyrics from its website and featuring them as rich results, the Wall Street Journal reported Sunday.
Google denied the claims, stating that its lyrics are licensed from third-party partners, not generated from other websites, such as Genius.
“Lyrics in info boxes on Google Search are licensed, we don’t generate them from other sites on the web. We’re investigating this issue and if our data licensing partners are not upholding good practices, we will end our agreements,” Google said via its Google Communications Twitter account.
More on the story. In order to catch Google in the act, Genius began switching between curly single-quote marks and straight apostrophes, in the same sequence, for every song. When the quote marks and apostrophes are translated to the dashes and dots used in Morse code, the sequence spells out “red handed.” Then, Genius waited to see if the content appeared in a lyrics box.
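The watermarking trick can be sketched in a few lines of Python. Note that the specific mapping of straight vs. curly marks to Morse dots and dashes is an assumption for illustration; the report only says that, translated to Morse code, the alternating sequence spells out “red handed.”

```python
# Hypothetical reconstruction of Genius' apostrophe watermark.
# Assumption: straight apostrophe = dot, curly apostrophe = dash
# (the actual mapping was not disclosed).
MORSE = {
    "r": ".-.", "e": ".", "d": "-..",
    "h": "....", "a": ".-", "n": "-.",
}

def watermark_sequence(phrase: str) -> str:
    """Translate a phrase to Morse code, then to apostrophe styles."""
    symbols = "".join(MORSE[c] for c in phrase if c != " ")
    return "".join("'" if s == "." else "\u2019" for s in symbols)

def apply_watermark(lyrics: str, phrase: str = "red handed") -> str:
    """Replace each apostrophe in the lyrics with the next mark in the
    watermark sequence, cycling if the song has more apostrophes."""
    seq = watermark_sequence(phrase)
    out, i = [], 0
    for ch in lyrics:
        if ch in ("'", "\u2019"):
            out.append(seq[i % len(seq)])
            i += 1
        else:
            out.append(ch)
    return "".join(out)
```

Checking whether a lyrics box reproduces the same sequence of apostrophe styles is then enough to show the text was copied rather than independently transcribed.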
The lyrics box. Google began providing song lyrics in info box results in 2014, stating that it had licensed the lyrics. At that time, the lyrics linked to Google Play.
Genius first started to suspect that Google might be taking its lyrics in 2016, when one of its software engineers noticed that the Google version of the lyrics for a particular song perfectly matched Genius’ own version, despite Genius having obtained the definitive version from the artist himself. Genius says that it began notifying Google of the copied lyrics as early as 2017. In April 2019, it warned Google that reusing Genius’ transcriptions is a violation of its terms of service as well as antitrust law.
Why we should care. Genius’ complaint is just one example of the predicament that many companies currently face as Google serves more direct content right in its search results. As SEO and digital marketer AJ Kohn pointed out on Twitter, click-through rate can decrease dramatically when lyrics are presented as a rich result.
Brands and publishers invest heavily in creating useful content to attract visitors and monetize by selling products, services or ads on their own websites. Info boxes and similar result formats often keep users on Google rather than sending them to the creators’ websites. Search engines keep the traffic (and reap any rewards that may come from it) while leaving the heavy lifting to the brands.
Any legal case that Genius might file against Google is weakened by the fact that Genius does not possess the copyright on the lyrics. However, Google is facing more regulatory scrutiny for antitrust practices, which may give Genius’ complaint more weight. For now, Google is taking the position of blaming its partners for the problem.
About The Author
George Nguyen is an Associate Editor at Third Door Media. His background is in content marketing, journalism, and storytelling.
WordProof, CoBlocks update, Genesis framework 3.0 beta and more
Another week, another news roundup! In this edition, we’ll cover an interesting solution to really authenticate your content. I’d also like to highlight a tutorial on how to add AMP to your site and a cool gallery block enhancing plugin. And there’s more, so let’s get started…
Time-stamping your content with WordProof
WordProof tackles content authentication by time-stamping your WordPress content to the blockchain. And yes, this is the first real-life blockchain application that actually makes sense to me. All you need to do is install their plugin and follow the instructions to connect your site to the blockchain.
CoBlocks update
The CoBlocks plugin comes with several variations of the gallery block, each with a different type of enhancement. The team released version 1.10, which polishes the blocks even more, makes maps easier, adds Form Block spam protection, and more. So check out the plugin if you haven’t yet.
AMP your site up the right way
Bill Erickson walks us through building a Native AMP site. His tutorial takes the perspective of doing this in the Genesis Framework. But, don’t let that stop you from learning from it.
Genesis Framework 3.0 beta released
Genesis 3.0 will be the first big release in years. Since Genesis is already 9 years old, there were definitely things that could be removed and improved. The entire theme has been overhauled and, for instance, the blog template will be removed entirely.
One of the things being added in Genesis 3.0 is AMP integration, which means that Bill’s above-mentioned AMP tutorial is actually easier to follow with Genesis 3.0. You can try out the 3.0 beta and see for yourself.
The post WordProof, CoBlocks update, Genesis framework 3.0 beta and more appeared first on Yoast.
via Yoast https://yoast.com
June 17, 2019 at 11:25AM
What’s it like to be a Google Gold Product Expert?: An interview with Ben Fisher
As anyone who has used Google’s Help Community knows, it’s not always the official Google representatives that provide the most helpful and useful responses. Oftentimes the best assistance comes from Google’s Gold Product Experts.
Previously known as “top contributors” due to their contributions to the now-defunct advertiser community, these luminaries have been handpicked by Google to provide expert assistance on their products.
I spoke to Google Gold Product Expert Ben Fisher of Steady Demand to find out how one becomes a PE, and to uncover the benefits of volunteering time to the Google cause.
How did you become a product expert? Was it something you worked towards or did it come about fairly organically?
All of the product experts were brought into the program after working in the Google Help community as volunteers, but each of us has our own story about how we ended up here and what kept us going. For me, it was an email from Google themselves after they’d noticed my work volunteering on the help community.
So there’s no one-size-fits-all route to getting on the PE list? It’s more invite-only. How did you feel when you were selected?
Once you get that email from Google asking if you’d like to join the Product Expert Program, you realize what an honor it is to have earned their trust, and that they feel you truly are an expert at using Google My Business.
With this offer, the team is acknowledging that you have valuable expertise, you might be able to help shape the product, and your knowledge and experience can be of benefit to users as well as the team at Google. In my opinion, it’s a huge honor.
You’re a Gold Product Expert. Does that mean there are other titles, like Silver and Bronze?
There are a few levels of product experts. Silver is the entry level, which used to be called “Rising Star,” and to be fair it’s just as much an honor to be a Silver Product Expert.
Silver Product Experts get to cut their teeth on issues that users have, and through doing so gain access to a private forum where they can engage with Googlers and other Silver Product Experts, and get assistance from Gold Product Experts. Then there is Platinum: these distinguished people mentor others and dedicate enormous amounts of time to helping users.
Becoming a Gold Product Expert takes time, and it’s at the discretion of the community manager and others in the Gold Product Expert group if someone is to join the ranks.
It sounds like you really have to know your stuff, and put the hours in, pro bono. Is it all worth it, though? What benefits can you look forward to if you get invited to become a Product Expert?
I’d say it’s definitely worth it. As a Gold Product Expert, I have access to a special forum where I can talk directly to the team. When we see issues in the community or trending problems with Google My Business, we have the ability to get them looked at almost immediately.
We also have the privilege of being able to ask questions indirectly to Google’s product managers. If we need clarification as to why a feature functions the way it does, or if we want to provide input as to how we feel a feature should behave, we can offer that.
Not many can boast of such exclusive access to senior Googlers, so it certainly seems like that’s something worth working towards. How else do you get to communicate with Google?
We have meetings with our community manager via regular Google Hangouts, where we can ask anything or discuss any topic. It could be something as serious as enquiring about the progress the spam team is making on a major spam network, or as simple as an update on a specific case someone is working on. Either way, we have access that most do not have.
Then there are the Hangouts that we get to have with Google’s product teams. These are a treat as we get to see product features during their conceptual phase, which is sometimes six months in advance of release.
Wow, that’s early. Does that mean you’re able to influence product development? What’s the process like?
Well, we’ll first be shown a demo, get to ask questions, and provide our feedback. Then when the features are ready, we’re whitelisted and allowed access to play around with the new features.
This is handy for both Google and the PEs as we may see things a Googler may not anticipate, and we always look at things from a business, user and agency point of view.
I personally take pride in knowing that some features in GMB are there because of something that one of my teammates or I suggested!
Learning about new features before they hit all users is pretty significant. We get to break stuff, find out what’s working as intended and what isn’t, and take all that feedback back to the product teams.
In that process, we get to experience some things a long time before the public does, and in some circumstances invite our clients to try it. With Google My Business short names, for example, we had a Google Hangout about that and were given some limited access to the feature.
Another cool perk you have as a Gold Product Expert is an invite to the Trusted Tester program. This is where we get to preview all kinds of neat features that we’re not allowed to talk about. Then there’s the Trusted Verifier program, that grants us the ability to instantly verify a business based on certain circumstances, which, by the way, is a completely free service we can offer but one that’s not available to every business.
That’s a lot of digital contact to have with Google, and a heck of a lot of influence, too! Do you get many chances to speak to Googlers face-to-face?
Yes, there are a couple of ways we meet up with Googlers “in real life,” so to speak. Well, three if you count the Local U conference.
Firstly, there are regional events like the one we have this year, where we’ll get together in Denver and meet with our community managers and the product teams at Google. These are usually smaller events. Then there’s the more official Product Expert Summit, which I love. We head to the Google campus and meet PEs from all over the world.
It must be nice to be able to finally shake hands with the people you spend so much time chatting and working with online.
Sure, it’s great to meet your virtual compatriots in person, have some drinks, and share some ideas. But there’s also the aspect of sitting around for a few days interacting with Google’s Product Managers. We really maximize our time there and try to learn as much as possible, ensuring we have as much of an impact on the end product as possible.
Do you ever receive credit for your impact on these products?
I like the fact that, as a Gold Product Expert, I can make an impact that no one even knows about. For example, when something bad happens to a business which gets reported in the news, one of us PEs will usually look to see if the profiles related to the business are getting slammed with a review attack. If we see this, we’ll report it to Google. Then we’re usually given the ability to stop people leaving reviews on these profiles, and even to have the malicious reviews removed.
A great example of PE teamwork was the case of the massive auto accident lawyer spam network I uncovered in January. Quite a few of the PEs, like Jason Brown, Tom Waddington and Joy Hawkins, all worked together to document and track the network. After the PEs removed thousands of profiles and contacted Google My Business to show them how bad it was, Google enacted some methodologies to help stop the network.
About The Author
Jamie Pitman is Head of Content at local SEO tool provider BrightLocal. He's been working in digital marketing for nearly ten years and has specialized in SEO, content marketing and social media, managing successful marketing projects for clients and employers alike. Over this time he's blogged his heart out, writing over 300 posts on a wide variety of digital marketing topics for various businesses and publications.
How to Mine the SERPs for SEO, Content & Customer Insights via @RoryT11
The most underutilized resources in SEO are search engine results pages (SERPs).
I don’t just mean looking at where our sites rank for a specific keyword or set of keywords; I mean the actual content of the SERPs themselves.
For every keyword you search in Google where you expand the SERP to show 100 results, you’re going to find, on average, around 3,000 words.
That’s a lot of content, and the reason it has the potential to be so valuable to an SEO is that a lot of it has been algorithmically rewritten or cherry-picked from a page by Google to best address what it thinks the needs of the searcher are.
One recent study showed that Google is rewriting or modifying the meta descriptions displayed in the SERPs 92% of the time.
Ask yourself: why would Google want to do that?
It must take a fair amount of resources when it would just be easier to display the custom meta description assigned to a page.
The answer, in my opinion, is that Google only cares about the searcher – not the poor soul charged with writing a new meta description for a page.
Google cares about creating the best search experience today, so people come back and search again tomorrow.
One way it does that is by selecting the parts of a page it wants to appear in a SERP feature or in SERP-displayed metadata that it thinks best match the context or query-intent a person has when they use the search engine.
With that in mind, the ability to analyze the language of the SERPs at scale has the potential to be an incredibly valuable tactic for an SEO, and not just to improve ranking performance.
This kind of approach can help you better understand the needs and desires of potential customers, and it can help you understand the vocabulary likely to resonate with them and related topics they want to engage with.
In this article, you’ll learn some techniques you can use to do this at scale.
Be warned, these techniques are dependent on Python – but I hope to show this is nothing to be afraid of. In fact, it’s the perfect opportunity to try and learn it.
Don’t Fear Python
I am not a developer, and have no coding background beyond some basic HTML and CSS. I have picked Python up relatively recently, and for that, I have Robin Lord from Distilled to thank.
I cannot recommend enough that you check out his slides on Python and his extremely useful and easily accessible guide on using Jupyter Notebooks – all contained in this handy Dropbox.
For me, Python was something that always seemed difficult to comprehend – I didn’t know where the scripts I was trying to use were going, what was working, what wasn’t and what output I should expect.
If you’re in that situation, read Lord’s guide. It will help you realize that it doesn’t need to be that way and that working with Python in a Jupyter Notebook is actually more straightforward than you might think.
It will also put each technique referenced in this article easily within reach, and give you a platform to conduct your own research and set up some powerful Python automation of your own.
Getting Your SERP Data
As an employee, I’m lucky to have access to Conductor where we can run SERP reports, which use an external API to pull SERP-displayed metadata for a set of keywords.
This is a straightforward way of getting the data we need in a nice clean format we can work with.
It looks like this:
Another way to get this information at scale is to use a custom extraction on the SERPs with a tool like Screaming Frog or DeepCrawl.
I have written about how to do this, but be warned: it is maybe just a tiny little insignificant bit in violation of Google’s terms of service, so do it at your own peril (but remember, proxies are the perfect antidote to this peril).
Alternatively, if you are a fan of irony and think it’s a touch rich that Google says you can’t scrape its content to offer your users a better service, then please, by all means, deploy this technique with glee.
If you aren’t comfortable with this approach, there are also many APIs that are pretty cost-effective, easy to use and provide the SERP data you need to run this kind of analysis.
The final method of getting the SERP data in a clean format is slightly more time-consuming, and you’re going to need to use the Scraper Chrome extension and do it manually for each keyword.
If you’re really going to scale this up and want to work with a reasonably large corpus (a term I’m going to use a lot – it’s just a fancy way of saying a lot of words) to perform your analysis, this final option probably isn’t going to work.
However, if you’re interested in the concept and want to run some smaller tests to make sure the output is valuable and applicable to your own campaigns, I’d say it’s perfectly fine.
Hopefully, at this stage, you’re ready and willing to take the plunge with Python using a Jupyter Notebook, and you’ve got some nicely formatted SERP data to work with.
Let’s get to the interesting stuff.
SERP Data & Linguistic Analysis
As I’ve mentioned above, I’m not a developer, coding expert, or computer scientist.
What I am is someone interested in words, language, and linguistic analysis (the cynics out there might call me a failed journalist trying to scratch out a living in SEO and digital marketing).
That’s why I’ve become fascinated with how real data scientists are using Python, NLP, and NLU to do this type of research.
Put simply, all I’m doing here is leveraging tried and tested methods for linguistic analysis and finding a way to apply them in a way that is relevant to SEO.
For the majority of this article, I’ll be talking about the SERPs, but as I’ll explain at the end, this is just scratching the surface of what is possible (and that’s what makes this so exciting!).
Cleaning Text for Analysis
At this point, I should point out that a very important prerequisite of this type of analysis is ‘clean text’. This type of ‘pre-processing’ is essential in ensuring you get a good quality set of results.
While there are lots of great resources out there about preparing text for analysis, for the sake of brevity, you can assume that my text has been through most or all of the below processes:
Sentence tokenization ([‘This is a sentence’])
Lowercasing and word tokenization ([‘this’, ‘is’, ‘a’, ‘sentence’])
This might all sound a bit complicated, but don’t let it dissuade you from pursuing this type of research.
I’ll be linking out to resources throughout this article which break down exactly how you apply these processes to your corpus.
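To make the idea concrete, here is a minimal, dependency-free sketch of a couple of those steps (lowercasing, word tokenization and stopword removal). In practice you would lean on NLTK’s word_tokenize, stopwords list and WordNetLemmatizer; the tiny stopword set below is illustrative only.

```python
import re

# Illustrative stopword list only; NLTK ships a much fuller one.
STOPWORDS = {"a", "an", "the", "is", "are", "this", "that", "of", "to", "and"}

def clean_text(text: str) -> list:
    # Lowercase and tokenize into alphabetic words
    tokens = re.findall(r"[a-z]+", text.lower())
    # Drop stopwords (lemmatization would follow here in a full pipeline)
    return [t for t in tokens if t not in STOPWORDS]

# clean_text("This is a sentence") -> ["sentence"]
```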
NGram Analysis & Co-Occurrence
The first and simplest approach we can apply to our SERP content is an analysis of nGram co-occurrence. This means we’re counting the number of times a word or combination of words appears within our corpus.
Why is this useful?
Analyzing our SERPs for co-occurring sequences of words can provide a snapshot of what words or phrases Google deems most relevant to the set of keywords we are analyzing.
For example, to create the corpus I’ll be using throughout this post, I have pulled the top 100 results for 100 keywords around yoga.
This is just for illustrative purposes; if I was doing this exercise with more quality control, the structure of this corpus might look slightly different.
All I’m going to use now is Python’s Counter function, which is going to look for the most commonly occurring two- and three-word phrases in my corpus.
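A minimal sketch of such a counter (the token list and phrases here are made up for illustration; the exclude parameter anticipates filtering out corpus-specific stopwords):

```python
from collections import Counter

def top_ngrams(tokens, n=2, k=10, exclude=frozenset()):
    """Count the most common n-word phrases, optionally skipping any
    phrase that contains a corpus-specific stopword (e.g. 'yoga')."""
    grams = zip(*(tokens[i:] for i in range(n)))
    phrases = (" ".join(g) for g in grams if not set(g) & set(exclude))
    return Counter(phrases).most_common(k)

tokens = "yoga poses for beginners best yoga poses".split()
top_ngrams(tokens, n=2, k=3)                     # 'yoga poses' appears twice
top_ngrams(tokens, n=2, k=3, exclude={"yoga"})   # drops phrases containing 'yoga'
```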
The output looks like this:
You can already start to see some interesting trends appearing around topics that searchers might be interested in. I could also collect MSV for some of these phrases that I could target as additional campaign keywords.
At this point, you might think that it’s obvious all these co-occurring phrases contain the word yoga as that is the main focus of my dataset.
This would be an astute observation – it’s known as a ‘corpus-specific stopword’, and because I’m working with Python it’s simple to create either a filter or a function that can remove those words.
My output then becomes this:
These two examples can help provide a snapshot of the topics that competitors are covering on their landing pages.
For example, if you wanted to demonstrate content gaps in your landing pages against your top performing competitors, you could use a table like this to illustrate these recurring themes.
Incorporating them is going to make your landing pages more comprehensive, and will create a better user experience.
The best tutorial that I’ve found for creating a counter like the one I’ve used above can be found in the example Jupyter Notebook that Robin Lord has put together (the same one linked to above). It will take you through exactly what you need to do, with examples, to create a table like the one you can see above.
That’s pretty basic though, and isn’t always going to give you results that are actionable.
So what other types of useful analysis can we run?
Part of Speech (PoS) Tagging & Analysis
PoS tagging is defined as:
What this means is that we can assign every word in our SERP corpus a PoS tag based not only on the definition of the word, but also on the context in which it appears in a SERP-displayed meta description or page title.
This is powerful, because what it means is that we can drill down into specific PoS categories (verbs, nouns, adjectives etc.), and this can provide valuable insights around how the language of the SERPs is constructed.
Side note – In this example, I am using the NLTK package for PoS tagging. Unfortunately, PoS tagging in NLTK isn’t available in many languages.
If you are interested in pursuing this technique for languages other than English, I recommend looking at TreeTagger, which offers this functionality across a number of different languages.
Using our SERP content (remembering it has been ‘pre-processed’ using some of the methods mentioned earlier in the post) for PoS tagging, we can expect an output like this in our Jupyter Notebook:
You can see each word now has a PoS tag assigned to it. Click here for a glossary of what each of the PoS tags you’ll see stands for.
In isolation, this isn’t particularly useful, so let’s create some visualizations (don’t worry if it seems like I’m jumping ahead here, I’ll link to a guide at the end of this section which shows exactly how to do this) and drill into the results:
I can quickly and easily identify the linguistic trends across my SERPs and I can start to factor that into the approach I take when I optimize landing pages for those terms.
This means that I’m not only going to optimize for the query term by including it a certain number of times on a page (thinking beyond that old school keyword density mindset).
Instead, I’m going to target the context and intent that Google seems to favor based on the clues it’s giving me through the language used in the SERPs.
In this case, those clues are the most commonly occurring nouns, verbs, and adjectives across the results pages.
We know, based on patents Google has around phrase-based indexing, that it has the potential to use “related phrases” as a factor when it is ranking pages.
These are likely to consist of semantically relevant phrases that co-occur on top-performing landing pages and help crystallize the meaning of those pages to the search engines.
This type of research might give us some insight into what those related phrases could be, so factoring them into landing pages has the potential to be valuable.
Now, to make all this SERP content really actionable, your analysis needs to be more targeted.
Well, the great thing about developing your own script for this analysis is that it’s really easy to apply filters and segment your data.
For example, with a few keystrokes I can generate an output that will compare Page 1 trends vs. Page 2:
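The exact filter depends on how you’ve stored the data; assuming one row per SERP result with a numeric position column in a pandas DataFrame (a hypothetical layout), the split is a one-liner each way:

```python
import pandas as pd

# Toy stand-in for the SERP export: one row per result
df = pd.DataFrame({
    "position": [1, 4, 12, 18, 25],
    "title": ["a", "b", "c", "d", "e"],
})

page1 = df[df["position"].between(1, 10)]   # results 1-10
page2 = df[df["position"].between(11, 20)]  # results 11-20
```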
If there are any obvious differences between what I see on Page 1 of the results versus Page 2 (for example “starting” being the most common verb on Page 1 vs “training” on Page 2), then I will drill into this further.
These could be the types of words that I place more emphasis on during on page optimization to give the search engines clearer signals about the context of my landing page and how it matches query-intent.
I can now start to build a picture of what type of language Google chooses to display in the SERPs for the top ranking results across my target vertical.
I can also use this as a hint as to the type of vocabulary that will resonate with searchers looking for my products or services, and incorporate some of those terms into my landing pages accordingly.
I can also categorize my keywords based on structure, intent, or a stage in the buying journey and run the same analysis to compare trends to make my actions more specific to the results I want to achieve.
For example, trends between yoga keywords modified with the word “beginner” versus those that are modified with the word “advanced”.
This will give me more clues about what Google thinks is important to searchers looking for those types of terms, and how I might be able to better optimize for those terms.
If you want to run this kind of analysis for your SERP data, follow this simple walkthrough by Kaggle based on applying PoS tagging to movie titles. It walks you through the process I’ve gone through to create the visuals used in the screenshots above.
Topic Modeling Based on SERP Data
Topic modeling is another really useful technique that can be deployed for our SERP analysis. It refers to a process of extracting topics hidden in a corpus of text: in our case, the SERPs for our set of target keywords.
While there are a number of different techniques for topic modeling, the one that seems favored by data scientists is LDA (Latent Dirichlet Allocation), so that is the one I chose to work with.
A great explanation of how LDA for topic modeling works comes from the Analytics Vidhya blog:
Although our keywords are all about ‘yoga’, the LDA mechanism we use assumes that within that corpus there will be a set of other topics.
We can also use the Jupyter Notebook interface to create interactive visuals of these topics and the “keywords” they are built from.
The reason that topic modeling from our SERP corpus can be so valuable to an SEO, content marketer or digital marketer is that the topics are being constructed based on what Google thinks is most relevant to a searcher in our target vertical (remember, Google algorithmically rewrites the SERPs).
With our SERP content corpus, let’s take a look at the output for our yoga keyword (visualized using the PyLDAvis package):
You can find a thorough definition of how this visualization is computed here.
To summarize, in my own painfully unscientific way, the circles represent the different topics found within the corpus (based on clever machine learning voodoo). The further away the circles are, the more distinct those topics are from one another.
The list of terms on the right of the visualization shows the words that build each topic. These words are what I use to understand the main topic, and they are the part of the visualization with real value.
In the video below, I’ll show you how I can interact with this visual:
At a glance, we’ll be able to see what subtopics Google thinks searchers are most interested in. This can become another important data point for content ideation, and the list of terms the topics are built from can be used for topical on-page optimization.
The data here can also have applications in optimizing content recommendations across a site and internal linking.
For example, if we are creating content around ‘topic cluster 4’ and we have an article about the best beginner yoga poses, we know that someone reading that article might also be interested in a guide to improving posture with yoga.
This is because ‘topic cluster 4’ is composed of words like these:
I can also export the list of associated terms for my topics in an Excel format, so it’s easy to share with other teams that might find the insights helpful (your content team, for example):
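A quick way to produce that export is to write the topic-term pairs to a CSV, which opens directly in Excel (the topic labels and terms below are hypothetical):

```python
import csv

# Hypothetical mapping of labeled topics to their top associated terms
topic_terms = {
    "topic cluster 1": ["mat", "reviews", "buying"],
    "topic cluster 4": ["beginner", "poses", "posture"],
}

# Write one (topic, term) pair per row so it's easy to filter in Excel
with open("topic_terms.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["topic", "term"])
    for topic, terms in topic_terms.items():
        for term in terms:
            writer.writerow([topic, term])
```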
Ultimately, topics are characteristic of the corpus we’re analyzing. Although there’s some debate around the practical application of topic modeling, building a better understanding of the characteristics of the SERPs we’re targeting will help us better optimize for them. That is valuable.
One last point on this: LDA doesn’t label the topics it creates – that’s down to us – so how applicable this research is to our SEO or content campaigns depends on how distinct and clear our topics are.
The screenshot above is what a good topic cluster map will look like, but what you want to avoid is something that looks like the next screenshot. The overlapping circles tell us the topics aren’t distinct enough:
You can avoid this by making sure the quality of your corpus is good (i.e., removing stop words, lemmatizing, etc.), and by researching how to train your LDA model to identify the ‘cleanest’ topic clusters based on your corpus.
Interested in applying topic modeling to your research? Here is a great tutorial taking you through the entire process.
What Else Can You Do With This Analysis?
While there are some tools already out there that use these kinds of techniques to improve on-page SEO performance, support content teams and provide user insights, I’m an advocate for developing your own scripts/tools.
Why? Because you have more control over the input and output (i.e., you aren’t just popping a keyword into a search bar and taking the results at face value).
With scripts like this you can be more selective with the corpus you use and the results it produces by applying filters to your PoS analysis, or refining your topic modeling approach, for example.
The more important reason is that it allows you to create something that has more than one useful application.
For example, I can create a new corpus out of sub-Reddit comments for the topic or vertical I’m researching.
Doing PoS analysis or topic modeling on a dataset like that can be truly insightful for understanding the language of potential customers or what is likely to resonate with them.
The most obvious alternative use case for this kind of analysis is to create your corpus from content on the top ranking pages, rather than the SERPs themselves.
Again, the likes of Screaming Frog and DeepCrawl make it relatively simple to extract copy from a landing page.
This content can be merged and used as your corpus to gather insights on co-occurring terms and the on-page content structure of top performing landing pages.
If you start to work with some of these techniques for yourself, I’d also suggest you research how to apply a layer of sentiment analysis. This would allow you to look for trends in words with a positive sentiment versus those with a negative sentiment – this can be a useful filter.
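A crude illustration of that sentiment layer, using a tiny hand-rolled lexicon (a real project would use a proper sentiment library such as VADER or TextBlob; every word list and snippet below is made up):

```python
# Toy sentiment lexicons for illustration only
POSITIVE = {"best", "easy", "great", "improve", "relaxing"}
NEGATIVE = {"pain", "difficult", "worst", "injury", "mistakes"}

def sentiment_counts(text):
    """Count positive vs. negative lexicon hits in a snippet."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos, neg

snippets = [
    "best yoga poses to improve posture",
    "common yoga mistakes that cause back pain",
]
for s in snippets:
    print(s, sentiment_counts(s))
```

Splitting your corpus into positive-leaning and negative-leaning snippets before running PoS or topic analysis gives you that extra filter.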
I hope this article has given you some inspiration for analyzing the language of the SERPs, and for the kinds of insights you can pull from it.
Featured Image: Unsplash
via Search Engine Journal http://bit.ly/1QNKwvh
June 17, 2019 at 09:24AM
How to Mine Competitor Websites for Untapped Keyword Opportunities via @jeremyknauff
Do you feel like you could generate more revenue if you could just crack the code on driving more organic search traffic to your website?
I have some good news and some bad news.
The bad news first – there isn’t a silver bullet. There’s no “one sneaky trick” that lets you dominate organic search, no matter what some “guru” is trying to tell you on Facebook.
But since you’re reading this article on Search Engine Journal, you probably already know that.
The good news is that you can “crack” the proverbial “code” and earn significant organic search traffic by mining your competitors’ websites to find untapped content keyword opportunities.
This is the foundation that enables you to create original, useful content that your visitors will seek out and engage with.
Most people think that identifying worthwhile topics to write about is relatively straightforward, right?
Just plug some keywords into your favorite keyword research tool, sort through the data that it returns, and then begin planning your content around that.
Well, yes and no.
That’s a good start, but it’s really just scratching the surface.
With this approach, you may come up with a lot of topics, but if this is the only approach you use, you’re competing over the same small pool of topics and leaving a lot of opportunities on the table.
But by leveraging your competitors’ websites, you can uncover a tremendous number of additional topics with sufficient search volume that you may never have even considered.
The first step is to identify your competitors. I don’t just mean direct competitors – I mean any website that is outranking yours for core topics. If they are getting in front of visitors before you have a chance, they are your competitor from a search perspective.
From here, we’ll need to find out what these websites rank for using SEMrush. Simply enter a competitor’s URL, click the “View full report” button in the “TOP ORGANIC KEYWORDS” section, and you’ll be provided with a comprehensive list.
Some of the topics will be predictable. Others might be surprising.
As you begin to analyze the data, you’ll find topics that you’ve never thought of. In some niches, keyword research tools may not have accurate data. I’ve run into this from time to time for obscure topics.
For example, I recently stumbled upon a specific keyword phrase that was widely known and used within a particular industry, but none of the tools showed any search volume for it.
However, I had accurate first-hand knowledge that it alone was responsible for over 6,000 monthly visits. As you might imagine, we immediately targeted this phrase.
I bring this up to highlight the fact that you can’t rely entirely on the data provided by any keyword research tool. You can use that data as a starting point, but then you’ll need to dig deeper to identify hidden opportunities.
Expanding Our Pool
From here, we will expand out into tangentially related websites to identify even more opportunities.
There are a lot of relevant topics that most of your competitors are not writing about, but that your potential customers are interested in.
Often, these topics won’t be tied directly to buying intent. While that might make it seem counterintuitive to target these types of topics, it offers a powerful opportunity because it helps you reach people earlier in the buying process. That gives you a chance to put your brand in front of potential buyers long before your competitors can.
It can also help you to demonstrate greater expertise, and inspire trust.
This is because while most of your competitors are only creating content about their products or services, you’ll be creating more comprehensive content that answers visitors’ questions at all stages of the buying process.
This shows them that you’re more knowledgeable than your competitors, and also that you care just as much about serving them as you do about selling your products or services.
What we’ll do at this stage is identify:
This could end up being just a few hundred websites or it could be millions, depending on the industry. Either way, we’re not going to just take the data at face value.
Compiling the Data
First, let’s talk about how we’re going to compile the data, then we’ll talk about how to sort it.
Who Are Our Competitors Linking To?
This is important information because it tells us what they find valuable.
It’s also important because if they’re linking to a particular website, it’s likely that it isn’t a direct competitor to them, which means it also probably isn’t a direct competitor to you.
Hold on to this data because it will be useful outside the scope of researching content topic ideas. It can be a treasure trove of link building opportunities as well.
The easiest and most effective way to compile your competitors’ outbound links is to run Screaming Frog to crawl their website, and then export that data to a CSV file.
Who Is Linking to Our Competitors?
Next, we need to find the websites that link to them. My preferred tool here is SEMrush. I recommend exporting this data as a CSV file as well.
This is important information because if they are linking to your competitors, they are likely relevant to your website and the topics they’ve written about will probably be of interest to your audience.
Filter the Data
From here, we will filter this data by relevance and quality. Skip spammy link and article directories, PBNs, and irrelevant or low-quality websites.
Now take the list that remains, and begin dumping that data into SEMrush to find out what topics those websites rank for. Again, this can all be exported as CSV files.
You can simply export all of the data and then remove the irrelevant or inappropriate topics in Excel or Google Sheets, but I prefer to sort the data in SEMrush before exporting. This way, I only export exactly what I want.
In many cases, keyword research tools will have relatively accurate data on search volume, so that’s where I like to start my sorting.
Next, I’ll sort by keyword difficulty, and then by average position.
We’re looking for topics that offer reasonable search volume and face minimal keyword difficulty. We want low-hanging fruit.
If these other sites are ranking for a topic, but not ranking well, that’s a sign that it may be a hidden gem.
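If you’d rather do that sorting and filtering in code than in a spreadsheet, here is a minimal sketch (the column names, values, and thresholds are assumptions; adjust them to match your actual SEMrush export):

```python
import csv
import io

# Hypothetical SEMrush-style keyword export
raw = """keyword,volume,difficulty,position
yoga mat reviews,5400,72,8
beginner yoga poses,2900,35,14
yoga posture guide,880,28,19
advanced pranayama,320,22,41
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Low-hanging fruit: decent volume, low keyword difficulty
fruit = [
    r for r in rows
    if int(r["volume"]) >= 500 and int(r["difficulty"]) <= 40
]
fruit.sort(key=lambda r: int(r["volume"]), reverse=True)

for r in fruit:
    print(r["keyword"], r["volume"], r["difficulty"])
```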
I recommend that you merge the data from each CSV file into one. This enables you to use the aggregate data to spot trends you might miss by looking at data from just one website at a time.
If you do this, be sure to annotate which rows are websites your competitors are linking to, and which are websites linking to them.
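One way to handle that merge-and-annotate step (the file names and labels below are hypothetical; the key point is the extra “source” column on every row):

```python
import csv

def merge_exports(files_with_labels, out_path):
    """Merge several keyword CSVs into one, tagging each row with its source."""
    with open(out_path, "w", newline="") as out:
        writer = None
        for path, label in files_with_labels:
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    # e.g. "outbound" vs. "linking-to-competitor"
                    row["source"] = label
                    if writer is None:
                        writer = csv.DictWriter(out, fieldnames=list(row))
                        writer.writeheader()
                    writer.writerow(row)
```

Called as `merge_exports([("outbound.csv", "outbound"), ("backlinks.csv", "backlink")], "merged.csv")`, this produces one file you can pivot on in Excel or Google Sheets.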
Plan Your Content
Once you’ve filtered the data, it’s time to start planning your content around it.
While you may be tempted to prioritize topics based simply on search volume, it’s smart to think a little more strategically.
Search volume is certainly a factor, but you’ll also want to look at how difficult it will be to rank for a topic, and its value to your business.
You’ll obviously want to make it a top priority to create content around the topics you and your competitors have overlooked. This opportunity is generally spread thinly across a broader range of topics.
In other words, you probably aren’t going to find many topics that will drive large volume on their own.
One tactic I like to use is to find phrases that are used frequently on a website, but that don’t have their own page.
On smaller websites, this is something you can do manually. On larger websites, you’ll need to use a more automated method. One approach to this using Screaming Frog is outlined in this Moz article.
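A simplified version of that idea in code: count two-word phrases across your crawled page copy and flag any frequent phrase whose words don’t already appear in an existing URL slug (all page text and slugs below are hypothetical):

```python
from collections import Counter

# Hypothetical page copy and existing URL slugs from a crawl export
page_texts = [
    "our custom homes use asphalt shingles and impact windows",
    "asphalt shingles are a durable roofing option for florida homes",
    "we install impact windows and premium asphalt shingles",
]
existing_slugs = ["custom-homes", "impact-windows"]

# Count two-word phrases across all copy
bigrams = Counter()
for text in page_texts:
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        bigrams[f"{a} {b}"] += 1

# Flag frequent phrases that have no matching page
candidates = [
    phrase for phrase, n in bigrams.items()
    if n >= 3 and phrase.replace(" ", "-") not in existing_slugs
]
print(candidates)
```

A real pipeline would lemmatize, strip stop words, and match slugs more loosely, but the shape of the approach is the same.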
You’re going to be creating new content here anyway, so you might as well do it right from the start.
Make sure it’s comprehensive, properly structured, and contains images and video. If you fail to do that, someone is likely to put in just a little more effort and outrank you.
On the other hand, if you can make it seem like too much effort, the majority of people will give up long before they outrank you and a lot of people won’t even try.
Set the Bar Higher
Equally important is to identify topics your competitors rank for with weak content. This might mean:
In cases like these, creating a comprehensive piece of content and building relevant, authoritative links to it can be a game changer.
And, in addition to earning new organic traffic, you’ll also be taking it away from a competitor.
This creates a powerful competitive advantage because web traffic is a zero-sum game. The traffic you take from competitors can help to increase your revenue while choking out theirs.
That’s a great way to gain market share.
Get Ready for Battle
There will also be a core group of topics that your competitors are all constantly fighting over.
What you’ll want to do here is identify any weaknesses in the content that outranks yours for these topics, and work to improve yours until it is superior. You should then create related content, published as subpages to further support it.
For example, a home builder might create a page intended to rank for “Tampa custom home builders,” which explains various components of their homes, such as stucco, windows, and roofing.
They may then create a subpage for roofing, in which they explain roofing in great detail. They may even create a subpage of that, explaining the individual roofing options in greater detail.
https://verygoodbuilders.com/tampa/
https://verygoodbuilders.com/tampa/roofing-options/
https://verygoodbuilders.com/tampa/roofing-options/asphalt-shingles/
This approach can help provide important context to search engines while providing valuable information to visitors. But, like most SEO tactics, it’s easy to take it too far.
In rare cases, some websites may need more than three levels of subpages, but they will be few and far between. In most cases, three will be plenty.
Obviously, these subpages can help you to earn traffic for topics that they cover. But they also help to support their parent pages both by demonstrating a hierarchy to search engines and by providing a relevant piece of useful content to link back to the parent page from.
Featured Image: Created by author, June 2019
via Search Engine Journal http://bit.ly/1QNKwvh
June 17, 2019 at 08:42AM
Paying for SEO: A Method for Increasing Lead Volume by 300% [Webinar] via @brentcsutoras
In the world of digital marketing, it’s essential to do quality work and generate leads at the same time.
In every industry, there are plenty of opportunities to “pay to rank” for competitive keywords.
This is not in the sense of buying links or even search ads; think instead of placing yourself on paid lists, review sites, and display ads that rank well for your primary keywords.
Join our next live webinar on Wednesday, June 26 at 2 p.m. ET as Garrett Mehrguth, CEO and co-founder at Directive, shares actionable tips on how to execute a “pay for SEO” strategy that drives qualified leads and delivers results.
In this presentation, we will walk you through:
I will host a live Q&A session following the presentation.
Find out key tactics that you can execute to improve your discoverability. Sign up for this webinar today!
via Search Engine Journal http://bit.ly/1QNKwvh
June 17, 2019 at 08:07AM