Google: Build A Site That If Google Doesn't Rank Well, It Would Be A Search Bug https://ift.tt/2DPNDlS Google's John Mueller said it is wise to think long term with your web site and build a site so fantastic that if Google isn't ranking it well for relevant queries, Google would consider it a bug. Thus Google would have to update its algorithms to rank your site for those queries. SEO via Search Engine Roundtable https://ift.tt/1sYxUD0 November 28, 2018 at 07:40AM
https://ift.tt/2P8jMH1
I Wish Google Would Stop Saying They Do Hundreds Or Thousands Of Updates Per Year https://ift.tt/2Sia338 I really wish Google would stop responding to algorithm update questions with "we do hundreds or thousands of changes per year." Like I've been saying for years, I'd bet 95% of those updates have little to do with core ranking but more related to UX, UI and feature changes. SEO via Search Engine Roundtable https://ift.tt/1sYxUD0 November 28, 2018 at 07:19AM
https://ift.tt/2BEPoRn
Why Does Google Confirm Some Core Algorithm Updates & Not Others? https://ift.tt/2zsAufy I always wonder why Google will confirm some core ranking algorithm updates and not others. So I asked John Mueller of Google if he knows the behind-the-scenes decision-making process for when Google confirms a core algorithm update. He said it mostly comes down to whether the non-SEO community is confused; if so, Google may decide to comment. SEO via Search Engine Roundtable https://ift.tt/1sYxUD0 November 28, 2018 at 06:54AM
https://ift.tt/2RiLbIn
Here’s what happened when I followed Googlebot for 3 months https://ift.tt/2DR6Cwq This experiment uncovered no direct way to bypass the First Link Counts Rule with modified links, but it was possible to build a structure using JavaScript links. Please visit Search Engine Land for the full article. SEO via Search Engine Land https://ift.tt/1BDlNnc November 28, 2018 at 06:40AM
https://ift.tt/2BEbyTF
Google Search Tests Expandable Related Queries https://ift.tt/2Rfl3y8 Google is testing new functionality for the related queries section you see at the footer of the Google search results. In this test, the searcher can click on a related query, which then expands to show a single search result for that query; they can then click "more results" to see more. SEO via Search Engine Roundtable https://ift.tt/1sYxUD0 November 28, 2018 at 06:33AM
https://ift.tt/2SbRT2I
Google Image Search Tests New Image Preview Frame https://ift.tt/2P9qWLv Google is testing a new design for how they display the previews and thumbnails of an image after you click on an image in the Google Image Search results. Instead of showing it framed out in a large black area, Google is testing showing the image preview on the right side in a white box. SEO via Search Engine Roundtable https://ift.tt/1sYxUD0 November 28, 2018 at 06:17AM
https://ift.tt/2r8VRha
Google’s John Mueller on Why Some Sites Rank by @martinibuster https://ift.tt/2E1Od0H Google's John Mueller discusses what it takes to make an authoritative site with trusted content. The post Google’s John Mueller on Why Some Sites Rank by @martinibuster appeared first on Search Engine Journal. SEO via Search Engine Journal https://ift.tt/1QNKwvh November 28, 2018 at 04:47AM
https://ift.tt/2ReVUTX
Google News Digest: GMB App, New Website Audit Portal, PageSpeed Insights Update, and More https://ift.tt/2PW2CSi There have been many vital updates from Google that have come to our attention in the last couple of weeks, including the availability of Google My Business app, the launch of a new website audit portal, and the unveiling of a major update to the PageSpeed Insights tool. Also, TrueView for Reach and Ad Sequencing are now out of beta and are available to all brands globally. SEO via SEMrush https://ift.tt/1K8Zzbp November 28, 2018 at 03:14AM
https://ift.tt/2BBipgu
Using a New Correlation Model to Predict Future Rankings with Page Authority https://ift.tt/2RptcAc Posted by rjonesx.

Correlation studies have been a staple of the search engine optimization community for many years. Each time a new study is released, a chorus of naysayers seems to come magically out of the woodwork to remind us of the one thing they remember from high school statistics — that "correlation doesn't mean causation." They are, of course, right in their protestations, and, to their credit, an unfortunate number of times it seems that those conducting the correlation studies have forgotten this simple aphorism. A typical study works like this: we collect a search result, order the results according to different metrics (like the number of links), and then compare those orders with the original search results. The closer they are, the higher the correlation between the two.

That being said, correlation studies are not altogether fruitless simply because they don't necessarily uncover causal relationships (i.e., actual ranking factors). What correlation studies discover or confirm are correlates. Correlates are simply measurements that share some relationship with the independent variable (in this case, the order of search results on a page). For example, we know that backlink counts are correlates of rank order. We also know that social shares are correlates of rank order. Correlation studies also give us the direction of the relationship. For example, ice cream sales are a positive correlate of temperature and winter jacket sales are a negative correlate: when the temperature goes up, ice cream sales go up but winter jacket sales go down. Finally, correlation studies can help us rule out proposed ranking factors. This is often overlooked, but it is an incredibly important part of correlation studies. Research that yields a negative result is often just as valuable as research that yields a positive one.
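The order comparison described above is essentially a rank correlation. A minimal sketch of Spearman's rho, assuming no tied values; the positions and link counts below are illustrative, not data from the study:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation, computed as the Pearson correlation
    of the ranks. Assumes no ties, so each rank list is a permutation
    of 0..n-1 and the two rank variances are equal."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n - 1) / 2  # mean of the ranks 0..n-1
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)  # same for rx and ry (no ties)
    return cov / var

# SERP positions 1..5 vs. a hypothetical link count that tracks rank
# perfectly: the better (lower-numbered) position always has more links,
# so against raw position numbers the correlation is -1.
positions = [1, 2, 3, 4, 5]
links = [120, 95, 60, 22, 3]
print(spearman_rho(positions, links))  # -1.0
```

The sign convention matters: because position 1 is the best rank, a metric that favors top results correlates negatively with the raw position number.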
We've been able to rule out many types of potential factors, like keyword density and the meta keywords tag, using correlation studies. Unfortunately, the value of correlation studies tends to end there. In particular, we still want to know whether a correlate causes the rankings or is spurious. Spurious is just a fancy-sounding word for "false" or "fake." A good example of a spurious relationship would be that ice cream sales cause an increase in drownings. In reality, the heat of summer increases both ice cream sales and the number of people who go for a swim, and some of that swimming leads to drownings. So while ice cream sales are a correlate of drownings, the relationship is spurious: ice cream does not cause the drowning. How might we go about teasing out the difference between causal and spurious relationships? One thing we know is that a cause happens before its effect, which means a causal variable should predict a future change.

An alternative model for correlation studies

I propose an alternate methodology for conducting correlation studies. Rather than measure the correlation between a factor (like links or shares) and a SERP, we can measure the correlation between a factor and changes in the SERP over time. The process works like this:

1. We collect a search result.
2. We record where the search result differs from the order predicted by a particular variable (like links or social shares).
3. We then collect the same search result 2 weeks later to see if the search engine has corrected the out-of-order results.

So what are the benefits of this methodology? By looking at change over time, we can see whether the ranking factor (correlate) is a leading or lagging feature. A lagging feature can automatically be ruled out as causal; a leading factor has the potential to be a causal factor.

Following this methodology, we tested 3 common correlates produced by ranking factor studies: Facebook shares, number of root linking domains, and Page Authority. The first step involved collecting 10,000 SERPs from randomly selected keywords in our Keyword Explorer corpus. We then recorded Facebook shares, root linking domains, and Page Authority for every URL. We noted every example where 2 adjacent URLs (like positions 2 and 3, or 7 and 8) were flipped with respect to the order predicted by the correlating factor. For example, if the #2 position had 30 shares while the #3 position had 50 shares, we noted that pair. Finally, 2 weeks later, we captured the same SERPs and identified the percent of times that Google rearranged the pair of URLs to match the expected correlation. We also randomly selected pairs of URLs to get a baseline likelihood that any 2 adjacent URLs would switch positions. Here were the results...

The outcome

It's important to note that it is incredibly rare to expect a leading factor to show up strongly in an analysis like this. While the experimental method is sound, it's not as simple as a factor predicting the future: it assumes that in some cases we will know about a factor before Google does.
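The pair-flip bookkeeping in the methodology above can be sketched in a few lines. This is a hypothetical illustration, not Moz's actual pipeline; the URLs and share counts are made up:

```python
def flipped_pairs(serp, metric):
    """Adjacent URL pairs where the lower-ranked URL has the higher metric,
    i.e. the live ordering contradicts what the metric predicts."""
    return [(a, b) for a, b in zip(serp, serp[1:]) if metric[b] > metric[a]]

def corrected(pair, later_serp):
    """True if the later SERP reordered the pair to match the prediction."""
    a, b = pair
    if a in later_serp and b in later_serp:
        return later_serp.index(b) < later_serp.index(a)
    return False  # one URL dropped out of the SERP; the pair can't be scored

# Hypothetical SERP: position #2 has 30 shares while #3 has 50,
# so the (u2, u3) pair is out of order with respect to shares.
day0 = ["u1", "u2", "u3"]
shares = {"u1": 90, "u2": 30, "u3": 50}
pairs = flipped_pairs(day0, shares)        # [("u2", "u3")]

day14 = ["u1", "u3", "u2"]                 # two weeks later: the pair swapped
rate = sum(corrected(p, day14) for p in pairs) / len(pairs)
print(rate)  # 1.0
```

Run over thousands of SERPs, `rate` is the quantity compared against the random-pair baseline in the results below.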
The underlying assumption is that in some cases we will have seen a ranking factor (like an increase in links or social shares) before Googlebot has, and that within the 2-week window Google will catch up and correct the incorrectly ordered results. As you can expect, this is a rare occasion. However, with a sufficient number of observations, we should be able to see a statistically significant difference between lagging and leading results. Note, too, that the methodology only detects cases where a factor is leading and where Moz Link Explorer discovered the relevant change before Google did.
Control

In order to create a control, we randomly selected adjacent URL pairs in the first SERP collection and determined the likelihood that the second would outrank the first in the final SERP collection. Approximately 18.93% of the time, the worse-ranking URL overtook the better-ranking URL. With this control in place, we can determine whether any of the potential correlates are leading factors, that is to say, potential causes of improved rankings.

Facebook shares

Facebook shares performed the worst of the three tested variables. They actually performed worse than random (18.31% vs. 18.93%), meaning that randomly selected pairs were more likely to switch than pairs where the shares of the second URL were higher than the first. This is not altogether surprising, as the general industry consensus is that social signals are lagging factors: traffic from higher rankings drives higher social shares, rather than social shares driving higher rankings. Accordingly, we would expect the ranking change to come first and the increase in social shares to follow.

Root linking domains

Raw root linking domain counts performed substantially better than shares at ~20.5%. As I indicated before, this type of analysis is incredibly subtle because it only detects cases where a factor is leading and where Moz Link Explorer discovered the relevant change before Google did. Nevertheless, this result was statistically significant, with a P value < 0.0001 and a 95% confidence interval that RLDs will predict future ranking changes around 1.5% better than random.

Page Authority

By far the highest-performing factor was Page Authority. At 21.5%, PA correctly predicted changes in SERPs 2.6% better than random. This is a strong indication of a leading factor, greatly outperforming social shares and outperforming the best raw predictive metric, root linking domains. This is not surprising.
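The significance claim above (P < 0.0001 for a ~1.5% lift over the 18.93% baseline) can be sanity-checked with an ordinary pooled two-proportion z-test. The pair counts below are invented for illustration; the article does not report them:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# RLD flip-correction rate vs. the random-pair baseline, assuming
# (hypothetically) ~50,000 observed pairs in each group.
z = two_proportion_z(0.205, 50_000, 0.1893, 50_000)
print(z)  # z well above 3.29, i.e. a two-sided P < 0.0001
```

With samples of that size, a 1.5-point gap is easily significant; with only a few hundred pairs per group it would not be, which is why the number of observations matters so much for this design.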
Page Authority is built to predict rankings, so we should expect it to outperform raw metrics in identifying when a shift in rankings might occur. Now, this is not to say that Google uses Moz Page Authority to rank sites, but rather that Moz Page Authority is a relatively good approximation of whatever link metrics Google is using to rank sites.

Concluding thoughts

There are so many different experimental designs we can use to help improve our research industry-wide, and this is just one of the methods that can help us tease out the differences between causal ranking factors and lagging correlates. Experimental design does not need to be elaborate, and the statistics used to determine reliability do not need to be cutting-edge. While machine learning offers much promise for improving our predictive models, simple statistics can do the trick when we're establishing the fundamentals. Now, get out there and do some great research! Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read! SEO via SEOmoz Blog https://moz.com/blog November 28, 2018 at 02:12AM