Monday, September 30, 2013

Video: Expanding your site to more languages

Webmaster Level: Intermediate to Advanced



We filmed a video providing more details about expanding your site to more languages or country-based language variations. The video covers details about rel="alternate" hreflang and potential implementation on your multilingual and/or multinational site.









Video and slides on expanding your site to more languages





You can watch the entire video or skip to the relevant sections:



Additional resources on hreflang include:



Good luck as you expand your site to more languages!










via Google Webmaster Central Blog http://googlewebmastercentral.blogspot.com/2013/09/video-expanding-your-site-to-more.html



Historic Index Update

We have now updated our Historic Index with data for the start of August 2013. Here are the new stats: Historic Index – unique pages crawled: 543,749,939,589; unique URLs: 2,116,030,214,824; date range: 09 Jan 2008 to 10 Aug 2013. We do apologise for the delay in releasing this update – it was due to growth in […]


The post Historic Index Update appeared first on Majestic SEO Blog.


via Majestic SEO Blog» English http://blog.majesticseo.com/index-updates/historic-index-update-20/



Keywords… Provided! By Majestic SEO

Don’t panic. By now, most SEOs will know that Google has moved entirely over to HTTPS, in the process removing all keyword data for analytics systems. Fortunately, this does not affect Majestic SEO as we have never sought to use data gleaned from Google. Majestic Already Has Keyword Data If you put a keyword or […]


The post Keywords… Provided! By Majestic SEO appeared first on Majestic SEO Blog.


via Majestic SEO Blog» English http://blog.majesticseo.com/development/keywords-provided-majestic-seo/



Friday, September 27, 2013

Solving the Sub-Domain Equation: Predicting Traffic and Value when Merging Sub-Domains

Posted by russvirante



To sub-domain or not to sub-domain, that is the question. Should you keep your content on separate sub-domains or the same domain? If I do merge my sub-domains, will I gain or lose traffic? How much?


Since my first days in SEO back in 2004, the sub-folder vs. sub-domain debate has echoed through nearly every site architecture discussion in which I have participated. It seems trivial in many respects that we would focus so intently on what essentially boils down to the ordering of words in a URL, especially given that www. itself is a sub-domain. However, for a long time, there has been good reason to consider the question very carefully. Today I am writing about the problem in general, and I propose a programmatic strategy for answering the sub-domain/sub-folder debate.


For the purposes of this article, let’s assume there is a company named Example Business that sells baseball cards, baseball jerseys and baseball hats. They have two choices for setting up their site architecture.


They can use sub-domains (e.g., baseball-cards.example.com, baseball-jerseys.example.com, and baseball-hats.example.com)…



Or, they can use directories (e.g., example.com/baseball-cards/, example.com/baseball-jerseys/, and example.com/baseball-hats/)…



Many of you have probably dealt with this exact question, and for some of you this question has reared its head dozens if not hundreds of times. For those of you less familiar with the problem, let’s do a brief history of sub-domains, sub-folders, and their interaction with Google’s algo so we can get a feel for the landscape.


Sub-domains and SEOs: A quick historical recap


First, really quickly, here is the breakdown of your average URL. We are most interested in comparing the sub-domain with the directory to determine which might be better for rankings.


[Image: the parts of a URL]


This may date me a bit, either as a Noob or an Old-Timer depending on when you got in the game. I started directly after the Florida update in 2003. At that time, if I recall correctly, the sub-domain / sub-folder debate was not quite as pronounced. Most of the decisions we were making at the time regarding sub-domain had more to do with quick technical solutions (ie: putting one sub-domain on a different machine) than with explicit search optimization.


However, it seemed at that time our goal as SEOs was merely to find one more place to shove a keyword. Whether we used dashes (hell, I bought a double-dashed domain at one point) or sub-domains, Google’s algos seemed, at least temporarily, to reward keyword-rich sub-domains. Domains were expensive, but sub-domains were free. Many SEOs, myself included, began rolling out sites with tons of unique, keyword-rich sub-domains.


Google wasn’t blind to this manipulation, though, and beginning around 2004 it was able, with some degree of effectiveness, to kill off the apparent benefit of sub-domain spam. However, the tactic still seemed to persist to some degree in discussions from 2006, 2007, 2008, and 2009. For a while, sub-domains seemed to keep a feather in their cap specifically for SEO.


Fast forward a few years and Google introduces a new, wonderful feature called host crowding and indented results. Many of you likely remember this feature, but essentially, if you had two pages from the same host ranking in the top 10, the second would be pulled up directly under the first and given an indent for helpful organization. This was a huge blow to sub-domain strategies. Now ranking positions 1 and 10 on the same host was essentially the same as owning the top two positions, but across separate hosts there was no such benefit. In this case, it would make sense for "Example Business" to use sub-folders rather than sub-domains. If the content shared the same sub-domain, every time their website had 2 listings in the top 10 for a keyword, the second would be tucked up nicely under the first, effectively jumping multiple positions. If they were on separate sub-domains, they would not get this benefit.


Host Crowding Made Consolidating to a Single Domain Beneficial


Google was not done, however. They have since taken away our beautiful indented listings and deliberate host crowding and, at the same time, given us Panda. Initial takes on Panda indicated that sub-domain and topical sub-domain segregation could bring positive results, as Panda was applied at a host-name level. Now it might make sense for "Example Business" to use sub-domains, especially if segmenting off low-quality user-generated content.


Given these changes, it is understandable why the sub-domain debate has raged on. While many have tried to discredit the debate altogether, there are legitimate, algorithmic reasons to choose a sub-domain or a sub-folder.


Solving the sub-domain equation


One of the beauties of contemporary SEO is having access to far better data than we’ve ever had. While I do lament the loss of keyword data in Google Analytics, far more data is available at our fingertips than ever before. We now have the ability to transform the intuition of smart SEOs into cold, hard math.


When Virante, the company of which I am CTO, was approached a few months ago by a large website to help answer this question, we jumped at the opportunity. I now had the capability of turning my assumptions and my confidences into variables and variances, and of building a better solution. The client had gone with the sub-domain approach for many years. They had heard of concepts like "Domain Authority" and wondered if their sub-domains spread themselves too thin. Should they merge their sub-domains together? All of them, or just a few?


Choosing a mathematical model for analysis


OK, now for the fun stuff. There are a lot of things that we as SEOs don’t know but have a pretty good idea about. We might call these assumptions, gut instincts, experience, or intuitions, but in math we can refer to them as variables. For each of these assumptions, we also have confidence levels. We might be very confident about one assumption of ours (like backlinks improve rankings) and less confident about another (longer content improves rankings). So, we have our variables and we have how confident we are about them. When we don’t know the actual values of these variables (in science we would refer to them as independent variables), Monte Carlo simulations often prove to be one of the most effective mathematical models we can use.



Definition: Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results; i.e., by running simulations many times over in order to calculate those same probabilities heuristically just like actually playing and recording your results in a real casino situation: hence the name. – Wikipedia



With Monte Carlo simulations, we essentially brute-force our way to an answer. We come up with all of the possibilities, drop them into a bag, and pick one from the bag over and over again until we have an average result. Or think about it this way: let’s say I handed you a bag with 10,000 marbles and asked you which color of marble in the bag is most common. You could pour them all out and try to count them, or you could shake the bag and then pick out one marble at a time. Eventually, you would have a good sample of the marbles and be able to estimate the answer without having to count them all.


We can do the same thing here. Instead of asking which color a marble is, we ask “If I merge one URL with another, what is the likelihood that it will receive more traffic from Google?”. We then just have to load all of the variables that go into answering that question into our proverbial bag (a database) and randomly select over and over again to get an estimate.
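

To make the marble analogy concrete, here is a minimal Python sketch of the repeated-sampling idea (the bag's contents are invented purely for illustration):

```python
import random
from collections import Counter

# A hypothetical bag of 10,000 marbles in unknown proportions.
bag = ["red"] * 5500 + ["blue"] * 3000 + ["green"] * 1500
random.shuffle(bag)

# Instead of counting every marble, sample with replacement many times
# and let the sample frequencies estimate the true proportions.
sample_size = 2000
samples = Counter(random.choice(bag) for _ in range(sample_size))
for color, count in samples.most_common():
    print(color, round(count / sample_size, 3))
```

The same loop carries over to the sub-domain question: replace "pick a marble" with "generate one random merge scenario and record its traffic and value."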


So here are the details; hopefully you can follow along and do this yourself.


Step 1: Determine the keyword landscape


The first things we need to know are every possible keyword for which the client might rank, how much potential traffic is available for that keyword, and how valuable that keyword is in terms of CPC. The CPC value allows us to determine the true value of the traffic, not just the volume; we want to improve rankings for valuable keywords more than random ones. This client in particular is in a very competitive industry that relies on a huge number of mid/long-tail keywords. We built a list of over 46,000 keywords related to their industry using GrepWords (you could use SEMRush to do the same).


Step 2: Determine the search landscape


We now need to know where they actually rank for these keywords, and we need to know all the potential sub-domains we might need to test. We queued all 46K keywords with the AuthorityLabs API, and within 24 hours we had the top 100 results in Google for each. We then parsed the data and extracted the position of every one of the site’s sub-domains. We discovered around 25 sub-domains, but ultimately chose to analyze only the 9 that made up the majority of non-branded traffic.


Step 3: Determine the link overlap


Finally, we need to know about the links pointing to these sub-domains. If they all have links from the same sites, we might not get any benefit when we merge the sub-domains together. Using the Mozscape API’s Link Metrics call, we pulled down the root linking domains for each sub-domain. When we do our Monte Carlo simulation, we can determine how their link profiles overlap and make decisions based on that impact.
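

The post doesn't prescribe a single formula for "overlap," so as one reasonable sketch you could compute Jaccard similarity over the sets of root linking domains. The domain lists below are invented; in practice they would come from the Mozscape export described above:

```python
def link_overlap(a: set, b: set) -> float:
    """Jaccard similarity of two sets of root linking domains:
    0.0 = completely distinct link profiles, 1.0 = identical."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical root linking domains for two sub-domains.
links_jerseys = {"espn.com", "mlb.com", "reddit.com", "fansite.net"}
links_hats = {"mlb.com", "reddit.com", "capcollectors.org"}

print(round(link_overlap(links_jerseys, links_hats), 2))  # -> 0.4
```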


Step 4: Create our assumptions


As we have mentioned, there are a lot of things we don’t know, but we have a good idea about. Here we get to add in our assumptions as variables. You will see variables expressed as X and Y in these assumptions. This is where your expertise as an SEO comes into play.





Question 1: If two sub-domains rank for the same keyword in the top 10, what happens to the lower-ranked listing?

Assumption 1: X% of the time, the second ranking will be lost as Google values domain diversity.

Example: It turns out that http://baseball-jerseys.example.com and http://baseball-hats.example.com both rank in the top 10 for the keyword “Baseball Hats and Jerseys”. We assume that 30% of the time, the lower of the two rankings will be lost because Google values domain diversity.


Question 2: If two sub-domains rank for the same keyword in the top 10, what happens to the top ranked subdomain?

Assumption 2: Depending on the X% of link overlap, there is a Y% chance of improving 1 position.

Example: It turns out that http://baseball-jerseys.example.com and http://baseball-hats.example.com both rank in the top 10 for the keyword “Baseball Hats and Jerseys”. We assume that 70% of the time, based on X% of link overlap, the top ranking page will move up 1 position.


Question 3: If two sub-domains merge, what happens to all the rankings of the top-ranked sub-domain, even when dual rankings are not present?

Assumption 3: Depending on X% of link overlap, there is a Y% chance of improving 1 position.

Example: On keywords where http://baseball-jerseys.example.com and http://baseball-hats.example.com don’t have overlapping keyword rankings, we assume that 20% of the time, based on X% of link overlap, their keywords will improve 1 position.



These are just some of the questions you might want to include in your modeling method. There might be other factors you want to take into account, and you certainly can. The model can be quite flexible.


Step 5: Try not to set fire to the computer


So now that we have our variables, the idea is to pick the proverbial marble out of the bag. We will create a random scenario using our assumptions, sub-domains and keywords and determine what the result of that single random scenario is. We will then repeat this hundreds of thousands of times to get the average result for each sub-domain grouping.




We essentially need to do the following (a rough code sketch follows the list)…



  1. Select a random set of sub-domains.

    For example, it might be sub-domains 1, 2 and 4. It could also be all of the sub-domains.

  2. Determine the link overlap between the sub-domains

  3. Loop through every keyword ranking for those sub-domains, which we determined when building the keyword and search landscapes back in Steps 1 and 2. Then, for each ranking…

    1. Randomly select our answer to #1 (ie: is this the 3 out of 10 times that we will lose rankings?)

    2. Randomly select our answer to #2 (ie: is this the 7 out of 10 times that we will increase rankings?)

    3. Randomly select our answer to #3 (ie: is this the 2 out of 10 times we will increase rankings?)



  4. Find out what our new traffic and search value will be.

    Once you apply those variables above, you can guess what the new ranking will be. Use the Search Volume, CPC, and estimated CTR by ranking to determine what the new traffic and traffic value will be.

  5. Add It Up
    Add up the estimated search volume and the estimated search value for each of the keywords.

  6. Store that result

  7. Repeat hundreds of thousands of times.
    In our case, we ended up repeating around 800,000 times to make sure we had a tight variance around the individual combinations.
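

To make the loop above concrete, here is a rough, self-contained Python sketch of one possible implementation. Every input (the keywords, rankings, and CTR curve) and every probability is an invented placeholder rather than the study's actual model or data, and a fuller version would scale the boost probabilities by the link overlap measured in Step 3:

```python
import random

# --- Illustrative inputs: replace with your real Step 1-3 data --------------
# keyword -> (monthly search volume, CPC in dollars)
keywords = {
    "baseball hats": (12000, 1.10),
    "baseball jerseys": (9000, 1.45),
    "baseball cards": (20000, 0.85),
}
# sub-domain -> {keyword: current ranking position}
rankings = {
    "hats.example.com": {"baseball hats": 3, "baseball cards": 9},
    "jerseys.example.com": {"baseball jerseys": 2, "baseball hats": 8},
    "cards.example.com": {"baseball cards": 4},
}
# Very rough CTR-by-position curve (an assumption; swap in one you trust).
CTR = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
       6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.015}

P_LOSE_SECOND = 0.30   # Assumption 1: lower dual listing is lost
P_BOOST_DUAL = 0.70    # Assumption 2: top dual listing gains one position
P_BOOST_OTHER = 0.20   # Assumption 3: any other ranking gains one position

def simulate_once(merged):
    """One random scenario for merging the given sub-domains; returns the
    estimated monthly search traffic and traffic value for that scenario."""
    traffic = value = 0.0
    for kw, (volume, cpc) in keywords.items():
        positions = sorted(
            pos for sub in merged for k, pos in rankings[sub].items() if k == kw
        )
        if not positions:
            continue
        if len(positions) > 1:
            if random.random() < P_LOSE_SECOND:   # Assumption 1
                positions = positions[:1]
            if random.random() < P_BOOST_DUAL:    # Assumption 2
                positions[0] = max(1, positions[0] - 1)
        elif random.random() < P_BOOST_OTHER:     # Assumption 3
            positions[0] = max(1, positions[0] - 1)
        for pos in positions:
            clicks = volume * CTR.get(pos, 0.0)
            traffic += clicks
            value += clicks * cpc
    return traffic, value

# Pick a random combination of sub-domains, simulate it, store it; repeat.
results = []  # (combination, traffic, value), one row per trial
subs = list(rankings)
for _ in range(10000):  # the study described above ran ~800,000 trials
    combo = tuple(sorted(random.sample(subs, random.randint(2, len(subs)))))
    t, v = simulate_once(combo)
    results.append((combo, t, v))
```

Each appended row is one "marble" pulled from the bag for one sub-domain combination.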


Step 6: Analyze the results


OK, so now you have 800,000 results, so what do we do? The first thing we do is segment those results by their sub-domain combination. In this case, we had a little over 500 different sub-domain combinations. Second, we compute the average traffic and traffic value for each of those sub-domain combinations from those 800,000 results. We can then graph all of those results to see which sub-domain combination had, on average, the highest predicted traffic and value.
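

Continuing the illustrative sketch above, the per-trial rows can be segmented and averaged with something like pandas (again, a sketch rather than the exact analysis used in the study):

```python
import pandas as pd  # third-party: pip install pandas

df = pd.DataFrame(results, columns=["combination", "traffic", "value"])
summary = (df.groupby("combination")[["traffic", "value"]]
             .mean()
             .sort_values("value", ascending=False))
print(summary.head(10))  # the strongest candidate merger plans by average value
```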


To be honest, graphs are a terrible way of figuring out the answer, but they are the best tool we have to convey it in a blog post. You can see exactly why below. With over 500 different potential sub-domain combinations, it is difficult to visualize all of them at the same time. In the graph below, you see all of them, with each bar representing the average score for an individual sub-domain combination. For all subsequent graphs, I have taken a random sample of only 50 of the sub-domain combinations so it is easier to visualize.


[Image: bar graph of average results for every sub-domain combination]


As mentioned previously, one of the things we try to predict is not just the volume of the traffic, but also the value of that traffic, by multiplying it by the CPC value of each keyword for which they rank. This is important if you care more about valuable commercial terms than just any keyword for which they might rank.



As the graph above shows, there were some sub-domain combinations that influenced traffic more than value, and vice-versa. With this simulation, we could find a sub-domain combination that maximized either the value or the traffic side of the equation. A company that makes money off of display advertising might prefer to look at traffic, while one that makes money off of selling goods would likely pay more attention to the traffic-value number.



There were some neat trends that the Monte Carlo simulation revealed. Of the sub-domains tested, 3 in particular tended to have a negative rankings effect on nearly all of the combinations. Whenever these 3 appeared in a combination with good sub-domains, they slightly lowered the predicted traffic volume and traffic value. It turned out these 3 sub-domains had very few backlinks and only branded keyword rankings. Consequently, there was huge keyword overlap and almost no net link benefit when they were merged. We were easily able to exclude these from the sub-domain merger plan. We would never have guessed this, or seen this trend, without this kind of mathematical modeling.



Finally, we were able to look closely at sub-domain merger combinations that offered more search value and less search traffic, or vice-versa. Ultimately, though, 3 options vied for the top spot. They were statistically indistinguishable from one another in terms of potential traffic and traffic value. This meant the client wasn’t tied to a single potential solution; they could weigh other factors like the difficulty of merging some sub-domains and internal political concerns.


Modeling uncertainty


As SEOs, there is a ton we don’t know. Over time, we build a huge number of assumptions and, with those assumptions, levels of confidence for each. I am very confident that a 301 redirect will pass along rankings, but not 100%. I am pretty confident that keyword usage in the title improves rankings, but not 100% confident. The beauty of the Monte Carlo approach is that it allows us to graph our uncertainties.



The graphs you saw above were the averages (means) for each of the sub-domain combinations. There were actually hundreds of different outcomes generated for each of those sub-domain combinations. If we were to plot those different outcomes, they may look like what you see in the image directly above. If I had just made a gut decision and modeled what I thought, without giving a range, I would have come up with only a single data point. Instead, I estimated my uncertainties, turned them into a range of values, and allowed the math to tell me how those uncertainties would play out. We put what we don’t know in the graph, not just what we do know. By graphing all of the possibilities, I can present a more accurate, albeit less specific, answer to my client. Perhaps a better way of putting it is this: when we just go with our gut, we are choosing 1 marble out of the bag and hoping it is the right one.


Takeaways



  1. If you are an agency or consultant, it is time to step up your game. Your gut instinct may be better than anyone else’s, but there are better ways to use your knowledge to get at an answer than just thinking it through.

  2. Don’t assume that anything in our industry is unknowable. The uncertainty that exists is largely because we, as an industry, have not yet chosen to adopt the tools that are plainly available to us in other sciences that can take into account those uncertainties. Stop looking confused and grab a scientist or statistician to bring on board.

  3. Whenever possible, look to data. As a small business owner or marketer, demand that your provider give you sound, verifiable reasons for making changes.

  4. When in doubt, repeat. Always be testing and always repeat your tests. Making confident, research-driven decisions will give you an advantage over your competition that they can’t hope to undo.


Follow up


This is an exciting time for search marketers. Our industry is rapidly maturing in both its access to data and its usage of improved techniques. If you have any more questions about this, feel free to ask in the comments below or hit me up on twitter (@rjonesx). I’d love to talk through more ideas for improvements you might have!


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!



via Moz Blog http://feedproxy.google.com/~r/MozBlog/~3/BkawCAef_Ao/solving-the-subdomain-equation-predicting-traffic-and-value-when-merging-subdomains



Setting Goals (Not Tools) as the Foundation of Your Marketing – Whiteboard Friday

Posted by MackenzieFogelson


With new tools introduced so regularly, it’s easy for marketers to spend an inordinate amount of time trying to figure out which ones are most effective for their own work. That focus, though, shifts our attention from what really matters: setting the right goals for our companies. In today’s Whiteboard Friday, Mackenzie Fogelson walks us through the five-stage process she uses to make sure her team’s attention is on what really matters.
































For reference, here’s a still of this week’s whiteboard!



Video Transcription





Hey there, Moz community! I’m so excited to be here with you today. I wanted to share something with you that has been really powerful for the businesses we’ve been working with in the last year or so about building community. It’s a concept that we call “goals not tools,” and it works in this pyramid format where you start with your goals, you move on to KPIs, you develop a strategy, you execute that strategy, and then you analyze your data. And this is something that has been really powerful and helped businesses really grow. So I’m going to walk you through it here.


We start down at the bottom with goals. So the deal with goals is that you want to make sure that you’re setting goals for your entire business, not just for SEO or social media or content marketing, because you’re trying to grow your whole business. So keep your focus there. Then you develop your goals, and those goals might be to improve customer communication, or you might want to become a thought leader. Whatever your goal is, that’s where you’re going to set it.


Then you move on to determining what your key performance indicators are and what you’re going to use to actually measure the fact that you may or may not be reaching your goals. So in terms of KPIs, it’s really going to depend on your business. When we determine KPIs with companies, we sit down and we have that discussion with them before we develop the strategy, and that helps us to have a very authentic and realistic discussion about expectations and how this is all going to work and what kind of data they’re expecting to see so that we’re proving that we’re actually making a difference in their business.


So once you’ve determined those KPIs, then you move on to developing a creative strategy, a creative way to meet those goals and to measure it the way you’ve determined in your KPIs. So this is your detailed roadmap, and it’s two to three months at a time. A lot of companies will go for maybe 12 months and try to get that high-level overview of where they’re going for the year, and that’s fine. Just make sure that you’re not detailing out everything that you’re doing for the next year, because it makes it harder to be agile. So we’d recommend two- to three-month iterations at a time. Go through, test things, and see how that works.


During your strategy development you’re also going to select the tools that you’re going to use. Maybe it’s Facebook, maybe it’s SEO, maybe it’s content marketing, maybe it’s email marketing, PPC. There’s all kinds of tools that could be used, and they don’t all have to be digital. So you just need to be creative and determine what you need to plan out so that you can reach the goals that you’ve set.


Then once you’ve got your strategy developed (that’s really some of the hardest part), you get to execution. That’s when you’re actually doing all the work. You need to be consistent. You need to make sure that you’re staying focused and following that strategy that you’ve set. You also want to test things, because you want as much data as possible so that you can determine if things are working or not. Keep in mind that during execution there are going to be things that come up: emergent things, shiny things, exciting things. What you’ll have to do is weigh whether those things wait for the next iteration in two to three months, or whether you deviate from your plan and integrate them at the time that they come up.


So once you’re through execution, then really what you’re doing is analyzing that data that you’ve collected. You’re trying to determine: Should we spend more time on something? Should we pull something? Should we determine if something else needs to completely change our plans so that we’re making sure that we’re adding value? So analysis is probably the most important part because you’re always going to want to be looking at the data.


So in this whole process, what we always do is try to make sure that we’re focusing on two questions, and the most important one is: Where can we add more value? So always be thinking about what you’re doing and asking the value question: "Why are we doing this? Does this provide value for our customers, or for something internal that we’re working on?" If you can’t answer that question, it’s probably not something valuable, and you don’t need to spend your time on it. Go somewhere else where you’re adding the value.


Then the last question is where you can make the biggest difference in your business, because that’s what this is all about: growing your business. So if you stay focused on goals, not tools, it’s going to be really easy to do that.


Thanks for having me today, Moz. Hope I helped you out. Let me know in the questions if you need any assistance.



Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!



via Moz Blog http://feedproxy.google.com/~r/MozBlog/~3/AEA5j3ausso/setting-goals-not-tools-as-the-foundation-of-your-marketing-whiteboard-friday



Thursday, September 26, 2013

On Our Wait-List? You Get a Moz Analytics!

Posted by Anthony Skinner


It is with great pleasure that I announce the wait is over! That’s right, we are now letting people from our wait-list into Moz Analytics!







In many ways, I feel like a not-as-cute version of Oprah Winfrey. I may not be worth 77 million dollars, and I am not giving you a car, but it does feel good to give new subscribers who patiently waited a 30-day free trial of Moz Analytics! Over the next few weeks we will be sending emails inviting people to try out the tools. The invitation is good for seven days, so when you see the email, make sure you click the link and join us right away.




If you’re not on our wait-list, you’ve still got time to get early access. Just head over and sign up!


Before too long, we will open Moz Analytics free trials to the general public. We plan to release improvements and fixes to Moz Analytics every 2-4 weeks. Have questions about the application? Feel free to check out the Moz Help Hub. Feedback or suggestions? Check out the feature request forum.


Otherwise, sit back and enjoy your new ride.


Anthony Skinner

CTO and Oprah Impersonator




Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!



via Moz Blog http://feedproxy.google.com/~r/MozBlog/~3/ME3hSo7kvfU/moz-analytics-wait-list-announcement



Getting things done with Google Tasks

Someone recently asked me how I manage my to-do list, so I thought I’d write up the software that I use. Fundamentally I use Google Tasks as the backend, but with extensions and apps that improve on the basic functionality in Google Tasks.


Chrome


I use a couple different extensions for Chrome:

- Better Google Tasks is a great Chrome extension. Just click a button in Chrome and you have instant access to all your todo items. I like the extension so much that I donated some money to the author, Chris Wiegman. You can get the Better Google Tasks extension from the Chrome Store.


- I also noticed that on the New Tab page of Chrome, seeing thumbnails of my most visited sites (Techmeme, Hacker News, Nuzzel, Google News, etc.) every time I opened a new tab inevitably led me to click over to those sites. The result? I was wasting more time surfing than I wanted. The solution is a great Chrome extension called New Tab to Tasks. It changes Chrome’s new tab page to be your todo list. That way, I get a nice little signal every time I open a tab: “Hey, remember that you’re supposed to be working on stuff, not goofing off.” Thanks to Scott Graham for writing this Chrome extension.


Oh, and one last Chrome recommendation: if you don’t want *any* distractions on Chrome’s new tab page, consider installing Empty New Tab Page, which makes the Chrome new tab page completely blank.


Android


For Android, I use an app called Tasks. It costs $0.99, but there’s also a free version that starts showing ads after 10 days. I like the Tasks app for Android because it syncs with Google Tasks, has nice widgets, you can easily move tasks up and down, and you can indent tasks underneath each other. I only keep a few todo lists (Home, Work, Grocery, etc.), and to switch between lists you just swipe left or right. Tasks works great for me, but if you have tons of different todo lists then swiping between those lists might get old.


I can already imagine someone asking “Okay, but what about Google Keep?” I’m not opposed to Google Keep, but at this point I’ve found various third-party solutions that interoperate with Google Tasks and work well for me on Chrome and Android. Plus I already have my data in Google Tasks, so for the time being I like these solutions for Google Tasks.


via Matt Cutts: Gadgets, Google, and SEO http://www.mattcutts.com/blog/todo-list-tips/



Correlation and Data Transformations

In this article, we will show how data transformations can be an important tool for the proper statistical analysis of data. The association, or correlation, between two variables can be visualised by creating a scatterplot of the data. In certain instances, it may appear that the relationship between the two variables is not linear; in […]


The post Correlation and Data Transformations appeared first on Majestic SEO Blog.


via Majestic SEO Blog» English http://blog.majesticseo.com/research/350634/



Improving Search Rank by Optimizing Your Time to First Byte

Posted by Zoompf



Back in August, Zoompf published newly uncovered research findings examining the effect of web performance on Google’s search rankings. Working with Matt Peters from Moz, we tested the performance of over 100,000 websites returned in the search results for 2000 different search queries. In that study, we found a clear correlation between a faster time to first byte (TTFB) and a higher search engine rank. While it could not be outright proven that decreasing TTFB directly caused an increasing search rank, there was enough of a correlation to at least warrant some further discussion of the topic.


The TTFB metric captures how long it takes your browser to receive the first byte of a response from a web server when you request a particular website URL. In the graph captured below from our research results, you can see websites with a faster TTFB in general ranked more highly than websites with a slower one.



We found this to be true not only for general searches with one or two keywords, but also for “long tail” searches of four or five keywords. Clearly this data showed an interesting trend that we wanted to explore further. If you haven’t already checked out our prior article on Moz, we recommend you check it out now, as it provides useful background for this post: How Website Speed Actually Impacts Search Ranking.


In this article, we continue exploring the concept of Time to First Byte (TTFB), providing an overview of what TTFB is and steps you can take to improve this metric and (hopefully) improve your search ranking.


What affects TTFB?


The TTFB metric is affected by 3 components:



  1. The time it takes for your request to propagate through the network to the web server

  2. The time it takes for the web server to process the request and generate the response

  3. The time it takes for the response to propagate back through the network to your browser.


To improve TTFB, you must decrease the amount of time for each of these components. To know where to start, you first need to know how to measure TTFB.


Measuring TTFB


While there are a number of tools to measure TTFB, we’re partial to an open source tool called WebPageTest.


Using WebPageTest is a great way to see where your site performance stands, and whether you even need to apply energy to optimizing your TTFB metric. To use, simply visit http://webpagetest.org, select a location that best fits your user profile, and run a test against your site. In about 30 seconds, WebPageTest will return you a “waterfall” chart showing all the resources your web page loads, with detailed measurements (including TTFB) on the response times of each.


If you look at the very first line of the waterfall chart, the “green” part of the line shows you your “Time to First Byte” for your root HTML page. You don’t want to see a chart that looks like this:


[Image: WebPageTest waterfall chart showing a slow TTFB on the root page]


In the above example, a full six seconds is getting devoted to the TTFB of the root page! Ideally this should be under 500 ms.
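

If you would rather script a quick spot check from your own machine before (or between) WebPageTest runs, something like the sketch below works. It uses the third-party requests library and a placeholder URL, and measures the time from sending the request until the response headers arrive, which is roughly TTFB as defined above. Keep in mind it only reflects latency from wherever you run it; WebPageTest's distributed test locations remain the better gauge of what far-away users (and crawlers) see.

```python
import requests  # third-party: pip install requests

def rough_ttfb(url: str) -> float:
    """Approximate time to first byte, in seconds: connection setup plus
    server processing plus the arrival of the response headers. The body
    is not downloaded because stream=True defers it."""
    response = requests.get(url, stream=True, timeout=30)
    response.close()
    return response.elapsed.total_seconds()

# Single measurements are noisy, so take a few samples.
for _ in range(3):
    print(round(rough_ttfb("https://www.example.com/"), 3), "seconds")
```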


So if you do have a “slow” TTFB, the next step is to determine what is making it slow and what you can do about it. But before we dive into that, we need to take a brief aside to talk about “Latency.”


Latency


Latency is a commonly misunderstood concept. Latency is the amount of time it takes to transmit a single piece of data from one location to another. A common misunderstanding is that if you have a fast internet connection, you should always have low latency.


A fast internet connection is only part of the story: the time it takes to load a page is not just dictated by how fast your connection is, but also how FAR that page is from your browser. The best analogy is to think of your internet connection as a pipe. The higher your connection bandwidth (aka “speed”), the fatter the pipe is. The fatter the pipe, the more data that can be downloaded in parallel. While this is helpful for overall throughput of data, you still have a minimum “distance” that needs to be covered by each specific connection your browser makes.


The figure below helps demonstrate the differences between bandwidth and latency.


[Image: bandwidth vs. latency]


As you can see above, the same JPG still has to travel the same “distance” in both the higher and lower bandwidth scenarios, where “distance” is defined by two primary factors:



  1. The physical distance from A to B. (For example, a user in Atlanta hitting a server in Sydney.)

  2. The number of “hops” between A and B, since internet traffic redirects through an increasing number of routers and switches the further it has to travel.


So while higher bandwidth is most definitely beneficial for overall throughput, you still have to travel the initial “distance” of the connection to load your page, and that’s where latency comes in.


So how do you measure your latency?


Measuring latency and processing time


The best tool to separate latency from server processing time is surprisingly accessible: ping.


The ping tool is pre-installed by default on most Windows, Mac and Linux systems. What ping does is send a very small packet of information over the internet to your destination URL, measuring the amount of time it takes for that information to get there and back. Ping uses virtually no processing overhead on the server side, so measuring your ping response times gives you a good feel for the latency component of TTFB.


In this simple example I measure my ping time between my home computer in Roswell, GA and a nearby server at www.cs.gatech.edu in Atlanta, GA. You can see a screenshot of the ping command below:


[Screenshot: ping command output]


Ping tested the response time of the server several times and reported an average of 15.8 milliseconds. Ideally you want your ping times to be under 100 ms, so this is a good result (though admittedly the distance traveled here is very small; more on that later).


By subtracting the ping time from your overall TTFB time, you can then break out the network latency components (TTFB parts 1 and 3) from the server back-end processing component (part 2) to properly focus your optimization efforts.
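

As a tiny worked example of that subtraction (all numbers invented):

```python
ttfb_ms = 620        # overall TTFB measured for the root HTML page
ping_rtt_ms = 110    # average round-trip time reported by ping

backend_ms = ttfb_ms - ping_rtt_ms  # rough server processing estimate
print(f"~{ping_rtt_ms} ms network latency, ~{backend_ms} ms back-end processing")
# Against the targets below (100 ms latency, 400 ms back-end), this
# hypothetical site is slightly over on latency and well over on back-end
# processing, so the back end is the place to start.
```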


Grading yourself


From the research shown earlier, we found that websites with the top search rankings had TTFB as low as 350 ms, with lower-ranking sites pushing up to 650 ms. We recommend a total TTFB of 500 ms or less.


Of that 500ms, a roundtrip network latency of no more than 100ms is recommended. If you have a large number of users coming from another continent, network latency may be as high as 200ms, but if that traffic is important to you, there are additional measures you can take to help here which we’ll get to shortly.


To summarize, your ideal targets for your initial HTML page load should be:



  1. Time to First Byte of 500 ms or less

  2. Roundtrip network latency of 100 ms or less

  3. Back-end processing of 400 ms or less


So if your numbers are higher than this, what can you do about it?


Improving latency with CDNs


The solution to improving latency is pretty simple: Reduce the “distance” between your content and your visitors. If your servers are in Atlanta, but your users are in Sydney, you don’t want your users to request content half way around the world. Instead, you want to move that content as close to your users as possible.


Fortunately, there’s an easy way to do this: move your static content into a Content Delivery Network (CDN). CDNs automatically replicate your content to multiple locations around the world, geographically closer to your users. So now if you publish content in Atlanta, it will automatically copy to a server in Sydney from which your Australian users will download it. As you can see in the diagram below, CDNs make a considerable difference in reducing the distance of your user requests, and hence reduce the latency component of TTFB:


[Image: diagram of a content delivery network distributing content to nearby users]


To impact TTFB, make sure the CDN you choose can cache the static HTML of your website homepage, and not just dependent resources like images, JavaScript, and CSS, since that HTML is the initial resource Googlebot will request and measure TTFB on.


There are a number of great CDNs out there including Akamai, Amazon Cloudfront, Cloudflare, and many more.


Optimizing back-end infrastructure performance


The second factor in TTFB is the amount of time the server spends processing the request and generating the response. Essentially the back-end processing time is the performance of all the other “stuff” that makes up your website:



  • The operating system and computer hardware which runs your website and how it is configured

  • The application code that’s running on that hardware (like your CMS) as well as how it is configured

  • Any database queries that the application makes to generate the page, how many queries it makes, the amount of data that is returned, and the configuration of the database


How to optimize the back-end of a website is a huge topic that would (and does) fill several books. I can hardly scratch the surface in this blog post. However, there are a few areas specific to TTFB that I will mention that you should investigate.


A good starting point is to make sure that you have the needed equipment to run your website. If possible, you should skip any form of “shared hosting” for your website. What we mean by shared hosting is utilizing a platform where your site shares the same server resources as other sites from other companies. While cheaper, shared hosting passes on considerable risk to your own website as your server processing speed is now at the mercy of the load and performance of other, unrelated websites. To best protect your server processing assets, insist on using dedicated hosting resources from your cloud provider.


Also, be wary of virtual or “on-demand” hosting systems. These systems will suspend or pause your virtual server if you have not received traffic for a certain period of time. Then, when a new user accesses your site, they will initiate a “resume” activity to spin that server back up for processing. Depending on the provider, this initial resume could take 10 or more seconds to complete. If that first user is the Google search bot, your TTFB metric from that request could be truly awful.


Optimize back-end software performance


Check the configuration of your application or CMS. Are there any features or logging settings that can be disabled? Is it in a “debugging mode?” You want to get rid of nonessential operations that are happening to improve how quickly the site can respond to a request.


If your application or CMS is using an interpreted language like PHP or Ruby, you should investigate ways to decrease execution time. Interpreted languages have a step to convert them into machine-understandable code, which is what is actually executed by the server. Ideally you want the server to do this conversion once, instead of with each incoming request. This is often called “compiling” or “op-code caching,” though those names can vary depending on the underlying technology. For example, with PHP you can use software like APC to speed up execution. A more extreme example would be HipHop, a compiler created and used by Facebook that converts PHP into C++ code for faster execution.


When possible, utilizing server-side caching is a great way to generate dynamic pages quickly. If your page is loading content that changes infrequently, utilizing a local cache to return those resources is a highly effective way of improving your page load time.


Effective caching can be done at different levels by different tools, and is highly dependent on the technology you are using for the back-end of your website. Some caching software only caches one kind of data, while other software caches at multiple levels. For example, W3 Total Cache is a WordPress plug-in that does both database query caching and page caching. Batcache is a WordPress plug-in created by Automattic that does whole-page caching. Memcached is a great general object cache that can be used for pretty much anything, but requires more development setup. Regardless of what technology you use, finding ways to reduce the amount of work needed to create the page by reusing previously created fragments can be a big win.
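

The right tool depends entirely on your stack, but the underlying idea is the same everywhere: do the expensive work once and reuse the result. As a toy illustration of that idea in Python (not a stand-in for W3 Total Cache, Batcache, or Memcached themselves), a time-limited cache around an expensive page fragment might look like this:

```python
import time
from functools import wraps

def ttl_cache(seconds: int):
    """Cache a function's results in memory for a fixed time window,
    so repeated requests reuse a previously generated fragment instead
    of redoing the expensive work on every page load."""
    def decorator(func):
        store = {}  # args -> (expiry_timestamp, cached_value)
        @wraps(func)
        def wrapper(*args):
            now = time.time()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]                  # cache hit: skip the slow path
            value = func(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(seconds=300)
def sidebar_html(category: str) -> str:
    time.sleep(0.5)  # stand-in for a slow database query + template render
    return f"<ul><li>Latest posts in {category}</li></ul>"

sidebar_html("baseball")  # slow the first time: generated and cached
sidebar_html("baseball")  # fast for the next five minutes: served from cache
```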


As with any software changes you’d make, make sure to continually test the impact on your TTFB as you incrementally make each change. You can also use Zoompf’s free performance report to identify back-end issues which are affecting performance, such as not using chunked encoding, and much more.


Conclusions


As we discussed, TTFB has 3 components: the time it takes for your request to propagate to the web server; the time it takes for the web server to process the request and generate the response; and the time it takes for the response to propagate back to your browser. Latency captures the first and third components of TTFB, and can be measured effectively through tools like WebPageTest and ping. Server processing time is simply the overall TTFB time minus the latency.


We recommend a TTFB time of 500 ms or less. Of that TTFB, no more than 100 ms should be spent on network latency, and no more than 400 ms on back-end processing.


You can improve your latency by moving your content geographically closer to your visitors. A CDN is a great way to accomplish this as long as it can be used to serve your dynamic base HTML page. You can improve the performance of the back-end of your website in a number of ways, usually through better server configuration and caching expensive operations like database calls and code execution that occur when generating the content. We provide a free web performance scanner that can help you identify the root causes of slow TTFB, as well as other performance-impacting areas of your website code, at http://zoompf.com/free.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!



via Moz Blog http://feedproxy.google.com/~r/MozBlog/~3/eWdCHJ7HNWI/improving-search-rank-by-optimizing-your-time-to-first-byte



Tuesday, September 24, 2013

When Keyword (not provided) is 100 Percent of Organic Referrals, What Should Marketers Do? – Whiteboard Tuesday

Posted by randfish


For nearly two years, marketers have been frustrated by a steadily increasing percentage of keywords (not provided). Recent changes by Google have sent those numbers soaring. The site Not Provided Count now reports an average of nearly 74% of keywords not provided, and speculation abounds that it won’t be long before 100% of keywords are masked. Without that referral data, our tasks as Internet marketers become far more difficult—but not impossible.


In this special Whiteboard Tuesday, Rand covers what marketers can do to make up for this drastic change, finding data from other sources to stay on top of their SEO efforts.
































For reference, here’s a still image of today’s whiteboard!



Video Transcription




Howdy, Moz fans, and welcome to another edition of Whiteboard Friday! Today I’m going to talk about this extremely troublesome and worrisome problem that Google has expanded “keyword (not provided)” potentially to 100% of all organic referrals. This isn’t necessarily that they’ve flipped the entire switch, and everyone’s going to see it this week, but certainly over the next several months, it’s been suggested, we may receive no keyword data at all in referrals from Google. Very troubling and concerning, obviously, if you’re a web marketer.


I think it should be very troubling and concerning if you’re a web user as well, because marketers don’t use this data to do evil things or invade people’s privacy. Marketers use this data to make the web a better place. The agreement that marketers have always had—that website creators have always had—with search engines, since their inception was, “sure, we’ll let you crawl our sites, you provide us with the keyword data so that we can improve the Internet together. I think this is Google abusing their monopolistic position in the United States. Unfortunately, I don’t really see a way out of it. I don’t think marketers can make a strong enough case politically or to consumer groups to get this removed. Maybe the EU can eventually.


But in any case, let’s deal with the reality that we’re faced with today, which is that keyword not provided may be 100% of your referrals, and so keyword data is essentially gone. We don’t know when Google sends a visit—Bing, to their credit, and to Microsoft’s credit, enduringly has kept that data accessible—but we don’t know when Google sends a visit to our sites and pages, what that person searched for. Previously, we could do some sampling—now we can’t even do that.


There are some big tasks that we use that data for, and so to start with, I want to try and identify the uses for keyword referral data, at least the very important ones as I perceive them—there are certainly many more.


Number one: finding opportunities to improve a page’s performance or its ranking. If you see that a page of yours is receiving a lot of search traffic, or that a keyword is sending a lot of search traffic (or even a little bit of search traffic), but the page is not ranking very well, you know that by improving that page’s ranking you have an opportunity to earn a lot more search traffic. That’s a very valuable thing as a marketer. You can also see if a search query is sending traffic to a page, but that page has a high bounce rate for that traffic, low pages-per-visit, low conversion rate, you know, “hey, I’m not doing a good job serving the visitor; I need to improve how the page addresses that.” That’s one of the key things we use keyword referral data for.


Secondarily: connecting rank improvement efforts—things that we do in the SEO world to move up our rankings—to the traffic growth that we receive from them. This is very important for consultants and for agencies, and for in-house SEOs as well, to show our value to our managers, and our clients—it’s really, really tough to have this data taken away.


C: Understanding how your searchers perceive your brand and your content. When we look down the list of phrases that sent us traffic, we could see things like “oh, this is how people are thinking about my brand, or thinking about this product I launched, or thinking about this content that I’ve put out.” Really challenging to do that nowadays.


And D: uncovering keyword opportunities. We could certainly see, “this is sending a small amount of traffic, this is doing some long-tail stuff, hey—let’s turn this into a broader piece of content. Let’s try and optimize for some of those keyword phrases that we’re barely ranking on.” Or, we have a page that’s not really addressing that keyword phrase that we’re ranking on. We can address that. We can improve that.


So I’m going to try and tackle some relatively simplistic ways, and I’m not going to walk through all the details you would need to do this, but I think many folks in the SEO and marketing sphere will address these over the weeks and months to come.


Starting with A. How do I find opportunities to improve a page’s ranking or its performance with users when I can’t see keyword referral data? How do I know which page people are coming to? Thankfully, we can use the connection—the intersection of a few different sources of data. Pages that are receiving search visits is a big one, and this is going to be used throughout—instead of looking at keyword-level data, we’re going to be looking at page-level data. Which pages received referral visits from Google Search? Thankfully, that’s still data that we do get, and that’ll likely stay with us, because we can always see a referral source, and we know which pages are loaded. So, even if Google Analytics were to remove that, I think a third-party analytics provider would step in.


Pages receiving search visits plus rank-tracking data can get us a little closer to this, because we can essentially say, “hey, we know this page is ranking well for these five or ten keywords that we have some reasonable expectation have search volume. They’re receiving search visits, and yet they’re not performing well, or they’re not ranking particularly well, so improving them should be able to drive up our search traffic, and improving their performance with users should be able to drive up our conversion rate optimization.”


Optionally, we could also add in things like Google Webmaster Tools or AdWords data; AdWords data being used on the keyword side to fill in for, “hey, what’s the volume that a keyword is getting,” and Google Webmaster Tools data to be able to see a list of some keywords that may be sending us traffic. Dr. Pete wrote a good post recently about the relative accuracy of Google Webmaster Tools, and while unfortunately it’s not as good as any of the other methods, it’s still not awful, and so that data is potentially usable.


This will give us a list of pages that get search visits, or are targeting important search terms, that rank, and that have the potential to improve. So this gets us to the answer to this question. This used to be really simple to get at, now it’s more difficult, but still possible.
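

As a rough sketch of that page-level intersection, assuming you have exported landing-page visit data and rank-tracking data to CSV (the file and column names here are hypothetical), it might look something like this:

```python
import pandas as pd  # third-party: pip install pandas

# Hypothetical exports: organic landing pages from analytics, and
# rank-tracking data (page, keyword, position, estimated monthly volume).
visits = pd.read_csv("organic_landing_pages.csv")  # columns: page, visits
ranks = pd.read_csv("rank_tracking.csv")           # columns: page, keyword, position, volume

merged = visits.merge(ranks, on="page")
# Pages already earning search visits but ranking outside the top five
# for keywords with real volume are the improvement opportunities.
opportunities = merged[(merged["position"] > 5) & (merged["volume"] > 500)]
print(opportunities.sort_values("volume", ascending=False).head(20))
```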


B. Connecting our SEO efforts to traffic growth from search. I know this is going to be tremendously hard, and this is probably one of the biggest tolls that this change is taking on SEO folks. Because as SEOs, as marketers, we’ve shown our value by saying, “look, we’re driving up search visits, some of it’s branded, some of it’s unbranded, some of it’s not provided—but you get a rough sense of this. And you really need that percentage: “What percent of the traffic is actually you going and getting us new visitors that never would have found us, versus branded stuff that’s just sort of rising on its own.” Maybe it’s rising because of efforts that marketers are making: investments in content, and in social media, and in email and all these other wonderful things, but it’s hard to know— it’s hard to directly map that.


So here’s one of the ways. Optionally, we can use AdWords to bid on branded terms and phrases. When we do that, you might want to have a relatively broad match on your branded terms and phrases so that you can see keyword volume that is branded from impression data. That gives you a sense of, “what’s the trajectory, here?” If we’re seeing it grow, we can identify “oh, that’s not us driving a bunch of new non-branded new keyword terms and phrases; that’s our brand search increasing.” So we can sort of discount that, or apply that in our reporting effectively. If we see, on the other hand, that it’s staying flat, but that search traffic overall is going up and to the right, then we know that’s unbranded.


Optionally, if we don’t want to be bidding and spending a lot of money with Google AdWords and trying to keep our impression counts high, we can use things like Google Insights or even downloading AdWords volume data estimates month-over-month to be able to track those sorts of things.


Certainly one of the things I would recommend doing even prior to this change is tracking rankings on buckets. Buckets of head terms, versus chunky middle, versus long-tail; so phrases that are getting lots of search volume, a good amount of search volume, and very little search volume. You want to have different buckets of those, so you can see, “oh hey, my rankings are generally improving in this bucket, or that bucket.” Same with branded vs. non-branded; you want to be able to identify and track those separately. Then, compare against visits that you’re seeing to pages that are ranking for those terms. We need to look at the pages that are receiving search traffic from those different buckets.


Again, much more challenging to do these days. But, any time we see the complexity of our practice is increasing, we also have an opportunity, because it means that those of us who are savvy, sophisticated, able to track this data, are far more useful and employable and important. Those organizations that use great marketers are going to receive outsized benefits from doing so.


C: How do I understand and analyze how searchers perceive my brand? What are they searching for that’s leading them to my site? How are they searching for terms related to my brand? Again, we can bid on AdWords terms, like I talked about. You can use keyword suggestion sources like Google Suggest, Ubersuggest, certainly AdWords’s own volume data, SEMRush, etc. to see the keyword expansions related to your brand or the content that’s very closely tied to your brand. And internal site search data. You’ve got a search box up in the top-right hand corner, people are typing in stuff, and you want to see what that “XYZ” is that they’re typing in. Those can help as well, and can provide you some opportunities that lead to D.


D: How do I uncover new keyword opportunities to target? Of course, there’s the classic methodology that we’ve all employed, which is keyword research, but usually we compare that to the terms that are already sending us traffic, and we go look and say, “oh, okay, we’re doing fine for these—we don’t need to worry.” Now, we need to take keyword research tools and add some form of rank-tracking data. That could be from Google Webmaster Tools despite its mediocrity in terms of accuracy. We can use manual rank data—we can search for it ourselves—or we can use automated data.


One of the criticisms for all rank-tracking data is always, “but there’s lots of personalization and geographic localization—these kinds of things that are biasing searches—how do I see all of that?” And the answer is, well, you can’t really. Personalization is going to fluctuate things. It may be sort of included in the Google Webmaster Tools data, but as Dr. Pete showed in his post, it looks a little funky right now.


For localization, you can add the geo in the string to be able to see where you rank in different geographies if you want to track those. That’s something you’ll be able to do in Moz Analytics and probably many of the other keyword tracking tools out there, too.


Optionally—and this is expensive, and I hate to say this is Google being evil, but this is probably what Google wants you to do when they give you “(not provided)”—you can run AdWords campaigns targeting those keywords so that you can see new expansion opportunities. Areas where, “oh hey, we bid on this, it got impressions, it sent some traffic, it looks like it’s worthwhile, and we’re not ranking for it organically.” Again, you can see that through your rank-tracking data or through pages receiving visits from search, and then target those terms.


So, a lot of this data, and a lot of these opportunities are retrievable—they’re just a lot harder. I will say—this is somewhat self-promotional, but I think one of Moz’s missions and obligations as a company to the search marketing world is to try and help replace, repair, and make these processes easier. So, you can guess that over the next 6-12 months that’s going to be a big part of our roadmap: trying to help you folks—and all marketers—get to this data.


For now, these methodologies can and should be helpful to you, and I expect to see lots of great discussion about other ways to go about this in the comments.


Thanks, everyone—take care.




Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!



via Moz Blog http://feedproxy.google.com/~r/MozBlog/~3/3N5M5C4z_Zg/100-percent-keyword-not-provided-whiteboard-tuesday


http://seocompanyadvice.com/when-keyword-not-provided-is-100-percent-of-organic-referrals-what-should-marketers-do-whiteboard-tuesday/?utm_source=rss&utm_medium=rss&utm_campaign=when-keyword-not-provided-is-100-percent-of-organic-referrals-what-should-marketers-do-whiteboard-tuesday

Monday, 23 September 2013

Building Your Marketing Funnel with Google Analytics

Posted by dohertyjf


Do you have an idea of the path a user typically takes to convert on your website? Or, are you simply building traffic from one channel (probably organic) and wondering why it’s not converting better? As I’ve grown up as a marketer, I’ve begun to really appreciate the insights that data can provide us on how users interact with our sites, and more importantly, on how they convert and where the experience can be improved to increase our conversion rates, and thereby our top-line revenue from online channels.


I’ve recently been very interested in building a full marketing funnel based on Google Analytics data. While it’s one thing to be able to identify where conversion discrepancies exist, such as low-converting types of visitors, it’s quite another to build a full and informed funnel from your site’s data. In order to do this and have an accurate view of where your conversions are actually coming from, you need to first have the following in place:



  • Email URL tracking: Check out Annie Cushing’s thoughts here in slides 11-14. (Actually, look at the whole deck.)

  • Social network tracking (tagging parameters and using a shortener to see clickthroughs by link)

  • Display tagging

  • Referral links tagged (or at least be aware of HTTPS sites linking to you, like Medium)

  • Paid search campaigns tagged

  • Tagging on affiliates (if applicable)


You can build your campaigns here using Google’s tool.
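
If you have more than a handful of links to tag, the same UTM parameters that Google's URL builder appends can be generated with a few lines of Python; the source, medium, and campaign values below are just placeholders for whatever naming scheme you adopt.

```python
from urllib.parse import urlencode

def tag_url(url, source, medium, campaign, content=None):
    """Append standard UTM campaign parameters to a landing-page URL."""
    params = {"utm_source": source, "utm_medium": medium, "utm_campaign": campaign}
    if content:
        params["utm_content"] = content
    separator = "&" if "?" in url else "?"
    return url + separator + urlencode(params)

# Hypothetical examples
print(tag_url("http://www.example.com/fall-sale",
              source="newsletter", medium="email", campaign="fall-2013"))
print(tag_url("http://www.example.com/fall-sale",
              source="twitter", medium="social", campaign="fall-2013",
              content="promo-tweet"))
```

The important part is consistency: the pivots later in this post are only as clean as your medium and campaign naming.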


What’s a funnel?


Before we get too far into the meat of this post, I want to make sure we’re all talking about the same thing. I’m not referring to one of these. Rather, I’m referring to one of these:



The funnel is typically broken into three sections:



  • Top of funnel (TOFU)

  • Middle of funnel (MOFU)

  • Bottom of funnel (BOFU)


The goal of this post is to walk you through how to identify the channels that are performing best for you in each of these areas. Once you know those, you know where to invest depending on your company’s needs or priorities. Also, knowing the different areas to which you can contribute will help endear you to the people running those channels, which will help you avoid being siloed as “the SEO.” Instead, you will start to be seen as part of the marketing team, which is what you are.


Another note: I’m not teaching you how to integrate into other marketing channels in this post. Stephanie Chang did a great job of it back in July when she wrote An Introduction to Integrated Marketing and SEO: How It Works and Why It Matters. Have a read there after you’re finished here.


Understanding attribution


You may already know this, but Google Analytics offers multi-channel attribution tools within the “Conversions” section:



In the “Assisted Conversions” section, you will see a number of columns. The ones to pay attention to are:



  • Assisted Conversions

  • Last Click/Direct Conversions


It’s important to understand the difference between assisted conversions and last click/direct conversions. According to Google’s own Answer Bin, a channel gets credit for an assisted conversion for any touch it brings to the site where that interaction was not the one that led directly to a conversion. Google says:



This is the number (and monetary value) of sales and conversions the channel assisted. If a channel appears anywhere—except as the final interaction—on a conversion path, it is considered an assist for that conversion. The higher these numbers, the more important the assist role of the channel.



On the other side, a last click or direct conversion is a touch on the site that led directly to a sale. These are your closers, aka bottom-of-funnel channels. Google says:



This is the number (and monetary value) of sales and conversions the channel closed or completed. The final click or direct visit before a conversion gets Last Interaction credit for that conversion. The higher these numbers, the more important the channel’s role in driving completion of sales and conversions.



Make sense? Great. Let’s build a funnel.


Identifying channels based on funnel level


As I said above, we’re going to use Google Analytics to identify the channels in the different levels of your funnel. If you use a different Analytics platform, like Omniture or Piwik, write a guide using that and I’ll be happy to share it out.


Top of funnel


The top of your marketing funnel is where the first interactions with your brand take place. This is typically attributed to search or organic, but is that really the case for your website?


First, let’s identify the most common channels that people use to discover your site. To do this, go to Content > Site Content > Landing Pages. Set your secondary dimension to “Medium.” You’ll see something like this:



Now, export this data to Excel (I’ve provided a spreadsheet at the end that you can plug this data into) and pivot it to see which mediums are driving your best traffic. If you want to get super fancy, break it down by type of page as well.



Here’s how that pivot table is set up:



For the site shown in these screenshots, it is indeed PPC and organic search. But just knowing the channel isn’t enough, so let’s take it a step further to see where the different channels are driving traffic. You’ll either need to manually classify your pages (if you have relatively few, like in my example) or write an Excel script to do this automagically.
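
If you would rather script this step than build the pivot by hand, here is a minimal pandas version. The CSV name, column headers, and page-type rules are assumptions based on a standard Landing Pages export with Medium as the secondary dimension; swap in patterns that match your own URLs.

```python
import pandas as pd

# Assumed GA export: landing_page, medium, visits
df = pd.read_csv("landing_pages_by_medium.csv")

def page_type(path):
    # Hypothetical classification rules; replace with patterns that fit your site
    if path == "/":
        return "homepage"
    if path.startswith("/blog"):
        return "blog"
    if path.startswith("/products"):
        return "conversion page"
    return "other"

df["page_type"] = df["landing_page"].apply(page_type)

# Which mediums drive first-touch visits, and to which types of page?
pivot = df.pivot_table(index="medium", columns="page_type",
                       values="visits", aggfunc="sum", fill_value=0)
pivot["total"] = pivot.sum(axis=1)
print(pivot.sort_values("total", ascending=False))
```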


I now know that referral is the primary driver of traffic and that the majority goes to the homepage. One specific referral, which I tagged with a Medium of “Link,” sends the best traffic directly to conversion pages (which might not necessarily be the best place for people to land for their first interaction):



Middle of funnel


The middle of your funnel is the area where people are moving from a first brand interaction to an initial sale, or if they have already made a purchase, towards another sale. What we’re looking for in the data here are channels that are not necessarily our primary first- or last-touch drivers. Rather, these are the channels that bring 2nd-, 3rd-, and 4th-time visitors back to interact with your content again.


We can figure out the most popular and most effective middle-of-funnel channels a couple of different ways. The first, and by far the easiest, is by comparing different types of attribution to discover which channels get more credit based on first click, linear (where each channel gets equal credit), and last-click. To learn what each of the different attribution models really means, check out the Google support page.


By sorting the Model Comparison Tool in Analytics by Linear (high to low), you can find the channels that perform best when given equal credit independent of where they are in the funnel.



But this doesn’t give us great insight into which channels perform best in the middle. Rather, it’s telling us which channels account for the most revenue overall (which is still important to know), regardless of where they sit in the path. In the above example, for Distilled that’s direct, then email, organic search, and referral, in that order.


To find which channels your users most often come back through, we need to do some manipulation in Excel (my favorite tool) to clean out the first- and last-touch interactions in the Top Conversion Paths report.



What you want to do now is expand the number of rows in Analytics to account for as many of your paths as possible. For most sites the 5,000-row limit in Analytics will suffice.


Download all of your conversion paths into Excel. You’ll have one column with the complete paths, followed by these columns:



  • Conversions

  • Conversion Value


To wrangle the data into the format we need, I also added the following columns:




  • Steps in Conversion Path

  • First Touch

  • All Middle

  • Last Touch

  • $/Conversion


If you’re a visual person, this screenshot may help you see how the sheet is set up:


Note: the hardest part here is figuring out what your cutoff is for conversion amount. For Distilled, for example, I removed anything under $30, because we don’t do anything with the data below that. I also picked a minimum threshold for the number of conversions a channel brought.

In Distilled’s case, five seemed right because it gives enough data to get a decent idea of $/conversion but also eliminates the very long (20+) conversion paths that we’re not going to optimize for anyway. However, also keep in mind that the length of the path matters. For example, Distilled’s median number of steps before a conversion is eight. With fewer than eight steps, our average value per conversion is 30% higher than it is with eight or more steps in the funnel.

So, to clean up the data, I removed the following:




  • Paths with conversions < 5

  • Paths with conversion value < $30

  • Paths with (unavailable) in the path

  • Paths with more than 15 steps



After you clean up the data, it will pull into the “Common Middle” sheet within the Excel workbook I link to below. Then, you can see pretty quickly which channels are driving the most middle conversions, and which middle paths give the best $/conversion:



Here’s the setup for that pivot table:




Once again, this will automagically work for you in the Excel sheet.
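
If Excel isn't your tool of choice, the same clean-up and middle-of-funnel breakdown can be sketched in pandas. The export format, column names, and thresholds below mirror the rules above but are assumptions; adjust them to match your own Multi-Channel Funnels export and cutoffs.

```python
import pandas as pd

# Assumed "Top Conversion Paths" export:
#   path (e.g. "Organic Search > Direct > Email"), conversions, conversion_value
paths = pd.read_csv("top_conversion_paths.csv")

steps = paths["path"].str.split(" > ")
paths["steps"] = steps.str.len()
paths["first_touch"] = steps.str[0]
paths["last_touch"] = steps.str[-1]
paths["middle"] = steps.str[1:-1].str.join(" > ")
paths["value_per_conversion"] = paths["conversion_value"] / paths["conversions"]

# Clean-up rules from above (Distilled's thresholds; pick your own)
clean = paths[
    (paths["conversions"] >= 5) &
    (paths["conversion_value"] >= 30) &
    ~paths["path"].str.contains("(unavailable)", regex=False) &
    (paths["steps"] <= 15)
]

# Which channels appear most often in the middle of a converting path?
middle_channels = (clean["middle"].str.split(" > ").explode().dropna()
                   .loc[lambda s: s != ""].value_counts())
print(middle_channels)

# Which cleaned-up paths deliver the best $/conversion?
print(clean.sort_values("value_per_conversion", ascending=False)
           [["path", "conversions", "value_per_conversion"]].head(10))
```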


Bottom of funnel


The bottom of the funnel is the last touch that occurs before someone buys. These channels are incredibly important to know about because you can then build your strategy around how to get people into those channels and convert them later.



This one is easy to find. It doesn’t take tricky Excel functions. It doesn’t involve crazy data analysis.

Assuming you have Analytics set up correctly, you can find this data in Conversion > Attribution > Model Comparison Tool. When you set the Model to Last Interaction, you’ll see something like this:




For Distilled, you can see that our highest last-touch channels are direct, then email, then organic search.


Applying the data


Remember this funnel from the beginning?






Based on the data, I now see that for Distilled, the sections of our funnel look like this:


  • Top

    • Direct

    • Organic Search

    • Social



  • Middle

    • Organic to Organic

    • Direct to Email

    • Direct to Organic



  • Bottom

    • Email

    • Organic

    • Direct




Now we can build out a marketing plan depending on our needs.


Excel sheet


I promised you an Excel sheet that I have put together for you. Note that it does not automatically clean out your very long conversion paths, but use the parameters given above to narrow down your data to make it actionable if that makes sense for your business.


That said, you can download the spreadsheet here.


Bonus Excel sheet to find profitability by # of touches


I mentioned above finding the number of touches that performs best for you. Here is a quick-and-dirty spreadsheet that allows you to do just that. Basically, the sheet looks at the number of touches and averages the conversion amount for each bucket. You can see the results on the far right.


To use this sheet for yourself, download your Multi Channel Funnel groupings in Analytics (you need to have ecommerce enabled) and enter your data into the sheet.



Download this bonus spreadsheet here.
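
The same bucketing is also easy to reproduce in code if you prefer. A short sketch, again assuming a Top Conversion Paths export with path, conversions, and conversion_value columns (the file and column names are placeholders):

```python
import pandas as pd

# Assumed MCF export (ecommerce enabled): path, conversions, conversion_value
paths = pd.read_csv("top_conversion_paths.csv")
paths["touches"] = paths["path"].str.split(" > ").str.len()

# Total value and conversions per touch-count bucket, then the average per conversion
agg = paths.groupby("touches")[["conversion_value", "conversions"]].sum()
agg["avg_value_per_conversion"] = agg["conversion_value"] / agg["conversions"]
print(agg["avg_value_per_conversion"])
```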


Example and conclusion


If we are trying to convert more people to DistilledU, I know from that goal that organic converts best for us on the last touch. This means we need to invest in content that drives people towards a conversion through organic search: either blog content with a call to action, or larger content teaching people SEO. We know that email converts fourth-best for DistilledU, but it works well higher in the funnel to convert people eventually. Therefore, we need to get more people onto our DistilledU email list.


Direct traffic converts well, of course; people are coming to the site because they already know about it. Therefore, we need to stay top-of-mind and convert them into email and RSS subscribers so that they become familiar with our content and eventually buy through email or search.


We’ve built our funnel. You should go and build yours. I’d love to hear what insights you have.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!



via Moz Blog http://feedproxy.google.com/~r/MozBlog/~3/uVX-RJ6nG3o/building-your-marketing-funnel-with-google-analytics


http://seocompanyadvice.com/building-your-marketing-funnel-with-google-analytics/?utm_source=rss&utm_medium=rss&utm_campaign=building-your-marketing-funnel-with-google-analytics