Website Traffic Data Analysis with Google

Website traffic is considered one of the main (if not the main) indicators of web success. But there is more to it than just attracting traffic: website owners and SEO professionals can benefit from analyzing traffic data and understanding website statistics and trends. Whether you’re still growing your site or already focusing on acquiring your target market, traffic data analysis can aid your SEO and marketing strategies. What’s even better is that this can easily be done with free tools such as Google Analytics and Google Search Console.

What are Google Analytics and Google Search Console?

Google Analytics (GA) and Google Search Console (GSC) are two of Google’s own free tools that let webmasters track traffic attribution, search terms, and behavior data. Although there are plenty of more “feature-packed” tools that can help marketers track these things along with other SEO factors, these two already provide everything you need to understand your website’s progress, both at the big-picture level and in fine detail. My team at SEO Hacker also trusts these two free tools from Google to provide the information needed to analyze a website’s traffic data and overall SEO performance.

 

Google Analytics

Google Analytics can be considered your main tool for piecing together the entire story behind your website so you have a clear understanding of everything happening on it. It focuses on what traffic your website receives and how users interact with it.

 

Google Search Console

On the other hand, Google Search Console provides information about how your website is performing in Google’s search results. Here, you can find information about your website with respect to the following:

 

  • Search Performance
  • Google Indexing
  • Google Page Experience
  • Enhancements and security issues

 

To put it simply, it presents data that allows you to see your website from Google’s perspective.

 

Google Analytics vs. Google Search Console

When it comes to analyzing your website’s data, these two free tools from Google provide different sets of information for SEO professionals and marketers to use. Their overlap lies in how their data sets connect: Google Analytics provides user traffic data that gives you an idea of your website’s authority in Google Search, while Google Search Console provides search data that translates into traffic coming to your website.

 

It is very important to acknowledge that these tools track traffic differently. There will always be some discrepancies in the data, but you’ll notice that the numbers are still very close to each other. What’s important to note is that these tools give you what you need to know about a website’s growth and how it’s keeping up with certain trends.

 


 

How To Approach Website Traffic Data Analysis

Whether you’re doing website traffic data analysis for a newly optimized site or a well-seasoned website with multiple SEO strategies already in place, growth should always be your priority. Information is king. And with the information that Google Analytics and Google Search Console provide, it all comes down to how well you understand that information and what you do with it.

 

How To Do Initial Website Audits with Google Search Console

Let’s assume that you’ve already laid out the SEO foundation of your website prior to development. Before your website traffic data analysis, the first tool you should be closely monitoring is Google Search Console, starting with the Index Coverage report.

 

Review Coverage Section

[Screenshot: Google Search Console Coverage section]

The Index Coverage report shows how many of your web pages are being crawled and indexed by Google. The term “indexed” describes web pages that are stored in Google Search’s database, which allows them to be retrieved when a user types in a search query. There are also instances where Google crawled a web page but decided to exclude it from its database. The report marks these pages as “Excluded”.

 

[Screenshot: Google Search Console Coverage section, Excluded pages list]

There will always be instances where a web page that’s essential to your strategy is marked as Excluded. These are pages that Google decided to leave out of the set of pages that can be surfaced to searchers. As an SEO professional, this points you toward your next action. What should you optimize more? Should you add content to the page? Build better topical authority? Whatever the reason may be, this is a good signal for what to change and improve in your website’s SEO.

 

[Screenshot: Google Search Console Coverage section, Error pages list]

The Index Coverage report also lists pages with SEO-related errors. Web pages marked with Error are more damaging to SEO than pages marked as Excluded. Errors such as broken pages (404), pages set to noindex, and any other issues affecting Google’s ability to index your pages are listed here.
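
If you want to spot-check a handful of flagged URLs outside of the report, you can request them directly and look at the status code and any robots header they return. A minimal sketch in Python, using only the standard library (the URLs listed are placeholders for pages from your own report):

```python
# Spot-check a few URLs flagged in the Index Coverage report.
# The URLs below are placeholders; replace them with pages from your own report.
import urllib.request
import urllib.error

urls_to_check = [
    "https://www.example.com/old-page/",
    "https://www.example.com/blog/thin-post/",
]

for url in urls_to_check:
    try:
        # urllib follows redirects by default, similar to how Googlebot does.
        with urllib.request.urlopen(url, timeout=10) as response:
            status = response.status
            robots_header = response.headers.get("X-Robots-Tag", "not set")
    except urllib.error.HTTPError as e:
        # 4xx/5xx responses land here; the error object still carries headers.
        status = e.code
        robots_header = e.headers.get("X-Robots-Tag", "not set")
    print(f"{url} -> status {status}, X-Robots-Tag: {robots_header}")
```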

 

I highly recommend taking action immediately when you receive this information and then looking for ways to prevent it from happening again. Learn more about the common GSC errors and how to fix them in our guide.

 

Additionally, although the data found here isn’t directly “traffic,” it still plays an important role in your website traffic data analysis. Web pages that Google Search Console identifies as having issues will not be indexed, non-indexed pages can’t be found by searchers, and that ultimately means less organic traffic coming to your website.

 

Review Performance Section

The next section you should focus on is the Performance section of Google Search Console. It gives you an overview of how often users interact with your website on Google’s search engine results pages (SERPs).

 

[Screenshot: Google Search Console Performance section]

Impressions represent how many times users saw your website within the search results pages, while Clicks show how many click-throughs it received from those users. Analyzing this data gives you a basic idea of how well your website is performing in Google’s search results. When compared with Google Analytics website traffic data, the figures are roughly proportional: a high number of impressions and clicks usually indicates an increase in website traffic.
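
If you’d rather pull the same clicks and impressions data programmatically, the Search Console API exposes the Performance report. A rough sketch using Python and the google-api-python-client library, assuming you already have OAuth credentials for a verified property (the site URL and dates are placeholders):

```python
# Pull clicks and impressions per query from the Search Console API.
# Assumes google-api-python-client is installed and credentials already exist
# for a verified property; site URL and dates below are placeholders.
from googleapiclient.discovery import build

def get_search_performance(credentials, site_url="https://www.example.com/"):
    service = build("searchconsole", "v1", credentials=credentials)
    request_body = {
        "startDate": "2024-01-01",
        "endDate": "2024-01-31",
        "dimensions": ["query"],   # could also be "page" or "date"
        "rowLimit": 100,
    }
    response = service.searchanalytics().query(
        siteUrl=site_url, body=request_body
    ).execute()
    for row in response.get("rows", []):
        query = row["keys"][0]
        print(f"{query}: {row['clicks']} clicks, {row['impressions']} impressions")
    return response
```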

 

How To Do Search Traffic Analysis with Google Analytics

[Screenshot: Google Analytics main dashboard]

Now that you’ve taken the necessary steps to optimize your website based on Google Search Console, it’s time to go further into website traffic data analysis by understanding how it affects performance and growth. This is done by analyzing the user traffic data presented in Google Analytics. Most of the search traffic data we need can be found in the Reports section of the tool. From there, navigate to Traffic Acquisition.

 

[Screenshot: Google Analytics Traffic Acquisition dashboard]

The first thing you’ll notice is that traffic data is categorized into different types. Since our main focus for website traffic data analysis is SEO, we’ll only tackle Organic Search traffic in this article.

 

The main indicator of website growth in Google Analytics is how much organic search traffic is coming into the website. This tells you the specific number of people arriving at your website from Google search results, very similar to how Google Search Console gathers its data. The difference is that here you can also check which of your web pages people are landing on. This information can easily be obtained by doing the following:

 

  1. Add filters to the tabulated data by clicking the “+” icon, then selecting Landing Page to display which pages users land on from Google Search results.
    [Screenshot: Google Analytics web traffic filter]
    [Screenshot: Google Analytics web traffic filtered by Landing Page]
  2. Then type “organic search” into the search bar at the top left of the table so that only organic search traffic is displayed.
    [Screenshot: Google Analytics filter for organic search traffic]

After completing these steps, the data presented in the table should look similar to this:


[Screenshot: Google Analytics filtered list of organic search traffic by landing page]

The table now consists of the web pages that are garnering organic search traffic. Since the data is sorted from highest organic traffic to lowest, the pages at the top can be considered the most visible in Google search results. Just as importantly, this organized view also lets you identify which of your web pages are the least visible in Google search results.
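
If you prefer to reproduce this view programmatically, the GA4 Data API can return organic search sessions by landing page. A sketch assuming the google-analytics-data Python client, a GA4 property ID (the ID below is a placeholder), and the standard “landingPage” and “sessionDefaultChannelGroup” dimension names:

```python
# Organic-search sessions by landing page via the GA4 Data API.
# Assumes the google-analytics-data package is installed and that
# Application Default Credentials have access to the property.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest, Filter, FilterExpression,
)

def organic_landing_pages(property_id="123456789"):  # placeholder property ID
    client = BetaAnalyticsDataClient()
    request = RunReportRequest(
        property=f"properties/{property_id}",
        date_ranges=[DateRange(start_date="30daysAgo", end_date="today")],
        dimensions=[Dimension(name="landingPage")],
        metrics=[Metric(name="sessions")],
        # Keep only the Organic Search channel, mirroring the dashboard filter.
        dimension_filter=FilterExpression(
            filter=Filter(
                field_name="sessionDefaultChannelGroup",
                string_filter=Filter.StringFilter(value="Organic Search"),
            )
        ),
    )
    response = client.run_report(request)
    for row in response.rows:
        print(row.dimension_values[0].value, row.metric_values[0].value)
```

The output mirrors the filtered table in the GA interface, so it can be dropped into a spreadsheet or scheduled report without repeating the manual clicks.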

 

Only by identifying these pages can you employ a strategy focused on improving the SEO of your weakest pages. Consider doing a content gap analysis for these pages to see what you can improve; our Content Gap Analysis guide covers how.

 

As SEO professionals, we should always recognize that SEO is holistic by nature. Having the majority of your pages visible online translates to an overall improvement in your website’s SEO authority. The information that Google Analytics presents gives you everything you need to optimize and re-strategize for website growth. So how can you further improve your analysis with both Google Analytics and Google Search Console?

 

Analyzing Website Traffic with Google Analytics and Google Search Console

One of the most volatile types of data in digital marketing is organic search traffic. A lot of factors can affect the search traffic coming into your website; some of the main sources of volatility include search algorithm updates, seasonal trends, an unexpected and unchecked bug on your website, and server instability. All of these can be quickly diagnosed by checking GA and GSC. To illustrate the importance of website traffic data analysis, consider the following example.

 

[Screenshot: Data comparison for website traffic data analysis]

 

One of our informational websites saw a decline in organic search traffic compared to the previous period, losing an estimated 800 users in one month. To any website owner or SEO professional, this is an alarming loss of website traffic. Since the issue mainly involves organic search traffic, the best tool to turn to next is Google Search Console.

 

[Screenshot: Google Search Console impressions and clicks comparison]

Based on the data presented, the drop in website traffic is to be expected, since there is a clear loss in impressions for the website. And because impressions fell, the number of clicks the website receives in Google search results fell with them. What does this mean for our website traffic data analysis?

 

Since fewer impressions are coming to the website, your web pages are appearing less often in Google’s search engine results pages. Luckily, Google Search Console also provides data on the search queries that users typed to discover your website.

[Screenshot: Google Search Console search query list]

Based on the keywords shown, it’s noticeable that particular keywords received a major drop in impressions. This means the website is either no longer ranking for those keywords or the queries were simply searched less during that period. After manually searching each of these keywords on Google, I found that the website still ranks on the first page for all of them, and a website that ranks on the first page of Google will reliably register impressions.
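
A practical way to run this kind of comparison at scale is to export the Queries report for both periods as CSV and diff the impression counts. A sketch using pandas; the file names are placeholders, and the column names assume the standard English GSC query export:

```python
# Compare impressions per query across two Search Console CSV exports.
# File names are placeholders; column names assume the standard GSC export.
import pandas as pd

current = pd.read_csv("queries_this_period.csv")
previous = pd.read_csv("queries_previous_period.csv")

merged = current.merge(
    previous, on="Top queries", how="outer",
    suffixes=("_current", "_previous"),
).fillna(0)

merged["impression_change"] = (
    merged["Impressions_current"] - merged["Impressions_previous"]
)

# Queries that lost the most impressions period over period.
biggest_drops = merged.sort_values("impression_change").head(20)
print(biggest_drops[["Top queries", "Impressions_previous",
                     "Impressions_current", "impression_change"]])
```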

 

Based on all the data gathered, we can only conclude that the website is going through a season in which its keywords are being searched less by users. Rankings are holding up, but less traffic is coming into the website. These are the kinds of conclusions you can only reach when you use tools like GA and GSC to provide the necessary information for website traffic data analysis.

 

Key Takeaway

It is very easy to be overwhelmed by data and information. But at the same time, as SEO professionals, we should be grateful that we can acquire it so easily through Google Analytics and Google Search Console. There would be no point in strategizing and executing optimizations if we had no information to measure our efforts against.

 

It’s a matter of acknowledging the issue at hand, identifying the relevant data to collect, and properly translating that data into a course of action for your website. That is exactly why knowing how to do even basic website traffic data analysis matters.

 

Interested in an in-depth guide for GA and GSC? Check out our Google Analytics Tutorial for Beginners and Google Search Console Guide!


8% Of Automattic Employees Choose To Resign

WordPress co-founder and Automattic CEO Matt Mullenweg announced today that he had offered Automattic employees the chance to resign with severance pay, and 8.4 percent of them accepted. Mullenweg offered $30,000 or six months of salary, whichever is higher, with a total of 159 people taking the offer.

Reactions Of Automattic Employees

Given the recent controversies created by Mullenweg, one might be tempted to view the walkout as a vote of no confidence in him. But that would be a mistake: some of the employees announcing their resignations either praised Mullenweg or simply announced their departure without comment, while many others tweeted how happy they are to stay at Automattic.

One former employee tweeted that he was sad about recent developments but also praised Mullenweg and Automattic as an employer.

He shared:

“Today was my last day at Automattic. I spent the last 2 years building large scale ML and generative AI infra and products, and a lot of time on robotics at night and on weekends.

I’m going to spend the next month taking a break, getting married, and visiting family in Australia.

I have some really fun ideas of things to build that I’ve been storing up for a while. Now I get to build them. Get in touch if you’d like to build AI products together.”

Another former employee, Naoko Takano, a 14-year employee, an organizer of WordCamp conferences in Asia, and a full-time WordPress contributor and Open Source Project Manager at Automattic, announced on X (formerly Twitter) that today was her last day at Automattic, with no additional comment.

She tweeted:

“Today was my last day at Automattic.

I’m actively exploring new career opportunities. If you know of any positions that align with my skills and experience!”

Naoko’s role at WordPress was working with the global WordPress community to improve contributor experiences through the Five for the Future and Mentorship programs. Five for the Future is an important WordPress program that encourages organizations to donate 5% of their resources back into WordPress. It is also one of the issues Mullenweg raised against WP Engine, asserting that the company didn’t donate enough back to the community.

Mullenweg himself had bittersweet feelings about seeing those employees go, writing in a blog post:

“It was an emotional roller coaster of a week. The day you hire someone you aren’t expecting them to resign or be fired, you’re hoping for a long and mutually beneficial relationship. Every resignation stings a bit.

However now, I feel much lighter. I’m grateful and thankful for all the people who took the offer, and even more excited to work with those who turned down $126M to stay. As the kids say, LFG!”

Read the entire announcement on Mullenweg’s blog:

Automattic Alignment

Featured Image by Shutterstock/sdx15


YouTube Extends Shorts To 3 Minutes, Adds New Features

YouTube expands Shorts to 3 minutes, adds templates, AI tools, and the option to show fewer Shorts on the homepage.

  • YouTube Shorts will allow 3-minute videos.
  • New features include templates, enhanced remixing, and AI-generated video backgrounds.
  • YouTube is adding a Shorts trends page and comment previews.


How To Stop Filter Results From Eating Crawl Budget

Today’s Ask An SEO question comes from Michal in Bratislava, who asks:

“I have a client who has a website with filters based on a map locations. When the visitor makes a move on the map, a new URL with filters is created. They are not in the sitemap. However, there are over 700,000 URLs in the Search Console (not indexed) and eating crawl budget.

What would be the best way to get rid of these URLs? My idea is keep the base location ‘index, follow’ and newly created URLs of surrounded area with filters switch to ‘noindex, no follow’. Also mark surrounded areas with canonicals to the base location + disavow the unwanted links.”

Great question, Michal, and good news! The answer is an easy one to implement.

First, let’s look at what you’re trying to do and apply it to other situations, like ecommerce and publishers, so more people can benefit. Then we’ll go through your proposed strategies and end with the solution.

What Crawl Budget Is And How Parameters Are Created That Waste It

If you’re not sure what Michal is referring to with crawl budget, this is a term some SEO pros use to explain that Google and other search engines will only crawl so many pages on your website before they stop.

If your crawl budget is used on low-value, thin, or non-indexable pages, your good pages and new pages may not be found in a crawl.

If they’re not found, they may not get indexed or refreshed. If they’re not indexed, they cannot bring you SEO traffic.

This is why optimizing a crawl budget for efficiency is important.

Michal shared an example of how URLs that are “thin” from an SEO point of view get created as customers use filters.

The experience for the user is value-adding, but from an SEO standpoint, a location-based page would be better. This applies to ecommerce and publishers, too.

Ecommerce stores will have searches for colors like red or green and products like t-shirts and potato chips.

These create URLs with parameters just like a filter search for locations. They could also be created by using filters for size, gender, color, price, variation, compatibility, etc. in the shopping process.

The filtered results help the end user but compete directly with the collection page, and the collection would be the “non-thin” version.

Publishers have the same issue. Someone might be on SEJ searching for “SEO” or “PPC” in the search box and get a filtered result. The filtered result will have articles, but the category of the publication is likely the best result for a search engine.

These filtered results can get indexed when they are shared on social media or when someone adds them as a comment on a blog or forum, creating a crawlable backlink. It might also happen when an employee in customer service responds to a question with a link on the company blog, or in any number of other ways.

The goal now is to make sure search engines don’t spend time crawling the “thin” versions so you can get the most from your crawl budget.

The Difference Between Indexing And Crawling

There’s one more thing to learn before we go into the proposed ideas and solutions – the difference between indexing and crawling.

  • Crawling is the discovery of new pages within a website.
  • Indexing is adding the pages that are worthy of showing to a person using the search engine to the database of pages.

Pages can get crawled but not indexed. Indexed pages have likely been crawled and will likely get crawled again to look for updates and server responses.

But not all indexed pages will bring in traffic or hit the first page because they may not be the best possible answer for queries being searched.

Now, let’s go into making efficient use of crawl budgets for these types of solutions.

Using Meta Robots Or X-Robots-Tag

The first solution Michal pointed out was an “index,follow” directive. This tells a search engine to index the page and follow the links on it. This is a good idea, but only if the filtered result is the ideal experience.

From what I can see, this would not be the case, so I would recommend making it “noindex,follow.”

Noindex would say, “This is not an official page, but hey, keep crawling my site, you’ll find good pages in here.”

And if you have your main menu and navigational internal links done correctly, the spider will hopefully keep crawling them.

Canonicals To Solve Wasted Crawl Budget

Canonical links are used to help search engines know what the official page to index is.

If a product exists in three categories on three separate URLs, only one should be “the official” version, so the two duplicates should have a canonical pointing to the official version. The official one should have a canonical link that points to itself. This applies to the filtered locations.

If the location search would result in multiple city or neighborhood pages, the result would likely be a duplicate of the official one you have in your sitemap.

Have the filtered results point a canonical back to the main page of filtering instead of being self-referencing if the content on the page stays the same as the original category.

If the content pulls in your localized page with the same locations, point the canonical to that page instead.

In most cases, the filtered version inherits the page you searched or filtered from, so that is where the canonical should point to.

If you both noindex a page and give it a self-referencing canonical, that’s overkill, and it becomes a conflicting signal.

The same applies to when someone searches for a product by name on your website. The search result may compete with the actual product or service page.

With this solution, you’re telling the spider not to index this page because it isn’t worth indexing, but it is also the official version. It doesn’t make sense to do this.

Instead, use a canonical link, as I mentioned above, or noindex the result and point the canonical to the official version.
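
If you want to audit a batch of filtered URLs for this kind of conflicting signal, one option is to parse each page’s meta robots and canonical tags and flag any page that is both noindexed and self-canonical. A minimal sketch using only the Python standard library (the URL at the bottom is a hypothetical example):

```python
# Flag pages that send conflicting signals: noindex plus a self-referencing canonical.
# Uses only the standard library; the URL below is a hypothetical filtered page.
import urllib.request
from html.parser import HTMLParser

class RobotsCanonicalParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.robots = None
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.robots = attrs.get("content") or ""
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonical = attrs.get("href") or ""

def check_page(url):
    with urllib.request.urlopen(url, timeout=10) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = RobotsCanonicalParser()
    parser.feed(html)
    noindex = bool(parser.robots) and "noindex" in parser.robots.lower()
    self_canonical = bool(parser.canonical) and parser.canonical.rstrip("/") == url.rstrip("/")
    if noindex and self_canonical:
        print(f"Conflicting signals on {url}: noindex + self-referencing canonical")
    else:
        print(f"{url}: robots={parser.robots!r}, canonical={parser.canonical!r}")

check_page("https://www.example.com/category?filter=location")
```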

Disavow To Increase Crawl Efficiency

Disavowing doesn’t have anything to do with crawl efficiency unless the search engine spiders are finding your “thin” pages through spammy backlinks.

The disavow tool from Google is a way to say, “Hey, these backlinks are spammy, and we don’t want them to hurt us. Please don’t count them towards our site’s authority.”

In most cases, it doesn’t matter, as Google is good at detecting spammy links and ignoring them.

You do not want to add your own site and your own URLs to the disavow tool. You’re telling Google your own site is spammy and not worth anything.

Plus, submitting backlinks to disavow won’t prevent a spider from seeing what you want and do not want to be crawled, as it is only for saying a link from another site is spammy.

Disavowing won’t help with crawl efficiency or saving crawl budget.

How To Make Crawl Budgets More Efficient

The answer is robots.txt. This is how you tell specific search engines and spiders what to crawl.

You can include the folders you want them to crawl by marking them as “allow,” and you can “disallow” filtered results by disallowing the “?” or “&” symbol, or whichever one you use.

If some of those parameters should still be crawled, add an allow rule for the relevant pattern, such as “?filter=location”, or for that specific parameter.
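
Note that Python’s built-in robots.txt parser does not understand Google’s wildcard extensions, so if you want to sanity-check a pattern like “/*?*” before deploying it, a small matcher of your own is enough. A rough sketch of how Google-style wildcards (* and $) map onto URL paths; this is a simplification, not a full implementation of Google’s precedence rules, and the rules and paths shown are hypothetical:

```python
# Rough check of how Google-style wildcard Disallow patterns match URL paths.
# A simplification for sanity-checking rules, not a full robots.txt parser.
import re

def rule_matches(pattern: str, path: str) -> bool:
    # "*" matches any sequence of characters; a trailing "$" anchors the end.
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    return re.match(regex, path) is not None

disallow_rules = ["/*?*", "/search/"]
allow_rules = ["/*?filter=location"]   # hypothetical parameter you still want crawled

for path in ["/stores/?filter=location", "/stores/?color=red", "/stores/bratislava/"]:
    disallowed = any(rule_matches(r, path) for r in disallow_rules)
    allowed = any(rule_matches(r, path) for r in allow_rules)
    # Real crawlers resolve Allow/Disallow conflicts by rule specificity;
    # here we simply let an Allow rule win, for illustration only.
    verdict = "crawlable" if (allowed or not disallowed) else "blocked"
    print(f"{path}: {verdict}")
```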

Robots.txt is how you define crawl paths and work on crawl efficiency. Once you’ve optimized that, look at your internal links: links from one page on your site to another.

These help spiders find your most important pages while learning what each is about.

Internal links include:

  • Breadcrumbs.
  • Menu navigation.
  • Links within content to other pages.
  • Sub-category menus.
  • Footer links.

You can also use a sitemap if you have a large site and the spiders are not finding the pages you want them to prioritize.

I hope this helps answer your question. It is one I get a lot – you’re not the only one stuck in that situation.


Featured Image: Paulo Bobita/Search Engine Journal
