Bulk Loading Performance Tests With PageSpeed Insights API & Python

Google offers the PageSpeed Insights API to help SEO pros and developers by mixing real-world data with simulation data, providing load performance timing data related to web pages.

The difference between the Google PageSpeed Insights (PSI) and Lighthouse is that PSI involves both real-world and lab data, while Lighthouse performs a page loading simulation by modifying the connection and user-agent of the device.

Another point of difference is that PSI doesn’t supply any information related to web accessibility, SEO, or progressive web apps (PWAs), while Lighthouse provides all of the above.

Thus, when we use PageSpeed Insights API for the bulk URL loading performance test, we won’t have any data for accessibility.

However, PSI provides more information related to the page speed performance, such as “DOM Size,” “Deepest DOM Child Element,” “Total Task Count,” and “DOM Content Loaded” timing.

One more advantage of the PageSpeed Insights API is that it gives the “observed metrics” and the “actual metrics” different names, so it is clear which is which.

In this guide, you will learn:

  • How to create a production-level Python Script.
  • How to use APIs with Python.
  • How to construct data frames from API responses.
  • How to analyze the API responses.
  • How to parse URLs and process URL requests’ responses.
  • How to store the API responses with proper structure.

An example output of the Page Speed Insights API call with Python is below.

Screenshot from author, June 2022

Libraries For Using PageSpeed Insights API With Python

The necessary libraries to use PSI API with Python are below.

  • Advertools retrieves the test URLs from a website’s sitemap.
  • Pandas constructs the data frame and flattens the JSON output of the API.
  • Requests makes the requests to the specific API endpoint.
  • JSON parses the API response text into a Python dictionary.
  • Datetime adds the current date to the output file’s name.
  • Urllib parses the test subject website’s URL.
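
Note that Advertools, Pandas, and Requests are third-party packages that need to be installed (for example, with “pip install advertools pandas requests”), while JSON, Datetime, and URLlib ship with Python’s standard library.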

How To Use PSI API With Python?

To use the PSI API with Python, follow the steps below.

  • Get a PageSpeed Insights API key.
  • Import the necessary libraries.
  • Parse the URL for the test subject website.
  • Take the Date of Moment for file name.
  • Take URLs into a list from a sitemap.
  • Choose the metrics that you want from PSI API.
  • Create a For Loop for taking the API Response for all URLs.
  • Construct the data frame with chosen PSI API metrics.
  • Output the results in the form of XLSX.

1. Get PageSpeed Insights API Key

Use the PageSpeed Insights API Documentation to get the API Key.

Click the “Get a Key” button below.

Image from developers.google.com, June 2022

Choose a project that you have created in Google Developer Console.

Image from developers.google.com, June 2022

Enable the PageSpeed Insights API on that specific project.

Image from developers.google.com, June 2022

You will need to use the specific API Key in your API Requests.

2. Import The Necessary Libraries

Use the lines below to import the fundamental libraries.

    import advertools as adv
    import pandas as pd
    import requests
    import json
    from datetime import datetime
    from urllib.parse import urlparse
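
The later code blocks use an “api_key” and a “sitemap_url” variable that the article never defines explicitly; a minimal sketch with placeholder values (both hypothetical) is below.

    # Hypothetical placeholders: replace with your own API key and sitemap URL.
    api_key = "YOUR_PAGESPEED_INSIGHTS_API_KEY"
    sitemap_url = "https://www.example.com/sitemap.xml"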

3. Parse The URL For The Test Subject Website

To parse the URL of the subject website, use the code structure below.

    domain = urlparse(sitemap_url)
    domain = domain.netloc.split(".")[1]

The “domain” variable starts as the parsed version of the sitemap URL and is then reduced to the domain name itself.

The “netloc” attribute represents the specific URL’s domain section (the hostname). When we split it on “.”, index “1” takes the “middle section,” which represents the domain name.

Here, “0” is for “www,” “1” is for the “domain name,” and “2” is for the “domain extension,” if we split the hostname with “.”
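
As a quick illustration with a hypothetical sitemap URL:

    # Hypothetical URL, for illustration only.
    parsed = urlparse("https://www.example.com/sitemap.xml")
    print(parsed.netloc)                # www.example.com
    print(parsed.netloc.split("."))     # ['www', 'example', 'com']
    print(parsed.netloc.split(".")[1])  # example (the final value of the "domain" variable)

Note that index “1” assumes a “www.” prefix; for a bare “example.com” hostname, the domain name would sit at index “0” instead.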

4. Take The Date Of Moment For File Name

To take the date of the specific function call moment, use the “datetime.now” method.

Datetime.now provides the current time of the specific moment. Format it with “strftime” using the “%Y”, “%m”, and “%d” values: “%Y” is for the year, while “%m” and “%d” are numeric values for the specific month and day.

    date = datetime.now().strftime("%Y_%m_%d")
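
For example, a script run on June 1, 2022 would set “date” to “2022_06_01”, which is later used in the output file’s name.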

5. Take URLs Into A List From A Sitemap

To take the URLs into a list form from a sitemap file, use the code block below.

    sitemap = adv.sitemap_to_df(sitemap_url)
    sitemap_urls = sitemap["loc"].to_list()

You can learn more about sitemaps in the Python Sitemap Health Audit.

6. Choose The Metrics That You Want From PSI API

To choose which JSON properties to take from the PSI API response, you should examine the JSON output itself.

This step is closely related to reading, parsing, and flattening JSON objects.

It is even related to Semantic SEO, thanks to the concepts of the “directed graph” and “JSON-LD” structured data.

In this article, we won’t focus on examining the specific PSI API response’s JSON hierarchies.

You can see the metrics that I have chosen to gather from the PSI API below. They are richer than the basic default output of the PSI API, which only gives the Core Web Vitals metrics, along with Speed Index, Interaction to Next Paint, Time to First Byte, and First Contentful Paint.

Of course, the API also gives “suggestions,” such as “Avoid Chaining Critical Requests,” but there is no need to put a whole sentence into a data frame.

In the future, these suggestions, or even every individual chain event with its KB and ms values, could be taken into a single column named “psi_suggestions.”

For a start, you can check the metrics that I have chosen; many of them will probably be new to you.

The first section of the PSI API metrics is below.

    fid = []
    lcp = []
    cls_ = []
    url = []
    fcp = []
    performance_score = []
    total_tasks = []
    total_tasks_time = []
    long_tasks = []
    dom_size = []
    maximum_dom_depth = []
    maximum_child_element = []
    observed_fcp  = []
    observed_fid = []
    observed_lcp = []
    observed_cls = []
    observed_fp = []
    observed_fmp = []
    observed_dom_content_loaded = []
    observed_speed_index = []
    observed_total_blocking_time = []
    observed_first_visual_change = []
    observed_last_visual_change = []
    observed_tti = []
    observed_max_potential_fid = []

This section includes all the observed and simulated fundamental page speed metrics, along with some non-fundamental ones, like “DOM Content Loaded,” or “First Meaningful Paint.”

The second section of the PSI metrics focuses on possible byte and time savings from the amount of unused code.

    render_blocking_resources_ms_save = []
    unused_javascript_ms_save = []
    unused_javascript_byte_save = []
    unused_css_rules_ms_save = []
    unused_css_rules_bytes_save = []

The third section of the PSI metrics focuses on the server response time and on the possible savings from using responsive images (or the cost of not using them).

    possible_server_response_time_saving = []
    possible_responsive_image_ms_save = []

Note: Overall Performance Score comes from “performance_score.”
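
In the raw API response, this score is a value between 0 and 1; the loop below multiplies it by 100 to report the familiar 0-100 score.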

7. Create A For Loop For Taking The API Response For All URLs

The for loop takes all of the URLs from the sitemap file and calls the PSI API for each of them, one by one. The for loop for PSI API automation has several sections.

The first section of the PSI API for loop starts with preventing duplicate requests caused by trailing slashes.

In sitemaps, the same page can appear both with and without a trailing slash; this section requests one consistent version so the two variants don’t override each other’s information.

for i in sitemap_urls[:9]:
         # Prevent the duplicate "/" trailing slash URL requests to override the information.
         if i.endswith("/"):
               r = requests.get(f"https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url={i}&strategy=mobile&locale=en&key={api_key}")
         else:
               r = requests.get(f"https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url={i}/&strategy=mobile&locale=en&key={api_key}")

Remember to include the “api_key” at the end of the endpoint for the PageSpeed Insights API.
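
A hedged alternative (not the article’s code) is to let Requests build the query string from a “params” dictionary, which also URL-encodes the tested URL; the endpoint and parameter names below are taken from the article’s own request URL, and the trailing-slash handling of “i” stays the same.

    # Same request, with Requests building (and encoding) the query string.
    r = requests.get(
        "https://www.googleapis.com/pagespeedonline/v5/runPagespeed",
        params={"url": i, "strategy": "mobile", "locale": "en", "key": api_key},
    )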

Check the status code. Sitemaps might contain URLs that return a non-200 status code; these should be cleaned out.

         if r.status_code == 200:
               #print(r.json())
               data_ = json.loads(r.text)
               url.append(i)

The next section appends the specific metrics from the JSON response, loaded into “data_”, to the lists that we created earlier.

               fcp.append(data_["loadingExperience"]["metrics"]["FIRST_CONTENTFUL_PAINT_MS"]["percentile"])
               fid.append(data_["loadingExperience"]["metrics"]["FIRST_INPUT_DELAY_MS"]["percentile"])
               lcp.append(data_["loadingExperience"]["metrics"]["LARGEST_CONTENTFUL_PAINT_MS"]["percentile"])
               cls_.append(data_["loadingExperience"]["metrics"]["CUMULATIVE_LAYOUT_SHIFT_SCORE"]["percentile"])
               performance_score.append(data_["lighthouseResult"]["categories"]["performance"]["score"] * 100)

The next section focuses on the total task count and the DOM size.

               total_tasks.append(data_["lighthouseResult"]["audits"]["diagnostics"]["details"]["items"][0]["numTasks"])
               total_tasks_time.append(data_["lighthouseResult"]["audits"]["diagnostics"]["details"]["items"][0]["totalTaskTime"])
               long_tasks.append(data_["lighthouseResult"]["audits"]["diagnostics"]["details"]["items"][0]["numTasksOver50ms"])
               dom_size.append(data_["lighthouseResult"]["audits"]["dom-size"]["details"]["items"][0]["value"])

The next section takes the “DOM Depth” and “Deepest DOM Element.”

               maximum_dom_depth.append(data_["lighthouseResult"]["audits"]["dom-size"]["details"]["items"][1]["value"])
               maximum_child_element.append(data_["lighthouseResult"]["audits"]["dom-size"]["details"]["items"][2]["value"])

The next section takes the specific observed test results from our PageSpeed Insights API call.

               observed_dom_content_loaded.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedDomContentLoaded"])
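               # Lighthouse lab data has no observed FID metric; the next line reuses "observedDomContentLoaded" as a stand-in.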
               observed_fid.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedDomContentLoaded"])
               observed_lcp.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["largestContentfulPaint"])
               observed_fcp.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["firstContentfulPaint"])
               observed_cls.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["totalCumulativeLayoutShift"])
               observed_speed_index.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedSpeedIndex"])
               observed_total_blocking_time.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["totalBlockingTime"])
               observed_fp.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedFirstPaint"])
               observed_fmp.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["firstMeaningfulPaint"])
               observed_first_visual_change.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedFirstVisualChange"])
               observed_last_visual_change.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedLastVisualChange"])
               observed_tti.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["interactive"])
               observed_max_potential_fid.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["maxPotentialFID"])

The next section takes the amount of unused code and the possible savings in bytes and milliseconds, along with the render-blocking resources.

               render_blocking_resources_ms_save.append(data_["lighthouseResult"]["audits"]["render-blocking-resources"]["details"]["overallSavingsMs"])
               unused_javascript_ms_save.append(data_["lighthouseResult"]["audits"]["unused-javascript"]["details"]["overallSavingsMs"])
               unused_javascript_byte_save.append(data_["lighthouseResult"]["audits"]["unused-javascript"]["details"]["overallSavingsBytes"])
               unused_css_rules_ms_save.append(data_["lighthouseResult"]["audits"]["unused-css-rules"]["details"]["overallSavingsMs"])
               unused_css_rules_bytes_save.append(data_["lighthouseResult"]["audits"]["unused-css-rules"]["details"]["overallSavingsBytes"])

The next section takes the possible savings from the server response time and from responsive image usage.

               possible_server_response_time_saving.append(data_["lighthouseResult"]["audits"]["server-response-time"]["details"]["overallSavingsMs"])      
               possible_responsive_image_ms_save.append(data_["lighthouseResult"]["audits"]["uses-responsive-images"]["details"]["overallSavingsMs"])

The next section makes the loop continue to work in case a URL returns a non-200 status code.

         else:
           continue
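
One caveat the article doesn’t cover: URLs with little Chrome UX Report traffic may be missing some “loadingExperience” field metrics, which would raise a KeyError inside the loop. A minimal, hedged sketch of a defensive pattern for each append is below; it is not the article’s code.

    # Guard a field-data append against missing CrUX metrics (sketch).
    try:
        fcp.append(data_["loadingExperience"]["metrics"]["FIRST_CONTENTFUL_PAINT_MS"]["percentile"])
    except KeyError:
        # Field (CrUX) data can be absent for low-traffic URLs.
        fcp.append(None)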

Example Usage Of Page Speed Insights API With Python For Bulk Testing

To use the specific code blocks, put them into a Python function.

Run the script, and you will get 29 page speed-related metrics in the columns below.
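
The article’s step list mentions constructing the data frame and outputting the results as XLSX but doesn’t show that code. A minimal sketch, with column names of my own choosing and reusing the “domain” and “date” variables from earlier, might look like the following; writing XLSX with Pandas requires an Excel engine such as openpyxl.

    # Sketch: map each metric list to a column and write an XLSX file.
    df = pd.DataFrame({
        "url": url,
        "performance_score": performance_score,
        "fcp": fcp,
        "fid": fid,
        "lcp": lcp,
        "cls": cls_,
        "total_tasks": total_tasks,
        "total_tasks_time": total_tasks_time,
        "long_tasks": long_tasks,
        "dom_size": dom_size,
        "maximum_dom_depth": maximum_dom_depth,
        "maximum_child_element": maximum_child_element,
        # The remaining "observed_*" and savings lists can be added as columns in the same way.
    })
    df.to_excel(f"{domain}_psi_results_{date}.xlsx", index=False)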

Screenshot from author, June 2022

Conclusion

PageSpeed Insights API provides different types of page loading performance metrics.

It demonstrates how Google engineers perceive the concept of page loading performance, and how they might use these metrics from a ranking, UX, and quality-understanding point of view.

Using Python for bulk page speed tests gives you a snapshot of the entire website to help analyze the possible user experience, crawl efficiency, conversion rate, and ranking improvements.

Featured Image: Dundanim/Shutterstock



8% Of Automattic Employees Choose To Resign

WordPress co-founder and Automattic CEO Matt Mullenweg announced today that he offered Automattic employees the chance to resign with severance pay, and a total of 8.4 percent of them accepted. Mullenweg offered $30,000 or six months of salary, whichever is higher, with a total of 159 people taking his offer.

Reactions Of Automattic Employees

Given the recent controversies created by Mullenweg, one might be tempted to view the walkout as a vote of no confidence in Mullenweg. But that would be a mistake, because some of the employees announcing their resignations either praised Mullenweg or simply announced their resignation, while many others tweeted about how happy they are to stay at Automattic.

One former employee tweeted that he was sad about recent developments but also praised Mullenweg and Automattic as an employer.

He shared:

“Today was my last day at Automattic. I spent the last 2 years building large scale ML and generative AI infra and products, and a lot of time on robotics at night and on weekends.

I’m going to spend the next month taking a break, getting married, and visiting family in Australia.

I have some really fun ideas of things to build that I’ve been storing up for a while. Now I get to build them. Get in touch if you’d like to build AI products together.”

Another former employee, Naoko Takano, a 14-year employee, an organizer of WordCamp conferences in Asia, a full-time WordPress contributor, and an Open Source Project Manager at Automattic, announced on X (formerly Twitter) that today was her last day at Automattic, with no additional comment.

She tweeted:

“Today was my last day at Automattic.

I’m actively exploring new career opportunities. If you know of any positions that align with my skills and experience!”

Naoko’s role at WordPress was working with the global WordPress community to improve contributor experiences through the Five for the Future and Mentorship programs. Five for the Future is an important WordPress program that encourages organizations to donate 5% of their resources back into WordPress. Five for the Future is one of the issues Mullenweg had against WP Engine, asserting that they didn’t donate enough back into the community.

Mullenweg himself was bittersweet to see those employees go, writing in a blog post:

“It was an emotional roller coaster of a week. The day you hire someone you aren’t expecting them to resign or be fired, you’re hoping for a long and mutually beneficial relationship. Every resignation stings a bit.

However now, I feel much lighter. I’m grateful and thankful for all the people who took the offer, and even more excited to work with those who turned down $126M to stay. As the kids say, LFG!”

Read the entire announcement on Mullenweg’s blog:

Automattic Alignment

Featured Image by Shutterstock/sdx15

YouTube Extends Shorts To 3 Minutes, Adds New Features

YouTube expands Shorts to 3 minutes, adds templates, AI tools, and the option to show fewer Shorts on the homepage.

  • YouTube Shorts will allow 3-minute videos.
  • New features include templates, enhanced remixing, and AI-generated video backgrounds.
  • YouTube is adding a Shorts trends page and comment previews.

How To Stop Filter Results From Eating Crawl Budget

Today’s Ask An SEO question comes from Michal in Bratislava, who asks:

“I have a client who has a website with filters based on a map locations. When the visitor makes a move on the map, a new URL with filters is created. They are not in the sitemap. However, there are over 700,000 URLs in the Search Console (not indexed) and eating crawl budget.

What would be the best way to get rid of these URLs? My idea is keep the base location ‘index, follow’ and newly created URLs of surrounded area with filters switch to ‘noindex, no follow’. Also mark surrounded areas with canonicals to the base location + disavow the unwanted links.”

Great question, Michal, and good news! The answer is an easy one to implement.

First, let’s look at what you’re trying to do and apply it to other situations, like ecommerce and publishers, so more people can benefit. Then, we’ll go into your strategies above and end with the solution.

What Crawl Budget Is And How Parameters Are Created That Waste It

If you’re not sure what Michal is referring to with crawl budget, this is a term some SEO pros use to explain that Google and other search engines will only crawl so many pages on your website before they stop.

If your crawl budget is used on low-value, thin, or non-indexable pages, your good pages and new pages may not be found in a crawl.

If they’re not found, they may not get indexed or refreshed. If they’re not indexed, they cannot bring you SEO traffic.

This is why optimizing a crawl budget for efficiency is important.

Michal shared an example of how URLs that are “thin” from an SEO point of view are created as customers use filters.

The experience for the user is value-adding, but from an SEO standpoint, a location-based page would be better. This applies to ecommerce and publishers, too.

Ecommerce stores will have searches for colors like red or green and products like t-shirts and potato chips.

These create URLs with parameters just like a filter search for locations. They could also be created by using filters for size, gender, color, price, variation, compatibility, etc. in the shopping process.

The filtered results help the end user but compete directly with the collection page, and the collection would be the “non-thin” version.

Publishers have the same. Someone might be on SEJ looking for SEO or PPC in the search box and get a filtered result. The filtered result will have articles, but the category of the publication is likely the best result for a search engine.

These filtered results can get indexed because they get shared on social media or someone adds them as a comment on a blog or forum, creating a crawlable backlink. It might also be that an employee in customer service responded to a question on the company blog, or any number of other ways.

The goal now is to make sure search engines don’t spend time crawling the “thin” versions so you can get the most from your crawl budget.

The Difference Between Indexing And Crawling

There’s one more thing to learn before we go into the proposed ideas and solutions – the difference between indexing and crawling.

  • Crawling is the discovery of new pages within a website.
  • Indexing is adding the pages that are worthy of showing to a person using the search engine to the database of pages.

Pages can get crawled but not indexed. Indexed pages have likely been crawled and will likely get crawled again to look for updates and server responses.

But not all indexed pages will bring in traffic or hit the first page because they may not be the best possible answer for queries being searched.

Now, let’s go into making efficient use of crawl budgets for these types of solutions.

Using Meta Robots Or X Robots

The first solution Michal pointed out was an “index,follow” directive. This tells a search engine to index the page and follow the links on it. This is a good idea, but only if the filtered result is the ideal experience.

From what I can see, this would not be the case, so I would recommend making it “noindex,follow.”

Noindex would say, “This is not an official page, but hey, keep crawling my site, you’ll find good pages in here.”

And if you have your main menu and navigational internal links done correctly, the spider will hopefully keep crawling them.

Canonicals To Solve Wasted Crawl Budget

Canonical links are used to help search engines know what the official page to index is.

If a product exists in three categories on three separate URLs, only one should be “the official” version, so the two duplicates should have a canonical pointing to the official version. The official one should have a canonical link that points to itself. This applies to the filtered locations.

If the location search would result in multiple city or neighborhood pages, the result would likely be a duplicate of the official one you have in your sitemap.

Have the filtered results point a canonical back to the main page of filtering instead of being self-referencing if the content on the page stays the same as the original category.

If the content pulls in your localized page with the same locations, point the canonical to that page instead.

In most cases, the filtered version inherits the page you searched or filtered from, so that is where the canonical should point to.

If you do both noindex and have a self-referencing canonical, which is overkill, it becomes a conflicting signal.

The same applies to when someone searches for a product by name on your website. The search result may compete with the actual product or service page.

With this solution, you’re telling the spider not to index this page because it isn’t worth indexing, but it is also the official version. It doesn’t make sense to do this.

Instead, use a canonical link, as I mentioned above, or noindex the result and point the canonical to the official version.

Disavow To Increase Crawl Efficiency

Disavowing doesn’t have anything to do with crawl efficiency unless the search engine spiders are finding your “thin” pages through spammy backlinks.

The disavow tool from Google is a way to say, “Hey, these backlinks are spammy, and we don’t want them to hurt us. Please don’t count them towards our site’s authority.”

In most cases, it doesn’t matter, as Google is good at detecting spammy links and ignoring them.

You do not want to add your own site and your own URLs to the disavow tool. You’re telling Google your own site is spammy and not worth anything.

Plus, submitting backlinks to disavow won’t prevent a spider from seeing what you want and do not want to be crawled, as it is only for saying a link from another site is spammy.

Disavowing won’t help with crawl efficiency or saving crawl budget.

How To Make Crawl Budgets More Efficient

The answer is robots.txt. This is how you tell specific search engines and spiders what to crawl.

You can include the folders you want them to crawl by marking them as “allow,” and you can say “disallow” on filtered results by disallowing the “?” or “&” symbol or whichever you use.

If some of those parameters should be crawled, add the main word like “?filter=location” or a specific parameter.
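
As a hedged illustration (the paths and parameter name are hypothetical, not from the article), a robots.txt along those lines might look like this; Google treats the longer, more specific rule as the winner, so the “Allow” line carves out the one parameter that should still be crawled.

    User-agent: *
    # Block crawling of any URL that contains a query string (filtered results).
    Disallow: /*?
    # Hypothetical exception: keep one filter parameter crawlable.
    Allow: /*?filter=location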

Robots.txt is how you define crawl paths and work on crawl efficiency. Once you’ve optimized that, look at your internal links, which are links from one page on your site to another.

These help spiders find your most important pages while learning what each is about.

Internal links include:

  • Breadcrumbs.
  • Menu navigation.
  • Links within content to other pages.
  • Sub-category menus.
  • Footer links.

You can also use a sitemap if you have a large site, and the spiders are not finding the pages you want with priority.

I hope this helps answer your question. It is one I get a lot – you’re not the only one stuck in that situation.

Featured Image: Paulo Bobita/Search Engine Journal
