Bulk Loading Performance Tests With PageSpeed Insights API & Python


Google offers the PageSpeed Insights API to help SEO pros and developers by mixing real-world data with simulation data, providing load performance timing data related to web pages.

The difference between Google PageSpeed Insights (PSI) and Lighthouse is that PSI combines real-world and lab data, while Lighthouse performs a page loading simulation by modifying the connection and user-agent of the device.

Another point of difference is that PSI doesn’t supply any information related to web accessibility, SEO, or progressive web apps (PWAs), while Lighthouse provides all of the above.

Thus, when we use the PageSpeed Insights API for bulk URL loading performance tests, we won’t have any data for accessibility.

However, PSI provides more information related to page speed performance, such as “DOM Size,” “Deepest DOM Child Element,” “Total Task Count,” and “DOM Content Loaded” timing.

One more advantage of the PageSpeed Insights API is that it gives the “observed metrics” and “actual metrics” different names.

In this guide, you will learn:

  • How to create a production-level Python script.
  • How to use APIs with Python.
  • How to construct data frames from API responses.
  • How to analyze the API responses.
  • How to parse URLs and process URL requests’ responses.
  • How to store the API responses with proper structure.

An example output of the PageSpeed Insights API call with Python is below.

Screenshot from author, June 2022

Libraries For Using PageSpeed Insights API With Python

The necessary libraries to use PSI API with Python are below.

  • Advertools: retrieves the testing URLs from the website’s sitemap.
  • Pandas: constructs the data frame and flattens the JSON output of the API.
  • Requests: makes the requests to the specific API endpoint.
  • JSON: loads the API response text into a Python dictionary.
  • Datetime: adds the current date to the output file’s name.
  • urllib: parses the test subject website’s URL.

How To Use PSI API With Python?

To use the PSI API with Python, follow the steps below.

  • Get a PageSpeed Insights API key.
  • Import the necessary libraries.
  • Parse the URL for the test subject website.
  • Take the date of the moment for the file name.
  • Take URLs into a list from a sitemap.
  • Choose the metrics that you want from PSI API.
  • Create a For Loop for taking the API Response for all URLs.
  • Construct the data frame with chosen PSI API metrics.
  • Output the results in the form of XLSX.

1. Get PageSpeed Insights API Key

Use the PageSpeed Insights API Documentation to get the API Key.

Click the “Get a Key” button below.

Image from developers.google.com, June 2022

Choose a project that you have created in Google Developer Console.

Image from developers.google.com, June 2022

Enable the PageSpeed Insights API on that specific project.

Image from developers.google.com, June 2022

You will need to use the specific API Key in your API Requests.

2. Import The Necessary Libraries

Use the lines below to import the fundamental libraries.

    import advertools as adv
    import pandas as pd
    import requests
    import json
    from datetime import datetime
    from urllib.parse import urlparse

3. Parse The URL For The Test Subject Website

To parse the URL of the subject website, use the code structure below. The sitemap URL here is a hypothetical placeholder; use your own website’s sitemap.

  sitemap_url = "https://www.example.com/sitemap.xml"  # Hypothetical placeholder sitemap URL.
  domain = urlparse(sitemap_url)
  domain = domain.netloc.split(".")[1]

The “domain” variable starts as the parsed version of the sitemap URL and ends up as the domain name itself.

The “netloc” attribute represents the specific URL’s domain section. When we split it on “.” and take index “1,” we get the “middle section,” which is the domain name.

Here, “0” is for “www,” “1” is for the “domain name,” and “2” is for the “domain extension,” if we split the netloc on “.”
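A quick illustration with the hypothetical hostname above shows which index holds which part. Note that this assumes a “www.domain.tld” style hostname; a bare “domain.tld” would put the extension at index “1.”

  urlparse("https://www.example.com/sitemap.xml").netloc.split(".")
  # ['www', 'example', 'com'] -> index 1 is "example"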

4. Take The Date Of The Moment For The File Name

To take the date at the specific function call moment, use the “datetime.now” method.

“datetime.now” provides the current time, and “strftime” formats it with the “%Y”, “%m”, and “%d” codes. “%Y” is the year, while “%m” and “%d” are the numeric month and day. With the format below, running the script on June 15, 2022, would yield “2022_06_15”.

 date = datetime.now().strftime("%Y_%m_%d")

5. Take URLs Into A List From A Sitemap

To take the URLs into a list form from a sitemap file, use the code block below.

   sitemap = adv.sitemap_to_df(sitemap_url)
   sitemap_urls = sitemap["loc"].to_list()

If you read the Python Sitemap Health Audit, you can learn more about sitemaps.
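Sitemaps can also list the same URL more than once. If you want to drop exact duplicates before testing, an optional one-liner that preserves order is below; trailing-slash variants are handled separately in step seven.

   sitemap_urls = list(dict.fromkeys(sitemap_urls))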

6. Choose The Metrics That You Want From PSI API

To choose which PSI API response JSON properties to collect, you should examine the JSON output itself.

Doing so is highly relevant practice for reading, parsing, and flattening JSON objects.

It is even related to Semantic SEO, thanks to the concepts of the “directed graph” and “JSON-LD” structured data.

In this article, we won’t focus on examining the specific PSI API Response’s JSON hierarchies.
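Still, if you want to explore the hierarchy yourself before choosing metrics, a quick snippet is below; the URL and “YOUR_API_KEY” are placeholders, not real values.

    # Hypothetical URL and key, for exploration only.
    r = requests.get(
        "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
        "?url=https://www.example.com/&strategy=mobile&key=YOUR_API_KEY"
    )
    data_ = r.json()
    print(list(data_["lighthouseResult"]["audits"].keys()))  # Available audit names.
    print(json.dumps(data_["loadingExperience"], indent=2))  # Field (real-world) data section.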

You can see the metrics that I have chosen to gather from the PSI API. It is richer than the basic default output of the PSI API, which only gives the Core Web Vitals metrics, plus Speed Index, Interaction to Next Paint, Time to First Byte, and First Contentful Paint.

Of course, the API also provides “suggestions,” such as “Avoid Chaining Critical Requests,” but there is no need to put a full sentence into a data frame.

In the future, these suggestions, or even every individual chain event with its KB and ms values, could be taken into a single column with the name “psi_suggestions.”

For a start, you can check the metrics that I have chosen; many of them will probably be new to you.

The first section of the PSI API metrics is below.

    # Empty lists, one per metric, to collect one value per tested URL.
    fid = []
    lcp = []
    cls_ = []
    url = []
    fcp = []
    performance_score = []
    total_tasks = []
    total_tasks_time = []
    long_tasks = []
    dom_size = []
    maximum_dom_depth = []
    maximum_child_element = []
    observed_fcp  = []
    observed_fid = []
    observed_lcp = []
    observed_cls = []
    observed_fp = []
    observed_fmp = []
    observed_dom_content_loaded = []
    observed_speed_index = []
    observed_total_blocking_time = []
    observed_first_visual_change = []
    observed_last_visual_change = []
    observed_tti = []
    observed_max_potential_fid = []

This section includes all the observed and simulated fundamental page speed metrics, along with some non-fundamental ones, like “DOM Content Loaded,” or “First Meaningful Paint.”

The second section of PSI Metrics focuses on possible byte and time savings from the unused code amount.

    render_blocking_resources_ms_save = []
    unused_javascript_ms_save = []
    unused_javascript_byte_save = []
    unused_css_rules_ms_save = []
    unused_css_rules_bytes_save = []

A third section of the PSI metrics focuses on the server response time and the possible savings from responsive image usage.

    possible_server_response_time_saving = []
    possible_responsive_image_ms_save = []

Note: Overall Performance Score comes from “performance_score.”

7. Create A For Loop For Taking The API Response For All URLs

The for loop takes each of the URLs from the sitemap file and calls the PSI API for them one by one. The for loop for the PSI API automation has several sections.

The first section of the PSI API for loop starts with trailing-slash normalization.

In sitemaps, the same page can appear both with and without a trailing slash; requesting one consistent form prevents the duplicate responses from overriding each other’s information.

for i in sitemap_urls[:9]:  # "[:9]" tests only the first nine URLs; remove the slice to test them all.
         # Normalize trailing slashes so duplicate "/" URL requests don't override each other's information.
         if i.endswith("/"):
               r = requests.get(f"https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url={i}&strategy=mobile&locale=en&key={api_key}")
         else:
               r = requests.get(f"https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url={i}/&strategy=mobile&locale=en&key={api_key}")

Remember to check the “api_key” at the end of the endpoint for PageSpeed Insights API.
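For completeness, “api_key” must be defined before the loop runs; a placeholder assignment is below (“YOUR_API_KEY” stands in for the key from step one).

api_key = "YOUR_API_KEY"  # Replace with your own PageSpeed Insights API key.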

Check the status code. Sitemaps might contain URLs with non-200 status codes; these should be filtered out.

         if r.status_code == 200:
               #print(r.json())
               data_ = json.loads(r.text)
               url.append(i)

The next section appends the specific metrics to the lists that we created before, taking them from the parsed JSON response, “data_.”

               fcp.append(data_["loadingExperience"]["metrics"]["FIRST_CONTENTFUL_PAINT_MS"]["percentile"])
               fid.append(data_["loadingExperience"]["metrics"]["FIRST_INPUT_DELAY_MS"]["percentile"])
               lcp.append(data_["loadingExperience"]["metrics"]["LARGEST_CONTENTFUL_PAINT_MS"]["percentile"])
               cls_.append(data_["loadingExperience"]["metrics"]["CUMULATIVE_LAYOUT_SHIFT_SCORE"]["percentile"])
               performance_score.append(data_["lighthouseResult"]["categories"]["performance"]["score"] * 100)

The next section focuses on the “total task” count and the DOM size.

               total_tasks.append(data_["lighthouseResult"]["audits"]["diagnostics"]["details"]["items"][0]["numTasks"])
               total_tasks_time.append(data_["lighthouseResult"]["audits"]["diagnostics"]["details"]["items"][0]["totalTaskTime"])
               long_tasks.append(data_["lighthouseResult"]["audits"]["diagnostics"]["details"]["items"][0]["numTasksOver50ms"])
               dom_size.append(data_["lighthouseResult"]["audits"]["dom-size"]["details"]["items"][0]["value"])

The next section takes the “DOM Depth” and “Deepest DOM Element.”

               maximum_dom_depth.append(data_["lighthouseResult"]["audits"]["dom-size"]["details"]["items"][1]["value"])
               maximum_child_element.append(data_["lighthouseResult"]["audits"]["dom-size"]["details"]["items"][2]["value"])

The next section takes the specific observed test results from our PageSpeed Insights API call.

               observed_dom_content_loaded.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedDomContentLoaded"])
               observed_fid.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["maxPotentialFID"])  # Lab runs record no real FID; Max Potential FID is the closest lab proxy (the original repeated "observedDomContentLoaded" here).
               observed_lcp.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["largestContentfulPaint"])
               observed_fcp.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["firstContentfulPaint"])
               observed_cls.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["totalCumulativeLayoutShift"])
               observed_speed_index.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedSpeedIndex"])
               observed_total_blocking_time.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["totalBlockingTime"])
               observed_fp.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedFirstPaint"])
               observed_fmp.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["firstMeaningfulPaint"])
               observed_first_visual_change.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedFirstVisualChange"])
               observed_last_visual_change.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["observedLastVisualChange"])
               observed_tti.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["interactive"])
               observed_max_potential_fid.append(data_["lighthouseResult"]["audits"]["metrics"]["details"]["items"][0]["maxPotentialFID"])

The next section takes the unused code amount with the possible savings in bytes and milliseconds, along with the render-blocking resources.

               render_blocking_resources_ms_save.append(data_["lighthouseResult"]["audits"]["render-blocking-resources"]["details"]["overallSavingsMs"])
               unused_javascript_ms_save.append(data_["lighthouseResult"]["audits"]["unused-javascript"]["details"]["overallSavingsMs"])
               unused_javascript_byte_save.append(data_["lighthouseResult"]["audits"]["unused-javascript"]["details"]["overallSavingsBytes"])
               unused_css_rules_ms_save.append(data_["lighthouseResult"]["audits"]["unused-css-rules"]["details"]["overallSavingsMs"])
               unused_css_rules_bytes_save.append(data_["lighthouseResult"]["audits"]["unused-css-rules"]["details"]["overallSavingsBytes"])

The next section provides the possible responsive image savings and the server response timing.

               possible_server_response_time_saving.append(data_["lighthouseResult"]["audits"]["server-response-time"]["details"]["overallSavingsMs"])      
               possible_responsive_image_ms_save.append(data_["lighthouseResult"]["audits"]["uses-responsive-images"]["details"]["overallSavingsMs"])

The last section skips the URLs that don’t return a 200 status code so that the loop continues to work.

         else:
           continue

Example Usage Of PageSpeed Insights API With Python For Bulk Testing

To use the specific code blocks, put them into a Python function.
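Steps eight and nine from the list above, constructing the data frame and writing the XLSX output, are not shown as separate code blocks; a minimal sketch is below, assuming the lists populated in the loop. The column names are illustrative rather than definitive, and “to_excel” needs an engine such as “openpyxl” installed.

    # Construct the data frame from the metric lists (add the remaining lists the same way).
    df = pd.DataFrame({
        "url": url,
        "performance_score": performance_score,
        "fcp": fcp,
        "fid": fid,
        "lcp": lcp,
        "cls": cls_,
        "dom_size": dom_size,
        "maximum_dom_depth": maximum_dom_depth,
    })
    # Output the results as XLSX, stamped with the domain and date from steps three and four.
    df.to_excel(f"{domain}_psi_results_{date}.xlsx", index=False)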

Run the script, and you will get 29 page speed-related metrics in the columns below.

Screenshot from author, June 2022

Conclusion

PageSpeed Insights API provides different types of page loading performance metrics.

It demonstrates how Google engineers perceive the concept of page loading performance, and how they possibly use these metrics from a ranking, UX, and quality-understanding point of view.

Using Python for bulk page speed tests gives you a snapshot of the entire website to help analyze the possible user experience, crawl efficiency, conversion rate, and ranking improvements.



Featured Image: Dundanim/Shutterstock




4 Ways To Try The New Model From Mistral AI


In a significant leap in large language model (LLM) development, Mistral AI announced the release of its newest model, Mixtral-8x7B.

What Is Mixtral-8x7B?

Mixtral-8x7B from Mistral AI is a Mixture of Experts (MoE) model designed to enhance how machines understand and generate text.

Imagine it as a team of specialized experts, each skilled in a different area, working together to handle various types of information and tasks.

A report published in June shed light on the intricacies of OpenAI’s GPT-4, highlighting that it reportedly employs a similar MoE approach, utilizing 16 experts, each with around 111 billion parameters, and routing two experts per forward pass to optimize costs.

This approach allows the model to manage diverse and complex data efficiently, making it helpful in creating content, engaging in conversations, or translating languages.
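The routing idea can be made concrete with a short sketch. Below is a toy top-two gating function in Python; the dimensions, the random weights, and the numpy-based setup are hypothetical placeholders, not Mixtral’s actual implementation.

    import numpy as np

    # Toy Mixture of Experts: 8 experts, top-2 routing; all sizes are made up.
    rng = np.random.default_rng(0)
    n_experts, d_model = 8, 16
    experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
    gate_w = rng.normal(size=(d_model, n_experts))

    def moe_forward(x):
        logits = x @ gate_w                # One gate score per expert.
        top2 = np.argsort(logits)[-2:]     # Route to the two best experts.
        weights = np.exp(logits[top2])
        weights /= weights.sum()           # Softmax over the chosen two.
        # Only the selected experts run; skipping the rest is the MoE cost saving.
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top2))

    print(moe_forward(rng.normal(size=d_model)).shape)  # (16,)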

Mixtral-8x7B Performance Metrics

Mistral AI’s new model, Mixtral-8x7B, represents a significant step forward from its previous model, Mistral-7B-v0.1.

It’s designed to better understand and create text, a key feature for anyone looking to use AI for writing or communication tasks.

This latest addition to the Mistral family promises to revolutionize the AI landscape with its enhanced performance metrics, as shared by OpenCompass.


What makes Mixtral-8x7B stand out is not just its improvement over Mistral AI’s previous version, but the way it measures up to models like Llama2-70B and Qwen-72B.

Mixtral-8x7B performance metrics compared to Llama 2 open-source AI models

It’s like having an assistant who can understand complex ideas and express them clearly.

One of the key strengths of the Mixtral-8x7B is its ability to handle specialized tasks.

For example, it performed exceptionally well in specific tests designed to evaluate AI models, indicating that it’s good at general text understanding and generation and excels in more niche areas.

This makes it a valuable tool for marketing professionals and SEO experts who need AI that can adapt to different content and technical requirements.

The Mixtral-8x7B’s ability to deal with complex math and coding problems also suggests it can be a helpful ally for those working in more technical aspects of SEO, where understanding and solving algorithmic challenges are crucial.

This new model could become a versatile and intelligent partner for a wide range of digital content and strategy needs.

How To Try Mixtral-8x7B: 4 Demos

You can experiment with Mistral AI’s new model, Mixtral-8x7B, to see how it responds to queries and how it performs compared to other open-source models and OpenAI’s GPT-4.

Please note that, like all generative AI content, platforms running this new model may produce inaccurate information or otherwise unintended results.

User feedback for new models like this one will help companies like Mistral AI improve future versions and models.

1. Perplexity Labs Playground

In Perplexity Labs, you can try Mixtral-8x7B along with Meta AI’s Llama 2, Mistral-7b, and Perplexity’s new online LLMs.

In this example, I ask about the model itself and notice that new instructions are added after the initial response to extend the generated content about my query.

Screenshot from Perplexity Labs, December 2023

While the answer looks correct, it begins to repeat itself.

Screenshot from Perplexity Labs, December 2023

The model did provide an answer of over 600 words to the question, “What is SEO?”

Again, additional instructions appear as “headers” to seemingly ensure a comprehensive answer.

Screenshot from Perplexity Labs, December 2023

2. Poe

Poe hosts bots for popular LLMs, including OpenAI’s GPT-4 and DALL·E 3, Meta AI’s Llama 2 and Code Llama, Google’s PaLM 2, Anthropic’s Claude-instant and Claude 2, and StableDiffusionXL.

These bots cover a wide spectrum of capabilities, including text, image, and code generation.

The Mixtral-8x7B-Chat bot is operated by Fireworks AI.

Screenshot from Poe, December 2023

It’s worth noting that the Fireworks page specifies it is an “unofficial implementation” that was fine-tuned for chat.

When asked what the best backlinks for SEO are, it provided a valid answer.

Screenshot from Poe, December 2023

Compare this to the response offered by Google Bard.

Screenshot from Google Bard, December 2023

3. Vercel

Vercel offers a demo of Mixtral-8x7B that allows users to compare responses from popular Anthropic, Cohere, Meta AI, and OpenAI models.

Screenshot from Vercel, December 2023

It offers an interesting perspective on how each model interprets and responds to user questions.

Screenshot from Vercel, December 2023

Like many LLMs, it does occasionally hallucinate.

Screenshot from Vercel, December 2023

4. Replicate

The mixtral-8x7b-32 demo on Replicate is based on this source code. It is also noted in the README that “Inference is quite inefficient.”

Screenshot from Replicate, December 2023

In the example above, Mixtral-8x7B describes itself as a game.

Conclusion

Mistral AI’s latest release sets a new benchmark in the AI field, offering enhanced performance and versatility. But like many LLMs, it can provide inaccurate and unexpected answers.

As AI continues to evolve, models like the Mixtral-8x7B could become integral in shaping advanced AI tools for marketing and business.


Featured image: T. Schneider/Shutterstock




OpenAI Investigates ‘Lazy’ GPT-4 Complaints On Google Reviews, X


OpenAI, the company that launched ChatGPT a little over a year ago, has recently taken to social media to address concerns about GPT-4’s “lazy” performance raised on social networks and Google Reviews.

Screenshot from X, December 2023

This move comes after growing user feedback online, which even includes a one-star review on the company’s Google Reviews.

OpenAI Gives Insight Into Training Chat Models, Performance Evaluations, And A/B Testing

OpenAI, through its @ChatGPTapp Twitter account, detailed the complexities involved in training chat models.

Screenshot from X, December 2023

The organization highlighted that the process is not a “clean industrial process” and that variations in training runs can lead to noticeable differences in the AI’s personality, creative style, and political bias.

Thorough AI model testing includes offline evaluation metrics and online A/B tests. The final decision to release a new model is based on a data-driven approach to improve the “real” user experience.

OpenAI’s Google Review Score Affected By GPT-4 Performance, Billing Issues

This explanation comes after weeks of user feedback on social media networks like X about GPT-4 becoming worse.

Complaints also appeared in OpenAI’s community forums.

Screenshot from OpenAI, December 2023

The experience led one user to leave a one-star rating for OpenAI via Google Reviews. Other complaints regarded accounts, billing, and the artificial nature of AI.

Screenshot from Google Reviews, December 2023

A recent user on Product Hunt gave OpenAI a rating that also appears to be related to GPT-4 worsening.

Screenshot from Product Hunt, December 2023

GPT-4 isn’t the only issue that local reviewers complain about. On Yelp, OpenAI has a one-star rating for ChatGPT 3.5 performance.

The complaint:

Screenshot from Yelp, December 2023

In related OpenAI news, the review with the most likes aligns with recent rumors about a volatile workplace, alleging that OpenAI is a “Cutthroat environment. Not friendly. Toxic workers.”

Screenshot from Google Reviews, December 2023

The reviews voted most helpful on Glassdoor suggest that employee frustration and product development issues stem from the company’s shift in focus toward profits.

Screenshots from Glassdoor, December 2023

This incident provides a unique outlook on how customer and employee experiences can impact any business through local reviews and business ratings platforms.

Screenshot from Google, December 2023

Google SGE Highlights Positive Google Reviews

In addition to occasional complaints, Google reviewers acknowledged the revolutionary impact of OpenAI’s technology on various fields.

The most positive mentions of the company appear in Google SGE (Search Generative Experience).

Screenshot from Google SGE, December 2023

Conclusion

OpenAI’s recent insights into training chat models and its response to public feedback about GPT-4’s performance illustrate the dynamic, evolving nature of AI technology and its impact on those who depend on the platform.

That is especially true for the people who just received an invitation to join ChatGPT Plus after being waitlisted while OpenAI paused new subscriptions and upgrades, and for those developing GPTs for the upcoming GPT Store launch.

As AI advances, professionals in these fields must remain agile, informed, and responsive to technological developments and the public’s reception of these advancements.


Featured image: Tada Images/Shutterstock




ChatGPT Plus Upgrades Paused; Waitlisted Users Receive Invites


ChatGPT Plus subscriptions and upgrades remain paused after a surge in demand for new features created outages.

Some users who signed up for the waitlist have received invites to join ChatGPT Plus.

Screenshot from Gmail, December 2023

This has resulted in a few users sharing the invite link, which, for now, is accessible to everyone.

RELATED: GPT Store Set To Launch In 2024 After ‘Unexpected’ Delays

In addition to the invites, signs that more people are getting access to GPTs include an introductory screen popping up on free ChatGPT accounts.

Screenshot from ChatGPT, December 2023

Unfortunately, they still aren’t accessible without a Plus subscription.

Screenshot from ChatGPT, December 2023

You can sign up for the waitlist by clicking on the option to upgrade in the left sidebar of ChatGPT on a desktop browser.

Screenshot from ChatGPT, December 2023

OpenAI also suggests ChatGPT Enterprise for those who need more capabilities, as outlined in the pricing plans below.

Screenshot from OpenAI, December 2023

Why Are ChatGPT Plus Subscriptions Paused?

According to a post on X by OpenAI’s CEO Sam Altman, the recent surge in usage following the DevDay developers conference has led to capacity challenges, resulting in the decision to pause ChatGPT Plus signups.

The decision to pause new ChatGPT signups follows a week when OpenAI services – including ChatGPT and the API – experienced a series of outages related to high demand and DDoS attacks.

Demand for ChatGPT Plus resulted in eBay listings supposedly offering one or more months of the premium subscription.

When Will ChatGPT Plus Subscriptions Resume?

So far, we don’t have any official word on when ChatGPT Plus subscriptions will resume. We know the GPT Store is set to open early next year after recent boardroom drama led to “unexpected delays.”

Therefore, we hope that OpenAI will onboard waitlisted users in time to try out all of the GPTs created by OpenAI and community builders.

What Are GPTs?

GPTs allow users to create one or more personalized ChatGPT experiences based on a specific set of instructions, knowledge files, and actions.

Search marketers with ChatGPT Plus can try GPTs for helpful content assessment and learning SEO.

There are also GPTs for analyzing Google Search Console data.

And GPTs that will let you chat with analytics data from 20 platforms, including Google Ads, GA4, and Facebook.

Google search has indexed hundreds of public GPTs. According to an alleged list of GPT statistics in a GitHub repository, DALL-E, the top GPT from OpenAI, has received 5,620,981 visits since its launch last month. Included in the top 20 GPTs is Canva, with 291,349 views.

 

Weighing The Benefits Of The Pause

Ideally, this means that developers working on building GPTs and using the API should encounter fewer issues (like being unable to save GPT drafts).

But it could also mean a temporary decrease in new users of GPTs since they are only available to Plus subscribers – including the ones I tested for learning about ranking factors and gaining insights on E-E-A-T from Google’s Search Quality Rater Guidelines.

Screenshot from ChatGPT, November 2023

Featured image: Robert Way/Shutterstock


