
How To Use IndexNow API With Python For Bulk Indexing

IndexNow is a protocol developed by Microsoft Bing and adopted by Yandex that enables webmasters and SEO pros to notify search engines via an API whenever a webpage has been updated.

And today, Microsoft announced that it is making the protocol easier to implement by ensuring that submitted URLs are shared between search engines.

Given its positive implications and the promise of a faster indexing experience for publishers, the IndexNow API should be on every SEO professional’s radar.

Using Python to automate URL submission to the IndexNow API, whether individually or in bulk, can make managing IndexNow more efficient for you.

In this tutorial, you’ll learn how to do just that, with step-by-step instructions for using the IndexNow API to submit URLs to Microsoft Bing in bulk with Python.


Note: The IndexNow API is similar to Google’s Indexing API, with one difference: the Google Indexing API only supports job postings and pages that contain livestream video content.

Google announced that it would test the IndexNow protocol but hasn’t provided an update since.

Bulk Indexing Using IndexNow API with Python: Getting Started

Below are the Python packages and libraries that will be used for this IndexNow API tutorial.

  • Advertools (required).
  • Pandas (required).
  • Requests (required).
  • Time (optional).
  • JSON (optional).

Before getting started, reading the IndexNow basics will help you follow this tutorial. We will use an API key, a .txt file for authentication, and specific HTTP headers.

IndexNow API Usage Steps with Python.

1. Import The Python Libraries

To use the necessary Python libraries, we will use the “import” command.

  • Advertools will be used for sitemap URL extraction.
  • Requests will be used for making the GET and POST requests.
  • Pandas will be used for taking the URLs in the sitemap into a list object.
  • The “time” module is used with the “sleep()” method to prevent a “429 Too Many Requests” error.
  • JSON is for modifying the POST JSON object if needed.

Below, you will find all of the necessary import lines for the IndexNow API tutorial.

import advertools as adv
import pandas as pd
import requests
import json
import time

2. Extracting The Sitemap URLs With Python

To extract the URLs from a sitemap file, different web scraping methods and libraries can be used such as Requests or Scrapy.

But to keep things simple and efficient, I will use my favorite Python SEO package – Advertools.


With only a single line of code, all of the URLs within a sitemap can be extracted.

sitemap_urls = adv.sitemap_to_df("https://www.example.com/sitemap_index.xml")

The “sitemap_to_df” method of Advertools extracts all the URLs and other sitemap-related tags such as “lastmod” or “priority.”

Below, you can see the output of the “adv.sitemap_to_df” command.

Sitemap URL extraction can be done via Advertools’ “sitemap_to_df” method.

All of the URLs and dates are specified within the “sitemap_urls” variable.

Since sitemaps are useful sources for search engines and SEOs, Advertools’ sitemap_to_df method can be used for many different tasks including a Sitemap Python Audit.

But that’s a topic for another time.

3. Take The URLs Into A List Object With “to_list()”

Python’s Pandas library has a method for taking a data frame column (data series) into a list object, to_list().


Below is an example usage:

sitemap_urls["loc"].to_list()

Below, you can see the result:

Pandas’ “to_list” method can be used with Advertools for listing the URLs.

All URLs within the sitemap are in a Python list object.

4. Understand The URL Syntax Of Microsoft Bing’s IndexNow API

Let’s take a look at the URL syntax of the IndexNow API.

Here’s an example:

https://<searchengine>/indexnow?url=url-changed&key=your-key&keyLocation=your-key-location

The URL syntax shows the parameters and their relation to each other, following the RFC 3986 standard.

  • The <searchengine> represents the name of the search engine that you will use the IndexNow API for.
  • The “?url=” parameter determines the URL that will be submitted to the search engine via the IndexNow API.
  • “&key=” is the API Key that will be used within the IndexNow API.
  • “&keyLocation=” points to the key file that proves you are the owner of the website that the IndexNow API will be used for.

The “&keyLocation” will bring us to the API Key and its “.txt” version.
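
For example, a fully assembled submission endpoint built with a Python f-string could look like this (the domain, key, and key location below are placeholders):

search_engine = "www.bing.com"
url_changed = "https://www.example.com/sample-page/"
key = "your-indexnow-api-key"
key_location = "https://www.example.com/your-indexnow-api-key.txt"

endpoint = f"https://{search_engine}/indexnow?url={url_changed}&key={key}&keyLocation={key_location}"
print(endpoint)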

5. Gather The API Key For IndexNow And Upload It To The Root

You’ll need a valid key to use the IndexNow API.


Use this link to generate the Microsoft Bing IndexNow API Key.

There is no limit on generating IndexNow API keys.

Clicking the “Generate” button creates an IndexNow API Key.

When you click on the download button, it will download the “.txt” version of the IndexNow API Key.

The IndexNow API key can be generated at the address stated by Microsoft Bing.

The downloaded IndexNow API key as a .txt file.

The API key value is used both as the name of the TXT file and as its content.

The name of the TXT file and the key value inside it should both match the actual API key.
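
The key itself is just a random string (8 to 128 characters) that you host on your own site, so you can also generate it and its .txt file locally instead of using the portal. A minimal sketch:

import secrets

# Generate a 32-character hexadecimal key and write it to a .txt file
# named after the key, since the file name and its content must match.
key = secrets.token_hex(16)
with open(f"{key}.txt", "w") as f:
    f.write(key)
print(key)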

The next step is uploading this TXT file to the root of the website’s server.

Since I use FileZilla for my FTP, I have uploaded it easily to my web server’s root.

By putting the .txt file into the web server’s root folder, the IndexNow API setup can be completed.
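
Once the file is uploaded, you can quickly confirm that it is publicly reachable before submitting anything; the URL below is a placeholder:

key_location = "https://www.example.com/22bc7c564b334f38b0b1ed90eec8f2c5.txt"
check = requests.get(key_location)
print(check.status_code, check.text)  # expect a 200 status code and the key value itself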

The next step is performing a simple for loop example for submitting all of the URLs within the sitemap.

6. Submit The URLs Within The Sitemap With Python To IndexNow API

To submit a single URL to IndexNow, you can use a single “requests.get()” call. But to make it more useful, we will use a for loop.

To submit URLs in bulk to the IndexNow API with Python, follow the steps below:

  1. Create a key variable with the IndexNow API Key value.
  2. Replace the <searchengine> section with the search engine that you want to submit URLs to (Microsoft Bing, or Yandex, for now).
  3. Assign all of the URLs from the sitemap to a list variable.
  4. Assign the URL of the “txt” key file within the root of the web server to a “location” variable.
  5. Place the URL, key, and key location URL within an f-string.
  6. Start your for loop, and use “requests.get()” for all of the URLs within the sitemap.

Below, you can see the implementation:

key = "22bc7c564b334f38b0b1ed90eec8f2c5"
location = "https://www.example.com/22bc7c564b334f38b0b1ed90eec8f2c5.txt"  # URL of the key file in the site root
url = sitemap_urls["loc"].to_list()
for i in url:
    # Build the IndexNow GET endpoint for every URL and submit it.
    endpoint = f"https://www.bing.com/indexnow?url={i}&key={key}&keyLocation={location}"
    response = requests.get(endpoint)
    print(i)
    print(endpoint)
    print(response.status_code, response.content)
    #time.sleep(5)  # uncomment to wait between requests

If you’re concerned about sending too many requests to the IndexNow API, you can use the Python time module to make the script wait between every request.


Here you can see the output of the script:

An empty string as the response body indicates a successful IndexNow API request, according to Microsoft Bing’s IndexNow documentation.

The 200 Status Code means that the request was successful.

With the for loop, I have submitted 194 URLs to Microsoft Bing.

According to the IndexNow Documentation, the HTTP 200 Response Code signals that the search engine is aware of the change in the content or the new content. But it doesn’t necessarily guarantee indexing.

For instance, I used the same script for another website. After 120 seconds, Microsoft Bing reported that 31 results were found, spread over four pages.

The only problem is that the first page shows only two results, and it says that the URLs are blocked by robots.txt even though the blocking was removed before submission.

This can happen if the robots.txt was changed to unblock the URLs shortly before using the IndexNow API, because Bing does not seem to re-check the robots.txt right away.


Thus, if you previously blocked the URLs, Bing tries to index your website but still uses the previous version of the robots.txt file.

Bing results showing what happens if you use the IndexNow API while Bingbot is blocked via robots.txt.

On the second page, there is only one result:

Microsoft Bing might use a different indexation and pagination method than Google. The second page shows only one of the 31 results.

On the third page, there are no results; instead, Microsoft Bing Translate appears, offering to translate the string within the search bar.

Sometimes, Microsoft Bing interprets the “site” search operator as part of the query.

When I checked Google Analytics, it showed that Bing still hadn’t crawled or indexed the website. I confirmed this by checking the log files as well.

Google and Bing indexing processes.

Below, you will see the Bing Webmaster Tools report for the example website:

Bing Webmaster Tools Report

It says that I submitted 38 URLs.

The next step will involve the bulk request with the POST Method and a JSON object.

7. Perform An HTTP Post Request To The IndexNow API

To perform an HTTP post request to the IndexNow API for a set of URLs, a JSON object should be used with specific properties.

  • The host property represents the hostname of the website whose URLs are being submitted.
  • key represents the API Key.
  • keyLocation represents the location of the API Key’s txt file within the web server.
  • urlList represents the URL set that will be submitted to the IndexNow API.
  • Headers represent the POST request headers that will be used, which are “Content-type” and “charset.”

Since this is a POST request, the “requests.post” will be used instead of the “requests.get().”

Below, you will find an example of a set of URLs submitted to Microsoft Bing’s IndexNow API.

data = {
  "host": "www.bing.com",
  "key": "22bc7c564b334f38b0b1ed90eec8f2c5",
  "keyLocation": "https://www.example.com/22bc7c564b334f38b0b1ed90eec8f2c5.txt",
  "urlList": [
    'https://www.example.com/technical-seo/http-header/',
    'https://www.example.com/python-seo/nltk/lemmatize',
    'https://www.example.com/pagespeed/broser-hints/preload',
    'https://www.example.com/python-seo/nltk/stemming',
    'https://www.example.com/python-seo/categorize-queries/',
    'https://www.example.com/python-seo/nltk/tokenization',
    'https://www.example.com/review/oncrawl/',
    'https://www.example.com/technical-seo/hreflang/',
    'https://www.example.com/technical-seo/multilingual-seo/'
      ]
}
headers = {"Content-type":"application/json", "charset":"utf-8"}
r = requests.post("https://bing.com/", data=data, headers=headers)
r.status_code, r.content

In the example above, we have performed a POST Request to index a set of URLs.


We have used the “data” object for the “data” parameter of “requests.post,” and the headers object for the “headers” parameter.

Since we POST a JSON object, the request should include the “Content-type: application/json” header along with “charset: utf-8.”
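
As a side note, the requests library can also serialize the dictionary and set the JSON content-type header for you through the “json” parameter. A minimal equivalent sketch:

# Equivalent request: "json=data" serializes the payload and sets
# the "Content-Type: application/json" header automatically.
r = requests.post("https://www.bing.com/indexnow", json=data)
print(r.status_code, r.content)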

About 135 seconds after I made the POST request, my live log file analysis dashboard started to show hits from Bingbot.

Bingbot Log File Analysis

8. Create A Custom Function For The IndexNow API To Save Time

Creating a custom function for the IndexNow API is useful to decrease the time spent on code preparation.

Thus, I have created two different custom Python functions to use the IndexNow API for bulk requests and individual requests.

Below, you will find an example for only the bulk requests to the IndexNow API.


The custom function for bulk requests is called “submit_url_set.”

Just fill in the parameters, and the function can be used as-is.

def submit_url_set(set_: list, key, location, host="https://www.bing.com", headers={"Content-type": "application/json", "charset": "utf-8"}):
    # The "host" property of the payload is your own site's hostname,
    # which can be derived from the key location URL.
    data = {
        "host": location.split("/")[2],
        "key": key,
        "keyLocation": location,
        "urlList": set_
    }
    # POST the JSON payload to the search engine's IndexNow endpoint.
    r = requests.post(f"{host}/indexnow", data=json.dumps(data), headers=headers)
    return r.status_code

An explanation of this custom function:

  • The “set_” parameter provides a list of URLs.
  • The “key” parameter provides an IndexNow API Key.
  • The “location” parameter provides the location of the IndexNow API Key’s txt file within the web server.
  • The “host” parameter provides the search engine host address.
  • The “headers” parameter provides the headers that are necessary for the IndexNow API.

I have defined some of the parameters with default values such as “host” for Microsoft Bing. If you want to use it for Yandex, you will need to state it while calling the function.

Below is an example usage:

submit_url_set(set_=sitemap_urls["loc"].to_list(), key="22bc7c564b334f38b0b1ed90eec8f2c5", location="https://www.example.com/22bc7c564b334f38b0b1ed90eec8f2c5.txt")
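
To target Yandex instead of Microsoft Bing, you can override the “host” parameter (assuming “https://yandex.com” as the endpoint base; the key and key location below are placeholders):

submit_url_set(set_=sitemap_urls["loc"].to_list(), key="22bc7c564b334f38b0b1ed90eec8f2c5", location="https://www.example.com/22bc7c564b334f38b0b1ed90eec8f2c5.txt", host="https://yandex.com")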

If you want to extract sitemap URLs with a different method, or if you want to use the IndexNow API for a different URL set, you will need to change the “set_” parameter value.

Below, you will see an example of the Custom Python function for the IndexNow API for only individual requests.

def submit_url(url, location, key="22bc7c564b334f38b0b1ed90eec8f2c5"):
    # Submit every URL in the "url" list with its own individual GET request.
    for i in url:
        endpoint = f"https://www.bing.com/indexnow?url={i}&key={key}&keyLocation={location}"
        response = requests.get(endpoint)
        print(i)
        print(endpoint)
        print(response.status_code, response.content)
        #time.sleep(5)  # uncomment to wait between requests
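
A sketch of calling it with the sitemap URLs (the key location URL is a placeholder):

submit_url(sitemap_urls["loc"].to_list(), location="https://www.example.com/22bc7c564b334f38b0b1ed90eec8f2c5.txt")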

Since this uses a for loop, the URLs are submitted one by one, each with its own request. The search engine may prioritize these types of requests differently.

Because bulk requests may include unimportant URLs, individual requests might be seen as more reasonable by the search engine.

If you want to include the sitemap URL extraction within the function, you will need to call Advertools inside the functions themselves.

Tips For Using The IndexNow API With Python

An Overview of How The IndexNow API Works, Capabilities & Uses

  • The IndexNow API doesn’t guarantee that your website or the URLs that you submitted will be indexed.
  • You should only submit URLs that are new or for which the content has changed.
  • The IndexNow API impacts the crawl budget.
  • Microsoft Bing has a threshold for the URL Content Quality and Calculation of the Crawl Need for a URL. If the submitted URL is not good enough, they may not crawl it.
  • You can submit up to 10,000 URLs per request.
  • The IndexNow API suggests submitting URLs even if the website is small.
  • Submitting the same pages many times within a day can lead the search engine to stop crawling the redundant URLs or even the submitting source.
  • The IndexNow API is useful for sites where the content changes frequently, like every 10 minutes.
  • IndexNow API is useful for pages that are gone and are returning a 404 response code. It lets the search engine know that the URLs are gone.
  • IndexNow API can be used for notifying of new 301 or 302 redirects.
  • The 200 Status Response Code means that the search engine is aware of the submitted URL.
  • The 429 Status Code means that you made too many requests to the IndexNow API (see the retry sketch after this list).
  • If you put a “txt” file that contains the IndexNow API Key into a subfolder, the IndexNow API can be used only for that subfolder.
  • If you have two different CMSes, you can use two different IndexNow API keys for two different site sections.
  • Subdomains need to use a different IndexNow API key.
  • Even if you already use a sitemap, using IndexNow API is useful because it efficiently tells the search engines of website changes and reduces unnecessary bot crawling.
  • All search engines that adopt the IndexNow API (Microsoft Bing and Yandex) share the URLs that are submitted between each other.
An infographic of IndexNow API documentation and usage tips.
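
If you run into the 429 status code, a simple approach is to pause and retry. The sketch below uses the custom “submit_url_set” function from earlier and the already-imported time module; “url_batch”, “key”, and “location” are placeholders for your own values:

# Back off and retry when the API answers with 429 Too Many Requests.
status = submit_url_set(set_=url_batch, key=key, location=location)
while status == 429:
    time.sleep(60)  # wait a minute before retrying
    status = submit_url_set(set_=url_batch, key=key, location=location)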

In this IndexNow API tutorial with Python, we have examined a new search engine technology.

Instead of waiting to be crawled, publishers can notify the search engines to crawl when there is a need.

IndexNow reduces the use of search engine data center resources, and now you know how to use Python to make the process more efficient, too.

More resources:


An Introduction To Python & Machine Learning For Technical SEO

How to Use Python to Monitor & Measure Website Performance

Advanced Technical SEO: A Complete Guide


Featured Image: metamorworks/Shutterstock





How Compression Can Be Used To Detect Low Quality Pages

Compression can be used by search engines to detect low-quality pages. Although not widely known, it's useful foundational knowledge for SEO.

The concept of Compressibility as a quality signal is not widely known, but SEOs should be aware of it. Search engines can use web page compressibility to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords, making it useful knowledge for SEO.

Although the following research paper demonstrates a successful use of on-page features for detecting spam, the deliberate lack of transparency by search engines makes it difficult to say with certainty if search engines are applying this or similar techniques.

What Is Compressibility?

In computing, compressibility refers to how much a file (data) can be reduced in size while retaining essential information, typically to maximize storage space or to allow more data to be transmitted over the Internet.

TL;DR Of Compression

Compression replaces repeated words and phrases with shorter references, reducing the file size by significant margins. Search engines typically compress indexed web pages to maximize storage space, reduce bandwidth, and improve retrieval speed, among other reasons.

This is a simplified explanation of how compression works:

  • Identify Patterns:
    A compression algorithm scans the text to find repeated words, patterns, and phrases.
  • Shorter Codes Take Up Less Space:
    The codes and symbols use less storage space than the original words and phrases, which results in a smaller file size.
  • Shorter References Use Less Bits:
    The “code” that essentially symbolizes the replaced words and phrases uses less data than the originals.

A bonus effect of using compression is that it can also be used to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords.

Research Paper About Detecting Spam

This research paper is significant because it was authored by distinguished computer scientists known for breakthroughs in AI, distributed computing, information retrieval, and other fields.


Marc Najork

One of the co-authors of the research paper is Marc Najork, a prominent research scientist who currently holds the title of Distinguished Research Scientist at Google DeepMind. He’s a co-author of the papers for TW-BERT, has contributed research for increasing the accuracy of using implicit user feedback like clicks, and worked on creating improved AI-based information retrieval (DSI++: Updating Transformer Memory with New Documents), among many other major breakthroughs in information retrieval.

Dennis Fetterly

Another of the co-authors is Dennis Fetterly, currently a software engineer at Google. He is listed as a co-inventor in a patent for a ranking algorithm that uses links, and is known for his research in distributed computing and information retrieval.

Those are just two of the distinguished researchers listed as co-authors of the 2006 Microsoft research paper about identifying spam through on-page content features. Among the several on-page content features the research paper analyzes is compressibility, which they discovered can be used as a classifier for indicating that a web page is spammy.

Detecting Spam Web Pages Through Content Analysis

Although the research paper was authored in 2006, its findings remain relevant today.

Then, as now, people attempted to rank hundreds or thousands of location-based web pages that were essentially duplicate content aside from city, region, or state names. Then, as now, SEOs often created web pages for search engines by excessively repeating keywords within titles, meta descriptions, headings, internal anchor text, and within the content to improve rankings.

Section 4.6 of the research paper explains:


“Some search engines give higher weight to pages containing the query keywords several times. For example, for a given query term, a page that contains it ten times may be higher ranked than a page that contains it only once. To take advantage of such engines, some spam pages replicate their content several times in an attempt to rank higher.”

The research paper explains that search engines compress web pages and use the compressed version to reference the original web page. They note that excessive amounts of redundant words result in a higher level of compressibility. So they set about testing whether there’s a correlation between a high level of compressibility and spam.

They write:

“Our approach in this section to locating redundant content within a page is to compress the page; to save space and disk time, search engines often compress web pages after indexing them, but before adding them to a page cache.

…We measure the redundancy of web pages by the compression ratio, the size of the uncompressed page divided by the size of the compressed page. We used GZIP …to compress pages, a fast and effective compression algorithm.”
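
As an illustration of the metric (this is not the paper’s code), the compression ratio of a page’s HTML can be approximated in Python with the standard gzip module; the URL below is a placeholder:

import gzip
import requests

# Compression ratio = uncompressed size / compressed size.
html = requests.get("https://www.example.com/sample-page/").content
ratio = len(html) / len(gzip.compress(html))
print(round(ratio, 2))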

High Compressibility Correlates To Spam

The results of the research showed that web pages with at least a compression ratio of 4.0 tended to be low quality web pages, spam. However, the highest rates of compressibility became less consistent because there were fewer data points, making it harder to interpret.

Figure 9: Prevalence of spam relative to compressibility of page.

The researchers concluded:


“70% of all sampled pages with a compression ratio of at least 4.0 were judged to be spam.”

But they also discovered that using the compression ratio by itself still resulted in false positives, where non-spam pages were incorrectly identified as spam:

“The compression ratio heuristic described in Section 4.6 fared best, correctly identifying 660 (27.9%) of the spam pages in our collection, while misidentifying 2,068 (12.0%) of all judged pages.

Using all of the aforementioned features, the classification accuracy after the ten-fold cross validation process is encouraging:

95.4% of our judged pages were classified correctly, while 4.6% were classified incorrectly.

More specifically, for the spam class, 1,940 out of the 2,364 pages were classified correctly. For the non-spam class, 14,440 out of the 14,804 pages were classified correctly. Consequently, 788 pages were classified incorrectly.”

The next section describes an interesting discovery about how to increase the accuracy of using on-page signals for identifying spam.

Insight Into Quality Rankings

The research paper examined multiple on-page signals, including compressibility. They discovered that each individual signal (classifier) was able to find some spam, but that relying on any one signal on its own resulted in flagging non-spam pages as spam, which are commonly referred to as false positives.


The researchers made an important discovery that everyone interested in SEO should know, which is that using multiple classifiers increased the accuracy of detecting spam and decreased the likelihood of false positives. Just as important, the compressibility signal only identifies one kind of spam but not the full range of spam.

The takeaway is that compressibility is a good way to identify one kind of spam, but other kinds of spam aren’t caught with this one signal.

This is the part that every SEO and publisher should be aware of:

“In the previous section, we presented a number of heuristics for assaying spam web pages. That is, we measured several characteristics of web pages, and found ranges of those characteristics which correlated with a page being spam. Nevertheless, when used individually, no technique uncovers most of the spam in our data set without flagging many non-spam pages as spam.

For example, considering the compression ratio heuristic described in Section 4.6, one of our most promising methods, the average probability of spam for ratios of 4.2 and higher is 72%. But only about 1.5% of all pages fall in this range. This number is far below the 13.8% of spam pages that we identified in our data set.”

So, even though compressibility was one of the better signals for identifying spam, it still was unable to uncover the full range of spam within the dataset the researchers used to test the signals.

Combining Multiple Signals

The above results indicated that individual signals of low quality are less accurate. So they tested using multiple signals. What they discovered was that combining multiple on-page signals for detecting spam resulted in a better accuracy rate with fewer pages misclassified as spam.


The researchers explained that they tested the use of multiple signals:

“One way of combining our heuristic methods is to view the spam detection problem as a classification problem. In this case, we want to create a classification model (or classifier) which, given a web page, will use the page’s features jointly in order to (correctly, we hope) classify it in one of two classes: spam and non-spam.”
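
As a toy illustration of that idea, and not the paper’s actual method, the sketch below combines several on-page signals into a single classifier using scikit-learn’s decision tree (CART rather than the C4.5 algorithm used in the paper), with made-up feature values:

from sklearn.tree import DecisionTreeClassifier

# Each row combines several hypothetical on-page signals for one page,
# e.g. [compression_ratio, title_keyword_repetitions, fraction_of_visible_text].
X = [
    [4.3, 12, 0.21],  # spammy-looking page
    [1.8, 1, 0.62],   # normal page
    [5.1, 20, 0.15],
    [2.0, 2, 0.70],
]
y = [1, 0, 1, 0]  # 1 = spam, 0 = non-spam (made-up labels)

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[4.6, 15, 0.2]]))  # classify a new page from its combined signals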

These are their conclusions about using multiple signals:

“We have studied various aspects of content-based spam on the web using a real-world data set from the MSNSearch crawler. We have presented a number of heuristic methods for detecting content based spam. Some of our spam detection methods are more effective than others, however when used in isolation our methods may not identify all of the spam pages. For this reason, we combined our spam-detection methods to create a highly accurate C4.5 classifier. Our classifier can correctly identify 86.2% of all spam pages, while flagging very few legitimate pages as spam.”

Key Insight:

Misidentifying “very few legitimate pages as spam” was a significant breakthrough. The important insight that everyone involved with SEO should take away from this is that one signal by itself can result in false positives. Using multiple signals increases the accuracy.

What this means is that SEO tests of isolated ranking or quality signals will not yield reliable results that can be trusted for making strategy or business decisions.

Takeaways

We don’t know for certain if compressibility is used by search engines, but it’s an easy-to-use signal that, combined with others, could be used to catch simple kinds of spam like thousands of city-name doorway pages with similar content. Yet even if search engines don’t use this signal, it shows how easy it is to catch that kind of search engine manipulation, and that it’s something search engines are well able to handle today.

Here are the key points of this article to keep in mind:

  • Doorway pages with duplicate content are easy to catch because they compress at a higher ratio than normal web pages.
  • Groups of web pages with a compression ratio above 4.0 were predominantly spam.
  • Negative quality signals used by themselves to catch spam can lead to false positives.
  • In this particular test, they discovered that on-page negative quality signals only catch specific types of spam.
  • When used alone, the compressibility signal only catches redundancy-type spam, fails to detect other forms of spam, and leads to false positives.
  • Combining quality signals improves spam detection accuracy and reduces false positives.
  • Search engines today have a higher accuracy of spam detection with the use of AI like Spam Brain.

Read the research paper, which is linked from the Google Scholar page of Marc Najork:

Detecting spam web pages through content analysis

Featured Image by Shutterstock/pathdoc


New Google Trends SEO Documentation

Google publishes new documentation for how to use Google Trends for search marketing

Google Search Central published new documentation on Google Trends, explaining how to use it for search marketing. This guide serves as an easy to understand introduction for newcomers and a helpful refresher for experienced search marketers and publishers.

The new guide has six sections:

  1. About Google Trends
  2. Tutorial on monitoring trends
  3. How to do keyword research with the tool
  4. How to prioritize content with Trends data
  5. How to use Google Trends for competitor research
  6. How to use Google Trends for analyzing brand awareness and sentiment

The section about monitoring trends explains that there are two kinds of rising trends, general and specific, which can be useful for developing content to publish on a site.

Using the Explore tool, you can leave the search box empty and view the current rising trends worldwide or use a drop down menu to focus on trends in a specific country. Users can further filter rising trends by time periods, categories and the type of search. The results show rising trends by topic and by keywords.

To search for specific trends, users just need to enter the queries and then filter them by country, time, categories, and type of search.

The section called Content Calendar describes how to use Google Trends to understand which content topics to prioritize.


Google explains:

“Google Trends can be helpful not only to get ideas on what to write, but also to prioritize when to publish it. To help you better prioritize which topics to focus on, try to find seasonal trends in the data. With that information, you can plan ahead to have high quality content available on your site a little before people are searching for it, so that when they do, your content is ready for them.”

Read the new Google Trends documentation:

Get started with Google Trends

Featured Image by Shutterstock/Luis Molinero


All the best things about Ahrefs Evolve 2024

Hey all, I’m Rebekah and I am your Chosen One to “do a blog post for Ahrefs Evolve 2024”.

What does that entail exactly? I don’t know. In fact, Sam Oh asked me yesterday what the title of this post would be. “Is it like…Ahrefs Evolve 2024: Recap of day 1 and day 2…?” 

Even as I nodded, I couldn’t get over how absolutely boring that sounded. So I’m going to do THIS instead: a curation of all the best things YOU loved about Ahrefs’ first conference, lifted directly from X.

Let’s go!

OUR HUGE SCREEN

CONFERENCE VENUE ITSELF

It was recently named the best new skyscraper in the world, by the way.

 

OUR AMAZING SPEAKER LINEUP – SUPER INFORMATIVE, USEFUL TALKS!

 


GREAT MUSIC

 

AMAZING GOODIES

 

SELFIE BATTLE

Some background: Tim and Sam have a challenge going on to see who can take the most number of selfies with all of you. Last I heard, Sam was winning – but there is room for a comeback yet!

 

THAT BELL

Everybody’s just waiting for this one.

 

STICKER WALL

AND, OF COURSE…ALL OF YOU!

 


There’s a TON more content on LinkedIn – click here – but I have limited time to get this post up and can’t quite figure out how to embed LinkedIn posts so…let’s stop here for now. I’ll keep updating as we go along!


