Crawl Me Maybe? How Website Crawlers Work

You might have heard of website crawling before — you may even have a vague idea of what it’s about — but do you know why it’s important, or what differentiates it from web crawling? (yes, there is a difference!) 

Search engines are increasingly ruthless when it comes to the quality of the sites they allow into the search results.

If you don’t grasp the basics of optimizing for web crawlers (and eventual users), your organic traffic may well pay the price.

A good website crawler can show you how to protect and even enhance your site’s visibility.

Here’s what you need to know about both web crawlers and site crawlers.

A web crawler is a software program or script that automatically scours the internet, analyzing and indexing web pages.

Also known as web spiders or spiderbots, web crawlers assess a page’s content to decide how to prioritize it in their indexes.

Googlebot, Google’s web crawler, meticulously browses the web, following links from page to page, gathering data, and processing content for inclusion in Google’s search engine.

How do web crawlers impact SEO?

Web crawlers analyze your page and decide how indexable or rankable it is, which ultimately determines your ability to drive organic traffic.

If you want to be discovered in search results, then it’s important you ready your content for crawling and indexing.

Did you know?

AhrefsBot is a web crawler that:

  • Visits over 8 billion web pages every 24 hours
  • Updates every 15–30 minutes
  • Is the #1 most active SEO crawler (and 4th most active crawler worldwide)

There are roughly seven stages to web crawling:

1. URL Discovery

When you publish your page (e.g. to your sitemap), the web crawler discovers it and uses it as a ‘seed’ URL. Just like seeds in the cycle of germination, these starter URLs allow the crawl and subsequent crawling loops to begin.

2. Crawling

After URL discovery, your page is scheduled and then crawled. Content like meta tags, images, links, and structured data is downloaded to the search engine’s servers, where it awaits parsing and indexing.

3. Parsing

Parsing essentially means analysis. The crawler bot extracts the data it’s just crawled to determine how to index and rank the page.

3a. The URL Discovery Loop

Also during the parsing phase, but worthy of its own subsection, is the URL discovery loop. This is when newly discovered links (including links discovered via redirects) are added to a queue of URLs for the crawler to visit. These are effectively new ‘seed’ URLs, and steps 1–3 get repeated as part of the ‘URL discovery loop’.

4. Indexing

While new URLs are being discovered, the original URL gets indexed. Indexing is when search engines store the data collected from web pages. It enables them to quickly retrieve relevant results for user queries.

5. Ranking

Indexed pages get ranked in search engines based on quality, relevance to search queries, and other ranking factors. These pages are then served to users when they perform a search.

6. Crawl ends

Eventually the entire crawl (including the URL discovery loop) ends based on factors like time allocated, number of pages crawled, depth of links followed, etc.

7. Revisiting

Crawlers periodically revisit the page to check for updates, new content, or changes in structure.
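The discovery–crawl–parse loop above can be sketched in miniature. The following is a hypothetical toy crawler using only Python’s standard library, with the network fetch injected as a function so the loop logic stays visible; real crawlers add scheduling, politeness rules, and robots.txt handling:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Parsing stage: extract href links from downloaded HTML."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl(seed_urls, fetch, max_pages=100):
    """URL discovery loop: start from seed URLs, crawl each page,
    parse out new links, and queue them for crawling in turn.
    `fetch(url)` returns a page's HTML (network code omitted)."""
    queue = deque(seed_urls)           # 1. URL discovery (seeds)
    seen = set(seed_urls)
    index = {}                         # 4. a stand-in for the search index
    while queue and len(index) < max_pages:
        url = queue.popleft()
        html = fetch(url)              # 2. crawling (download)
        parser = LinkParser(url)       # 3. parsing
        parser.feed(html)
        index[url] = html              # 4. indexing
        for link in parser.links:      # 3a. the URL discovery loop
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index                       # 6. crawl ends (page budget hit)
```

The `max_pages` budget mirrors how real crawls end based on time allocated or pages crawled, and the growing `queue` is why the number of discovered URLs explodes within a few hops.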

Graphic showing a 7-step flow diagram of how web crawlers work

As you can probably guess, the number of URLs discovered and crawled in this process grows exponentially in just a few hops.

A graphic visualizing website crawlers following links exponentially

Search engine web crawlers are autonomous, meaning you can’t trigger them to crawl or switch them on/off at will.

You can, however, notify crawlers of site updates via:

XML sitemaps

An XML sitemap is a file that lists all the important pages on your website to help search engines accurately discover and index your content.
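A minimal sitemap follows the sitemaps.org protocol. For example (the URLs and dates below are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-08-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/new-post</loc>
    <lastmod>2024-08-15</lastmod>
  </url>
</urlset>
```

You can point crawlers at the file by referencing it in your robots.txt (a `Sitemap:` line) or by submitting it in Google Search Console.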

Google’s URL inspection tool

You can ask Google to consider recrawling your site content via its URL inspection tool in Google Search Console. You may get a message in GSC if Google knows about your URL but hasn’t yet crawled or indexed it. If so, find out how to fix “Discovered — currently not indexed”.

IndexNow

Instead of waiting for bots to re-crawl and index your content, you can use IndexNow to automatically ping search engines like Bing, Yandex, Naver, Seznam.cz, and Yep, whenever you:

  • Add new pages
  • Update existing content
  • Remove outdated pages
  • Implement redirects

You can set up automatic IndexNow submissions via Ahrefs Site Audit.

Screenshot of IndexNow API key in Ahrefs Site Audit
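Under the hood, an IndexNow submission is just an HTTP request. Here is a minimal sketch of the JSON body described by the public IndexNow protocol; the domain and key below are placeholders, and the actual POST is left as a comment:

```python
import json

# Shared endpoint defined by the IndexNow protocol; participating
# engines (Bing, Yandex, etc.) also expose their own endpoints.
INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def build_indexnow_payload(host, key, urls):
    """Build the JSON body for a bulk IndexNow submission.
    `key` is an API key you prove ownership of by hosting it as a
    text file on your own domain."""
    return {"host": host, "key": key, "urlList": list(urls)}

payload = build_indexnow_payload(
    "www.example.com",                        # placeholder domain
    "0123456789abcdef0123456789abcdef",       # placeholder key
    ["https://www.example.com/new-page"],
)
body = json.dumps(payload)
# POST `body` to INDEXNOW_ENDPOINT with Content-Type: application/json
```

A single ping notifies every participating engine, which is why it is faster than waiting for each bot to recrawl you on its own schedule.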

Search engine crawling decisions are dynamic and a little obscure.

Although we don’t know the definitive criteria Google uses to determine when or how often to crawl content, we’ve deduced three of the most important areas.

This is based on breadcrumbs dropped by Google, both in support documentation and during rep interviews.

1. Prioritize quality

Google PageRank evaluates the number and quality of links to a page, considering them as “votes” of importance.

Pages earning quality links are deemed more important and are ranked higher in search results.

PageRank is a foundational part of Google’s algorithm. It makes sense then that the quality of your links and content plays a big part in how your site is crawled and indexed.
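The “votes” idea can be illustrated with the classic iterative PageRank computation. This is the textbook simplification, not Google’s production algorithm:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the pages it links to.
    Returns an importance score per page; inbound links act as votes,
    and votes from important pages count for more."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    n = len(pages)
    ranks = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                # A page passes its rank evenly to everything it links to
                share = damping * ranks[page] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # Dangling page: spread its rank evenly across all pages
                for t in pages:
                    new[t] += damping * ranks[page] / n
        ranks = new
    return ranks
```

Run it on a tiny graph where every page links to the homepage, and the homepage ends up with the highest score, which is the intuition behind earning quality links.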

To judge your site’s quality, Google looks at a range of factors, with the quality of the links pointing to your pages chief among them.

To assess the pages on your site with the most links, check out the Best by Links report.

Pay attention to the “First seen” and “Last check” columns, which reveal which pages have been crawled most often, and when.

Ahrefs Best by Links report highlighting the “First seen” and “Last check” columns

2. Keep things fresh

According to Google’s Senior Search Analyst, John Mueller:

“Search engines recrawl URLs at different rates, sometimes it’s multiple times a day, sometimes it’s once every few months.”

But if you regularly update your content, you’ll see crawlers dropping by more often.

Search engines like Google want to deliver accurate and up-to-date information to remain competitive and relevant, so updating your content is like dangling a carrot in front of them.

You can examine just how quickly Google processes your updates by checking your crawl stats in Google Search Console.

While you’re there, look at the breakdown of crawling “By purpose” (i.e. percent split of pages refreshed vs pages newly discovered). This will also help you work out just how often you’re encouraging web crawlers to revisit your site.


To find specific pages that need updating on your site, head to the Top Pages report in Ahrefs Site Explorer, then:

  1. Set the traffic filter to “Declined”
  2. Set the comparison date to the last year or two
  3. Look at Content Changes status and update pages with only minor changes
3-part process of updating pages based on content changes in Ahrefs

Top Pages shows you the content on your site driving the most organic traffic. Pushing updates to these pages will encourage crawlers to visit your best content more often, and (hopefully) boost any declining traffic.

3. Refine your site structure

Offering a clear site structure via a logical sitemap, and backing that up with relevant internal links will help crawlers:

  • Better navigate your site
  • Understand its hierarchy
  • Index and rank your most valuable content

Combined, these factors will also please users, since they support easy navigation, reduced bounce rates, and increased engagement.

Below are some more elements that can potentially influence how your site gets discovered and prioritized in crawling:

Graphic showing the factors that can affect web crawl discoverability

Web crawlers like Googlebot crawl the entire internet, and you can’t control which sites they visit, or how often.

But you can use website crawlers, which are like your own private bots.

Ask them to crawl your website to find and fix important SEO problems, or study your competitors’ site, turning their biggest weaknesses into your opportunities.

Site crawlers essentially simulate search performance. They help you understand how a search engine’s web crawlers might interpret your pages, based on their:

  • Structure
  • Content
  • Meta data
  • Page load speed
  • Errors
  • Etc

Example: Ahrefs Site Audit

The Ahrefs Site Audit crawler powers several tools: RankTracker, Projects, and Ahrefs’ main website crawling tool, Site Audit.

Site Audit helps SEOs to:

  • Analyze 170+ technical SEO issues
  • Conduct on-demand crawls, with live site performance data
  • Assess up to 170k URLs a minute
  • Troubleshoot, maintain, and improve their visibility in search engines

From URL discovery to revisiting, website crawlers operate very similarly to web crawlers – only instead of indexing and ranking your page in the SERPs, they store and analyze it in their own database.

You can crawl your site either locally or remotely. Desktop crawlers like ScreamingFrog let you download and customize your site crawl, while cloud-based tools like Ahrefs Site Audit perform the crawl without using your computer’s resources – helping you work collaboratively on fixes and site optimization.

If you want to scan entire websites in real time to detect technical SEO problems, configure a crawl in Site Audit.

It will give you visual data breakdowns, site health scores, and detailed fix recommendations to help you understand how a search engine interprets your site.

1. Set up your crawl

Navigate to the Site Audit tab and choose an existing project, or set one up.

Screenshot of the import/add project page in Ahrefs Site Audit

A project is any domain, subdomain, or URL you want to track over time.

Once you’ve configured your crawl settings – including your crawl schedule and URL sources – you can start your audit and you’ll be notified as soon as it’s complete.

Here are some things you can do right away.

2. Diagnose top errors

The Top Issues overview in Site Audit shows you your most pressing errors, warnings, and notices, based on the number of URLs affected.


Working through these as part of your SEO roadmap will help you:

1. Spot errors (red icons) impacting crawling – e.g.

  • HTTP status code/client errors
  • Broken links
  • Canonical issues

2. Optimize your content and rankings based on warnings (yellow) – e.g.

  • Missing alt text
  • Links to redirects
  • Overly long meta descriptions

3. Maintain steady visibility with notices (blue icon) – e.g.

  • Organic traffic drops
  • Multiple H1s
  • Indexable pages not in sitemap

Filter issues

You can also prioritize fixes using filters.

Say you have thousands of pages with missing meta descriptions. Make the task more manageable and impactful by targeting high traffic pages first.

  1. Head to the Page Explorer report in Site Audit
  2. Select the advanced filter dropdown
  3. Set an internal pages filter
  4. Select an ‘And’ operator
  5. Select ‘Meta description’ and ‘Not exists’
  6. Select ‘Organic traffic > 100’
Screenshot of how to find pages with missing meta descriptions and over 100 organic traffic in Ahrefs Page Explorer
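Outside of Site Audit, the underlying check is easy to script. A minimal sketch with Python’s standard-library HTML parser (fetching the pages is omitted):

```python
from html.parser import HTMLParser

class MetaDescriptionChecker(HTMLParser):
    """Looks for <meta name="description" content="..."> in a page."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "description":
                self.description = attrs.get("content")

def missing_meta_description(html):
    """True if the page declares no meta description (or an empty one)."""
    checker = MetaDescriptionChecker()
    checker.feed(html)
    return not checker.description
```

Run this over the HTML of your high-traffic URLs first and you get the same prioritized worklist the filters above produce.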

Crawl the most important parts of your site

Segment and zero in on the most important pages on your site (e.g. subfolders or subdomains) using Site Audit’s 200+ filters – whether that’s your blog, ecommerce store, or even pages that earn over a certain traffic threshold.

Screenshot of Ahrefs Site Audit pointing out the configure segment option

3. Expedite fixes

If you don’t have coding experience, then the prospect of crawling your site and implementing fixes can be intimidating.

If you do have dev support, issues are easier to remedy, but then it becomes a matter of bargaining for another person’s time.

We’ve got a new feature on the way to help you solve these kinds of headaches.

Patches, a feature coming soon, are fixes you can make autonomously in Site Audit.

Screenshot of Ahrefs Patches tool calling out the Patch It feature

Title changes, missing meta descriptions, site-wide broken links – when you face these kinds of errors you can hit “Patch it” to publish a fix directly to your website, without having to pester a dev.

And if you’re unsure of anything, you can roll back your patches at any point.

Screenshot of Ahrefs Patches tool calling out drafts, published, and unpublished statuses

4. Spot optimization opportunities

Auditing your site with a website crawler is as much about spotting opportunities as it is about fixing bugs.

Improve internal linking

The Internal Link Opportunities report in Site Audit shows you relevant internal linking suggestions, by taking the top 10 keywords (by traffic) for each crawled page, then looking for mentions of them on your other crawled pages.

‘Source’ pages are the ones you should link from, and ‘Target’ pages are the ones you should link to.

Screenshot of Internal Link Opportunities report in Ahrefs Site Audit highlighting a source page and target page
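The matching logic described above can be sketched in a few lines. This is a simplified illustration of the idea, not Ahrefs’ actual implementation:

```python
def internal_link_opportunities(top_keywords, page_texts):
    """top_keywords: {page_url: list of its top keywords by traffic}
    page_texts:   {page_url: visible text of the crawled page}
    Returns (source_page, keyword, target_page) suggestions: the source
    mentions a keyword the target ranks for, so it could link to it."""
    suggestions = []
    for target, keywords in top_keywords.items():
        for keyword in keywords:
            for source, text in page_texts.items():
                if source != target and keyword.lower() in text.lower():
                    suggestions.append((source, keyword, target))
    return suggestions
```

Each suggestion reads as: add a link on the source page, anchored on the keyword, pointing at the target page that already ranks for it.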

The more high quality connections you make between your content, the easier it will be for Googlebot to crawl your site.

Final thoughts

Understanding website crawling is more than just an SEO hack – it’s foundational knowledge that directly impacts your traffic and ROI.

Knowing how crawlers work means knowing how search engines “see” your site, and that’s half the battle when it comes to ranking.


Google’s “Information Gain” Patent For Ranking Web Pages

Google was recently granted a patent on an information gain score for ranking web pages

Google was recently granted a patent on ranking web pages, which may offer insights into how AI Overviews ranks content. The patent describes a method for ranking pages based on what a user might be interested in next.

Contextual Estimation Of Link Information Gain

The patent, named Contextual Estimation Of Link Information Gain, was filed in 2018 and granted in June 2024. It’s about calculating a ranking score called Information Gain that is used to rank a second set of web pages that are likely to be of interest to a user as a slightly different follow-up topic related to a previous question.

The patent starts with general descriptions, then adds layers of specifics over the course of paragraphs. An analogy: it’s like a pizza. It starts out as a mozzarella pizza, then they add mushrooms, so now it’s a mushroom pizza. Then they add onions, so now it’s a mushroom and onion pizza. There are layers of specifics that build up to the entire context.

So if you read just one section of it, it’s easy to say, “It’s clearly a mushroom pizza” and be completely mistaken about what it really is.

There are layers of context but what it’s building up to is:

  • Ranking a web page that is relevant for what a user might be interested in next.
  • The context of the invention is an automated assistant or chatbot
  • A search engine plays a role in a way that seems similar to Google’s AI Overviews

Information Gain And SEO: What’s Really Going On?

A couple of months ago I read a comment on social media asserting that “Information Gain” was a significant factor in a recent Google core algorithm update.  That mention surprised me because I’d never heard of information gain before. I asked some SEO friends about it and they’d never heard of it either.

What the person on social media had asserted was something like Google was using an “Information Gain” score to boost the ranking of web pages that had more information than other web pages. So the idea was that it was important to create pages that have more information than other pages, something along those lines.

So I read the patent and discovered that “Information Gain” is not about ranking pages with more information than other pages. It’s really about something that is more profound for SEO because it might help to understand one dimension of how AI Overviews might rank web pages.

TL;DR Of The Information Gain Patent

What the information gain patent is really about is even more interesting because it may give an indication of how AI Overviews (AIO) ranks web pages that a user might be interested in next. It’s sort of like introducing personalization by anticipating what a user will be interested in next.

The patent describes a scenario where a user makes a search query and the automated assistant or chatbot provides an answer that’s relevant to the question. The information gain scoring system works in the background to rank a second set of web pages that are relevant to what the user might be interested in next. It’s a new dimension in how web pages are ranked.

The Patent’s Emphasis on Automated Assistants

There are multiple versions of the Information Gain patent dating from 2018 to 2024. The first version is similar to the last version with the most significant difference being the addition of chatbots as a context for where the information gain invention is used.

The patent uses the phrase “automated assistant” 69 times and uses the phrase “search engine” only 25 times.  Like with AI Overviews, search engines do play a role in this patent but it’s generally in the context of automated assistants.

As will become evident, there is nothing to suggest that a web page containing more information than the competition is likelier to be ranked higher in the organic search results. That’s not what this patent talks about.

General Description Of Context

All versions of the patent describe the presentation of search results within the context of an automated assistant and natural language question answering. The patent starts with a general description and progressively becomes more specific. This is a feature of patents in that they apply for protection for the widest contexts in which the invention can be used and become progressively specific.

The entire first section (the Abstract) doesn’t even mention web pages or links. It’s just about the information gain score within a very general context:

“An information gain score for a given document is indicative of additional information that is included in the document beyond information contained in documents that were previously viewed by the user.”

That is a nutshell description of the patent, with the key insight being that the information gain scoring happens on pages after the user has seen the first search results.

More Specific Context: Automated Assistants

The second paragraph in the section titled “Background” is slightly more specific and adds an additional layer of context for the invention because it mentions  links. Specifically, it’s about a user that makes a search query and receives links to search results – no information gain score calculated yet.

The Background section says:

“For example, a user may submit a search request and be provided with a set of documents and/or links to documents that are responsive to the submitted search request.”

The next part builds on top of a user having made a search query:

“Also, for example, a user may be provided with a document based on identified interests of the user, previously viewed documents of the user, and/or other criteria that may be utilized to identify and provide a document of interest. Information from the documents may be provided via, for example, an automated assistant and/or as results to a search engine. Further, information from the documents may be provided to the user in response to a search request and/or may be automatically served to the user based on continued searching after the user has ended a search session.”

That last sentence is poorly worded.

Here’s the original sentence:

“Further, information from the documents may be provided to the user in response to a search request and/or may be automatically served to the user based on continued searching after the user has ended a search session.”

Here’s how it makes more sense:

“Further, information from the documents may be provided to the user… based on continued searching after the user has ended a search session.”

The information provided to the user is “in response to a search request and/or may be automatically served to the user”

It’s a little clearer if you put parentheses around it:

Further, information from the documents may be provided to the user (in response to a search request and/or may be automatically served to the user) based on continued searching after the user has ended a search session.

Takeaways:

  • The patent describes identifying documents that are relevant to the “interests of the user” based on “previously viewed documents” “and/or other criteria.”
  • It sets a general context of an automated assistant “and/or” a search engine
  • Information from the documents that are based on “previously viewed documents” “and/or other criteria” may be shown after the user continues searching.

More Specific Context: Chatbot

The patent next adds an additional layer of context and specificity by mentioning how chatbots can “extract” an answer from a web page (“document”) and show that as an answer. This is about showing a summary that contains the answer, kind of like featured snippets, but within the context of a chatbot.

The patent explains:

“In some cases, a subset of information may be extracted from the document for presentation to the user. For example, when a user engages in a spoken human-to-computer dialog with an automated assistant software process (also referred to as “chatbots,” “interactive personal assistants,” “intelligent personal assistants,” “personal voice assistants,” “conversational agents,” “virtual assistants,” etc.), the automated assistant may perform various types of processing to extract salient information from a document, so that the automated assistant can present the information in an abbreviated form.

As another example, some search engines will provide summary information from one or more responsive and/or relevant documents, in addition to or instead of links to responsive and/or relevant documents, in response to a user’s search query.”

The last sentence sounds like it’s describing something that’s like a featured snippet or like AI Overviews where it provides a summary. The sentence is very general and ambiguous because it uses “and/or” and “in addition to or instead of” and isn’t as specific as the preceding sentences. It’s an example of a patent being general for legal reasons.

Ranking The Next Set Of Search Results

The next section is called the Summary and it goes into more details about how the Information Gain score represents how likely the user will be interested in the next set of documents. It’s not about ranking search results, it’s about ranking the next set of search results (based on a related topic).

It states:

“An information gain score for a given document is indicative of additional information that is included in the given document beyond information contained in other documents that were already presented to the user.”

Ranking Based On Topic Of Web Pages

It then talks about presenting the web page in a browser, audibly reading the relevant part of the document or audibly/visually presenting a summary of the document (“audibly/visually presenting salient information extracted from the document to the user, etc.”)

But the part that’s really interesting is when it next explains using a topic of the web page as a representation of the content, which is used to calculate the information gain score.

It describes many different ways of extracting a representation of what the page is about. But what’s important is that it describes calculating the Information Gain score based on a representation of what the content is about, like the topic.

“In some implementations, information gain scores may be determined for one or more documents by applying data indicative of the documents, such as their entire contents, salient extracted information, a semantic representation (e.g., an embedding, a feature vector, a bag-of-words representation, a histogram generated from words/phrases in the document, etc.) across a machine learning model to generate an information gain score.”

The patent goes on to describe ranking a first set of documents and using the Information Gain scores to rank additional sets of documents that anticipate follow up questions or a progression within a dialog of what the user is interested in.

The automated assistant can in some implementations query a search engine and then apply the Information Gain rankings to the multiple sets of search results (that are relevant to related search queries).

There are multiple variations of doing the same thing but in general terms this is what it describes:

“Based on the information gain scores, information contained in one or more of the new documents may be selectively provided to the user in a manner that reflects the likely information gain that can be attained by the user if the user were to be presented information from the selected documents.”
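As a rough illustration of that scoring idea, here is a toy version using one of the representations the patent itself lists (a bag-of-words). This is a speculative sketch of the concept only, not Google’s actual model:

```python
from collections import Counter
import math

def bag_of_words(text):
    """One simple semantic representation the patent mentions."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Word-overlap similarity between two bag-of-words vectors."""
    dot = sum(count * b[word] for word, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def information_gain_score(candidate, viewed_docs):
    """Score how much *new* information a candidate document adds
    beyond documents the user has already seen: the more it overlaps
    with any previously viewed document, the lower its gain."""
    if not viewed_docs:
        return 1.0
    vec = bag_of_words(candidate)
    max_overlap = max(cosine(vec, bag_of_words(d)) for d in viewed_docs)
    return 1.0 - max_overlap
```

Ranking a second set of documents by this score surfaces the pages that repeat the least of what the user has already been shown, which is the behavior the quoted passage describes.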

What All Versions Of The Patent Have In Common

All versions of the patent share general similarities over which more specifics are layered in over time (like adding onions to a mushroom pizza). The following are the baseline of what all the versions have in common.

Application Of Information Gain Score

All versions of the patent describe applying the information gain score to a second set of documents that have additional information beyond the first set of documents. Obviously, there are no criteria or information to guess what the user is going to search for when they start a search session. So information gain scores are not applied to the first search results.

Examples of passages that are the same for all versions:

  • A second set of documents is identified that is also related to the topic of the first set of documents but that have not yet been viewed by the user.
  • For each new document in the second set of documents, an information gain score is determined that is indicative of, for the new document, whether the new document includes information that was not contained in the documents of the first set of documents…

Automated Assistants

All four versions of the patent refer to automated assistants that show search results in response to natural language queries.

The 2018 and 2023 versions of the patent both mention search engines 25 times. The 2018 version mentions “automated assistant” 74 times and the latest version mentions it 69 times.

They all make references to “conversational agents,” “interactive personal assistants,” “intelligent personal assistants,” “personal voice assistants,” and “virtual assistants.”

It’s clear that the emphasis of the patent is on automated assistants, not the organic search results.

Dialog Turns

Note: In everyday language we use the word dialogue. In computing it’s spelled dialog.

All versions of the patent refer to a way of interacting with the system in the form of a dialog, specifically a dialog turn. A dialog turn is the back and forth that happens when a user asks a question using natural language, receives an answer, and then asks a follow-up question or another question altogether. This can happen via typed text, text-to-speech (TTS), or spoken audio.

The main aspect the patents have in common is the back and forth in what is called a “dialog turn.” All versions of the patent have this as a context.

Here’s an example of how the dialog turn works:

“Automated assistant client 106 and remote automated assistant 115 can process natural language input of a user and provide responses in the form of a dialog that includes one or more dialog turns. A dialog turn may include, for instance, user-provided natural language input and a response to natural language input by the automated assistant.

Thus, a dialog between the user and the automated assistant can be generated that allows the user to interact with the automated assistant …in a conversational manner.”

Problems That Information Gain Scores Solve

The main feature of the patent is to improve the user experience by understanding the additional value that a new document provides compared to documents that a user has already seen. This additional value is what is meant by the phrase Information Gain.

There are multiple ways that information gain is useful, and one that all versions of the patent describe is in the context of an audio response, where a long-winded answer is a problem, including in a TTS (text-to-speech) setting.

The patent explains the problem of a long-winded response:

“…and so the user may wait for substantially all of the response to be output before proceeding. In comparison with reading, the user is able to receive the audio information passively, however, the time taken to output is longer and there is a reduced ability to scan or scroll/skip through the information.”

The patent then explains how information gain can speed up answers by eliminating redundant (repetitive) answers or if the answer isn’t enough and forces the user into another dialog turn.

This part of the patent refers to the information density of a section in a web page, a section that answers the question with the least amount of words. Information density is about how “accurate,” “concise,” and “relevant” the answer is, which matters for relevance and for avoiding repetitiveness. Information density is especially important for audio/spoken answers.

This is what the patent says:

“As such, it is important in the context of an audio output that the output information is relevant, accurate and concise, in order to avoid an unnecessarily long output, a redundant output, or an extra dialog turn.

The information density of the output information becomes particularly important in improving the efficiency of a dialog session. Techniques described herein address these issues by reducing and/or eliminating presentation of information a user has already been provided, including in the audio human-to-computer dialog context.”

The idea of “information density” is important in a general sense because dense content communicates better for users, but it’s probably extra important in the context of being shown in chatbot search results, whether spoken or not. Google AI Overviews shows snippets from a web page, but perhaps more importantly, communicating in a concise manner is the best way to stay on topic and make it easy for a search engine to understand content.

Search Results Interface

All versions of the Information Gain patent are clear that the invention is not in the context of organic search results. It’s explicitly within the context of ranking web pages within a natural language interface of an automated assistant and an AI chatbot.

However, there is a part of the patent that describes a way of showing users the second set of results within a “search results interface.” The scenario is that the user sees an answer and then is interested in a related topic. The second set of ranked web pages is shown in a “search results interface.”

The patent explains:

“In some implementations, one or more of the new documents of the second set may be presented in a manner that is selected based on the information gain scores. For example, one or more of the new documents can be rendered as part of a search results interface that is presented to the user in response to a query that includes the topic of the documents, such as references to one or more documents. In some implementations, these search results may be ranked at least in part based on their respective information gain scores.”

“…The user can then select one of the references and information contained in the particular document can be presented to the user. Subsequently, the user may return to the search results and the references to the document may again be provided to the user but updated based on new information gain scores for the documents that are referenced.

In some implementations, the references may be reranked and/or one or more documents may be excluded (or significantly demoted) from the search results based on the new information gain scores that were determined based on the document that was already viewed by the user.”

What is a search results interface? I think it’s just an interface that shows search results.

Let’s pause here to underline that the patent is not about ranking web pages that are comprehensive about a topic. The overall context of the invention is showing documents within an automated assistant.

A search results interface is just an interface; it is never described as being the organic search results.

There’s more that is shared across all versions of the patent, but the above covers the important general outline and context.

Claims Of The Patent

The claims section is where the scope of the actual invention is described and for which legal protection is sought. It is mainly focused on the invention and less so on the context. Thus, there is no mention of search engines, automated assistants, audible responses, or TTS (text-to-speech) within the claims section. What remains is the context of a search results interface, which presumably covers all of the contexts.

Context: First Set Of Documents

It starts out by outlining the context of the invention: receiving a query, identifying the topic, ranking a first group of relevant web pages (documents), selecting at least one of them as most relevant, and either showing the document or communicating the information from it (like a summary).

“1. A method implemented using one or more processors, comprising: receiving a query from a user, wherein the query includes a topic; identifying a first set of documents that are responsive to the query, wherein the documents of the set of documents are ranked, and wherein a ranking of a given document of the first set of documents is indicative of relevancy of information included in the given document to the topic; selecting, based on the rankings and from the documents of the first set of documents, a most relevant document; providing at least a portion of the information from the most relevant document to the user;”

Context: Second Set Of Documents

Then what immediately follows is the part about ranking a second set of documents that contain additional information. This second set of documents is ranked using the information gain scores to show more information after showing a relevant document from the first group.

This is how it explains it:

“…in response to providing the most relevant document to the user, receiving a request from the user for additional information related to the topic; identifying a second set of documents, wherein the second set of documents includes one or more of the documents of the first set of documents and does not include the most relevant document; determining, for each document of the second set, an information gain score, wherein the information gain score for a respective document of the second set is based on a quantity of new information included in the respective document of the second set that differs from information included in the most relevant document; ranking the second set of documents based on the information gain scores; and causing at least a portion of the information from one or more of the documents of the second set of documents to be presented to the user, wherein the information is presented based on the information gain scores.”
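The scoring step in the passage above can be illustrated with a toy sketch. The patent doesn’t specify a formula, so the function names and the term-overlap heuristic below are my own illustration, not the patent’s actual method: each candidate is scored by the share of its terms that don’t already appear in the document the user has seen, then the second set is ranked by that score.

```javascript
// Toy sketch of information-gain-style ranking (my illustration, not the
// patent's actual method). A candidate's score is the fraction of its
// terms that are new relative to the document the user has already seen.
function informationGainScore(seenText, candidateText) {
  const seen = new Set(seenText.toLowerCase().split(/\W+/).filter(Boolean));
  const terms = candidateText.toLowerCase().split(/\W+/).filter(Boolean);
  if (terms.length === 0) return 0;
  const novel = terms.filter((t) => !seen.has(t)).length;
  return novel / terms.length;
}

// Rank the "second set" of documents by how much new information each adds.
function rankByInformationGain(seenText, candidates) {
  return [...candidates]
    .map((doc) => ({ doc, score: informationGainScore(seenText, doc) }))
    .sort((a, b) => b.score - a.score);
}
```

A document that merely repeats what the user already saw scores zero, while one covering a related but unseen angle scores high, which is the behavior the claim describes.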

Granular Details

The rest of the claims section contains granular details about the concept of information gain: a ranking of documents based on what the user has already seen, representing related topics the user may be interested in. The purpose of these details is to lock them in for legal protection as part of the invention.

Here’s an example:

“The method of claim 1, wherein identifying the first set comprises:
causing to be rendered, as part of a search results interface that is presented to the user in response to a previous query that includes the topic, references to one or more documents of the first set;
receiving user input that indicates selection of one of the references to a particular document of the first set from the search results interface, wherein at least part of the particular document is provided to the user in response to the selection;”

To make an analogy, it’s describing how to make the pizza dough, clean and cut the mushrooms, and so on. For our purposes, the general view of what the patent is about matters more than these granular details.

Information Gain Patent

An opinion was shared on social media that this patent has something to do with ranking web pages in the organic search results. I read the patent and discovered that’s not how it works. It’s a good patent, and it’s important to understand it correctly. I analyzed multiple versions of the patent to see what they had in common and what was different.

A careful reading of the patent shows that it is clearly focused on anticipating what the user may want to see based on what they have already seen. To accomplish this, the patent describes the use of an information gain score for ranking web pages on topics that are related to the first search query but not specifically relevant to it.

The context of the invention is generally automated assistants, including chatbots. A search engine could be used as part of finding relevant documents but the context is not solely an organic search engine.

This patent could be applicable to the context of AI Overviews. I would not limit the context to AI Overviews as there are additional contexts such as spoken language in which Information Gain scoring could apply. Could it apply in additional contexts like Featured Snippets? The patent itself is not explicit about that.

Read the latest version of Information Gain patent:

Contextual estimation of link information gain

Featured Image by Shutterstock/Khosro



SEO

A Complete Guide for Digital Marketers

Imagine browsing your favorite blog and spotting a visually engaging ad that seamlessly fits the content and stands out just enough to grab your attention.

That’s the power of display advertising at work.

If you’re a digital marketing professional, you’ve likely heard of display networks as part of PPC advertising.

But are you using this channel to the best of your abilities?

In a world where advertising changes daily, it can be difficult to keep up with the best ways to optimize display ads.

In this in-depth guide, we’ll explain display advertising, its different types, and how it differs from search. We’ll also provide strategies and tools to help you take your display ads to the next level.

What Is Display Advertising?

Display advertising is a type of online advertising that typically uses images or videos to showcase your brand.

Thanks to responsive display ads, this ad format becomes much more personalized and can include elements like:

  • Text.
  • Images.
  • Videos.
  • Logos.

Potential customers see these ads while browsing the internet, using mobile apps or social media platforms, or even on connected TV devices.

Display ads are meant to capture the user’s attention without disrupting their experience, while still encouraging them to take action.

While display ads are typically associated with top-of-funnel marketing, advertisers use these ads across the entire buyer journey. Brands can use display ads for:

  • Brand awareness.
  • Product-specific marketing.
  • Promotional sales.
  • Promoting specific content or services.
  • And much more.

Types Of Display Ads

By understanding the various types of display ads available, you can choose the right format to align with your marketing goals and effectively reach your target audience.

Each type offers unique advantages and can be used strategically to maximize engagement and conversions.

Responsive Display Ads

Unique to the Google Display Network, responsive display ads automatically adjust their size, appearance, and format to fit available ad spaces.

Advertisers provide assets such as images, headlines, logos, and descriptions, and Google uses machine learning to create the best possible combinations for different placements, unique to each user.

This flexibility allows responsive display ads to reach a broader audience and perform well across a wide range of devices and websites.

Banner Ads

Banner ads are considered a more traditional type of display advertising.

Banner ads appear across websites and apps and are placed at the top, bottom, or sides of webpages.

They’re typically static in format but can also use animation to catch the user’s eye without being too disruptive to their experience.

Interstitial Ads

Interstitial ads are full-screen ads that cover the whole screen of a webpage or an app.

They typically show up during natural transition points of a web session, like waiting for content to load or going between app screens.

They’re meant to be highly engaging but should be used strategically and sparingly to not overwhelm or annoy the user.

Rich Media Ads

Rich media display ads offer a more interactive experience with a potential customer.

What makes them interactive compared to the other display advertising types?

The beauty of this ad type is the combination of video, image, audio, and clickable elements to engage a user more fully.

Native Ads

The opposite of rich media ads would be native ads. This ad type is meant to blend seamlessly with the content and overall design of a webpage.

Native ads are meant to be non-disruptive to the user experience because they can match the look and feel of the content surrounding the ad.

By blending in more cohesively, it can help increase engagement rates.

Retargeting Ads

Retargeting display ads are intended to re-engage past website or app users who haven’t taken the desired action.

This ad type can look like any of the above-mentioned ad formats, or it could show dynamic content based on the user’s previous browsing history.

Unlike standard display ads, retargeting ads aren’t meant to scale broadly. They have a specific intended audience to invite them back to make a purchase.

Display Advertising Vs. Search Advertising

Display ads and search ads are both essential components of a sound digital marketing strategy.

However, they both serve different purposes and are meant to complement each other – not compete.

Below are the key main differentiators between display and search ads:

  • Targeting. Display ads typically use targeting like demographics, interests, and browsing behavior, while search ads are primarily keyword-based, targeting what users actively search for.
  • Intent. Display ads can help create demand by focusing on awareness and product consideration. Search ads, on the other hand, are intended to capture existing demand.
  • Ad format. Display ads are more visual in nature and utilize elements like images, videos, text, and logos. Search ads are primarily text-based with headlines and descriptions.
  • Reach. Display ads can reach broader audiences across the internet and are easier to scale. Search ads are limited to the specific search engines and their search partner networks, if applicable.

Display Advertising Examples

Display Ads come in many different shapes and sizes. Below are a few examples of ads found across the web in a variety of sizes.

Example: Leaderboard Display Ad

The example below was taken as I was browsing People.com. This ad for US Bank appeared at the top of the page before the hero content.

Example: Skyscraper Display Ad

This example was taken as I was browsing Business Insider. An ad for Oracle Netsuite showed on the right-hand side of the page on a desktop device.

A skyscraper ad example on a desktop site. Source: Businessinsider.com, screenshot taken by author, July 2024.

Example: Mobile Display Ad

I found this ad when reading a blog post on Southern Living on my mobile device. A display ad for Best Buy was inserted between paragraphs of the blog post.

A mobile display ad example on a phone. Source: Southernliving.com, screenshot taken by author, July 2024.

Display Advertising Strategy

Just like any other campaign type, display advertising should be driven by a sound strategy.

Let’s take a look at some of the key components of crafting a display advertising strategy.

1. Define Clear Goals

It’s important to establish the objective of each display campaign, such as brand awareness, lead generation, or sales.

If you’re not sure where to start, take a step back and consider your overarching business needs and what you’re trying to achieve.

For example, are you looking to gain new customers or re-engage existing customers? Is brand awareness more important, or are you looking to drive sales of a new product?

In Google Ads, you’ll start campaign creation by picking from the following objectives, then select the ‘Display’ campaign type:

Google Ads objectives in new campaign creation. Screenshot taken by author, July 2024.

2. Choose Your Budget, Bidding Strategy, And Audience

Budgets and bid strategies are set at the campaign level.

The typical bid strategy pricing models for display ads are a cost-per-click (CPC) basis or a cost per 1,000 impressions (CPM) model. You’ll want to choose the one that aligns with the campaign goal and your overall budget.
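To make the trade-off concrete, here is a small illustrative calculation (hypothetical numbers, not Google Ads API code) comparing what the same impression volume costs under each pricing model:

```javascript
// Illustrative math only (hypothetical numbers, not Google Ads API code).
// CPM: you pay per 1,000 impressions. CPC: you pay per click.
function costUnderCpm(impressions, cpm) {
  return (impressions / 1000) * cpm;
}

function costUnderCpc(impressions, ctr, cpc) {
  return impressions * ctr * cpc; // expected clicks x price per click
}

// Example: 100,000 impressions at a $2.50 CPM cost $250, while the same
// impressions at a 0.5% CTR and a $0.60 CPC cost roughly $300.
const cpmCost = costUnderCpm(100000, 2.5);
const cpcCost = costUnderCpc(100000, 0.005, 0.6);
```

If your goal is awareness, the CPM model prices exactly what you want (eyeballs); if your goal is traffic, CPC shifts the risk of a low click-through rate onto the network.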

In this example, I chose “Awareness” as the campaign objective, so Google Ads recommends a Viewable impressions bid strategy.

Choosing bid strategies in Google Ads for a display campaign. Screenshot taken by author, July 2024.

Next is to refine the audience targeting for your campaigns.

If the goal is to attract new customers, you can use your own data on existing customers to build audience profiles to target.

Keep in mind the demographics, interests, and overall browsing behavior when putting together your target audience.

Display ads targeting options. Screenshot taken by author, July 2024.

3. Choose Display Ad Type, Format, And Placements

The nice part about Google Ads is the ability to target (or exclude) specific website placements or apps to ensure your ads show up in the right place.

You may be tempted to choose a short list of very specific websites, but by doing so, you could end up limiting your reach immensely. It’s also not guaranteed that your ads will show on those placements if your budget or bid is not competitive enough.

From the start, use negative placements to your advantage to exclude sites where your ads would be inappropriate.

Now, as for ad size and format, there are two options in Google Ads:

  • Uploaded display ads.
  • Responsive display ads (RDAs).

The main benefit of using uploaded display ads is that you have full control over the design. However, not all websites utilize these formats, and you may be missing out on additional reach if you opt not to use RDAs.

The most typical banner sizes for uploaded display ads include:

  • 728×90 (leaderboard).
  • 300×250 (medium rectangle).
  • 336×280 (large rectangle).
  • 300×50 (mobile banner).
  • 160×600 (skyscraper).

If you opt to use responsive display ads, Google takes the guesswork out of ad sizes for you.

Essentially, you’ll provide the basic elements, and Google will mix and match that content to create personalized ads for each user based on when and where they’re browsing.

Be sure to provide these essentials for a well-formatted ad:

  • Images.
  • Logos.
  • Brand name.
  • Headlines.
  • Descriptions.
  • Custom colors.
  • Call-to-action (CTA) text.

4. Focus On Creating Compelling Ad Content

Expanding on point #3 above, the visual design is your chance to capture the user’s attention.

A boring ad won’t stand out and can turn customers away. Make sure your ads are visually appealing and align with your brand.

Additionally, make sure to test different elements and rotate out poor-performing elements.

It’s especially important to remain visually consistent if you’re marketing across different channels like social media. Consistent brand recognition across platforms can pay dividends over time.

5. Track And Optimize Performance

Once your display campaign is launched, you’ll want to monitor the key metrics chosen for the campaign objectives.

It may be tempting to make changes immediately, but it’s important to give the algorithm time to learn before making any major changes.

Unless something serious goes awry, like showing up on inappropriate placements, give the campaign time to run and then make tweaks based on the data coming in.

For example, if an ad shows a lot of impressions but few clicks, you may need to change the creative elements to capture the user’s attention more. Or, it could be the placements that need tweaking.

Or, if an ad is getting a ton of clicks but very few conversions, it may not be the ad itself; it could mean the landing page needs to be optimized. Try segmenting the ads by device to identify if the majority of clicks are coming from mobile and if the corresponding landing page is optimized for mobile delivery.

Ongoing campaign monitoring and optimization are vital for delivering optimal ROI to your display ads.


Top Display Advertising Networks

Believe it or not, there are plenty of advertising networks to choose from as an alternative to Google.

Depending on your goals and how you use display ads, you may need a different platform.

Some of the top display ad network platforms include:

  • AdRoll
  • Amazon
  • StackAdapt
  • AirNow Media
  • Yahoo Ad Tech

You can find a full recommended list of Display Ad networks here.


Display Advertising Tools

Depending on which stage you’re in for creating or running display ads, there are multiple tools to help take your display ads to the next level.

Ad Creation And Design Tools

If you’re looking to create display ads where you have full control, there are many user-friendly tools to help guide the ad creation process.

  • Google Web Designer: This is a free tool from Google that allows you to create HTML5 ads and motion graphics.
  • Canva: A more user-friendly option that has tons of templates to start from or the ability to create from scratch.
  • Bannersnack: This tool is specifically for creating banner ads, but it simplifies the design process with drag-and-drop components.

Ad Analysis And Optimization Tools

Analyzing display campaigns can be half the battle, and you need reliable tools to help optimize these campaigns to the fullest.

  • Google Analytics: This tool is essential for tracking and analyzing the performance of your campaigns. It can help marry the metrics like impressions and clicks to user purchase behavior to help you determine where to optimize further.
  • Google Ads Performance Planner: If you need help forecasting potential campaign changes, this tool is for you. It takes historical data and trends into considerations to help provide budget and bidding recommendations.
  • Hotjar: This is a user behavior tool that can provide session recordings, heatmaps, and more to understand how real users interact with your landing page and website.

Ad Management And Automation Tools

  • Google Ads Editor: This tool is great for managing multiple Google Ads campaigns offline, allowing for bulk changes and uploading changes on your own time.
  • Optmyzr: This platform offers more automation and streamlined workflows for PPC campaigns, including display ads.
  • Semrush: This platform can help with competitive analysis for display ads, which can help you refine your strategies.

Summary

Display ads are part of any comprehensive digital marketing strategy.

Because of their scalability and reach, display ads can cast a wide net to make potential customers aware of your brand and increase engagement and, ultimately, sales.

From traditional banner ads to innovative, responsive display ads, each type serves a unique purpose in capturing user attention and driving conversions.

By understanding the differences between display and search advertising, leveraging effective strategies, and utilizing various tools for ad creation, design, analysis, and optimization, you can maximize the impact of your display advertising campaigns.

Featured Image: BestForBest/Shutterstock



SEO

How You Can Measure Core Web Vitals

Google has defined a set of metrics site owners should focus on when optimizing for page experience. Core Web Vitals metrics are part of Google’s page experience factors that all websites should strive to meet.

Users’ expectations for web experiences can vary according to site and context, but some remain consistent regardless of where they are on the web.

Specifically, Google identifies core user experience needs such as loading speed, interactivity, and visual stability.

What Are Core Web Vitals Scores?

Google recommends site owners have CWV metrics under the ‘good’ threshold specified below:

Metric name Good Poor
Largest Contentful Paint (LCP) ≤2,500 ms >4,000 ms
Interaction to Next Paint (INP) ≤200 ms >500 ms
Cumulative Layout Shift (CLS) ≤0.1 >0.25

Anything between good and poor is considered ‘needs improvement’, meaning the score should be improved.
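As a quick reference, these cut-offs can be expressed as a small helper function. This is an illustrative sketch using Google’s published ‘good’/‘poor’ thresholds, not code from any Google tool:

```javascript
// Rate a metric value as 'good', 'needs improvement', or 'poor' using
// Google's published Core Web Vitals thresholds (sketch, not Google code).
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // milliseconds
  INP: { good: 200, poor: 500 },   // milliseconds
  CLS: { good: 0.1, poor: 0.25 },  // unitless layout-shift score
};

function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value > t.poor) return 'poor';
  return 'needs improvement';
}
```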

Diagram showing three Core Web Vitals performance metrics

Google explains why these three metrics, in particular, are so important:

“All of these metrics capture important user-centric outcomes, are field measurable, and have supporting lab diagnostic metric equivalents and tooling.

For example, while Largest Contentful Paint is the topline loading metric, it is also highly dependent on First Contentful Paint (FCP) and Time to First Byte (TTFB), which remain critical to monitor and improve.”

How Google Measures Core Web Vitals

The Google CrUX report gathers real-world user data from Chrome users’ devices as they browse websites. At least 75% of pageviews to the site should have ‘good’ scores for the website to meet CWV thresholds.

Please note it uses 75% of pageviews across the entire site, which means pages with poor CWV but little traffic will not impact the overall website score.

This is why you may find that websites with a ‘good’ score have pages with terrible CWVs and vice versa.

This method of measuring ensures that a small percentage of visits on slow network connections doesn’t drag down the entire website’s ‘good’ score.
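That assessment is equivalent to checking the 75th percentile of pageview measurements against the ‘good’ threshold. Here is a minimal sketch (illustrative only; CrUX computes this from real Chrome field data, not from your own code):

```javascript
// Sketch of a CrUX-style assessment: a metric passes when its
// 75th-percentile value across pageviews meets the 'good' threshold.
// (Illustrative only; CrUX computes this from real Chrome field data.)
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

function passesThreshold(lcpValuesMs, goodMs = 2500) {
  return percentile(lcpValuesMs, 75) <= goodMs;
}

// Nine fast pageviews plus one very slow outlier still pass, because the
// low-traffic outlier sits above the 75th percentile (p75 here is 1900 ms).
const samples = [1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000, 9000];
```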

Here’s how those metrics can be measured.

How To Measure Core Web Vitals

Google incorporates Core Web Vitals measurement capabilities into many of its existing tools.

Core Web Vitals can be measured using these free tools:

  • PageSpeed Insights.
  • The Web Vitals Chrome extension.
  • Lighthouse.
  • The CrUX Dashboard.
  • Google Search Console.
  • web-vitals.js with GA4.

Let’s dive into how to use each of these free tools to measure Core Web Vitals.

PageSpeed Insights

PageSpeed Insights allows you to measure Core Web Vitals with both lab and field data included in the reports.

The field section of the report provides data gathered from real users’ devices across all geos and network conditions, whereas the lab section shows data from a single simulated device under fixed conditions.

PageSpeed Insights report: field vs. lab data.

If your pages have few visits or are new, there might be insufficient field data to show a report. In that case, the average field score for the entire website will be used as a fallback if available; otherwise, no data will be shown.

Once you run a report, you will see a list of recommendations on how to improve your scores underneath it. You can read our guide on the PageSpeed Insights report to learn how to use it.

Web Vitals Extension

Using the PageSpeed Insights tool is always a great way to debug and audit performance, but it is often not convenient. You have to open a new tab in your browser and navigate away from the page, which is distracting.

Fortunately, there is an extension available to install from the Chrome Web Store that measures Core Web Vitals metrics in real-time during your browsing and also loads field data if available.

Core Web Vitals scores.

Besides this standard UI, the add-on also offers more granular debugging via the browser DevTools ‘Console’ tab. Here is a quick video guide on how to do that.

Debugging the Interaction to Next Paint metric is quite challenging, as it may degrade at any point during the user interaction journey. In PageSpeed Insights, you get only an average value across all interactions, not which interaction with which element on the page was slow.

By using this extension, you can interact with the page and identify elements that degrade the INP metric by checking the console logs. For example, you can click on buttons and check the console to see how long the interaction took.

As soon as you identify which element is slow to respond, you can check your JavaScript code to see if any scripts are blocking the interaction.

Lighthouse

Lighthouse is an open-source tool you can use to audit your webpage’s performance, which is also available in Chrome’s DevTools.

All of the reports that Lighthouse powers are updated to reflect the latest version.

Example Lighthouse report in Chrome browser DevTools.

One caveat to be aware of is that when running Lighthouse in your browser, it also loads many resources from your Chrome extensions, which can affect your metrics in the Lighthouse report.

Message indicating issues with the Lighthouse run, specifically that Chrome extensions negatively impacted the page’s load performance.

That’s why I suggest using Chrome Canary for debugging as a good practice. Chrome Canary has an isolated installation from your regular Chrome browser where you can access experimental features. This allows you to test your website with features that will be included in future Chrome releases.

I ran a quick experiment to see how drastically Lighthouse page speed scores can vary in the Canary clean installation vs. your browser with add-ons enabled.

Two screenshots of Google Chrome DevTools’ Lighthouse audit results. Left: Chrome stable with add-ons; right: Canary without add-ons.

One important feature Lighthouse enables is measuring scores while interacting with the webpage, showing how certain interactions affect your metrics, especially Interaction to Next Paint (INP).

The timespan option in Chrome Lighthouse DevTools.

I suggest you dive deep and master how to use Lighthouse by reading our guide, written by two of the most experienced technical SEO experts in the world.

CrUX Dashboard

The CrUX report is a public dataset of real user experience data on millions of websites. The Chrome UX Report measures field versions of all the Core Web Vitals, which means it reports real-world data rather than lab data.

With PageSpeed Insights, Lighthouse, or the Web Vitals add-on we have discussed, you now know how to measure individual URL performance. But how do you see the whole picture for a website with thousands of URLs? What percentage of URLs have ‘good’ scores, and how do current scores compare with those from a few months ago?

This is where Google’s free CrUX Looker Studio dashboard helps. You can check segments and see your historical data.

To do that, simply copy and paste your domain into the CrUX dashboard launcher.

The CrUX dashboard launcher.

Then, enjoy beautiful reports for free. Here is an example report for Search Engine Journal in case you want to explore a real dashboard.

CrUX dashboard example for Search Engine Journal.

In this dashboard, you can find much more besides the CWV metrics. If you fall short of CWV ‘good’ scores but lab data shows you are meeting all thresholds, it may be because your visitors have a bad connection.

This is where the connection distribution report is highly valuable: it can help you understand whether poor scores are due to network issues.

Connection distribution in the CrUX report.

Unfortunately, this dashboard doesn’t give you a breakdown of CWV metrics by country, but there is a free tool, treo.sh, which you can use to check performance metrics by geos.

Breakdown of CWV metrics by geo, which helps you understand where they fall short of ‘good’ scores.

Search Console

Google Search Console (GSC) is another tool for checking your website’s overall CWV metrics.

A Google Search Console dashboard displaying Core Web Vitals.

The report identifies groups of pages that require attention based on real-world data from the Chrome UX report. If you open the report by clicking on the top right corner link, you will see a breakdown of your issues.

Core Web Vitals report for mobile in GSC.

With this report, be aware that it pulls data from CrUX, and URLs will be omitted if they do not have a minimum amount of reporting data, which means you may have pages with poor CWV metrics that are not reported here.

Web-Vitals.JS And GA4

web-vitals.js is an open-source library that accurately measures CWV metrics the same way Chrome or PageSpeed Insights does. The web vitals extension we discussed above actually uses this library for reporting and logging.

However, you can integrate it with Google Analytics 4 to get a detailed performance report at scale on a website with many pages. Below is a code sample for GA4’s gtag integration.
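A hedged sketch of that pattern, based on the web-vitals library’s documented usage, looks like the following. The gtag() stub and sample values exist only to make the sketch self-contained; verify the exact API against the library docs.

```javascript
// Sketch of the web-vitals -> GA4 gtag pattern (based on the web-vitals
// library's documented usage; verify against the current API).
// In the browser you would do:
//   import { onCLS, onINP, onLCP } from 'web-vitals';
//   onCLS(sendToGoogleAnalytics); onINP(sendToGoogleAnalytics); onLCP(sendToGoogleAnalytics);
// gtag() is stubbed here only so the sketch is self-contained.
const sent = [];
function gtag(command, eventName, params) { sent.push({ command, eventName, params }); }

function sendToGoogleAnalytics({ name, delta, value, id, rating }) {
  gtag('event', name, {
    value: delta,          // 'value' is the built-in GA4 parameter
    metric_id: id,         // the rest are optional custom dimensions
    metric_value: value,
    metric_delta: delta,
    metric_rating: rating,
  });
}

// Hypothetical sample metric, as the library would deliver it.
sendToGoogleAnalytics({ name: 'LCP', delta: 1850, value: 1850, id: 'v4-123', rating: 'good' });
```

Fields like ‘debug_target’ typically come from the library’s attribution build rather than the core callbacks.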



In the code sample, ‘value’ is a built-in parameter, and ‘metric_id’, ‘metric_value’, ‘metric_delta’, ‘metric_rating’, and ‘debug_target’ are optional custom dimensions you may want to include per your needs.

If you want to see these dimensions in GA4’s exploration reports, you need to register them as custom definitions in GA4’s admin panel. If you send these parameters without registering them, you can only access the raw data via BigQuery, which provides much more flexibility but requires SQL expertise.

If you decide to include ‘metric_id’, which on high-traffic websites will have a nearly unlimited number of unique values, it may cause cardinality issues in exploration reports.

So, you may want to enable those additional custom parameters for a short period to gather sample data for troubleshooting.

To send CWV metrics data via Google Tag Manager, refer to this guide created by Google’s marketing solution team. As a best practice, you should use GTM integration, and the code above (which is fully functional) demonstrates the fundamental mechanics of CWV data collection and reporting.

Beyond what we have discussed, freemium or paid tools such as Debugbear, treo.sh, Oncrawl, Lumar, or Semrush may help you identify your scores across all pages at scale in real time.

Of the listed tools, I would note that Debugbear and treo.sh are highly specialized in CWV metrics and provide high-granularity insights with advanced segmentation.

What About Other Valuable Metrics?

As important as the Core Web Vitals are, they’re not the only page experience metrics to focus on.

Ensuring your site uses HTTPS, is mobile-friendly, and avoids intrusive interstitials are all crucial parts of the page experience ranking factors.

So approach page experience from a user-centric point of view as well, not only because it is a ranking factor.

For example, from a conversions perspective, if you have a slow ecommerce website, your potential customers may churn, and it will cause revenue losses.

Featured Image: BestForBest/Shutterstock
