How NLP & NLU Work For Semantic Search

Natural language processing (NLP) and natural language understanding (NLU) are two often-confused technologies that make search more intelligent and ensure people can find what they want.

This intelligence is a core component of semantic search.

NLP and NLU are why you can type “dresses” and find that long-sought-after “NYE Party Dress” and why you can type “Matthew McConnahey” and get Mr. McConaughey back.

With these two technologies, searchers can find what they want without having to type their query exactly as it’s found on a page or in a product.

NLP is one of those things that has built up such a large meaning that it’s easy to look past the fact that it tells you exactly what it is: NLP processes natural language, specifically into a format that computers can understand.

This processing can include tasks like normalization, spelling correction, and stemming, each of which we’ll look at in more detail.

NLU, on the other hand, aims to “understand” what a block of natural language is communicating.

It performs tasks that can, for example, identify verbs and nouns in sentences or important items within a text. People or programs can then use this information to complete other tasks.

Computers seem advanced because they can do a lot of actions in a short period of time. However, in a lot of ways, computers are quite daft.

They need the information to be structured in specific ways to build upon it. For natural language data, that’s where NLP comes in.

It takes messy data (and natural language can be very messy) and processes it into something that computers can work with.

Text Normalization

When searchers type text into a search bar, they are trying to find a good match, not play “guess the format.”

For example, to require a user to type a query in exactly the same format as the matching words in a record is unfair and unproductive.

We use text normalization to do away with this requirement so that the text will be in a standard format no matter where it’s coming from.

As we go through different normalization steps, we’ll see that there is no approach that everyone follows. Each normalization step generally increases recall and decreases precision.

A quick aside: “recall” measures how many of the known-good results a search engine actually finds.

“Precision” measures how many of the returned results are actually good.

Search results could have 100% recall by returning every document in an index, but precision would be poor.

Conversely, a search engine could have 100% precision by returning only documents that it knows to be a perfect fit, but it will likely miss some good results.

Again, normalization generally increases recall and decreases precision.

Whether that movement toward one end of the recall-precision spectrum is valuable depends on the use case and the search technology. It isn’t a question of applying all normalization techniques but deciding which ones provide the best balance of precision and recall.

Letter Normalization

The simplest normalization you could imagine would be the handling of letter case.

In English, at least, words are generally capitalized at the beginning of sentences, occasionally in titles, and when they are proper nouns. (There are other rules, too, depending on whom you ask.)

But in German, all nouns are capitalized. Other languages have their own rules.

These rules are useful. Otherwise, we wouldn’t follow them.

For example, capitalizing the first words of sentences helps us quickly see where sentences begin.

That usefulness, however, is diminished in an information retrieval context.

The meanings of words don’t change simply because they are in a title and have their first letter capitalized.

Even trickier is that there are rules, and then there is how people actually write.

If I text my wife, “SOMEONE HIT OUR CAR!” we all know I’m talking about a car; the capitalization doesn’t change what the word means.

We can see this clearly by reflecting on how many people skip capitalization entirely when communicating informally – which is, incidentally, how most case normalization works: everything is lowercased.

Of course, we know that sometimes capitalization does change the meaning of a word or phrase. We can see that “cats” are animals, and “Cats” is a musical.

In most cases, though, the increased precision that comes with not normalizing case is offset by decreasing recall far too much.

The difference between the two is easy to tell via context, too, which we’ll be able to leverage through natural language understanding.

While less common in English, handling diacritics is also a form of letter normalization.

Diacritics are the marks, or “glyphs,” attached to letters, as in á, ë, or ç.

Words can otherwise be spelled the same, but added diacritics can change the meaning. In French, “élève” means “student,” while “élevé” means “elevated.”

Nonetheless, many people will not include the diacritics when searching, and so another form of normalization is to strip all diacritics, leaving behind the simple (and now ambiguous) “eleve.”
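As a rough sketch of both case and diacritic normalization, Unicode decomposition in Python’s standard library can do the stripping (the function name here is my own):

```python
import unicodedata

def normalize_text(text: str) -> str:
    """Lowercase and strip diacritics, e.g. 'Élevé' -> 'eleve'."""
    # NFD decomposition splits 'é' into 'e' plus a combining accent mark
    decomposed = unicodedata.normalize("NFD", text.lower())
    # Drop the combining marks, keeping only the base characters
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(normalize_text("élève"), normalize_text("élevé"))  # both become "eleve"
```

Note that after this step the two French words really are indistinguishable, which is exactly the recall-for-precision trade described above.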

Tokenization

The next normalization challenge is breaking down the text the searcher has typed in the search bar and the text in the document.

This step is necessary because word order does not need to be exactly the same between the query and the document text, except when a searcher wraps the query in quotes.

Breaking queries, phrases, and sentences into words may seem like a simple task: Just break up the text at each space.

Problems show up quickly with this approach. Again, let’s start with English.

Separating on spaces alone means that the phrase “Let’s break up this phrase!” yields us let’s, break, up, this, and phrase! as words.

For search, we almost surely don’t want the exclamation point at the end of the word “phrase.”

Whether we want to keep the contracted word “let’s” together is not as clear.

Some software will break the word down even further (“let” and “‘s”) and some won’t.

Some will not break down “let’s” while breaking down “don’t” into two pieces.

This process is called “tokenization.”

We call it tokenization for reasons that should now be clear: What we end up with are not words but discrete groups of characters. This is even more true for languages other than English.
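A minimal regex tokenizer illustrates the choices involved. Here, contractions are kept together and trailing punctuation is dropped – one possible policy among the many just described:

```python
import re

def tokenize(text: str) -> list[str]:
    # \w+ grabs runs of word characters; (?:'\w+)? optionally keeps
    # a contraction suffix ("'s" in "Let's") attached to its word
    return re.findall(r"\w+(?:'\w+)?", text)

print(tokenize("Let's break up this phrase!"))
# ["Let's", 'break', 'up', 'this', 'phrase'] – the "!" is gone
```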

German speakers, for example, can merge words (more accurately “morphemes,” but close enough) together to form a larger word. The German word for “dog house” is “Hundehütte,” which contains the words for both “dog” (“Hund”) and “house” (“Hütte”).

Nearly all search engines tokenize text, but there are further steps an engine can take to normalize the tokens. Two related approaches are stemming and lemmatization.

Stemming And Lemmatization

Stemming and lemmatization take different forms of tokens and break them down for comparison.

For example, take the words “calculator” and “calculation,” or “slowing” and “slowly.”

We can see there are some clear similarities.

Stemming breaks a word down to its “stem,” the base form from which its variants are built. Stemming is fairly straightforward; you could do it on your own.

What’s the stem of “stemming?”

You can probably guess that it’s “stem.” Often stemming means removing prefixes or suffixes, as in this case.

There are multiple stemming algorithms, and the most popular is the Porter Stemming Algorithm, which has been around since the 1980s. It is a series of steps applied to a token to get to the stem.

Stemming can sometimes lead to results that you wouldn’t foresee.

Looking at the words “carry” and “carries,” you might expect that the stem of each of these is “carry.”

The actual stem, at least according to the Porter Stemming Algorithm, is “carri.”

This is because stemming attempts to compare related words and break down words into their smallest possible parts, even if that part is not a word itself.
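A toy suffix-stripping stemmer (far simpler than the real Porter algorithm, and only an illustration of the idea) shows how “carry” and “carries” can both end up as “carri”:

```python
def naive_stem(word: str) -> str:
    """A toy Porter-style stemmer: a few illustrative rules, not the real algorithm."""
    word = word.lower()
    if word.endswith("ies"):
        return word[:-3] + "i"                      # carries -> carri
    if word.endswith("s") and not word.endswith("ss"):
        return word[:-1]                            # says -> say
    if word.endswith("ing") and len(word) > 5:
        stem = word[:-3]
        if len(stem) > 2 and stem[-1] == stem[-2]:
            stem = stem[:-1]                        # stemming -> stem
        return stem
    if word.endswith("y") and len(word) > 2 and word[-2] not in "aeiou":
        return word[:-1] + "i"                      # carry -> carri
    return word

print([naive_stem(w) for w in ("carry", "carries", "stemming")])
# ['carri', 'carri', 'stem']
```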

On the other hand, if you want an output that will always be a recognizable word, you want lemmatization. Again, there are different lemmatizers, such as NLTK’s, which uses WordNet.

Lemmatization breaks a token down to its “lemma,” the word that is considered the base for its derivations. The lemma from WordNet for “carry” and “carries,” then, is what we expected before: “carry.”

Lemmatization will generally not break down words as much as stemming, nor will as many different word forms be considered the same after the operation.

The stems for “say,” “says,” and “saying” are all “say,” while the lemmas from WordNet are “say,” “say,” and “saying.” To get these lemmas, lemmatizers are generally corpus-based.
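In contrast to rule-based stemming, a lemmatizer is essentially a corpus-derived lookup. This sketch fakes the dictionary by hand where a real system would consult WordNet:

```python
# Hand-built stand-in for a corpus-derived lemma dictionary (e.g. WordNet)
LEMMAS = {
    "carries": "carry",
    "carried": "carry",
    "says": "say",
    "said": "say",
    "saying": "saying",  # "saying" (the noun) is its own lemma
}

def lemmatize(word: str) -> str:
    # Fall back to the word itself when it is already a base form
    return LEMMAS.get(word.lower(), word.lower())

print(lemmatize("carries"), lemmatize("saying"))  # carry saying
```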

If you want the broadest recall possible, you’ll want to use stemming. If you want the best possible precision, use neither stemming nor lemmatization.

Which you go with ultimately depends on your goals, but most searches can generally perform very well with neither stemming nor lemmatization, retrieving the right results without introducing noise.

Plurals

If you decide not to include lemmatization or stemming in your search engine, there is still one normalization technique that you should consider.

That is the normalization of plurals to their singular form.

Generally, normalizing plurals is done through the use of dictionaries.

Even if “de-pluralization” seems as simple as chopping off an “-s,” that’s not always the case. The first problem is with irregular plurals, such as “deer,” “oxen,” and “mice.”

A second problem is pluralization with an “-es” suffix, such as “potato.” Finally, there are simply the words that end in an “s” but aren’t plural, like “always.”

A dictionary-based approach ensures that you increase recall without introducing incorrect matches.
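A dictionary-first de-pluralizer might look like this sketch, with the word lists obviously being tiny illustrative assumptions rather than a production dictionary:

```python
# Illustrative word lists – a production dictionary would be far larger
IRREGULAR = {"mice": "mouse", "oxen": "ox", "deer": "deer"}
NOT_PLURAL = {"always", "lens", "news"}

def singularize(word: str) -> str:
    w = word.lower()
    if w in IRREGULAR:
        return IRREGULAR[w]       # mice -> mouse
    if w in NOT_PLURAL:
        return w                  # "always" is not a plural
    if w.endswith(("sses", "ches", "shes", "xes")):
        return w[:-2]             # dresses -> dress
    if w.endswith("oes"):
        return w[:-2]             # potatoes -> potato
    if w.endswith("s") and not w.endswith("ss"):
        return w[:-1]             # cats -> cat
    return w

print([singularize(w) for w in ("mice", "always", "potatoes", "dresses")])
# ['mouse', 'always', 'potato', 'dress']
```

The ordering matters: dictionary exceptions are checked before the generic suffix rules, which is what keeps “always” from becoming “alway.”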

Just as with lemmatization and stemming, whether you normalize plurals is dependent on your goals.

Cast a wider net by normalizing plurals, a more precise one by avoiding normalization.

Usually, normalizing plurals is the right choice, and you can remove normalization pairs from your dictionary when you find them causing problems.

One area, however, where you will almost always want to introduce increased recall is when handling typos.

Typo Tolerance And Spell Check

We have all encountered typo tolerance and spell check within search, but it’s useful to think about why it’s present.

Sometimes, there are typos because fingers slip and hit the wrong key.

Other times, the searcher thinks a word is spelled differently than it is.

Increasingly, “typos” can also result from poor speech-to-text understanding.

Finally, words can seem like they have typos but really don’t, such as in comparing “scream” and “cream.”

The simplest way to handle these typos, misspellings, and variations is not to try to correct them at all, but instead to compare tokens with algorithms that tolerate small differences.

One of these is the Damerau-Levenshtein Distance algorithm.

This measure looks at how many edits are needed to go from one token to another.

You can then filter out all tokens with a distance that is too high.

(Two is generally a good threshold, but you will probably want to adjust this based on the length of the token.)

After filtering, you can use the distance for sorting results or feeding into a ranking algorithm.
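A sketch of the (restricted) Damerau-Levenshtein distance follows: insertions, deletions, substitutions, and adjacent transpositions each count as a single edit.

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i                     # deletions down to the empty string
    for j in range(len(b) + 1):
        d[0][j] = j                     # insertions up from the empty string
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # substitution (or match)
            )
            # adjacent transposition, e.g. "teh" -> "the"
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[len(a)][len(b)]

print(damerau_levenshtein("scream", "cream"))  # 1
print(damerau_levenshtein("teh", "the"))       # 1 (one transposition)
```

Tokens whose distance exceeds your threshold (two, say) get filtered out, and the remaining distances can feed into ranking.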

Many times, context can matter when determining if a word is misspelled or not. The word “scream” is probably correct after “I,” but not after “ice.”

Machine learning can be a solution for this by bringing context to this NLP task.

Spell check software can use the context around a word to identify whether it is likely to be misspelled, and what its most likely correction is.

Typos In Documents

One thing that we skipped over before is that words may have typos not only when a user types them into a search bar.

Words may also have typos inside a document.

This is especially true when the documents are made of user-generated content.

This detail is relevant because if a search engine is only looking at the query for typos, it is missing half of the information.

The best typo tolerance should work across both query and document, which is why edit distance generally works best for retrieving and ranking results.

Spell check can be used to craft a better query or provide feedback to the searcher, but it is often unnecessary and should never stand alone.

Natural Language Understanding

While NLP is all about processing text and natural language, NLU is about understanding that text.

Named Entity Recognition

A task that can aid in search is that of named entity recognition, or NER. NER identifies key items, or “entities,” inside of text.

While some people will call NER natural language processing and others will call it natural language understanding, what’s clear is that it can find what’s important within a text.

For the query “NYE party dress,” you would perhaps get back an entity of “dress” that is mapped to a type of “category.”

NER will always map an entity to a type, from as generic as “place” or “person,” to as specific as your own facets.

NER can also use context to identify entities.

A query of “white house” may refer to a place, while “white house paint” might refer to a color of “white” and a product category of “paint.”
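Here is a deliberately crude sketch of that idea. The entity dictionary is a made-up stand-in for what would really be a trained NER model or your catalog facets, and the “context” rule is simply that a multi-word entity wins only when it spans the whole query:

```python
# Hypothetical entity dictionary – in practice this comes from a model or your facets
ENTITIES = {
    "white house": "place",
    "white": "color",
    "paint": "category",
    "dress": "category",
}

def extract_entities(query: str) -> list[tuple[str, str]]:
    tokens = query.lower().split()
    full = " ".join(tokens)
    # Crude context rule: prefer a multi-word entity only when it is the whole query
    if full in ENTITIES:
        return [(full, ENTITIES[full])]
    return [(t, ENTITIES[t]) for t in tokens if t in ENTITIES]

print(extract_entities("white house"))        # [('white house', 'place')]
print(extract_entities("white house paint"))  # [('white', 'color'), ('paint', 'category')]
```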

Query Categorization

Named entity recognition is valuable in search because it can be used in conjunction with facet values to provide better search results.

Recalling the “white house paint” example, you can use the “white” color and the “paint” product category to filter down your results to only show those that match those two values.

This would give you high precision.

If you don’t want to go that far, you can simply boost all products that match one of the two values.

Query categorization can also help with recall.

For searches with few results, you can use the entities to include related products.

Imagine that there are no products that match the keywords “white house paint.”

In this case, leveraging the product category of “paint” can return other paints that might be a decent alternative, such as that nice eggshell color.
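The precision-then-recall pattern above can be sketched against a made-up mini catalog (product names and facet keys are all assumptions for illustration):

```python
# A made-up mini catalog for illustration
PRODUCTS = [
    {"name": "Eggshell Interior Paint", "color": "eggshell", "category": "paint"},
    {"name": "Snow White Gloss Paint", "color": "white", "category": "paint"},
    {"name": "White Summer Dress", "color": "white", "category": "dress"},
]

def facet_search(facets: dict) -> list:
    # High precision first: require every facet value to match
    exact = [p for p in PRODUCTS if all(p.get(k) == v for k, v in facets.items())]
    if exact:
        return exact
    # Recall fallback: relax to the product category alone
    return [p for p in PRODUCTS if p.get("category") == facets.get("category")]

hits = facet_search({"color": "white", "category": "paint"})
print([p["name"] for p in hits])       # ['Snow White Gloss Paint']

fallback = facet_search({"color": "magenta", "category": "paint"})
print([p["name"] for p in fallback])   # both paints, including that nice eggshell
```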

Document Tagging

Another way that named entity recognition can help with search quality is by moving the task from query time to ingestion time (when the document is added to the search index).

When ingesting documents, NER can use the text to tag those documents automatically.

These documents will then be easier to find for the searchers.

Whether searchers apply filters explicitly or the search engine applies automatic query-categorization filtering, facet values let searchers go directly to the right products.

Intent Detection

Related to entity recognition is intent detection, or determining the action a user wants to take.

Intent detection is not the same as what we talk about when we say “identifying searcher intent.”

Identifying searcher intent is getting people to the right content at the right time.

Intent detection maps a request to a specific, pre-defined intent.

It then takes action based on that intent. A user searching for “how to make returns” might trigger the “help” intent, while “red shoes” might trigger the “product” intent.
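Those two example mappings can be sketched with keyword rules, which here stand in for what is usually a trained intent classifier (the keyword list and index names are assumptions):

```python
# Keyword rules stand in for what is usually a trained intent classifier
HELP_KEYWORDS = ("how to", "return", "refund", "help")

def detect_intent(query: str) -> str:
    q = query.lower()
    return "help" if any(kw in q for kw in HELP_KEYWORDS) else "product"

def route(query: str) -> str:
    # Hypothetical index names – send each intent to its own search index
    indexes = {"help": "helpdesk_index", "product": "product_index"}
    return indexes[detect_intent(query)]

print(detect_intent("how to make returns"), route("how to make returns"))
print(detect_intent("red shoes"), route("red shoes"))
```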

In the first case, you could route the search to your help desk search.

In the second one, you could route it to the product search. This isn’t so different from what you see when you search for the weather on Google.

Look at the results, and notice that you get a weather box at the very top of the page. (Newly launched web search engine Andi takes this concept to the extreme, bundling search into a chatbot.)

For most search engines, intent detection, as outlined here, isn’t necessary.

Most search engines only have a single content type on which to search at a time.

When there are multiple content types, federated search can perform admirably by showing multiple search results in a single UI at the same time.

Other NLP And NLU Tasks

There are plenty of other NLP and NLU tasks, but these are usually less relevant to search.

Tasks like sentiment analysis can be useful in some contexts, but search isn’t one of them.

You could imagine using translation to search multilingual corpora, but it rarely happens in practice, and is just as rarely needed.

Question answering is an NLU task that is increasingly implemented into search, especially search engines that expect natural language searches.

Once again, you can see this on major web search engines.

Google, Bing, and Kagi will all immediately answer the question “how old is the Queen of England?” without needing to click through to any results.

Some search engine technologies have explored implementing question answering for more limited search indices, but outside of help desks or long, action-oriented content, the usage is limited.

Few searchers are going to an online clothing store and asking questions to a search bar.

Summarization is an NLU task that is more useful for search.

Much like with the use of NER for document tagging, automatic summarization can enrich documents. Summaries can be used to match documents to queries, or to provide a better display of the search results.

This better display can help searchers be confident that they have gotten good results and get them to the right answers more quickly.

Even including newer search technologies using images and audio, the vast, vast majority of searches happen with text. To get the right results, it’s important to make sure the search is processing and understanding both the query and the documents.

Semantic search brings intelligence to search engines, and natural language processing and understanding are important components.

NLP and NLU tasks like tokenization, normalization, tagging, typo tolerance, and others can help make sure that searchers don’t need to be search experts.

Instead, they can go from need to solution “naturally” and quickly.

Featured Image: ryzhi/Shutterstock



Reddit Post Ranks On Google In 5 Minutes

Google apparently ranks Reddit posts within minutes

Google’s Danny Sullivan disputed the assertions made in a Reddit discussion that Google is showing a preference for Reddit in the search results. But a Redditor’s example proves that it’s possible for a Reddit post to rank in the top ten of the search results within minutes and to actually improve rankings to position #2 a week later.

Discussion About Google Showing Preference To Reddit

A Redditor (gronetwork) complained that Google is sending so many visitors to Reddit that the server is struggling with the load and shared an example that proved that it can only take minutes for a Reddit post to rank in the top ten.

That post was part of a 79-post Reddit thread in which many in the r/SEO subreddit were complaining about Google allegedly giving too much preference to Reddit over legit sites.

The person who did the test (gronetwork) wrote:

“…The website is already cracking (server down, double posts, comments not showing) because there are too many visitors.

…It only takes few minutes (you can test it) for a post on Reddit to appear in the top ten results of Google with keywords related to the post’s title… (while I have to wait months for an article on my site to be referenced). Do the math, the whole world is going to spam here. The loop is completed.”

Reddit Post Ranked Within Minutes

Another Redditor asked if they had tested if it takes “a few minutes” to rank in the top ten and gronetwork answered that they had tested it with a post titled, Google SGE Review.

gronetwork posted:

“Yes, I have created for example a post named “Google SGE Review” previously. After less than 5 minutes it was ranked 8th for Google SGE Review (no quotes). Just after Washingtonpost.com, 6 authoritative SEO websites and Google.com’s overview page for SGE (Search Generative Experience). It is ranked third for SGE Review.”

It’s true: not only does that specific post (Google SGE Review) rank in the top 10, it started out in position 8 and actually improved its ranking, currently listed beneath the number one result for the search query “SGE Review”.

Screenshot Of Reddit Post That Ranked Within Minutes

Anecdotes Versus Anecdotes

Okay, the above is just one anecdote. But it’s a heck of an anecdote, because it proves that it’s possible for a Reddit post to rank within minutes and stay at the top of the search results over other, possibly more authoritative, websites.

hankschrader79 shared that Reddit posts outrank Toyota Tacoma forums for a phrase related to mods for that truck.

Google’s Danny Sullivan responded to that post and to the entire discussion, disputing the claim that Reddit is always prioritized over other forums.

Danny wrote:

“Reddit is not always prioritized over other forums. [super vhs to mac adapter] I did this week, it goes Apple Support Community, MacRumors Forum and further down, there’s Reddit. I also did [kumo cloud not working setup 5ghz] recently (it’s a nightmare) and it was the Netgear community, the SmartThings Community, GreenBuildingAdvisor before Reddit. Related to that was [disable 5g airport] which has Apple Support Community above Reddit. [how to open an 8 track tape] — really, it was the YouTube videos that helped me most, but it’s the Tapeheads community that comes before Reddit.

In your example for [toyota tacoma], I don’t even get Reddit in the top results. I get Toyota, Car & Driver, Wikipedia, Toyota again, three YouTube videos from different creators (not Toyota), Edmunds, a Top Stories unit. No Reddit, which doesn’t really support the notion of always wanting to drive traffic just to Reddit.

If I guess at the more specific query you might have done, maybe [overland mods for toyota tacoma], I get a YouTube video first, then Reddit, then Tacoma World at third — not near the bottom. So yes, Reddit is higher for that query — but it’s not first. It’s also not always first. And sometimes, it’s not even showing at all.”

hankschrader79 conceded that they were generalizing when they wrote that Google always prioritizes Reddit. But they also insisted that this didn’t diminish what they said is a fact: Google’s “prioritization” of forum content has benefited Reddit more than actual forums.

Why Is The Reddit Post Ranked So High?

It’s possible that Google “tested” that Reddit post in position 8 within minutes and that user interaction signals indicated to Google’s algorithms that users prefer to see that Reddit post. If that’s the case, then it’s not a matter of Google showing preference to the Reddit post; rather, users are showing the preference, and the algorithm is responding to it.

Nevertheless, an argument can be made that user preferences for Reddit can be a manifestation of Familiarity Bias. Familiarity Bias is when people show a preference for things that are familiar to them. If a person is familiar with a brand because of all the advertising they were exposed to then they may show a bias for the brand products over unfamiliar brands.

Users who are familiar with Reddit may choose Reddit because they don’t know the other sites in the search results or because they have a bias that Google ranks spammy and optimized websites and feel safer reading Reddit.

Google may be picking up on those user interaction signals that indicate a preference and satisfaction with the Reddit results but those results may simply be biases and not an indication that Reddit is trustworthy and authoritative.

Is Reddit Benefiting From A Self-Reinforcing Feedback Loop?

It may very well be that Google’s decision to prioritize user-generated content started a self-reinforcing pattern: the search results draw users into Reddit, and because the answers seem plausible, those users start to prefer Reddit results. As they’re exposed to more Reddit posts, their familiarity bias kicks in and they show an ever stronger preference for Reddit. In other words, the users and Google’s algorithm may be creating a self-reinforcing feedback loop.

Is it possible that Google’s decision to show more user generated content has kicked off a cycle where more users are exposed to Reddit which then feeds back into Google’s algorithm which in turn increases Reddit visibility, regardless of lack of expertise and authoritativeness?

Featured Image by Shutterstock/Kues

WordPress Releases A Performance Plugin For “Near-Instant Load Times”

WordPress speculative loading plugin

WordPress released an official plugin that adds support for a cutting edge technology called speculative loading that can help boost site performance and improve the user experience for site visitors.

Speculative Loading

Rendering means constructing the webpage: when your browser downloads the HTML, images, and other resources and puts them together into a webpage, that’s rendering. Prerendering is putting that webpage together in the background, before the user navigates to it.

What this plugin does is to enable the browser to prerender the entire webpage that a user might navigate to next. The plugin does that by anticipating which webpage the user might navigate to based on where they are hovering.

Chrome prefers to prerender only when there is at least an 80% probability of the user navigating to another webpage. The official Chrome support page for prerendering explains:

“Pages should only be prerendered when there is a high probability the page will be loaded by the user. This is why the Chrome address bar prerendering options only happen when there is such a high probability (greater than 80% of the time).”

That same developer page also includes a caveat: prerendering may not happen depending on user settings, memory usage, and other scenarios (more details below on how analytics handles prerendering).

The Speculation Rules API solves a problem that previous solutions could not, because in the past they simply prefetched resources like JavaScript and CSS without actually prerendering the entire webpage.

The official WordPress announcement explains it like this:

“The Speculation Rules API is a new web API that solves the above problems. It allows defining rules to dynamically prefetch and/or prerender URLs of certain structure based on user interaction, in JSON syntax – or in other words, speculatively preload those URLs before the navigation. This API can be used, for example, to prerender any links on a page whenever the user hovers over them.

Also, with the Speculation Rules API, ‘prerender’ actually means to prerender the entire page, including running JavaScript. This can lead to near-instant load times once the user clicks on the link, as the page would have most likely already been loaded in its entirety. However, that is only one of the possible configurations.”
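For context, a document-level speculation rule of the shape Chrome’s documentation describes looks roughly like this; the exact rules the plugin emits may differ:

```html
<script type="speculationrules">
{
  "prerender": [{
    "source": "document",
    "where": { "href_matches": "/*" },
    "eagerness": "moderate"
  }]
}
</script>
```

The `where`/`eagerness` document-rule syntax is the part that requires the newer Chrome versions discussed below.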

The new WordPress plugin adds support for the Speculation Rules API. The Mozilla developer pages, a great resource for technical understanding of web APIs, describe it like this:

“The Speculation Rules API is designed to improve performance for future navigations. It targets document URLs rather than specific resource files, and so makes sense for multi-page applications (MPAs) rather than single-page applications (SPAs).

The Speculation Rules API provides an alternative to the widely-available `<link rel="prefetch">` feature and is designed to supersede the Chrome-only deprecated `<link rel="prerender">` feature. It provides many improvements over these technologies, along with a more expressive, configurable syntax for specifying which documents should be prefetched or prerendered.”

See also: Are Websites Getting Faster? New Data Reveals Mixed Results

Performance Lab Plugin

The new plugin was developed by the official WordPress performance team which occasionally rolls out new plugins for users to test ahead of possible inclusion into the actual WordPress core. So it’s a good opportunity to be first to try out new performance technologies.

The new WordPress plugin is by default set to prerender “WordPress frontend URLs” which are pages, posts, and archive pages. How it works can be fine-tuned under the settings:

Settings > Reading > Speculative Loading

Browser Compatibility

The Speculation Rules API has been supported since Chrome 108; however, the specific rules used by the new plugin require Chrome 121 or higher. Chrome 121 was released in early 2024.

Browsers that do not support the API will simply ignore the rules, with no effect on the user experience.

Check out the new Speculative Loading WordPress plugin developed by the official core WordPress performance team.

How Analytics Handles Prerendering

A WordPress developer commented with a question asking how analytics would handle prerendering, and someone else answered that it’s up to the analytics provider to detect a prerender and not count it as a page load or site visit.

Fortunately, both Google Analytics and Google Publisher Tag (GPT) are able to handle prerenders. The Chrome developers support page has a note about how analytics handles prerendering:

“Google Analytics handles prerender by delaying until activation by default as of September 2023, and Google Publisher Tag (GPT) made a similar change to delay triggering advertisements until activation as of November 2023.”

Possible Conflict With Ad Blocker Extensions

There are a couple of things to be aware of about this plugin, aside from the fact that it’s an experimental feature that requires Chrome 121 or higher.

A WordPress plugin developer commented that this feature may not work for browsers running the uBlock Origin ad-blocking extension.

Download the plugin:
Speculative Loading Plugin by the WordPress Performance Team

Read the announcement at WordPress
Speculative Loading in WordPress

See also: WordPress, Wix & Squarespace Show Best CWV Rate Of Improvement

10 Paid Search & PPC Planning Best Practices

Whether you are new to paid media or reevaluating your efforts, it’s critical to review your performance and best practices for your overall PPC marketing program, accounts, and campaigns.

Revisiting your paid media plan is an opportunity to ensure your strategy aligns with your current goals.

Reviewing best practices for pay-per-click is also a great way to keep up with trends and improve performance with newly released ad technologies.

As you review, you’ll find new strategies and features to incorporate into your paid search program, too.

Here are 10 PPC best practices to help you adjust and plan for the months ahead.


1. Goals

When planning, it is best practice to define goals for the overall marketing program, ad platforms, and at the campaign level.

Defining primary and secondary goals guides the entire PPC program. For example, your primary conversion may be to generate leads from your ads.

You’ll also want to look at secondary goals, such as brand awareness, which sits higher in the sales funnel and can drive the interest that ultimately produces the sales lead.

2. Budget Review & Optimization

Some advertisers get stuck in a rut and forget to review and reevaluate the distribution of their paid media budgets.

To best utilize budgets, consider the following:

  • Reconcile your planned vs. actual spend for each account or campaign on a regular basis. Depending on the budget size, monthly, quarterly, or semiannual reviews will work as long as you can hit budget numbers.
  • Determine if there are any campaigns that should be eliminated at this time to free up the budget for other campaigns.
  • Is there additional traffic available to capture and grow results for successful campaigns? The ad platforms often include a tool that provides an estimated daily budget with projected clicks and costs; this is just an estimate showing additional click potential if you are interested.
  • If other paid media channels are performing mediocrely, does it make sense to shift those budgets elsewhere?
  • For the overall paid search and paid social budget, can your company invest more in the positive campaign results?
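The reconciliation step above is easy to automate against a budget export. Here is a minimal sketch in JavaScript; the campaign names, figures, and the 10% tolerance are illustrative assumptions:

```javascript
// Flag campaigns whose actual spend drifts more than `tolerance`
// (a fraction, e.g. 0.10 = 10%) from the planned budget.
function findBudgetDrift(campaigns, tolerance) {
  return campaigns
    .map(({ name, planned, spent }) => ({
      name,
      variance: (spent - planned) / planned,
    }))
    .filter(({ variance }) => Math.abs(variance) > tolerance);
}

const report = findBudgetDrift(
  [
    { name: "Brand Search", planned: 5000, spent: 5100 },   // within tolerance
    { name: "Generic Search", planned: 8000, spent: 6200 }, // underspending
  ],
  0.10
);
// report → [{ name: "Generic Search", variance: -0.225 }]
```

Campaigns that surface here are candidates for either a budget top-up or a reallocation to stronger performers.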

3. Consider New Ad Platforms

If you can shift or increase your budgets, why not test out a new ad platform? Knowing your audience and where they spend time online will help inform your decision when choosing ad platforms.

Go beyond your comfort zone in Google, Microsoft, and Meta Ads.


Here are a few other advertising platforms to consider testing:

  • LinkedIn: Most appropriate for professional and business targeting. LinkedIn audiences can also be reached through Microsoft Ads.
  • TikTok: Younger Gen Z audience (16 to 24), video.
  • Pinterest: Products, services, and consumer goods with a female-focused target.
  • Snapchat: Younger demographic (13 to 35), video ads, app installs, filters, lenses.

Need more detailed information and even more ideas? Read more about the 5 Best Google Ads Alternatives.

4. Top Topics in Google Ads & Microsoft Ads

Recently, trends in search and social ad platforms have presented opportunities to connect with prospects more precisely, creatively, and effectively.

Don’t overlook newer targeting and campaign types you may not have tried yet.

  • Video: Incorporating video into your PPC accounts takes some planning for the goals, ad creative, targeting, and ad types. There is a lot of opportunity here as you can simply include video in responsive display ads or get in-depth in YouTube targeting.
  • Performance Max: This automated campaign type serves across all of Google’s ad inventory. Microsoft Ads recently released PMAX so you can plan for consistency in campaign types across platforms. Do you want to allocate budget to PMax campaigns? Learn more about how PMax compares to search.
  • Automation: While AI can’t replace human strategy and creativity, it can help manage your campaigns more easily. During planning, identify which elements you want to automate, such as automatically created assets, and decide how to successfully guide the AI in Performance Max campaigns.

While exploring new features, check out some hidden PPC features you probably don’t know about.

5. Revisit Keywords

The role of keywords has evolved over the past several years, with match types becoming less precise and loosening up to consider searcher intent.

For example, [exact match] keywords previously would literally match with the exact keyword search query. Now, ads can be triggered by search queries with the same meaning or intent.

A great planning exercise is to lay out keyword groups and evaluate if they are still accurately representing your brand and product/service.


Review the search term queries triggering your ads to discover trends and behaviors you may not have considered; these may have impacted performance and conversions over time.

Critical to your strategy:

  • Review the current keyword rules and determine whether they may impact your account in terms of close variants or shifts in traffic volume.
  • Brush up on how keywords work in each platform because the differences really matter!
  • Review search term reports more frequently for irrelevant queries that may pop up from match type changes. Incorporate these into match type adjustments or negative keyword lists as appropriate.
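That search term review can also be scripted against a report export: surface queries that spent money without converting as negative-keyword candidates. A minimal sketch follows; the field names and the spend threshold are assumptions for illustration, not any specific platform’s schema:

```javascript
// From search-term report rows, surface queries that spent money
// without converting — candidates for the negative keyword list.
function negativeKeywordCandidates(rows, minSpend) {
  return rows
    .filter((r) => r.conversions === 0 && r.cost >= minSpend)
    .map((r) => r.query);
}

const candidates = negativeKeywordCandidates(
  [
    { query: "free ppc course", cost: 42.5, conversions: 0 },
    { query: "ppc agency pricing", cost: 120.0, conversions: 3 },
    { query: "ppc", cost: 8.0, conversions: 0 },
  ],
  25
);
// candidates → ["free ppc course"]
```

Low-spend, zero-conversion queries (like the third row) are filtered out by the threshold so you review only the terms that are actually costing you money.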

6. Revisit Your Audiences

Review the audiences you selected in the past, especially given so many campaign types that are intent-driven.

Automated features that expand your audience could be helpful, but keep an eye out for performance metrics and behavior on-site post-click.

Remember, an audience is simply a list of users who are grouped together by interests or behavior online.

Therefore, there are unlimited ways to mix and match those audiences and target per the sales funnel.

Here are a few opportunities to explore and test:

  • LinkedIn user targeting: Outside of LinkedIn itself, this is available exclusively in Microsoft Ads.
  • Detailed Demographics: Marital status, parental status, home ownership, education, household income.
  • In-market and custom intent: Searches and online behavior signaling buying cues.
  • Remarketing: Advertisers’ website visitors, interactions with ads, and video/YouTube viewers.

Note: Audience availability varies by campaign type and seems to be updated frequently, so make this a regular checkpoint in your campaign management across all platforms.

7. Organize Data Sources

You will likely be running campaigns on different platforms with combinations of search, display, video, etc.

Looking back at your goals, what is the important data, and which platforms will you use to review and report? Can you get the majority of data in one analytics platform to compare and share?

Millions of companies use Google Analytics, which is a good option for centralized viewing of advertising performance, website behavior, and conversions.

8. Reevaluate How You Report

Have you been using the same performance report for years?

It’s time to reevaluate your essential PPC key metrics and replace or add that data to your reports.

Your objectives in reevaluating the reporting are:

  • Are we still using this data? Is it still relevant?
  • Is the data we are viewing actionable?
  • What new metrics should we consider adding that we haven’t thought about?
  • How often do we need to see this data?
  • Do the stakeholders receiving the report understand what they are looking at (aka data visualization)?

Adding new data should be purposeful, actionable, and helpful in making decisions for the marketing plan. It’s also helpful to decide what type of data is good to see as “deep dives” as needed.

9. Consider Using Scripts

The current ad platforms have plenty of AI recommendations and automated rules, and there is no shortage of third-party tools that can help with optimizations.

Scripts are another method for advertisers with large accounts or some scripting skills to automate report generation and repetitive tasks in their Google Ads accounts.

Navigating the world of scripts can seem overwhelming, but a good place to start is a post here on Search Engine Journal that provides use cases and resources to get started with scripts.

Luckily, you don’t need a Ph.D. in computer science — there are plenty of resources online with free or templated scripts.
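As an illustration of the kind of task scripts handle well, here is a minimal sketch of a Google Ads Script that collects enabled keywords whose 30-day CTR falls below a threshold. The 1% threshold is an illustrative assumption; `AdsApp` and `Logger` are globals provided by the Google Ads Scripts environment:

```javascript
// Sketch of a Google Ads Script: collect enabled keywords with a CTR
// below `minCtr` over the last 30 days so they can be reviewed.
function collectLowCtrKeywords(minCtr) {
  const flagged = [];
  const iterator = AdsApp.keywords()
    .withCondition("Status = ENABLED")
    .forDateRange("LAST_30_DAYS")
    .get();
  while (iterator.hasNext()) {
    const keyword = iterator.next();
    const ctr = keyword.getStatsFor("LAST_30_DAYS").getCtr();
    if (ctr < minCtr) {
      flagged.push(keyword.getText());
    }
  }
  return flagged;
}

// Entry point the Google Ads Scripts environment calls.
function main() {
  const flagged = collectLowCtrKeywords(0.01); // 1% CTR threshold
  Logger.log("Low-CTR keywords: " + flagged.join(", "));
}
```

From here you could extend the sketch to apply a label or email the list, rather than pausing anything automatically.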

10. Seek Collaboration

Another effective planning tactic is to seek out friendly resources and second opinions.


Much of the skill and science of PPC management is unique to the individual or agency, so there is no shortage of ideas to share with one another.

You can visit the Paid Search Association, a resource for paid ad managers worldwide, to make new connections and find industry events.

Preparing For Paid Media Success

Strategies should be based on clear and measurable business goals. Then, you can evaluate the current status of your campaigns based on those new targets.

Your paid media strategy should also be built with an eye for both past performance and future opportunities. Look backward and reevaluate your existing assumptions and systems while investigating new platforms, topics, audiences, and technologies.

Also, stay current with trends and keep learning. Check out ebooks, social media experts, and industry publications for resources and motivational tips.

Featured Image: Vanatchanan/Shutterstock
