
Microsoft Unveils Predictive Targeting, AI-Based Advertising Tool


Microsoft announces the launch of Predictive Targeting, an artificial intelligence-powered advertising tool.

The technology relies on machine learning to help advertisers reach new, receptive audiences and drive higher conversion rates.

Finding Hidden Audiences

Screenshot from: about.ads.microsoft.com, June 2023.

Predictive Targeting analyzes signals from advertisers’ existing ads and landing pages and Microsoft’s audience data to identify potential new audiences.

The tool automatically targets ads to the audiences most likely to convert without requiring advertisers to build an audience-targeting strategy manually.
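Microsoft has not disclosed how the underlying model works, but the general concept of predictive targeting, scoring candidate audience segments by their predicted likelihood of converting and then targeting the highest-scoring ones, can be sketched with a simple classifier. Everything below (feature names, segment names, and numbers) is a hypothetical illustration, not Microsoft's implementation.

```python
# Illustrative sketch only: Microsoft has not published how Predictive
# Targeting works internally. This shows the general concept of scoring
# audience segments by predicted conversion likelihood with a simple
# classifier. All feature names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-segment features:
# [ad click rate, landing page topic match, past purchase rate]
historical_segments = np.array([
    [0.021, 0.80, 0.012],
    [0.004, 0.20, 0.001],
    [0.035, 0.90, 0.020],
    [0.010, 0.40, 0.003],
])
# Whether each historical segment converted above the account baseline.
converted = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(historical_segments, converted)

# Score new, previously untargeted segments and surface the most promising.
candidate_segments = {
    "in-market: home fitness": [0.028, 0.85, 0.015],
    "browsed: office furniture": [0.006, 0.30, 0.002],
}
for name, features in candidate_segments.items():
    likelihood = model.predict_proba([features])[0, 1]
    print(f"{name}: predicted conversion likelihood {likelihood:.2f}")
```

In practice, a system like this would be trained on far richer signals from the ads, landing pages, and audience data Microsoft describes, but ranking segments by predicted conversion likelihood is the core concept.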

Saving Time & Increasing Efficiency

Microsoft claims Predictive Targeting can increase advertisers’ conversion rates by an average of 46 percent while streamlining the ad targeting process.

Advertisers no longer have to spend time researching their target audiences and can rely on Microsoft’s algorithms to find the most promising prospects.

The tool aims to help advertisers maximize their return on investment and gain greater efficiency in their ad campaigns.

Flexibility For Different Needs

Predictive Targeting can be used independently or in combination with advertisers’ existing audience targeting strategies.

When used alone, it provides a comprehensive solution for discovering and reaching relevant audiences.

When layered on top of existing strategies, it helps advertisers expand their reach and find new potential customers outside of their defined target audiences.

This flexibility allows advertisers to tailor the tool to suit their specific needs.

Potential Drawbacks

Before switching to a new targeting solution, it’s essential to consider potential drawbacks.

By relying on Microsoft’s algorithms to determine target audiences, advertisers give up some control over who sees their ads.

The AI may target audiences advertisers did not anticipate or intend to reach.

This could result in wasted ad spend or damage to the brand if the wrong audiences are exposed to the ads.

Advertisers may want to use other targeting and measurement tools in addition to Predictive Targeting to avoid complete reliance on Microsoft.

Getting Started

Predictive Targeting will now be the default targeting method for Audience Ads.

Advertisers simply have to activate the tool, and Microsoft’s algorithms will determine the optimal audiences for their ads.

Screenshot from: about.ads.microsoft.com, June 2023.

Advertisers can also disable Predictive Targeting and define their audiences as needed.

Microsoft recommends that advertisers use compelling ad copy aligned with their target customers, automated bidding, expanded reach across ad groups, and campaign performance monitoring.

The company predicts Predictive Targeting will help propel advertisers’ Audience Ads campaigns forward and usher in the future of targeted advertising.


Featured image: Screenshot from about.ads.microsoft.com, June 2023.

Source: Microsoft





Competing Against Brands & Nouns Of The Same Name


An illustration of a man in a business suit interacting with a floating 3D network of connected nodes, symbolizing SEO strategy and digital technology, set against a stylized outdoor background with clouds and plants

Establishing and building a brand has always been both a challenge and an investment, even before the days of the internet.

One thing the internet has done, however, is make the world a lot smaller, and the frequency of brand (or noun) conflicts has greatly increased.

In the past year, I’ve been emailed and asked questions about these conflicts at conferences more than I have in my entire SEO career.

When you share your brand name with another brand, town, or city, Google has to determine the dominant user interpretation of the query, or at least identify the most common interpretations when several exist.

Noun and brand conflicts typically happen when:

  • A rebrand’s research focuses on other business names and doesn’t take general user search into consideration.
  • A brand chooses a word in one language that also has a meaning in another.
  • A name is chosen that is also a noun (e.g., the name of a town or city).

Some examples include Finlandia, which is both a brand of cheese and vodka; Graco, which is both a brand of commercial products and a brand of baby products; and Kong, which is both the name of a pet toy manufacturer and a tech company.

User Interpretations

From conversations I’ve had with marketers and SEO pros working for various brands with this issue, the underlying theme (and potential cause) comes down to how Google handles interpretation of what users are looking for.

When a user enters a query, Google processes the query to identify any known entities it contains.

It does this to improve the relevance of search results being returned (as outlined in its 2015 Patent #9,009,192). From this, Google also works to return related, relevant results and search engine results page (SERP) elements.

For example, when you search for a specific film or TV series, Google may return a SERP feature containing relevant actors or news (if deemed relevant) about the media.
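At its simplest, that entity-identification step can be thought of as a lookup of query strings against a knowledge base. The sketch below is a toy illustration of the concept only; the mini knowledge base, entity IDs, and matching logic are invented and bear no relation to Google’s actual pipeline.

```python
# Toy sketch: identify known entities in a query by matching names against
# a tiny hand-built knowledge base, preferring longer (more specific) names.
# The entries and IDs below are invented; Google's pipeline is far richer.
KNOWLEDGE_BASE = {
    "nike": {"id": "ent/nike", "type": "Organization"},
    "kong": {"id": "ent/kong_company", "type": "Organization"},
    "king kong": {"id": "ent/king_kong", "type": "FilmSeries"},
}

def identify_entities(query: str) -> list[dict]:
    """Return knowledge-base entries whose names appear in the query."""
    remaining = query.lower()
    matches = []
    for name in sorted(KNOWLEDGE_BASE, key=len, reverse=True):
        if name in remaining:
            matches.append({"name": name, **KNOWLEDGE_BASE[name]})
            remaining = remaining.replace(name, " ")  # don't re-match substrings
    return matches

print(identify_entities("kong dog toy reviews"))    # resolves to the pet toy company
print(identify_entities("king kong release date"))  # resolves to the film franchise
```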

This then leads to interpretation.

When Google receives a query, the search results often need to cater for multiple common interpretations and intents. This is no different when someone searches for a recognized branded entity like Nike.

When I search for Nike, I get a search results page that is a combination of branded web assets such as the Nike website and social media profiles, the Map Pack showing local stores, PLAs, the Nike Knowledge Panel, and third-party online retailers.

This variation is to cater for the multiple interpretations and intents that a user just searching for “Nike” may have.

Brand Entity Disambiguation

Now, if we look at brands that share a name, such as Kong, when Google checks for entities and references against the Knowledge Graph (and knowledge base sources), it gets two close matches: Kong Company and Kong, Inc.

The search results page is also littered with product listing ads (PLAs) and ecommerce results for pet toys, but the second blue link organic result is Kong, Inc.

Also on page one, we can find references to a restaurant with the same name (UK-based search), and in the image carousel, Google is introducing the (King) Kong film franchise.

It is clear that Google sees the dominant interpretation of this query to be the pet toy company, but has diversified the SERP further to cater for secondary and tertiary meanings.

In 2015, Google was granted a patent that included features of how Google might determine differences in entities of the same name.

This includes the possible use of annotations within the Knowledge Base – such as the addition of a word or descriptor – to help disambiguate entities with the same name. For example, the entries for Dan Taylor could be:

  • Dan Taylor (marketer).
  • Dan Taylor (journalist).
  • Dan Taylor (olympian).

From experience, how Google determines the “dominant” interpretation of the query, and then how it orders search results and result types, comes down to the following (a toy weighting of these signals is sketched after the list):

  • Which results users are clicking on when they perform the query (SERP interaction).
  • How established the entity is within the user’s market/region.
  • How closely the entity is related to previous queries the user has searched (personalization).
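None of this is a published formula; it is an observation of how the signals appear to interact. As a purely hypothetical illustration, a weighted combination of those three signals might look like the sketch below, with every weight and score invented:

```python
# Hypothetical weighting of the three observed signals to rank competing
# interpretations of an ambiguous brand query. Weights and scores are
# invented for illustration; Google has not published such a formula.
WEIGHTS = {"serp_interaction": 0.5, "market_establishment": 0.3, "personalization": 0.2}

interpretations = {
    "Kong Company (pet toys)":   {"serp_interaction": 0.70, "market_establishment": 0.80, "personalization": 0.10},
    "Kong, Inc. (tech company)": {"serp_interaction": 0.25, "market_establishment": 0.55, "personalization": 0.90},
}

def score(signals: dict) -> float:
    """Weighted sum of the signal scores for one interpretation."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

# Print interpretations from most to least dominant under this toy model.
for name, signals in sorted(interpretations.items(), key=lambda i: score(i[1]), reverse=True):
    print(f"{name}: {score(signals):.2f}")
```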

I’ve also observed that extended brand searches correlate with, and appear to affect, exact match branded search.

It’s also worth highlighting that this can be dynamic. Should a brand start receiving a high volume of mentions from multiple news publishers, Google will take this into account and amend the search results to better meet users’ needs and potential query interpretations at that moment in time.

SEO For Brand Disambiguation

Building a brand is not a task that rests solely on the shoulders of SEO professionals. It requires buy-in from the wider business and ensuring that the brand and brand messaging are both defined and aligned.

SEO can, however, influence this effort through the full spectrum of SEO: technical, content, and digital PR.

Google understands entities through the concept of relatedness, which is determined by the co-occurrence of entities and then by how Google classifies and discriminates between them.

We can influence this through technical SEO with granular Schema markup, and by making sure the brand name is consistent across all web properties and references (a markup sketch follows below).
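As one concrete, purely illustrative example of granular markup aimed at disambiguation, Organization markup can spell out which brand a site belongs to using schema.org’s sameAs and disambiguatingDescription properties. The company details and URLs below are placeholders, not recommendations for any real brand.

```python
# Sketch of Organization JSON-LD aimed at brand disambiguation. The
# sameAs and disambiguatingDescription properties are real schema.org
# properties; the company details and URLs are placeholders.
import json

organization_markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Kong",
    "disambiguatingDescription": "Manufacturer of rubber pet toys, not the API management company.",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/EXAMPLE_ENTRY",       # placeholder knowledge base entry
        "https://www.linkedin.com/company/example-company",  # placeholder social profile
    ],
}

# Embed in the page's <head> as:
# <script type="application/ld+json"> ... </script>
print(json.dumps(organization_markup, indent=2))
```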

This ties into how we then write about the brand in our content and the co-occurrence of the brand name with other entity types.

To reinforce this and build brand awareness, this should be coupled with digital PR efforts with the objective of brand placement and corroborating topical relevance.

A Note On Search Generative Experience

As it looks likely that Search Generative Experience, or at least components of it, is going to be the future of search, it’s worth noting that in tests we’ve done, Google can at times have issues generating AI snapshots for brands when multiple brands share the same name.

To check your brand’s exposure, I recommend asking Google to generate an SGE snapshot for your brand + reviews.

If Google isn’t 100% sure which brand you mean, it will start to include reviews and comments on companies of the same (or very similar) name.

It does disclose that they are different companies in the snapshot, but if your user is skim-reading and only looking at the summaries, this could be an accidental negative brand touchpoint.



Featured Image: VectorMine/Shutterstock



Google Rolls Out New ‘Web’ Filter For Search Results


Google logo inside the Google Indonesia office in Jakarta

Google is introducing a filter that allows you to view only text-based webpages in search results.

The “Web” filter, rolling out globally over the next two days, addresses demand from searchers who prefer a stripped-down, simplified view of search results.

Danny Sullivan, Google’s Search Liaison, states in an announcement:

“We’ve added this after hearing from some that there are times when they’d prefer to just see links to web pages in their search results, such as if they’re looking for longer-form text documents, using a device with limited internet access, or those who just prefer text-based results shown separately from search features.”

The new functionality is a throwback to when search results were more straightforward. Now, they often feature rich media such as images, videos, and shopping ads alongside the traditional list of web links.

How It Works

On mobile devices, the “Web” filter will be displayed alongside other filter options like “Images” and “News.”

Screenshot from: twitter.com/GoogleSearchLiaison, May 2024.

If Google’s systems don’t automatically surface it based on the search query, desktop users may need to select “More” to access it.

Screenshot from: twitter.com/GoogleSearchLiaison, May 2024.

More About Google Search Filters

Google’s search filters allow you to narrow results by type. The options displayed are dynamically generated based on your search query and what Google’s systems determine could be most relevant.

The “All Filters” option provides access to filters that are not shown automatically.

Alongside filters, Google also displays “Topics” – suggested related terms that can further refine or expand a user’s original query into new areas of exploration.

For more about Google’s search filters, see its official help page.


Featured Image: egaranugrah/Shutterstock





Why Google Can’t Tell You About Every Ranking Drop


In a recent Twitter exchange, Google’s Search Liaison, Danny Sullivan, provided insight into how the search engine handles algorithmic spam actions and ranking drops.

The discussion was sparked by a website owner’s complaint about a significant traffic loss and the inability to request a manual review.

Sullivan clarified that a site could be affected by an algorithmic spam action, or it could simply not be ranking well due to other factors.

He emphasized that many sites experiencing ranking drops mistakenly attribute it to an algorithmic spam action when that may not be the case.

“I’ve looked at many sites where people have complained about losing rankings and decide they have a algorithmic spam action against them, but they don’t.”

Sullivan’s full statement will help you understand Google’s transparency challenges.

Additionally, he explains why the desire for manual review to override automated rankings may be misguided.

Challenges In Transparency & Manual Intervention

Sullivan acknowledged the idea of providing more transparency in Search Console, potentially notifying site owners of algorithmic actions similar to manual actions.

However, he highlighted two key challenges:

  1. Revealing algorithmic spam indicators could allow bad actors to game the system.
  2. Algorithmic actions are not site-specific and cannot be manually lifted.

Sullivan expressed sympathy for the frustration of not knowing the cause of a traffic drop and the inability to communicate with someone about it.

However, he cautioned against the desire for a manual intervention to override the automated systems’ rankings.

Sullivan states:

“…you don’t really want to think “Oh, I just wish I had a manual action, that would be so much easier.” You really don’t want your individual site coming the attention of our spam analysts. First, it’s not like manual actions are somehow instantly processed. Second, it’s just something we know about a site going forward, especially if it says it has change but hasn’t really.”

Determining Content Helpfulness & Reliability

Moving beyond spam, Sullivan discussed various systems that assess the helpfulness, usefulness, and reliability of individual content and sites.

He acknowledged that these systems are imperfect and some high-quality sites may not be recognized as well as they should be.

“Some of them ranking really well. But they’ve moved down a bit in small positions enough that the traffic drop is notable. They assume they have fundamental issues but don’t, really — which is why we added a whole section about this to our debugging traffic drops page.”

Sullivan revealed ongoing discussions about providing more indicators in Search Console to help creators understand their content’s performance.

“Another thing I’ve been discussing, and I’m not alone in this, is could we do more in Search Console to show some of these indicators. This is all challenging similar to all the stuff I said about spam, about how not wanting to let the systems get gamed, and also how there’s then no button we would push that’s like “actually more useful than our automated systems think — rank it better!” But maybe there’s a way we can find to share more, in a way that helps everyone and coupled with better guidance, would help creators.”

Advocacy For Small Publishers & Positive Progress

In response to a suggestion from Brandon Saltalamacchia, founder of RetroDodo, about manually reviewing “good” sites and providing guidance, Sullivan shared his thoughts on potential solutions.

He mentioned exploring ideas such as self-declaration through structured data for small publishers and learning from that information to make positive changes.

“I have some thoughts I’ve been exploring and proposing on what we might do with small publishers and self-declaring with structured data and how we might learn from that and use that in various ways. Which is getting way ahead of myself and the usual no promises but yes, I think and hope for ways to move ahead more positively.”

Sullivan said he can’t make promises or implement changes overnight, but he expressed hope for finding ways to move forward positively.


Featured Image: Tero Vesalainen/Shutterstock


