The inner workings of search advertising in a cookieless world

30-second summary:

  • As third-party cookies phase out and marketers search for alternative approaches, they may find themselves lost in a sea of data when attempting to measure and evaluate campaign impact
  • Focusing on the quality of users instead of attributable conversions can mitigate the inconvenience of losing third-party cookies
  • The shift from cookies to a new engagement model will require constant testing, so keep data simple where possible

For years now, digital marketers have been spoiled by third-party cookies and the ability to accurately track engagement – they have made life simple and reporting on a campaign’s activity a breeze. This approach has allowed us to see, with minimal effort, how many conversions Meta, Criteo, or an influencer contributed to. But the eventual demise of third-party cookies demands accurate engagement data to ensure that the transition to new identifiers is as smooth as possible. However, whether out of ignorance or convenience, many advertisers still take inflated, blindly optimistic metrics as the truth.

Counting your chickens before they’ve converted

Take Facebook, for example: it has no way of knowing to what extent its services contributed to a conversion. There are many ways of producing wildly inflated numbers, such as a single conversion with several touchpoints being credited to multiple channels, or even false positives. This is particularly troubling for those engaging in heavy remarketing to users who have already visited or interacted with a site. One must ask the question – when working with inaccurate metrics, will remarketing actually contribute further conversions, or will it simply attribute misclicks to campaigns that don’t increase revenue?

We humans love to oversimplify things, especially complex patterns. Consider how complex a single visit to your webpage is: a session is connected to a user, who carries attributes such as age, gender, location, and interests, as well as their current activity on your site. That user data is then sent to, for example, Google Ads in a remarketing list.

Even the remarketing list introduces a notable variable when trying to make sense of conversions. Facebook and Google users do not map 1:1 – one user on Google is often connected to more devices and browsers than the average Facebook user. You could get a conversion from a device that Google has connected to a known user while Facebook lacks any insight into it.

Each user who visits your website populates your remarketing lists. Those lists in turn build “lookalike” audiences in Facebook and “similar” audiences in Google. These audiences can be extremely useful: traffic from a channel credited with few or no conversions could in fact be building the most efficient similar audiences in Google Ads, which can then drive a large number of cheap conversions.

Identify data that helps you steer clear of over-attribution

All automated optimization efforts, whether campaign budget optimization (CBO) or Target CPA bidding, depend on data. The more data you feed the machines, the better the results. The bigger your remarketing lists, the more efficient your automated/smart campaigns on Google will be. This is what makes the value of a user so multifaceted and complex, even before you take the impact of an ad impression into account.

With this incredible complexity, we need an attribution model that genuinely portrays engagement data without inflating or underselling a campaign’s conversions. However, while many models are well suited to producing accurate results, it should be remembered that attribution is inherently flawed. As consumers, we understand that the actions that drive us to convert are varied, and many of them cannot be tracked well enough to be attributed. While attribution can never be perfect, it remains the best tool available and becomes far more useful when applied alongside other data points.

The last non-direct click attribution model

When trying to avoid inflated data, the easiest attribution model to work with is last non-direct click. With this model, all direct traffic is ignored and all credit for the conversion goes to the last channel the customer clicked through from, preventing a single conversion from being attributed to multiple touchpoints. It is a simple model that considers only the bare minimum, yet it solves the problem of over-attribution. This way, marketers can measure effect rather than attributing fractions of a conversion to different campaigns or channels. It really is a straightforward approach: essentially, “if we do this to x, does that increase y?”. Of course, like all attribution models, last non-direct click has its downsides. It is not a perfect solution to over- or under-attribution, but it is an easily replicable and strategically sound approach that provides reliable data you can measure in one place.
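
To make this concrete, here is a minimal sketch of how last non-direct click credit could be assigned to a conversion path. The channel names and list structure are illustrative assumptions, not taken from any particular analytics platform:

```python
# Minimal sketch of last non-direct click attribution.
# The channel names and list structure are hypothetical examples.

def last_non_direct_click(touchpoints):
    """Credit the conversion to the last non-direct channel.

    touchpoints: channel names in chronological order, ending with
    the visit in which the conversion happened.
    """
    # Walk backwards through the path and skip direct visits.
    for channel in reversed(touchpoints):
        if channel != "direct":
            return channel
    return "direct"  # every visit was direct, so direct keeps the credit

# The trailing direct visit is ignored; email gets full credit.
print(last_non_direct_click(["paid_search", "email", "direct"]))  # email
```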

In any case, the delayed death of the third-party cookie is certainly causing many to reevaluate their digital advertising methodologies. For now, proactive marketers will continue to search for privacy-friendly identifiers that can provide alternative solutions. First-party data could well have a larger role to play if consent from users can be reliably gained. While we wait for the transition, getting your data in order and finding accurate, reliable approaches to attribution must be a priority.

Ensuring the accuracy of this data is therefore imperative. Start by confirming there are no discrepancies between clicks and sessions and that every webpage is accurately tracked. In the absence of auto-tracking, UTMs should be used to tag all campaigns and, if possible, tracking should be server-side. Finally, marketers should test their tracking with Tag Assistant and make sure they don’t create duplicate sessions or lose parameters during the session. Once the third-party cookie becomes entirely obsolete, the direction marketers take will ultimately be decided by data – which must be as accurate as possible.
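
On the UTM point, one low-effort way to avoid typos and lost parameters is to build campaign URLs programmatically rather than by hand. Here is a minimal sketch using only Python’s standard library; the source, medium, and campaign values are hypothetical examples:

```python
# Sketch: building consistently UTM-tagged campaign URLs.
# The example values (source, medium, campaign) are hypothetical.
from urllib.parse import urlencode, urlsplit, urlunsplit

def add_utm(url, source, medium, campaign):
    parts = urlsplit(url)
    utm = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    # Preserve any query string already present on the URL.
    query = f"{parts.query}&{utm}" if parts.query else utm
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

print(add_utm("https://example.com/landing", "newsletter", "email", "spring_sale"))
# https://example.com/landing?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale
```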


Torkel Öhman is CTO and co-founder of Amanda AI. Responsible for building Amanda AI, and drawing on his experience in data and analytics, Torkel oversees all technical aspects of the product, ensuring all ad accounts run smoothly.

5 Questions Answered About The OpenAI Search Engine

It was reported that OpenAI is working on a search engine that would directly challenge Google. But details missing from the report raise questions about whether OpenAI is creating a standalone search engine or if there’s another reason for the announcement.

OpenAI Web Search Report

The report, published by The Information, relates that OpenAI is developing a web search product that will directly compete with Google. A key detail is that it will be partly powered by Bing, Microsoft’s search engine. Apart from that there are no other details, including whether it will be a standalone search engine or integrated within ChatGPT.

All reports note that it will be a direct challenge to Google, so let’s start there.

1. Is OpenAI Mounting A Challenge To Google?

OpenAI is said to be using Bing Search as part of the rumored search engine: a combination of GPT-4 and Bing Search, plus something in the middle to coordinate between the two.

In that scenario, OpenAI is not developing its own search indexing technology; it’s using Bing’s.

What’s left then for OpenAI to do in order to create a search engine is to devise how the search interface interacts with GPT-4 and Bing.

And that’s a problem that Bing has already solved, using what Microsoft calls an orchestration layer. Bing Chat uses retrieval-augmented generation (RAG) to improve answers by adding web search data as context for the answers that GPT-4 creates. For more information on how orchestration and RAG work, watch the keynote at the Microsoft Build 2023 event by Kevin Scott, Chief Technology Officer at Microsoft, at the 31:45 minute mark.
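
In outline, that orchestration flow is simple: retrieve, then generate with the retrieved text as context. The sketch below shows the control flow only; search_web and generate_answer are hypothetical placeholders standing in for a web search API and an LLM call, not real Bing or OpenAI endpoints:

```python
# Conceptual sketch of retrieval-augmented generation (RAG).
# search_web() and generate_answer() are hypothetical placeholders,
# not real Bing or OpenAI APIs.

def search_web(query, top_k=3):
    # A real orchestration layer would call a web search API here.
    return [f"snippet {i} about {query!r}" for i in range(top_k)]

def generate_answer(prompt):
    # A real system would call the language model here.
    return f"(model answer grounded in: {prompt[:50]}...)"

def answer_with_rag(question):
    # Retrieve fresh web data, then hand it to the model as context
    # so the generated answer is grounded in current information.
    context = "\n".join(search_web(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate_answer(prompt)

print(answer_with_rag("What is an orchestration layer?"))
```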

If OpenAI is creating a challenge to Google Search, what exactly is left for OpenAI to do that Microsoft isn’t already doing with Bing Chat? Bing is an experienced and mature search technology, an expertise that OpenAI does not have.

Is OpenAI challenging Google? A more plausible answer is that Bing is challenging Google through OpenAI as a proxy.

2. Does OpenAI Have The Momentum To Challenge Google?

ChatGPT is the fastest growing app of all time, currently with about 180 million users, achieving in two months what took years for Facebook and Twitter.

Yet despite that head start, Google’s lead is a steep hill for OpenAI to climb. Consider that Google has approximately 3 to 4 billion users worldwide, absolutely dwarfing OpenAI’s 180 million.

Assuming that all 180 million OpenAI users performed an average of four searches per day, that would amount to 720 million searches per day.

Statista estimates that there are 6.3 million searches on Google per minute, which equals over 9 billion searches per day.
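
The gap is easy to verify with back-of-the-envelope arithmetic (the four-searches-per-day figure is the assumption made above):

```python
# Back-of-the-envelope comparison of daily search volume.
openai_users = 180_000_000
searches_per_user = 4                 # assumption from the text
openai_daily = openai_users * searches_per_user

google_per_minute = 6_300_000         # Statista estimate
google_daily = google_per_minute * 60 * 24

print(f"{openai_daily:,}")                     # 720,000,000 searches/day
print(f"{google_daily:,}")                     # 9,072,000,000 searches/day
print(f"~{google_daily / openai_daily:.0f}x")  # Google is ~13x larger
```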

If OpenAI is to compete, it will have to offer a useful product with a compelling reason to use it. For example, Google and Apple have a captive audience in their mobile device ecosystems, which embed them in the daily lives of their users, both at work and at home. It’s fairly apparent that simply creating a search engine is not enough to compete.

Realistically, how can OpenAI achieve that level of ubiquity and usefulness?

OpenAI is facing an uphill battle against not just Google but Microsoft and Apple, too. If we count Internet of Things apps and appliances, then Amazon joins that list of competitors that already have a presence in billions of users’ daily lives.

OpenAI does not have the momentum to launch a search engine that competes with Google because it doesn’t have the ecosystem to support integration into users’ lives.

3. OpenAI Lacks Information Retrieval Expertise

Search is formally referred to as information retrieval (IR) in research papers and patents. No amount of searching the Arxiv.org repository will surface papers on information retrieval authored by OpenAI researchers. The same is true when searching for IR-related patents. OpenAI’s list of research papers also lacks IR-related studies.

It’s not that OpenAI is being secretive. OpenAI has a long history of publishing research papers about the technologies it is developing. The IR research simply does not exist. So if OpenAI is indeed planning to launch a challenge to Google, where is the smoke from that fire?

It’s a fair guess that search is not something OpenAI is developing right now. There are no signs that it is even flirting with building a search engine; there’s nothing there.

4. Is The OpenAI Search Engine A Microsoft Project?

There is substantial evidence that Microsoft is furiously researching how to use LLMs as a part of a search engine.

All of the following research papers are classified as belonging to the fields of Information Retrieval (aka search), Artificial Intelligence, and Natural Language Computing.

Here are a few research papers just from 2024:

Enhancing human annotation: Leveraging large language models and efficient batch processing
This paper is about using AI to classify search queries.

Structured Entity Extraction Using Large Language Models
This research paper describes a way to extract structured information from unstructured text (like webpages). It’s like turning a webpage (unstructured data) into a machine-understandable format (structured data).

Improving Text Embeddings with Large Language Models (PDF version here)
This research paper discusses a way to get high-quality text embeddings that can be used for information retrieval (IR). Text embeddings are representations of text that algorithms can use to understand the semantic meanings of, and relationships between, words.

The above research paper explains their use:

“Text embeddings are vector representations of natural language that encode its semantic information. They are widely used in various natural language processing (NLP) tasks, such as information retrieval (IR), question answering…etc. In the field of IR, the first-stage retrieval often relies on text embeddings to efficiently recall a small set of candidate documents from a large-scale corpus using approximate nearest neighbor search techniques.”
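
To make the quoted idea concrete, here is a toy sketch of embedding-based first-stage retrieval. The vectors are fabricated, and it uses exact cosine similarity rather than the approximate nearest neighbor search a production system would need; it is not the paper’s method:

```python
# Toy sketch of first-stage retrieval with text embeddings.
# The vectors are fabricated; real systems use model-generated
# embeddings and approximate nearest neighbor search at scale.
import numpy as np

docs = ["how to bake bread", "train schedules", "sourdough starter tips"]
doc_vecs = np.array([[0.9, 0.1, 0.3],
                     [0.1, 0.9, 0.0],
                     [0.8, 0.0, 0.5]])   # one pretend embedding per doc

query_vec = np.array([0.85, 0.05, 0.4])  # pretend embedding of "bread recipes"

# Cosine similarity between the query and every document vector.
sims = (doc_vecs @ query_vec) / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)

# Recall the top candidates for a later, more precise ranking stage.
for i in sims.argsort()[::-1][:2]:
    print(f"{sims[i]:.3f}  {docs[i]}")
```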

There’s more research by Microsoft that relates to search, but these are the ones that are specifically related to search together with large language models (like GPT-4.5).

Following the trail of breadcrumbs leads directly to Microsoft as the technology powering any search engine that OpenAI is supposed to be planning… if that rumor is true.

5. Is The Rumor Meant To Steal The Spotlight From Gemini?

The rumor that OpenAI is launching a competing search engine was published on February 14th. The next day, on February 15th, Google announced the launch of Gemini 1.5, after announcing Gemini Advanced on February 8th.

Is it a coincidence that OpenAI’s announcement completely overshadowed the Gemini announcement the next day? The timing is incredible.

At this point the OpenAI search engine is just a rumor.

Warning: Unpopular SEO writing opinion

Unpopular opinion alert: Adding new blog posts may not help your site.

(No matter what that content marketing company told you.) 🙄

So many of my new clients — especially subject matter experts — don’t need new content (immediately).

They HAVE content — scads of it scattered across various platforms.

(Maybe that sounds familiar.)

What they DO need is someone to review their content and customer persona, pinpoint opportunities, and develop a baby-step approach to leveraging those older content assets.

Because there are always opportunities. 🔥

Before writing another word, ask…

  • Are you repurposing the content you have? Or are you writing it once and forgetting about it (which is so common)?
  • Is your customer/reader persona still accurate, or has your target audience changed post-COVID?
  • Do your sales pages showcase your benefits and speak to your customers’ pain points? Or are they flat and dull?
  • Does your content sound like YOU with a point of view? Or is there a massive disconnect between how you talk to clients and the words you use on your site?
  • When did you last take a peek at your old sales emails and email welcome sequences? Could updating those assets make you more money?
  • Isn’t it time to save time (and budget) and leverage your existing content?

If you need help untangling your content and messaging, let me know. I love creating content order out of chaos.

Google Bans Impersonation In Ads

Google bans impersonation and false affiliation in ads, enforcing policy changes in March.

  • Google bans impersonation and false affiliation in ads.
  • Policy enforcement starts in March.
  • Violators will be banned from Google Ads.
