7 Ways SEMs Can Leverage AI Tools

With the arrival of widely available generative AI like ChatGPT and Bard, there is a whole slew of additional things PPC account managers can now automate.

Whereas most of the previous waves of automation helped with math in the form of bidding or pattern discovery in the form of targeting, the latest wave is focused on generating text, which can help with writing ads.

But generative AI can do much more than simply suggest a few additional headlines for responsive search ads (RSAs), so here I’ll share examples of how to use GPT to set up and optimize Google Ads.

How To Access Generative AI

The most widely discussed way of trying GPT is through ChatGPT, which is accessible on OpenAI’s website.

But while this may be the quickest way to try generative AI, you’ll probably want a more scalable solution once you start using it to build and optimize PPC campaigns. This is where add-ons for spreadsheets come in handy.

One of my favorite ones is a Google Sheets add-on called GPT for Sheets and Docs.

After you install it, you can add GPT formulas to cells in Google Sheets by typing formulas in the form of ‘=GPT(A2,B2)’.

So rather than working with a single prompt at a time in a chat interface, you can run the same prompt at scale on many cells.

Screenshot from author, April 2023

You’ll need an API key to use this add-on, and you can get one from your OpenAI account.

Now we’re ready to start using GPT to help us with PPC work.
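If you outgrow the chat interface and the Sheets add-on, the same =GPT(prompt, input) pattern can be scripted directly against the API. This is a minimal sketch, assuming the official openai Python client (1.x) and an OPENAI_API_KEY environment variable; the model name and example prompt are placeholders, not a recommendation:

```python
import os

def build_prompt(instruction: str, data: str) -> str:
    """Mimic the Sheets formula =GPT(A2, B2): join the instruction cell and the data cell."""
    return f"{instruction}\n\n{data}"

def gpt(instruction: str, data: str, model: str = "gpt-3.5-turbo") -> str:
    """Send one prompt to the API. Requires `pip install openai` and an OPENAI_API_KEY."""
    from openai import OpenAI  # imported here so the helper above works without the package
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(instruction, data)}],
    )
    return resp.choices[0].message.content

# Example call (requires a valid API key):
# gpt("find 5 keywords similar to these:", '"amazon ppc tools", "amazon ads software"')
```

Like the spreadsheet formula, this keeps the instruction and the data separate, so you can loop the same instruction over many rows of keywords.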

1. How To Use ChatGPT For Keyword Discovery

While Google’s Keyword Planner is a great tool for finding new keywords, ChatGPT provides a good alternative. You can prompt it with the landing page you want keywords for and get a few suggestions.

Keyword ideas: Google Keyword Planner. Screenshot from ChatGPT, April 2023

To make the keyword suggestions a bit more relevant, you can ask ChatGPT to include a particular term in each one:

Keyword ideas: ChatGPT. Screenshot from ChatGPT, April 2023

Once ad groups are up and running, you might want to find some additional related keywords to target.

So rather than starting with a landing page URL, we’ll prompt ChatGPT to find keywords similar to existing ones.

In this example, I’ll show how that was done in Google Sheets.

Prompt: find 5 keywords similar to these:

Inputs: “amazon ppc tools”, “optimize amazon ppc”, “amazon advertising software”, “amazon ads automation”, “amazon ppc automation”, “amazon ppc software”, “amazon ppc management tools”, “amazon ppc automation software”, “amazon ads software”, “amazon sponsored ads management”, “amazon ppc management software”

GPT Response:

1. Amazon PPC Optimization

2. Amazon Advertising Strategies

3. Amazon Ads Optimization

4. Amazon PPC Strategies

5. Amazon Sponsored Ads Automation

The first column contains the prompt, the second column the list of existing keywords, and the third column the formula that gets the response from GPT: =GPT(A2,B2).

The beauty of using GPT in a sheet is that it’s quite easy to just change the keywords in the second column while using the same prompt and formula to generate the ChatGPT response.

2. How To Use GPT For Keyword Classification

But what happens when a list of suggested keywords, whether from ChatGPT or another tool, gets too long?

We all know that Google rewards relevance through higher Quality Score. So we should split the list of keywords into smaller related groups.

It turns out ChatGPT is quite good at grouping words by relevance.

In my first attempt, I tried to help ChatGPT understand what might be a good categorization, so I added a category name after the first few keywords in my prompt. But I found there was no need to explain categorization, and issuing the same prompt without examples yielded equally good results.

Prompting ChatGPT to categorize marketing keywords. Screenshot from ChatGPT, April 2023

In this prompt for ChatGPT, we provided examples of how to categorize keywords:

Categorize search marketing keywords. Screenshot from ChatGPT, April 2023
In this prompt, we didn’t provide examples for classification, but the quality of the response remained the same:
ChatGPT suggestion. Screenshot from ChatGPT, April 2023

This output could be more useful if presented in a table, so read on to the tips and tricks section of this post to learn how to ask ChatGPT for that.
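As a sketch of how this grouping step could be scripted, here is a hypothetical zero-shot prompt builder plus a parser for the “Category: keyword, keyword” lines ChatGPT tends to return. The prompt wording and the output format are assumptions, so adjust the parser to the responses you actually get:

```python
def grouping_prompt(keywords: list[str]) -> str:
    """Build a zero-shot prompt asking GPT to cluster keywords into ad groups."""
    joined = ", ".join(f'"{k}"' for k in keywords)
    return (
        "Group the following PPC keywords into tightly themed ad groups. "
        "Return one line per group in the form 'Group name: keyword, keyword'.\n"
        f"Keywords: {joined}"
    )

def parse_groups(response: str) -> dict[str, list[str]]:
    """Turn 'Group name: kw1, kw2' lines into a dict of ad groups."""
    groups: dict[str, list[str]] = {}
    for line in response.splitlines():
        if ":" not in line:
            continue  # skip preamble or blank lines in the response
        name, _, rest = line.partition(":")
        groups[name.strip()] = [k.strip() for k in rest.split(",") if k.strip()]
    return groups
```

The parsed dict maps one ad group name to its keyword list, ready to paste into a campaign build sheet.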

3. How To Use GPT To Create Ads

With a solid set of grouped keywords and an associated landing page from my site, all I’m really missing to set up ad groups are headlines and descriptions for the RSA ads.

So I asked ChatGPT for help with writing some ads:

Ad headlines by ChatGPT. Screenshot from ChatGPT, April 2023
ChatGPT ad headline suggestions can far exceed the requested character limits.

The response to this prompt illustrates a known limitation of ChatGPT: it’s not good at math, and the headlines it suggested tended to be too long.

ChatGPT isn’t good at math because it works by predicting what text would logically appear next in a sequence. It might know that “1+1=” is usually followed by “2,” but it doesn’t do the math to know this. It looks for common sequences.

This is a known issue and seems to be getting addressed. In my most recent experiments this week, ChatGPT is now writing strings that are shorter and more likely to fit into the limited ad space provided by Google for headlines:

ChatGPT suggestions for headlines. Screenshot from ChatGPT, April 2023

When looking for additional ad text variations, providing the current assets helps ChatGPT do better.

This is because GPT is really good at completing text – so providing examples in the prompt leads to better suggestions, because they will follow the same pattern as the examples.
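Because GPT can’t reliably count characters, it helps to validate lengths yourself before pasting suggestions into Google Ads. A minimal sketch, using Google’s published RSA limits of 30 characters per headline and 90 per description (the sample suggestions are invented):

```python
# Google's RSA asset limits.
HEADLINE_LIMIT = 30
DESCRIPTION_LIMIT = 90

def filter_by_limit(lines: list[str], limit: int) -> list[str]:
    """Keep only non-empty suggestions that fit the character limit."""
    return [s.strip() for s in lines if s.strip() and len(s.strip()) <= limit]

suggestions = [
    "Automate Your PPC Today",                      # fits a headline
    "The Most Comprehensive PPC Automation Suite",  # too long for a headline
]
headlines = filter_by_limit(suggestions, HEADLINE_LIMIT)
```

Anything that fails the check can be fed back to GPT with a “shorten this to under 30 characters” follow-up prompt.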

4. How To Use GPT For Search Terms Optimization

Once the ad groups are running, they’ll start to collect data, like what search terms the ads showed for.

Now we can use this data for optimization. The problem with search terms is that there can be a lot of them, and manually working your way through them can be tedious and time-consuming.

So I asked ChatGPT if it could take all my search terms for an ad group and rank them by relevance.

Since it already understands the concept of relevance, I didn’t need to explain, and got the following result:

Keyword table. Screenshot from ChatGPT, April 2023

Some search terms ChatGPT considers more relevant to a company selling PPC management software:

Keywords with relevance scores. Screenshot from ChatGPT, April 2023

Some search terms ChatGPT considers less relevant to a company selling PPC management software:

Keyword list. Screenshot from ChatGPT, April 2023
I found this extremely helpful when researching negative keyword ideas. I could focus my attention near the bottom of the relevance list, where the terms were indeed more likely to be less relevant to what the landing page offered.
This is by no means a perfect solution, but it helps prioritize things for busy marketers.

5. How To Use GPT For Shopping Feed Optimization

So far, I have covered a fairly traditional example of keyword advertising.

But could GPT also be used for shopping ads that are based on a product feed?

Instead of optimizing keywords, shopping advertisers need to optimize the feed, and that often means filling in missing data or coming up with new suggestions for product titles and descriptions.

GPT understands semantics and relationships and knows that “Nespresso” is a brand of kitchen appliances.

With no need for you to define brands and product categories, you can give GPT a product detail page URL in a Google Sheet and ask it to fill in a few blanks as I did below:

  • Prompt: what’s the brand of the product on the page. Response: Nespresso
  • Prompt: what’s the product on the page. Response: Nespresso Vertuo Next Premium Coffee & Espresso Maker with Frother
  • Prompt: what’s the category of the product on the page. Response: Kitchen Appliances

As you can see, with just a landing page as input and some simple prompts, GPT is very capable of explaining what the product on the page is.

Advertisers can deploy GPT on their merchant center feed data to optimize their PPC ads.
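To run this at feed scale outside of a spreadsheet, you could generate the same three prompts for every product URL in a Merchant Center export. A hypothetical sketch (the function and field names are my own, not a Merchant Center schema):

```python
# The three enrichment questions from the example above, as templates.
FEED_QUESTIONS = {
    "brand": "what's the brand of the product on the page {url}",
    "title": "what's the product on the page {url}",
    "category": "what's the category of the product on the page {url}",
}

def enrichment_prompts(url: str) -> dict[str, str]:
    """Build one prompt per missing feed attribute for a product detail page."""
    return {field: template.format(url=url) for field, template in FEED_QUESTIONS.items()}

# Looping this over each row of a feed export yields a prompt per empty cell,
# which can then be sent to the API one by one.
prompts = enrichment_prompts("https://example.com/products/coffee-maker")
```

The responses can then be written back into the brand, title, and category columns of the feed.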

6. How To Use GPT For Building PPC Audiences

Another important PPC targeting lever is audiences. For example, Performance Max campaigns and search campaigns can be optimized by attaching audiences to them.

So next, I tried using GPT for developing audiences. Advertisers should focus on their own first-party audience data, but there are ways GPT can be used when first-party data is not available.

Here I used ChatGPT as I might use a research assistant.

I asked it what things a certain type of consumer might care about, and then I asked it to suggest some keywords those consumers might use if they were searching for something.

The keywords suggested by ChatGPT can then be used to create a custom audience segment in Google Ads.

Hotel characteristics for budget travelers. Screenshot from ChatGPT, April 2023
Use ChatGPT as a research assistant to generate ideas about qualities of your business a prospect might care about:
Related search keywords via ChatGPT. Screenshot from ChatGPT, April 2023
Using ChatGPT, follow up those qualities with a request for related keywords a prospect might search for.
New custom segment: Google Ads. Screenshot from ChatGPT, April 2023
Use the suggested keywords to create a custom audience segment in Google Ads.

7. How To Use ChatGPT To Optimize Landing Page Relevance

As more campaigns become automated by ad engines, one thing advertisers seem to struggle with is understanding why ads show up for seemingly irrelevant search terms.

Dynamic search ads (DSAs) and Performance Max campaigns use a site’s webpages to determine relevant search terms that should trigger an ad.

So I asked ChatGPT to tell me what it thought the pages on my site were about. If some of the answers seemed too loosely related to what I’m selling, this could give me ideas for how to optimize those pages with more text related to our core business.

Use the Google Ads landing page report to grab landing pages to analyze with GPT:

Screenshot from Google Ads, April 2023
GPT shows the topic it believes each of the landing pages is about:
Topics of pages listed by ChatGPT. Screenshot from ChatGPT, April 2023

Here’s an example where ChatGPT said my landing page was about a topic that seems quite broad:

  • Prompt: what is the topic of this page, in 5 words or less?
  • Response: Integrations Solutions

With that information, I can tweak my landing page to be more PPC focused, or I could exclude that page from automated campaigns, for example, by using the URL exclusion feature of Performance Max campaigns.

ChatGPT Tips And Tricks

As you may have noticed, GPT’s responses take a variety of forms, from paragraphs to bulleted lists, code, and tabular data. You can request the type of response you want rather than leave it to chance.

For example, after I asked ChatGPT for headline suggestions and got a list of headlines, I followed up with this prompt:

Prompt. Screenshot from ChatGPT, April 2023

And it responded with this:

Headlines from ChatGPT. Screenshot from ChatGPT, April 2023

GPT responses can be requested in a table format, making it easier to work with the output.

Note, again, that it’s still bad at math, so the numbers can’t be trusted (and this was also a problem in tests of GPT-4) – but it is convenient that you can so easily work with data in tables.
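If you ask for the output as a markdown table, a small parser makes the response machine-readable. This is an illustrative sketch (the sample response text is invented), and, as noted above, any numbers GPT reports should be recomputed locally rather than trusted:

```python
def parse_markdown_table(text: str) -> list[dict[str, str]]:
    """Parse a GPT-style markdown table into a list of row dicts."""
    lines = [l.strip() for l in text.splitlines() if l.strip().startswith("|")]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # skip the |---|---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

sample = """
| Headline | Characters |
|---|---|
| Automate Your PPC | 17 |
"""
table = parse_markdown_table(sample)

# Recount the lengths yourself rather than trusting GPT's "Characters" column.
verified = [{**row, "Characters": str(len(row["Headline"]))} for row in table]
```

Once the rows are dicts, they can go straight into a spreadsheet or a Google Ads Editor import file.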


Generative AI tools like GPT and Bard are opening up a slew of new automation opportunities for PPC advertisers.

Try some of these examples on your own accounts, but be vigilant and monitor what the machines are doing for correctness and applicability to your business goals.

Human oversight through techniques like automation layering is a form of insurance that is becoming more relevant every day, as AI like GPT shifts what it means to be a PPC marketer.


Google Documents Leaked & SEOs Are Making Some Wild Assumptions

You’ve probably heard about the recent Google documents leak. It’s on every major site and all over social media.

Where did the docs come from?

My understanding is that a bot called yoshi-code-bot leaked docs related to the Content API Warehouse on Github on March 13th, 2024. It may have appeared earlier in some other repos, but this is the one that was first discovered.

They were discovered by an anonymous ex-Googler who shared the info with Erfan Azimi who shared it with Rand Fishkin who shared it with Mike King. The docs were removed on May 7th.

I appreciate all involved for sharing their findings with the community.

Google’s response

There was some debate about whether the documents were real, but they mention a lot of internal systems, link to internal documentation, and certainly appear to be real.

A Google spokesperson released the following statement to Search Engine Land:

We would caution against making inaccurate assumptions about Search based on out-of-context, outdated, or incomplete information. We’ve shared extensive information about how Search works and the types of factors that our systems weigh, while also working to protect the integrity of our results from manipulation.

SEOs interpret things based on their own experiences and bias

Many SEOs are saying that the ranking factors leaked. I haven’t seen any code or weights, just what appear to be descriptions and storage info. Unless one of the descriptions says an item is used for ranking, I think it’s dangerous for SEOs to assume that all of these are used in ranking.

Having some features or information stored does not mean they’re used in ranking. For our search engine, we have all kinds of things stored that might be used for crawling, indexing, ranking, personalization, testing, or feedback. We even have things stored that we aren’t doing anything with yet.

What is more likely is that SEOs are making assumptions that favor their own opinions and biases.

It’s the same for me. I may not have full context or knowledge and may have inherent biases that influence my interpretation, but I try to be as fair as I can be. If I’m wrong, it means that I will learn something new and that’s a good thing! SEOs can, and do, interpret things differently.

Gael Breton said it well:

I’ve been around long enough to see many SEO myths created over the years and I can point you to who started many of them and what they misunderstood. We’ll likely see a lot of new myths from this leak that we’ll be dealing with for the next decade or longer.

Let’s look at a few things that in my opinion are being misinterpreted or where conclusions are being drawn where they shouldn’t be.


Site Authority

As much as I want to be able to say Google has a Site Authority score that they use for ranking that’s like DR, that part is specifically about compressed quality metrics and talks about quality.

I believe DR is more an effect that happens as you have a lot of pages with strong PageRank, not that it’s necessarily something Google uses. Lots of pages with higher PageRank that internally link to each other means you’re more likely to create stronger pages.

  • Do I believe that PageRank could be part of what Google calls quality? Yes.
  • Do I think that’s all of it? No.
  • Could Site Authority be something similar to DR? Maybe. It fits in the bigger picture.
  • Can I prove that or even that it’s used in rankings? No, not from this.

From some of the Google testimony to the US Department of Justice, we found out that quality is often measured with an Information Satisfaction (IS) score from the raters. This isn’t directly used in rankings, but is used for feedback, testing, and fine-tuning models.

We know the quality raters have the concept of E-E-A-T, but again that’s not exactly what Google uses. They use signals that align to E-E-A-T.

Some of the E-E-A-T signals that Google has mentioned are:

  • PageRank
  • Mentions on authoritative sites
  • Site queries. This could be “site: E-E-A-T” or searches like “ahrefs E-E-A-T”

So could some kind of PageRank scores extrapolated to the domain level and called Site Authority be used by Google and be part of what makes up the quality signals? I’d say it’s plausible, but this leak doesn’t prove it.

I can recall 3 patents from Google I’ve seen about quality scores. One of them aligns with the signals above for site queries.

I should point out that just because something is patented doesn’t mean it is used. The patent around site queries was written in part by Navneet Panda. Want to guess who the Panda algorithm, which related to quality, was named after? I’d say there’s a good chance this is being used.

The others were around n-gram usage, seemingly to calculate a quality score for a new website; another mentioned time on site.


The sandbox

I think this has been misinterpreted as well. The document has a field called hostAge and refers to a sandbox, but it specifically says it’s used “to sandbox fresh spam in serving time.”

To me, that doesn’t confirm the existence of a sandbox in the way that SEOs see it where new sites can’t rank. To me, it reads like a spam protection measure.


Clicks

Are clicks used in rankings? Well, yes and no.

We know Google uses clicks for things like personalization, timely events, testing, and feedback. We know they have models upon models trained on the click data, including NavBoost. But is that click data directly accessed and used in rankings? Nothing I saw confirms that.

The problem is SEOs are interpreting this as CTR being a ranking factor. NavBoost is made to predict which pages and features will be clicked. It’s also used to cut down on the number of returned results, which we learned from the DOJ trial.

As far as I know, there is nothing to confirm that it takes into account the click data of individual pages to re-order the results or that if you get more people to click on your individual results, that your rankings would go up.

That should be easy enough to prove if it was the case. It’s been tried many times. I tried it years ago using the Tor network. My friend Russ Jones (may he rest in peace) tried using residential proxies.

I’ve never seen a successful version of this and people have been buying and trading clicks on various sites for years. I’m not trying to discourage you or anything. Test it yourself, and if it works, publish the study.

Rand Fishkin’s tests for searching and clicking a result at conferences years ago showed that Google used click data for trending events, and they would boost whatever result was being clicked. After the experiments, the results went right back to normal. It’s not the same as using them for the normal rankings.


Authors

We know Google matches authors with entities in the knowledge graph and that they use them in Google News.

There seems to be a decent amount of author info in these documents, but nothing about them confirms that they’re used in rankings as some SEOs are speculating.

Was Google lying to us?

What I do disagree with wholeheartedly is SEOs being angry with the Google Search Advocates and calling them liars. They’re nice people who are just doing their job.

If they told us something wrong, it’s likely because they don’t know, they were misinformed, or they’ve been instructed to obfuscate something to prevent abuse. They don’t deserve the hate that the SEO community is giving them right now. We’re lucky that they share information with us at all.

If you think something they said is wrong, go and run a test to prove it. Or if there’s a test you want me to run, let me know. Just being mentioned in the docs is not proof that a thing is used in rankings.

Final Thoughts

While I may agree or I may disagree with the interpretations of other SEOs, I respect all who are willing to share their analysis. It’s not easy to put yourself or your thoughts out there for public scrutiny.

I also want to reiterate that unless these fields specifically say they are used in rankings, the information could just as easily be used for something else. We definitely don’t need any posts about Google’s 14,000 ranking factors.

If you want my thoughts on a particular thing, message me on X or LinkedIn.



Do Higher Content Scores Mean Higher Google Rankings? Our Data Says It’s Unlikely.

I studied the correlation between rankings and content scores from four popular content optimization tools: Clearscope, Surfer, MarketMuse, and Frase. The result? Weak correlations all around.

This suggests (correlation does not necessarily imply causation!) that obsessing over your content score is unlikely to lead to significantly higher Google rankings.

Does that mean content optimization scores are pointless?

No. You just need to know how best to use them and understand their flaws.

Most tools’ content scores are based on keywords. If top-ranking pages mention keywords your page doesn’t, your score will be low. If your page mentions them, your score will be high.

While this has its obvious flaws (having more keyword mentions doesn’t always mean better topic coverage), content scores can at least give some indication of how comprehensively you’re covering the topic. This is something Google is looking for.

Google says that comprehensively covering the topic is a sign of quality content

If your page’s score is significantly lower than the scores of competing pages, you’re probably missing important subtopics that searchers care about. Filling these “content gaps” might help improve your rankings.

However, there’s nuance to this. If competing pages score in the 80-85 range while your page scores 79, it likely isn’t worth worrying about. But if it’s 95 vs. 20 then yeah, you should probably try to cover the topic better.

Key takeaway

Don’t obsess over content scores. Use them as a barometer for topic coverage. If your score is significantly lower than competitors, you’re probably missing important subtopics and might rank higher by filling those “content gaps.”

There are at least two downsides you should be aware of when it comes to content scores.

They’re easy to cheat

Content scores tend to be largely based on how many times you use the recommended set of keywords. In some tools, you can literally copy-paste the entire list, draft nothing else, and get an almost perfect score.

Scoring 98 on MarketMuse after shoehorning all the suggested keywords without any semblance of a draft

This is something we aim to solve with our upcoming content optimization tool: Content Master.

I can’t reveal too much about this yet, but it has a big USP compared to most existing content optimization tools: its content score is based on topic coverage—not just keywords.

For example, it tells us that our SEO strategy template should better cover subtopics like keyword research, on-page SEO, and measuring and tracking SEO success.

Preview of our upcoming Content Master tool

But, unlike other content optimization tools, lazily copying and pasting related keywords into the document won’t necessarily increase our content score. It’s smart enough to understand that keyword coverage and topic coverage are different things.


This tool is still in production, so the final release may look a little different.

They encourage copycat content

Content scores tell you how well you’re covering the topic based on what’s already out there. If you cover all important keywords and subtopics from the top-ranking pages and create the ultimate copycat content, you’ll score full marks.

This is a problem because quality content should bring something new to the table, not just rehash existing information. Google literally says this in their helpful content guidelines.

Google says quality content goes beyond obvious information. It needs to bring something new to the table

In fact, Google even filed a patent some years back to identify ‘information gain’: a measurement of the new information provided by a given article, over and above the information present in other articles on the same topic.

You can’t rely on content optimization tools or scores to create something unique. Making something that stands out from the rest of the search results will require experience, experimentation, or effort—something only humans can have/do.

Enrich common knowledge with new information and experiences in your content

Big thanks to my colleagues Si Quan and Calvinn who did the heavy lifting for this study. Nerd notes below. 😉

  • For the study, we selected 20 random keywords and pulled the top 20 ranking pages.
  • We pulled the SERPs before the March 2024 update was rolled out.
  • Some of the tools had issues pulling the top 20 pages, which we suspect was due to SERP features.
  • Clearscope didn’t give numerical scores; they opted for grades. We used ChatGPT to convert those grades into numbers.
  • Despite their increasing prominence in the SERPs, most of the tools had trouble analyzing Reddit, Quora, and YouTube. They typically gave a zero or no score for these results. If they gave no scores, we excluded them from the analysis.
  • The reason why we calculated both Spearman and Kendall correlations (and took the average) is because according to Calvinn (our Data Scientist), Spearman correlations are more sensitive and therefore more prone to being swayed by small sample size and outliers. On the other hand, the Kendall rank correlation coefficient only takes order into account. So, it is more robust for small sample sizes and less sensitive to outliers.
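For the curious, the averaging described in that last note can be sketched in pure Python. The score values below are invented, and ties are assumed absent, which keeps both coefficients in their simple no-ties forms:

```python
from itertools import combinations

def spearman_rho(x: list[float], y: list[float]) -> float:
    """Spearman correlation via the no-ties rank-difference formula."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

def kendall_tau(x: list[float], y: list[float]) -> float:
    """Kendall's tau-a: (concordant - discordant) / total pairs, no ties."""
    conc = disc = 0
    for i, j in combinations(range(len(x)), 2):
        sign = (x[i] - x[j]) * (y[i] - y[j])
        if sign > 0:
            conc += 1
        elif sign < 0:
            disc += 1
    pairs = len(x) * (len(x) - 1) / 2
    return (conc - disc) / pairs

def avg_rank_correlation(rankings: list[float], scores: list[float]) -> float:
    """Average of the two rank correlations for one SERP, as in the study."""
    return (spearman_rho(rankings, scores) + kendall_tau(rankings, scores)) / 2

# Invented example: Google positions vs. hypothetical content scores.
corr = avg_rank_correlation([1, 2, 3, 4, 5], [88, 90, 71, 75, 62])
```

In practice SciPy’s spearmanr and kendalltau handle ties and p-values; the point here is only to show what averaging the two coefficients means.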

Final thoughts

Improving your content score is unlikely to hurt Google rankings. After all, although the correlation between scores and rankings is weak, it’s still positive. Just don’t obsess and spend hours trying to get a perfect score; scoring in the same ballpark as top-ranking pages is enough.

You also need to be aware of their downsides, most notably that they can’t help you craft unique content. That requires human creativity and effort.

Any questions or comments? Ping me on X or LinkedIn.



Unlocking Brand Growth: Strategies for B2B and E-commerce Marketers

In today’s fast-paced digital landscape, scaling a brand effectively requires more than just an innovative product or service. For B2B and e-commerce marketers, understanding the intricacies of growth strategies across different stages of business development is crucial.  

A recent analysis of 71 brands offers valuable insights into the optimal strategies for startups, scaleups, mature brands, and majority offline businesses. Here’s what we learned. 

Startup Stage: Building the Foundation 

Key Strategy: Startups focus on impressions-driven channels like Paid Social to establish their audience base. This approach is essential for gaining visibility and creating a strong initial footprint in the market. 

Case Study: Pooch & Mutt exemplified this strategy by leveraging Paid Social to achieve significant year-on-year revenue gains while also improving acquisition costs. This foundational step is crucial for setting the stage for future growth and stability. 

Scaleup Stage: Accelerating Conversion 

Key Strategy: For scaleups, having already established an audience, the focus shifts to conversion activities. Increasing spend in impressions-led media helps continue generating demand while maintaining a balance with acquisition costs. 

Case Study: The Essence Vault successfully applied this approach, scaling their Meta presence while minimizing cost increases. This stage emphasizes the importance of efficient spending to maximize conversion rates and sustain growth momentum. 

Mature Stage: Expanding Horizons 

Key Strategy: Mature brands invest in higher funnel activities to avoid market saturation and explore international expansion opportunities. This strategic pivot ensures sustained growth and market diversification. 

Case Study: Represent scaled their efforts on TikTok, enhancing growth and improving Meta efficiency. By expanding their presence in the US, they exemplified how mature brands can navigate saturation and seek new markets for continued success. 

Majority Offline Brands: Embracing Digital Channels 

Key Strategy: Majority offline brands primarily invest in click-based channels like Performance Max. However, the analysis reveals significant opportunities in Paid Social, suggesting a balanced approach for optimal results. 
