
14 Must-Know Tips For Crawling Millions Of Webpages


Crawling enterprise sites has all the complexities of any normal crawl plus several additional factors that need to be considered before beginning the crawl.

The following approaches show how to accomplish a large-scale crawl and achieve your objectives, whether it’s part of an ongoing checkup or a site audit.

1. Make The Site Ready For Crawling

An important thing to consider before crawling is the website itself.

It’s helpful to fix issues that may slow down a crawl before starting the crawl.

It may sound counterintuitive to fix a site before crawling it to find what needs fixing, but when it comes to really big sites, a small problem multiplied by five million pages becomes a significant problem.

Adam Humphreys, the founder of Making 8 Inc. digital marketing agency, shared a clever solution he uses for identifying what is causing a slow TTFB (time to first byte), a metric that measures how responsive a web server is.


A byte is a unit of data, so TTFB is a measurement of how long it takes for the first byte of data to be delivered to the browser.

TTFB measures the amount of time from when a server receives a request for a file to when the first byte is delivered to the browser, thus providing a measurement of how fast the server is.

A way to measure TTFB is to enter a URL in Google’s PageSpeed Insights tool, which is powered by Google’s Lighthouse measurement technology.

Screenshot from PageSpeed Insights Tool, July 2022

Adam shared: “So a lot of times, Core Web Vitals will flag a slow TTFB for pages that are being audited. To get a truly accurate TTFB reading one can compare the raw text file, just a simple text file with no html, loading up on the server to the actual website.

Throw some Lorem ipsum or something on a text file and upload it then measure the TTFB. The idea is to see server response times in TTFB and then isolate what resources on the site are causing the latency.

More often than not it’s excessive plugins that people love. I refresh both Lighthouse in incognito and web.dev/measure to average out measurements. When I see 30–50 plugins or tons of JavaScript in the source code, it’s almost an immediate problem before even starting any crawling.”

When Adam says he’s refreshing the Lighthouse scores, what he means is that he’s testing the URL multiple times, because every test yields a slightly different score (the speed at which data is routed through the Internet is constantly changing, just like the speed of road traffic is constantly changing).


So what Adam does is collect multiple TTFB scores and average them to come up with a final score that then tells him how responsive a web server is.
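
If you want to reproduce Adam's comparison outside of Lighthouse, a rough version can be scripted. The sketch below uses the third-party requests library and placeholder URLs; response.elapsed only approximates TTFB (it measures time until the response headers arrive), so treat the numbers as relative rather than lab-grade.

```python
# A minimal sketch of the comparison described above: measure the TTFB of a
# bare text file against a real page on the same server, averaging several
# runs. URLs are hypothetical placeholders; requests is a third-party library.
import requests

def average_ttfb(url, runs=5):
    """Rough TTFB: time until response headers arrive, averaged over several runs."""
    timings = []
    for _ in range(runs):
        # stream=True stops requests from downloading the body up front,
        # so elapsed approximates time-to-first-byte rather than full load.
        response = requests.get(url, stream=True, timeout=30)
        timings.append(response.elapsed.total_seconds())
        response.close()
    return sum(timings) / len(timings)

if __name__ == "__main__":
    plain_file = "https://www.example.com/lorem.txt"   # bare text file uploaded to the server
    real_page = "https://www.example.com/"             # the actual website
    print(f"Plain text file TTFB: {average_ttfb(plain_file):.3f}s")
    print(f"Real page TTFB:       {average_ttfb(real_page):.3f}s")
```

A large gap between the two averages suggests the latency is coming from what the site loads (plugins, scripts), not from the server itself.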

If the server is not responsive, the PageSpeed Insights tool can provide an idea of why the server is not responsive and what needs to be fixed.

2. Ensure Full Access To Server: Whitelist Crawler IP

Firewalls and CDNs (Content Delivery Networks) can block or slow down an IP from crawling a website.

So it’s important to identify all security plugins, server-level intrusion prevention software, and CDNs that may impede a site crawl.

Typical WordPress security plugins where the crawler IP may need to be added to the whitelist are Sucuri Web Application Firewall (WAF) and Wordfence.
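
Before launching a multi-day crawl, it’s worth confirming that the crawler’s IP and user agent actually get through. A minimal pre-flight check might look like the sketch below, which uses placeholder URLs, an assumed user-agent string, and the third-party requests library.

```python
# A minimal sketch for verifying that the crawler's IP and user agent are not
# being blocked before a large crawl starts. URL and user agent values are
# hypothetical placeholders; requests is a third-party library.
import requests

CRAWLER_UA = "Screaming Frog SEO Spider"   # whatever user agent your crawler sends
TEST_URLS = [
    "https://www.example.com/robots.txt",
    "https://www.example.com/",
]

def check_access(urls, user_agent):
    for url in urls:
        response = requests.get(url, headers={"User-Agent": user_agent}, timeout=30)
        if response.status_code in (403, 429, 503):
            print(f"BLOCKED or throttled ({response.status_code}): {url}")
        else:
            print(f"OK ({response.status_code}): {url}")

if __name__ == "__main__":
    check_access(TEST_URLS, CRAWLER_UA)
```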

3. Crawl During Off-Peak Hours

Crawling a site should ideally be unintrusive.


Under the best-case scenario, a server should be able to handle being aggressively crawled while also serving web pages to actual site visitors.

But on the other hand, it could be useful to test how well the server responds under load.

This is where real-time analytics or server log access is useful, because you can immediately see how the crawl may be affecting site visitors, although a slowing crawl pace and 503 server responses are also clues that the server is under strain.

If it’s indeed the case that the server is straining to keep up then make note of that response and crawl the site during off-peak hours.

A CDN should in any case mitigate the effects of an aggressive crawl.
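
If you build or configure your own crawl, backing off when 503s appear is a simple courtesy. Here is a minimal sketch of that idea, with placeholder URLs and the third-party requests library; a real crawler’s throttling settings accomplish the same thing.

```python
# A minimal sketch of polite crawling: slow down when the server starts
# returning 503s, the "server under strain" signal described above.
import time
import requests

def crawl_politely(urls, base_delay=0.5, max_delay=30.0):
    delay = base_delay
    for url in urls:
        response = requests.get(url, timeout=30)
        if response.status_code == 503:
            # Server is struggling: double the delay (capped) before continuing,
            # or stop entirely and reschedule the crawl for off-peak hours.
            delay = min(delay * 2, max_delay)
            print(f"503 from {url}, backing off to {delay:.1f}s between requests")
        else:
            delay = base_delay
        time.sleep(delay)

if __name__ == "__main__":
    crawl_politely(["https://www.example.com/page-1", "https://www.example.com/page-2"])
```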

4. Are There Server Errors?

The Google Search Console Crawl Stats report should be the first place to research if the server is having trouble serving pages to Googlebot.


Any issues in the Crawl Stats report should have the cause identified and fixed before crawling an enterprise-level website.

Server error logs are a gold mine of data that can reveal a wide range of errors that may affect how well a site is crawled. Of particular importance is being able to debug otherwise invisible PHP errors.
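
A simple way to mine those logs is to count recurring PHP errors. The sketch below assumes an Apache-style error log at a hypothetical path; adjust both the path and the matching strings to your server.

```python
# A minimal sketch for surfacing PHP errors from a server error log.
# The log path and log format are assumptions; adjust to your host's setup.
from collections import Counter

LOG_PATH = "/var/log/apache2/error.log"   # hypothetical path

def count_php_errors(log_path):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if "PHP Fatal error" in line or "PHP Warning" in line:
                # Group by the message portion after the "PHP" label.
                message = line.split("PHP", 1)[1].strip()
                counts[message[:120]] += 1
    return counts

if __name__ == "__main__":
    for message, count in count_php_errors(LOG_PATH).most_common(10):
        print(f"{count:>6}  {message}")
```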

5. Server Memory

Perhaps something that’s not routinely considered for SEO is the amount of RAM (random access memory) that a server has.

RAM is like short-term memory, a place where a server stores information that it’s using in order to serve web pages to site visitors.

A server with insufficient RAM will become slow.

So if a server becomes slow during a crawl or doesn’t seem to be able to cope with the crawling, this could be an SEO problem that affects how well Google is able to crawl and index web pages.


Take a look at how much RAM the server has.

A VPS (virtual private server) may need a minimum of 1GB of RAM.

However, 2GB to 4GB of RAM may be recommended if the website is an online store with high traffic.

More RAM is generally better.

If the server has a sufficient amount of RAM but the server slows down then the problem might be something else, like the software (or a plugin) that’s inefficient and causing excessive memory requirements.
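
If you can run Python on the machine in question (a server you have shell access to, or the crawling computer itself), a quick check of memory might look like the sketch below. It uses the third-party psutil library, and the warning threshold is a rough reading of the figures above.

```python
# A minimal sketch for checking how much RAM a machine has and has available.
# psutil is a third-party library (pip install psutil).
import psutil

def report_memory():
    memory = psutil.virtual_memory()
    total_gb = memory.total / 1024 ** 3
    available_gb = memory.available / 1024 ** 3
    print(f"Total RAM:     {total_gb:.1f} GB")
    print(f"Available RAM: {available_gb:.1f} GB")
    if total_gb < 2:
        # Per the guidance above, a busy store or a heavy crawl may want 2-4 GB.
        print("Warning: under 2 GB of RAM may struggle under load.")

if __name__ == "__main__":
    report_memory()
```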

6. Periodically Verify The Crawl Data

Keep an eye out for crawl anomalies as the website is crawled.


Sometimes the crawler may report that the server was unable to respond to a request for a web page, generating something like a 503 Service Unavailable server response message.

So it’s useful to pause the crawl and check out what’s going on that might need fixing in order to proceed with a crawl that provides more useful information.

Sometimes reaching the end of the crawl isn’t the goal in itself.

The crawl itself is an important data point, so don’t feel frustrated that the crawl needs to be paused in order to fix something, because the discovery is a good thing.
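
A quick way to spot-check an in-progress crawl is to export what has been crawled so far and count response codes. The sketch below assumes a CSV export with a "Status Code" column, which is typical of crawler exports but will vary by tool.

```python
# A minimal sketch for spot-checking crawl data mid-crawl: count response
# codes in an exported CSV and flag 5xx responses. The file name and the
# "Status Code" column are assumptions based on a typical crawler export.
import csv
from collections import Counter

EXPORT_PATH = "internal_all.csv"   # hypothetical crawl export

def summarize_status_codes(path):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            counts[row.get("Status Code", "unknown")] += 1
    return counts

if __name__ == "__main__":
    counts = summarize_status_codes(EXPORT_PATH)
    for code, count in sorted(counts.items()):
        print(f"{code}: {count}")
    errors_5xx = sum(c for code, c in counts.items() if str(code).startswith("5"))
    if errors_5xx:
        print(f"\n{errors_5xx} 5xx responses so far - consider pausing the crawl and investigating.")
```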

7. Configure Your Crawler For Scale

Out of the box, a crawler like Screaming Frog may be set up for speed, which is probably great for the majority of users. But it’ll need to be adjusted in order to crawl a large website with millions of pages.

Screaming Frog uses RAM for its crawl by default, which is fine for a normal site but becomes less workable for an enterprise-sized website.


This shortcoming is easy to overcome by adjusting the storage setting in Screaming Frog.

This is the menu path for adjusting the storage settings:

Configuration > System > Storage > Database Storage

If possible, it’s highly recommended (but not absolutely required) to use an internal SSD (solid-state drive).

Most computers use a standard hard drive with moving parts inside.

An SSD is the most advanced form of storage drive and can transfer data at speeds from 10 to 100 times faster than a regular hard drive.

Using a computer with an SSD will help in achieving an amazingly fast crawl, which will be necessary for efficiently downloading millions of web pages.


To ensure an optimal crawl, allocate 4 GB of RAM, and no more than 4 GB, for a crawl of up to 2 million URLs.

For crawls of up to 5 million URLs, it is recommended that 8 GB of RAM are allocated.
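
Those figures boil down to a simple lookup. Here is a minimal Python helper that encodes them; the thresholds come directly from the recommendations above, and the cloud suggestion beyond 5 million URLs reflects the advice later in this article.

```python
# A minimal helper that turns the RAM guidance above into a quick calculation:
# roughly 4 GB allocated for crawls up to 2 million URLs and 8 GB for crawls
# up to 5 million, with cloud crawling suggested beyond that.
def suggested_ram_gb(url_count):
    if url_count <= 2_000_000:
        return 4
    if url_count <= 5_000_000:
        return 8
    return None   # beyond 5 million URLs, consider crawling from the cloud instead

if __name__ == "__main__":
    for urls in (500_000, 2_000_000, 5_000_000, 12_000_000):
        ram = suggested_ram_gb(urls)
        label = f"{ram} GB" if ram else "crawl from the cloud"
        print(f"{urls:>10,} URLs -> {label}")
```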

Adam Humphreys shared: “Crawling sites is incredibly resource intensive and requires a lot of memory. A dedicated desktop or renting a server is a much faster method than a laptop.

I once spent almost two weeks waiting for a crawl to complete. I learned from that and got partners to build remote software so I can perform audits anywhere at any time.”

8. Connect To A Fast Internet

If you are crawling from your office then it’s paramount to use the fastest Internet connection possible.

Using the fastest available Internet can mean the difference between a crawl that takes hours to complete and a crawl that takes days.


In general, the fastest available Internet is over an ethernet connection and not over a Wi-Fi connection.

If your Internet access is over Wi-Fi, it’s still possible to get an ethernet connection by moving a laptop or desktop closer to the Wi-Fi router, which has ethernet ports on the rear.

This seems like one of those “it goes without saying” pieces of advice but it’s easy to overlook because most people use Wi-Fi by default, without really thinking about how much faster it would be to connect the computer straight to the router with an ethernet cord.
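
To see why the connection matters, here is a rough back-of-the-envelope calculation; the page count, average page size, and connection speeds are illustrative assumptions, not measurements, and they ignore server response overhead.

```python
# A rough estimate of how connection speed changes total transfer time for a
# large crawl. All figures below are illustrative assumptions.
def crawl_hours(pages, avg_page_mb, mbps):
    total_megabits = pages * avg_page_mb * 8
    return total_megabits / mbps / 3600

if __name__ == "__main__":
    pages, avg_page_mb = 5_000_000, 2.0   # 5 million pages at roughly 2 MB each
    for label, mbps in (("Wi-Fi at 50 Mbps", 50), ("Ethernet at 500 Mbps", 500)):
        print(f"{label}: ~{crawl_hours(pages, avg_page_mb, mbps):.0f} hours of transfer time")
```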

9. Cloud Crawling

Another option, particularly for extraordinarily large and complex site crawls of over 5 million web pages, is to crawl from a cloud server.

All normal constraints from a desktop crawl are off when using a cloud server.

Ash Nallawalla, an Enterprise SEO specialist and author, has over 20 years of experience working with some of the world’s biggest enterprise technology firms.


So I asked him about crawling millions of pages.

He responded that he recommends crawling from the cloud for sites with over 5 million URLs.

Ash shared: “Crawling huge websites is best done in the cloud. I do up to 5 million URIs with Screaming Frog on my laptop in database storage mode, but our sites have far more pages, so we run virtual machines in the cloud to crawl them.

Our content is popular with scrapers for competitive data intelligence reasons, more so than copying the articles for their textual content.

We use firewall technology to stop anyone from collecting too many pages at high speed. It is good enough to detect scrapers acting in so-called “human emulation mode.” Therefore, we can only crawl from whitelisted IP addresses and a further layer of authentication.”

Adam Humphreys agreed with the advice to crawl from the cloud, for the reasons he mentioned earlier: crawling is incredibly resource intensive, a dedicated desktop or rented server is much faster than a laptop, and remote software makes it possible to perform audits anywhere, at any time.

10. Partial Crawls

One technique for crawling large websites is to divide the site into parts and crawl each part in sequence so that the result is a sectional view of the website.

Another way to do a partial crawl is to divide the site into parts and crawl on a continual basis so that the snapshot of each section is not only kept up to date but any changes made to the site can be instantly viewed.

So rather than crawling the entire site in one rolling pass, do partial crawls of each section on a time-based schedule.

This is an approach that Ash strongly recommends.


Ash explained: “I have a crawl going on all the time. I am running one right now on one product brand. It is configured to stop crawling at the default limit of 5 million URLs.”

When I asked him the reason for a continual crawl, he said it was because of issues beyond his control, which can happen with businesses of this size where many stakeholders are involved.

Ash said: “For my situation, I have an ongoing crawl to address known issues in a specific area.”
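
As a minimal sketch of the sectioning idea, the Python below splits a URL list by top-level path and hands each section to the crawler on its own schedule. The sectioning logic and the crawl_section() stub are hypothetical placeholders for whatever crawler you actually use.

```python
# A minimal sketch of partial crawling: split the site into sections (here,
# by top-level URL path) and crawl one section per scheduled run, cycling
# through them so each snapshot stays reasonably fresh.
from collections import defaultdict
from urllib.parse import urlparse

def split_into_sections(urls):
    sections = defaultdict(list)
    for url in urls:
        path = urlparse(url).path.strip("/")
        section = path.split("/", 1)[0] if path else "homepage"
        sections[section].append(url)
    return sections

def crawl_section(name, urls):
    # Placeholder: hand this batch of URLs to your crawler of choice.
    print(f"Crawling section '{name}' with {len(urls)} URLs")

if __name__ == "__main__":
    all_urls = [
        "https://www.example.com/products/widget-1",
        "https://www.example.com/products/widget-2",
        "https://www.example.com/blog/post-1",
    ]
    for name, urls in split_into_sections(all_urls).items():
        crawl_section(name, urls)
```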

11. Overall Snapshot: Limited Crawls

A way to get a high-level view of what a website looks like is to limit the crawl to just a sample of the site.

This is also useful for competitive intelligence crawls.

For example, on a Your Money Or Your Life project I worked on I crawled about 50,000 pages from a competitor’s website to see what kinds of sites they were linking out to.


I used that data to convince the client that their outbound linking patterns were poor and showed them the high-quality sites their top-ranked competitors were linking to.

So sometimes, a limited crawl can yield enough of a certain kind of data to give an overall idea of the health of the entire site.
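
If you run a similar limited crawl, the outbound-link analysis can be scripted against the crawl export. The sketch below assumes a CSV of external links with a "Destination" column, which is an assumption that will vary by crawler.

```python
# A minimal sketch of the outbound-link analysis described above: take an
# exported list of external links from a limited crawl and count the domains
# being linked to. File name and column name are assumptions.
import csv
from collections import Counter
from urllib.parse import urlparse

EXPORT_PATH = "outlinks.csv"   # hypothetical export of external links

def top_linked_domains(path, limit=25):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            domain = urlparse(row.get("Destination", "")).netloc
            if domain:
                counts[domain] += 1
    return counts.most_common(limit)

if __name__ == "__main__":
    for domain, count in top_linked_domains(EXPORT_PATH):
        print(f"{count:>6}  {domain}")
```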

12. Crawl For Site Structure Overview

Sometimes one only needs to understand the site structure.

To do this faster, one can set the crawler not to crawl external links or internal images.

There are other crawler settings that can be un-ticked in order to produce a faster crawl so that the only thing the crawler is focusing on is downloading the URL and the link structure.
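
For illustration, a bare-bones structure-only crawl might look like the Python sketch below: it follows internal links only, skips external URLs, and records just the link graph. It uses the third-party requests and beautifulsoup4 libraries with a placeholder start URL, and a real crawl should also respect robots.txt and rate limits.

```python
# A minimal sketch of a structure-only crawl: collect internal links only and
# record the URL-to-URL link graph, ignoring images and external URLs.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

START_URL = "https://www.example.com/"   # hypothetical start URL

def crawl_structure(start_url, max_pages=500):
    site = urlparse(start_url).netloc
    queue, seen, edges = [start_url], {start_url}, []
    while queue and len(seen) <= max_pages:
        url = queue.pop(0)
        try:
            response = requests.get(url, timeout=30)
        except requests.RequestException:
            continue
        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            target = urljoin(url, anchor["href"]).split("#")[0]
            if urlparse(target).netloc != site:
                continue   # skip external links entirely
            edges.append((url, target))
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return edges

if __name__ == "__main__":
    for source, target in crawl_structure(START_URL)[:20]:
        print(f"{source} -> {target}")
```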

13. How To Handle Duplicate Pages And Canonicals

Unless there’s a reason for indexing duplicate pages, it can be useful to set the crawler to ignore URL parameters and other URLs that are duplicates of a canonical URL.


It’s possible to set a crawler to only crawl canonical pages. But if someone has set paginated pages to canonicalize to the first page in the sequence, you’ll never discover this error.

For a similar reason, at least on the initial crawl, one might want to disobey noindex tags in order to identify instances of the noindex directive on pages that should be indexed.
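
Both checks can also be run against a crawl export after the fact. The sketch below assumes column names similar to a typical crawler export and a "page=" pagination parameter; both are assumptions to adjust for your crawler and your site.

```python
# A minimal sketch of the canonical checks described above, run against a
# crawl export: flag URLs whose canonical points elsewhere, and flag paginated
# URLs that canonicalize away their pagination parameter.
import csv

EXPORT_PATH = "internal_html.csv"   # hypothetical crawl export

def audit_canonicals(path):
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            url = row.get("Address", "")
            canonical = row.get("Canonical Link Element 1", "")
            if canonical and canonical != url:
                print(f"Canonicalized elsewhere: {url} -> {canonical}")
            if "page=" in url and canonical and "page=" not in canonical:
                print(f"Paginated URL canonicalized to first page: {url}")

if __name__ == "__main__":
    audit_canonicals(EXPORT_PATH)
```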

14. See What Google Sees

As you’ve no doubt noticed, there are many different ways to crawl a website consisting of millions of web pages.

Crawl budget is the amount of resources Google devotes to crawling a website for indexing.

The more webpages that are successfully indexed, the more pages have the opportunity to rank.

Small sites don’t really have to worry about Google’s crawl budget.


But maximizing Google’s crawl budget is a priority for enterprise websites.

In the previous scenario illustrated above, I advised against respecting noindex tags.

For this kind of crawl, however, you will actually want to obey noindex directives, because the goal is to get a snapshot of the website that tells you how Google sees the entire site.

Google Search Console provides lots of information, but crawling the website yourself with a user agent disguised as Googlebot may yield useful information that helps get more of the right pages indexed while revealing which pages Google might be wasting crawl budget on.

For that kind of crawl, it’s important to set the crawler user agent to Googlebot, set the crawler to obey robots.txt, and set the crawler to obey the noindex directive.

That way, if the site is set to not show certain page elements to Googlebot you’ll be able to see a map of the site as Google sees it.
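
As a rough sketch of those three settings outside of a crawler’s interface, the Python below fetches pages with a Googlebot user agent, honors robots.txt, and reports noindex directives. The URLs are placeholders, requests and beautifulsoup4 are third-party libraries, and this only mimics the fetch rules, not Google’s rendering.

```python
# A minimal sketch of "seeing what Google sees": fetch pages with a Googlebot
# user agent, skip anything robots.txt disallows, and record pages carrying a
# noindex directive.
from urllib import robotparser
import requests
from bs4 import BeautifulSoup

GOOGLEBOT_UA = "Googlebot"
SITE = "https://www.example.com"   # hypothetical site

def audit_as_googlebot(urls):
    robots = robotparser.RobotFileParser(SITE + "/robots.txt")
    robots.read()
    for url in urls:
        if not robots.can_fetch(GOOGLEBOT_UA, url):
            print(f"Blocked by robots.txt: {url}")
            continue
        response = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=30)
        soup = BeautifulSoup(response.text, "html.parser")
        meta = soup.find("meta", attrs={"name": "robots"})
        if meta and "noindex" in meta.get("content", "").lower():
            print(f"noindex: {url}")
        else:
            print(f"Indexable ({response.status_code}): {url}")

if __name__ == "__main__":
    audit_as_googlebot([SITE + "/", SITE + "/signup"])
```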


This is a great way to diagnose potential issues such as discovering pages that should be crawled but are getting missed.

For other sites, Google might be finding its way to pages that are useful to users but might be perceived as low quality by Google, like pages with sign-up forms.

Crawling with the Googlebot user agent is useful for understanding how Google sees the site, and it helps maximize the crawl budget.

Beating The Learning Curve

One can crawl enterprise websites and learn how to crawl them the hard way. These fourteen tips should hopefully shave some time off the learning curve and make you more prepared to take on those enterprise-level clients with gigantic websites.


Featured Image: SvetaZi/Shutterstock



Reddit Post Ranks On Google In 5 Minutes


Google’s Danny Sullivan disputed the assertions made in a Reddit discussion that Google is showing a preference for Reddit in the search results. But a Redditor’s example proves that it’s possible for a Reddit post to rank in the top ten of the search results within minutes and to actually improve rankings to position #2 a week later.

Discussion About Google Showing Preference To Reddit

A Redditor (gronetwork) complained that Google is sending so many visitors to Reddit that the server is struggling with the load, and shared an example that proved it can take only minutes for a Reddit post to rank in the top ten.

That post was part of a 79-post Reddit thread where many in the r/SEO subreddit were complaining about Google allegedly giving too much preference to Reddit over legitimate sites.

The person who did the test (gronetwork) wrote:

“…The website is already cracking (server down, double posts, comments not showing) because there are too many visitors.

…It only takes few minutes (you can test it) for a post on Reddit to appear in the top ten results of Google with keywords related to the post’s title… (while I have to wait months for an article on my site to be referenced). Do the math, the whole world is going to spam here. The loop is completed.”


Reddit Post Ranked Within Minutes

Another Redditor asked if they had tested if it takes “a few minutes” to rank in the top ten and gronetwork answered that they had tested it with a post titled, Google SGE Review.

gronetwork posted:

“Yes, I have created for example a post named “Google SGE Review” previously. After less than 5 minutes it was ranked 8th for Google SGE Review (no quotes). Just after Washingtonpost.com, 6 authoritative SEO websites and Google.com’s overview page for SGE (Search Generative Experience). It is ranked third for SGE Review.”

It’s true: not only does that specific post (Google SGE Review) rank in the top 10, it started out in position 8 and actually improved its ranking, and it is currently listed beneath the number one result for the search query “SGE Review”.

Screenshot Of Reddit Post That Ranked Within Minutes

Anecdotes Versus Anecdotes

Okay, the above is just one anecdote. But it’s a heck of an anecdote because it proves that it’s possible for a Reddit post to rank within minutes and stick near the top of the search results ahead of other, possibly more authoritative, websites.

hankschrader79 shared that Reddit posts outrank Toyota Tacoma forums for a phrase related to mods for that truck.


Google’s Danny Sullivan responded to that post and the entire discussion, disputing the claim and saying that Reddit is not always prioritized over other forums.

Danny wrote:

“Reddit is not always prioritized over other forums. [super vhs to mac adapter] I did this week, it goes Apple Support Community, MacRumors Forum and further down, there’s Reddit. I also did [kumo cloud not working setup 5ghz] recently (it’s a nightmare) and it was the Netgear community, the SmartThings Community, GreenBuildingAdvisor before Reddit. Related to that was [disable 5g airport] which has Apple Support Community above Reddit. [how to open an 8 track tape] — really, it was the YouTube videos that helped me most, but it’s the Tapeheads community that comes before Reddit.

In your example for [toyota tacoma], I don’t even get Reddit in the top results. I get Toyota, Car & Driver, Wikipedia, Toyota again, three YouTube videos from different creators (not Toyota), Edmunds, a Top Stories unit. No Reddit, which doesn’t really support the notion of always wanting to drive traffic just to Reddit.

If I guess at the more specific query you might have done, maybe [overland mods for toyota tacoma], I get a YouTube video first, then Reddit, then Tacoma World at third — not near the bottom. So yes, Reddit is higher for that query — but it’s not first. It’s also not always first. And sometimes, it’s not even showing at all.”

hankschrader79 conceded that they were generalizing when they wrote that Google always prioritized Reddit. But they also insisted that this didn’t diminish what they called a fact: that Google’s “prioritization” of forum content has benefited Reddit more than actual forums.

Why Is The Reddit Post Ranked So High?

It’s possible that Google “tested” that Reddit post in position 8 within minutes and that user interaction signals indicated to Google’s algorithms that users prefer to see that Reddit post. If that’s the case, then it’s not a matter of Google showing preference to a Reddit post but rather of users showing the preference and the algorithm responding to those preferences.


Nevertheless, an argument can be made that user preferences for Reddit can be a manifestation of Familiarity Bias. Familiarity Bias is when people show a preference for things that are familiar to them. If a person is familiar with a brand because of all the advertising they were exposed to then they may show a bias for the brand products over unfamiliar brands.

Users who are familiar with Reddit may choose Reddit because they don’t know the other sites in the search results or because they have a bias that Google ranks spammy and optimized websites and feel safer reading Reddit.

Google may be picking up on those user interaction signals that indicate a preference and satisfaction with the Reddit results but those results may simply be biases and not an indication that Reddit is trustworthy and authoritative.

Is Reddit Benefiting From A Self-Reinforcing Feedback Loop?

It may very well be that Google’s decision to prioritize user generated content may have started a self-reinforcing pattern that draws users in to Reddit through the search results and because the answers seem plausible those users start to prefer Reddit results. When they’re exposed to more Reddit posts their familiarity bias kicks in and they start to show a preference for Reddit. So what could be happening is that the users and Google’s algorithm are creating a self-reinforcing feedback loop.

Is it possible that Google’s decision to show more user generated content has kicked off a cycle where more users are exposed to Reddit which then feeds back into Google’s algorithm which in turn increases Reddit visibility, regardless of lack of expertise and authoritativeness?

Featured Image by Shutterstock/Kues



WordPress Releases A Performance Plugin For “Near-Instant Load Times”


WordPress released an official plugin that adds support for a cutting edge technology called speculative loading that can help boost site performance and improve the user experience for site visitors.

Speculative Loading

Rendering means constructing the entire webpage so that it displays. When your browser downloads the HTML, images, and other resources and puts them together into a webpage, that’s rendering. Prerendering is putting that webpage together (rendering it) in the background, before the user navigates to it.

What this plugin does is enable the browser to prerender the entire webpage that a user might navigate to next. The plugin does that by anticipating which webpage the user might navigate to based on where they are hovering.

Chrome lists a preference for only prerendering when there is at least an 80% probability of a user navigating to another webpage. The official Chrome support page for prerendering explains:

“Pages should only be prerendered when there is a high probability the page will be loaded by the user. This is why the Chrome address bar prerendering options only happen when there is such a high probability (greater than 80% of the time).”

There is also a caveat in that same developer page that prerendering may not happen based on user settings, memory usage and other scenarios (more details below about how analytics handles prerendering).


The Speculation Rules API solves a problem that previous solutions could not, because in the past they were simply prefetching resources like JavaScript and CSS but not actually prerendering the entire webpage.

The official WordPress announcement explains it like this:

“Introducing the Speculation Rules API
The Speculation Rules API is a new web API that solves the above problems. It allows defining rules to dynamically prefetch and/or prerender URLs of certain structure based on user interaction, in JSON syntax—or in other words, speculatively preload those URLs before the navigation. This API can be used, for example, to prerender any links on a page whenever the user hovers over them.”

The official WordPress page about this new functionality adds:

“…Also, with the Speculation Rules API, “prerender” actually means to prerender the entire page, including running JavaScript. This can lead to near-instant load times once the user clicks on the link as the page would have most likely already been loaded in its entirety. However that is only one of the possible configurations.”

The new WordPress plugin adds support for the Speculation Rules API. The Mozilla developer pages, a great resource for technical understanding of HTML, describe it like this:

“The Speculation Rules API is designed to improve performance for future navigations. It targets document URLs rather than specific resource files, and so makes sense for multi-page applications (MPAs) rather than single-page applications (SPAs).

The Speculation Rules API provides an alternative to the widely-available <link rel=”prefetch”> feature and is designed to supersede the Chrome-only deprecated <link rel=”prerender”> feature. It provides many improvements over these technologies, along with a more expressive, configurable syntax for specifying which documents should be prefetched or prerendered.”


See also: Are Websites Getting Faster? New Data Reveals Mixed Results

Performance Lab Plugin

The new plugin was developed by the official WordPress performance team which occasionally rolls out new plugins for users to test ahead of possible inclusion into the actual WordPress core. So it’s a good opportunity to be first to try out new performance technologies.

The new WordPress plugin is by default set to prerender “WordPress frontend URLs” which are pages, posts, and archive pages. How it works can be fine-tuned under the settings:

Settings > Reading > Speculative Loading
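
If you want to confirm from the outside that the plugin is actually emitting rules on your pages, a quick check might look like the sketch below. It assumes the rules are output as a standard <script type="speculationrules"> tag (the mechanism the Speculation Rules API defines) and uses the third-party requests and beautifulsoup4 libraries with a placeholder URL.

```python
# A minimal sketch for verifying that speculation rules are present on a
# page's front end: fetch the page and look for a speculationrules script tag.
import requests
from bs4 import BeautifulSoup

def has_speculation_rules(url):
    response = requests.get(url, timeout=30)
    soup = BeautifulSoup(response.text, "html.parser")
    return soup.find("script", attrs={"type": "speculationrules"}) is not None

if __name__ == "__main__":
    page = "https://www.example.com/"   # hypothetical page on the WordPress site
    print("Speculation rules found" if has_speculation_rules(page) else "No speculation rules found")
```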

Browser Compatibility

The Speculation Rules API is supported as of Chrome 108; however, the specific rules used by the new plugin require Chrome 121 or higher. Chrome 121 was released in early 2024.

Browsers that do not support the API will simply ignore the rules, with no effect on the user experience.

Check out the new Speculative Loading WordPress plugin developed by the official core WordPress performance team.


How Analytics Handles Prerendering

A WordPress developer commented with a question asking how Analytics would handle prerendering and someone else answered that it’s up to the Analytics provider to detect a prerender and not count it as a page load or site visit.

Fortunately, both Google Analytics and Google Publisher Tag (GPT) are able to handle prerenders. The Chrome developers support page has a note about how analytics handles prerendering:

“Google Analytics handles prerender by delaying until activation by default as of September 2023, and Google Publisher Tag (GPT) made a similar change to delay triggering advertisements until activation as of November 2023.”

Possible Conflict With Ad Blocker Extensions

There are a couple of things to be aware of about this plugin, aside from the fact that it’s an experimental feature that requires Chrome 121 or higher.

A comment by a WordPress plugin developer notes that this feature may not work in browsers that are running the uBlock Origin ad-blocking extension.

Download the plugin:
Speculative Loading Plugin by the WordPress Performance Team

Read the announcement at WordPress
Speculative Loading in WordPress


See also: WordPress, Wix & Squarespace Show Best CWV Rate Of Improvement

Source link

Keep an eye on what we are doing
Be the first to get latest updates and exclusive content straight to your email inbox.
We promise not to spam you. You can unsubscribe at any time.
Invalid email address
Continue Reading


10 Paid Search & PPC Planning Best Practices


Whether you are new to paid media or reevaluating your efforts, it’s critical to review your performance and best practices for your overall PPC marketing program, accounts, and campaigns.

Revisiting your paid media plan is an opportunity to ensure your strategy aligns with your current goals.

Reviewing best practices for pay-per-click is also a great way to keep up with trends and improve performance with newly released ad technologies.

As you review, you’ll find new strategies and features to incorporate into your paid search program, too.

Here are 10 PPC best practices to help you adjust and plan for the months ahead.


1. Goals

When planning, it is best practice to define goals for the overall marketing program, ad platforms, and at the campaign level.

Defining primary and secondary goals guides the entire PPC program. For example, your primary conversion may be to generate leads from your ads.

You’ll also want to look at secondary goals, such as brand awareness that is higher in the sales funnel and can drive interest to ultimately get the sales lead-in.

2. Budget Review & Optimization

Some advertisers get stuck in a rut and forget to review and reevaluate the distribution of their paid media budgets.

To best utilize budgets, consider the following:

  • Reconcile your planned vs. actual spend for each account or campaign on a regular basis (see the sketch after this list). Depending on the budget size, monthly, quarterly, or semiannually will work as long as you can hit budget numbers.
  • Determine if there are any campaigns that should be eliminated at this time to free up the budget for other campaigns.
  • Is there additional traffic available to capture and grow results for successful campaigns? The ad platforms often include a tool that will provide an estimated daily budget with clicks and costs. This is just an estimate to show more click potential if you are interested.
  • If other paid media channels perform mediocrely, does it make sense to shift those budgets to another?
  • For the overall paid search and paid social budget, can your company invest more in the positive campaign results?
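
As referenced in the first bullet, a planned-vs.-actual reconciliation can be as simple as the sketch below; the campaign names and figures are purely illustrative, and in practice the actuals would come from an exported spend report.

```python
# A minimal sketch of reconciling planned budget against actual spend per
# campaign. All campaign names and figures are illustrative placeholders.
planned = {"Brand Search": 5000.0, "Generic Search": 8000.0, "Remarketing": 2000.0}
actual = {"Brand Search": 4650.0, "Generic Search": 9120.0, "Remarketing": 1980.0}

for campaign, budget in planned.items():
    spend = actual.get(campaign, 0.0)
    variance = spend - budget
    status = "over" if variance > 0 else "under"
    print(f"{campaign}: planned {budget:,.0f}, spent {spend:,.0f} ({status} by {abs(variance):,.0f})")
```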

3. Consider New Ad Platforms

If you can shift or increase your budgets, why not test out a new ad platform? Knowing your audience and where they spend time online will help inform your decision when choosing ad platforms.

Go beyond your comfort zone in Google, Microsoft, and Meta Ads.


Here are a few other advertising platforms to consider testing:

  • LinkedIn: Most appropriate for professional and business targeting. LinkedIn audiences can also be reached through Microsoft Ads.
  • TikTok: Younger Gen Z audience (16 to 24), video.
  • Pinterest: Products, services, and consumer goods with a female-focused target.
  • Snapchat: Younger demographic (13 to 35), video ads, app installs, filters, lenses.

Need more detailed information and even more ideas? Read more about the 5 Best Google Ads Alternatives.

4. Top Topics in Google Ads & Microsoft Ads

Recently, trends in search and social ad platforms have presented opportunities to connect with prospects more precisely, creatively, and effectively.

Don’t overlook newer targeting and campaign types you may not have tried yet.

  • Video: Incorporating video into your PPC accounts takes some planning for the goals, ad creative, targeting, and ad types. There is a lot of opportunity here as you can simply include video in responsive display ads or get in-depth in YouTube targeting.
  • Performance Max: This automated campaign type serves across all of Google’s ad inventory. Microsoft Ads recently released PMAX so you can plan for consistency in campaign types across platforms. Do you want to allocate budget to PMax campaigns? Learn more about how PMax compares to search.
  • Automation: While AI can’t replace human strategy and creativity, it can help manage your campaigns more easily. During planning, identify which elements you want to automate, such as automatically created assets and/or how to successfully guide the AI in the Performance Max campaigns.

While exploring new features, check out some hidden PPC features you probably don’t know about.

5. Revisit Keywords

The role of keywords has evolved over the past several years with match types being less precise and loosening up to consider searcher intent.

For example, [exact match] keywords previously would literally match with the exact keyword search query. Now, ads can be triggered by search queries with the same meaning or intent.

A great planning exercise is to lay out keyword groups and evaluate if they are still accurately representing your brand and product/service.


Review search term queries triggering ads to discover trends and behavior you may not have considered. It’s possible this has impacted performance and conversions over time.

Critical to your strategy:

  • Review the current keyword rules and determine if this may impact your account in terms of close variants or shifts in traffic volume.
  • Brush up on how keywords work in each platform because the differences really matter!
  • Review search term reports more frequently for irrelevant keywords that may pop up from match type changes. Incorporate these into match type changes or negative keywords lists as appropriate.

6. Revisit Your Audiences

Review the audiences you selected in the past, especially given so many campaign types that are intent-driven.

Automated features that expand your audience could be helpful, but keep an eye out for performance metrics and behavior on-site post-click.

Remember, an audience is simply a list of users who are grouped together by interests or behavior online.

Therefore, there are unlimited ways to mix and match those audiences and target per the sales funnel.

Here are a few opportunities to explore and test:

  • LinkedIn user targeting: Besides LinkedIn, this can be found exclusively in Microsoft Ads.
  • Detailed Demographics: Marital status, parental status, home ownership, education, household income.
  • In-market and custom intent: Searches and online behavior signaling buying cues.
  • Remarketing: Advertisers’ website visitors, interactions with ads, and video/YouTube.

Note: This varies per campaign type and seems to be updated frequently, so make this a regular checkpoint in your campaign management for all platforms.

7. Organize Data Sources

You will likely be running campaigns on different platforms with combinations of search, display, video, etc.

Looking back at your goals, what is the important data, and which platforms will you use to review and report? Can you get the majority of data in one analytics platform to compare and share?

Millions of companies use Google Analytics, which is a good option for centralized viewing of advertising performance, website behavior, and conversions.

8. Reevaluate How You Report

Have you been using the same performance report for years?

It’s time to reevaluate your essential PPC key metrics and replace or add that data to your reports.

There are two great resources to kick off this exercise:


Your objectives in reevaluating the reporting are:

  • Are we still using this data? Is it still relevant?
  • Is the data we are viewing actionable?
  • What new metrics should we consider adding that we haven’t thought about?
  • How often do we need to see this data?
  • Do the stakeholders receiving the report understand what they are looking at (aka data visualization)?

Adding new data should be purposeful, actionable, and helpful in making decisions for the marketing plan. It’s also helpful to decide what type of data is good to see as “deep dives” as needed.

9. Consider Using Scripts

The current ad platforms have plenty of AI recommendations and automated rules, and there is no shortage of third-party tools that can help with optimizations.

Scripts are another method for advertisers with large accounts or some scripting skills to automate report generation and repetitive tasks in their Google Ads accounts.

Navigating the world of scripts can seem overwhelming, but a good place to start is a post here on Search Engine Journal that provides use cases and resources to get started with scripts.

Luckily, you don’t need a Ph.D. in computer science — there are plenty of resources online with free or templated scripts.
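
Google Ads scripts themselves run as JavaScript inside the Ads interface, so the sketch below is not an Ads script; it is just a Python illustration of the same idea of automating a repetitive reporting task, run against a hypothetical exported campaign CSV with "Campaign", "Cost", and "Conversions" columns.

```python
# A minimal sketch of automating a repetitive reporting check: flag campaigns
# with no conversions or a high cost per conversion from an exported CSV.
import csv

REPORT_PATH = "campaign_report.csv"   # hypothetical export

def flag_expensive_campaigns(path, max_cost_per_conversion=100.0):
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            cost = float(row.get("Cost", 0) or 0)
            conversions = float(row.get("Conversions", 0) or 0)
            if conversions == 0 and cost > 0:
                print(f"{row['Campaign']}: {cost:.2f} spent with no conversions")
            elif conversions and cost / conversions > max_cost_per_conversion:
                print(f"{row['Campaign']}: cost per conversion {cost / conversions:.2f}")

if __name__ == "__main__":
    flag_expensive_campaigns(REPORT_PATH)
```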

10. Seek Collaboration

Another effective planning tactic is to seek out friendly resources and second opinions.


Much of the skill and science of PPC management is unique to the individual or agency, so there is no shortage of ideas to share between you.

You can visit the Paid Search Association, a resource for paid ad managers worldwide, to make new connections and find industry events.

Preparing For Paid Media Success

Strategies should be based on clear and measurable business goals. Then, you can evaluate the current status of your campaigns based on those new targets.

Your paid media strategy should also be built with an eye for both past performance and future opportunities. Look backward and reevaluate your existing assumptions and systems while investigating new platforms, topics, audiences, and technologies.

Also, stay current with trends and keep learning. Check out ebooks, social media experts, and industry publications for resources and motivational tips.


Featured Image: Vanatchanan/Shutterstock
