
A Technical SEO Guide To Lighthouse Performance Metrics



Maybe you’re here because you’re a die-hard fan of performance metrics. Or maybe you don’t know what Lighthouse is and are too afraid to ask.

Either is an excellent option. Welcome!

Together, we’re hoping to take your performance improvement efforts from “make all the numbers green” to some clear and meaningful action items.

Note: This article was updated for freshness in January 2022 to represent versions 8 and 9.

Technical SEO and Google Data Studio nerd Rachel Anderson joined me on this merry adventure into demystifying developer documentation.

We’re going to answer:

  • What is Lighthouse?
  • How is Lighthouse different from Core Web Vitals?
  • Why doesn’t Lighthouse match Search Console/CrUX reports?
  • How is Performance Score calculated?
  • Why is my score different each time I test?
  • Lighthouse Performance metrics explained
  • How to test performance using Lighthouse

What Is Lighthouse?

Performance is about measuring how quickly a browser can assemble a webpage.

Lighthouse uses a web browser called Chromium to build pages and runs tests on the pages as they’re built. The tool is open-source (meaning it is maintained by the community and free to use).


Each audit falls into one of five categories:

  1. Performance.
  2. Accessibility.
  3. Best Practices.
  4. SEO.
  5. Progressive Web App.
Screenshot from Lighthouse, January 2022

For the purposes of this article, we’re going to use the name Lighthouse to refer to the series of tests executed by the shared GitHub repo, regardless of the execution method.

Version 9 is currently out on GitHub and is slated for large-scale rollout with the stable Chrome 98 release in February 2022.

Lighthouse And Core Web Vitals

On May 5, 2020, the Chromium project announced a set of three metrics with which the Google-backed open-source browser would measure performance.

The metrics, known as Web Vitals, are part of a Google initiative designed to provide unified guidance for quality signals.

The goal of these metrics is to measure web performance in a user-centric manner.

Within two weeks, Lighthouse v6 rolled out with a modified version of Core Web Vitals at the heart of the update.

July 2020 saw Lighthouse v6’s unified metrics adopted across Google products with the release of Chrome 84.

The Chrome DevTools Audits panel was renamed to Lighthouse. PageSpeed Insights and Google Search Console also reference these unified metrics.


This change in focus sets new, more refined goals.

How Is Lighthouse Different From Core Web Vitals?

The three Core Web Vitals metrics are part of Lighthouse performance scoring.

Largest Contentful Paint, Total Blocking Time, and Cumulative Layout Shift comprise 70% of Lighthouse’s weighted performance score.

The scores you’ll see for CWV in Lighthouse are the result of emulated tests.

It’s the same metric but measured off a single page load rather than calculated from page loads around the world.

Why Doesn’t Lighthouse Match Search Console/CrUX Reports?

For real users, how quickly a page assembles is based on factors like their network connection, their device’s processing power, and even the user’s physical distance to the site’s servers.

Lighthouse performance data doesn’t account for all these factors.

Instead, the tool emulates a mid-range device and throttles CPU in order to simulate the average user.


These are lab tests collected within a controlled environment with predefined device and network settings.

Lab data is helpful for debugging performance issues.

It does not mean that the experience on your local machine in a controlled environment represents the experiences of real humans in the wild.

The good news is you don’t have to choose between Lighthouse and Core Web Vitals. They’re designed to be part of the same workflow.

Always start with field data from the Chrome User Experience Report to identify issues impacting real users.

Then leverage the expanded testing capabilities of Lighthouse to identify the code causing the issue.

If you’re working on a site pre-launch or QAing changes in a non-public environment, Lighthouse will be your new best #webperf friend.

Workflow for performance. Screenshot from Lighthouse, January 2022

How Is The Lighthouse Performance Score Calculated?

Performance scores from Lighthouse, January 2022

In versions 8 and 9, Lighthouse’s performance score is made up of six metrics, with each contributing a percentage of the total performance score.

Lighthouse metrics. Created by author, January 2022
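To make the weighting concrete, here is a minimal sketch of how six metric scores combine into one performance score using the v8/v9 weights. This is not Lighthouse’s actual code; the function name and input shape are our own assumptions:

```javascript
// Hypothetical helper illustrating Lighthouse v8/v9 weighting - not the tool's real code.
// Each metric score is a value between 0 and 1 (Lighthouse derives these from
// log-normal curves before weighting).
const WEIGHTS = {
  fcp: 0.10, // First Contentful Paint
  si: 0.10,  // Speed Index
  lcp: 0.25, // Largest Contentful Paint
  tti: 0.10, // Time to Interactive
  tbt: 0.30, // Total Blocking Time
  cls: 0.15, // Cumulative Layout Shift
};

function performanceScore(metricScores) {
  // Weighted sum of the six individual metric scores.
  return Object.entries(WEIGHTS).reduce(
    (total, [metric, weight]) => total + weight * metricScores[metric],
    0
  );
}
```

Notice that a perfect TBT alone contributes 0.30 of the total, which is why heavy JavaScript work tends to dominate the final number.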

Why Is My Score Different Each Time I Test?

Your score may change each time you test.

Browser extensions, internet connection, A/B tests, or even the ads displayed on that specific page load have an impact.


If you’re curious/furious to know more, check out the documentation on performance testing variability.

Lighthouse Performance Metrics Explained

Largest Contentful Paint (LCP)

  • What it represents: A user’s perception of loading experience.
  • Lighthouse Performance score weighting: 25%
  • What it measures: The point in the page load timeline when the page’s largest image or text block is visible within the viewport.
  • How it’s measured: Lighthouse extracts LCP data from Chrome’s tracing tool.
  • Is Largest Contentful Paint a Core Web Vital? Yes!
  • LCP Scoring
  • Goal: Achieve LCP in < 2.5 seconds.
LCP measurements. Created by author, January 2022
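In the browser, LCP candidates surface through the PerformanceObserver API. Here is a hedged sketch; the helper name is our own, and a production version should also stop observing on first user input:

```javascript
// The browser emits a new largest-contentful-paint entry each time a larger
// candidate renders; the most recent entry is the page's current LCP.
function largestContentfulPaint(entries) {
  if (entries.length === 0) return null;
  return entries[entries.length - 1].startTime;
}

// Browser-side wiring (commented out so the helper stays runnable anywhere):
// new PerformanceObserver((list) => {
//   console.log('LCP candidate (ms):', largestContentfulPaint(list.getEntries()));
// }).observe({ type: 'largest-contentful-paint', buffered: true });
```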

What Elements Can Be Part Of LCP?

  • Text.
  • Images.
  • Videos.
  • Background images.

What Counts As LCP On Your Page?

It depends! LCP typically varies by page template.

This means that you can measure a handful of pages using the same template and define the LCP for that template.

Lighthouse will provide you with the exact HTML of the LCP element, but it can be useful to know the node as well when communicating with developers.

The node name will be consistent while the exact on-page image or text may change depending on which content is rendered by the template.

How To Define LCP Using Chrome DevTools

  1. Open the page in Chrome.
  2. Navigate to the Performance panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Hover over the LCP marker in the Timings section.
  4. The element(s) that correspond to LCP are detailed in the Related Node field.
Created by author, January 2022

What Causes Poor LCP?

Poor LCP typically comes from four issues:

  1. Slow server response times.
  2. Render-blocking JavaScript and CSS.
  3. Resource load times.
  4. Client-side rendering.

How To Fix Poor LCP

If the cause is slow server response time:

  • Optimize your server.
  • Route users to a nearby CDN.
  • Cache assets.
  • Serve HTML pages cache-first.
  • Establish third-party connections early.

If the cause is render-blocking JavaScript and CSS:

  • Minify CSS.
  • Defer non-critical CSS.
  • Inline critical CSS.
  • Minify and compress JavaScript files.
  • Defer unused JavaScript.
  • Minimize unused polyfills.

If the cause is resource load times:

  • Optimize and compress images.
  • Preload important resources.
  • Compress text files.
  • Deliver different assets based on the network connection (adaptive serving).
  • Cache assets using a service worker.

If the cause is client-side rendering:

  • Minimize critical JavaScript.
  • Use server-side rendering.
  • Use pre-rendering.

Resources For Improving LCP

Total Blocking Time (TBT)

  • What it represents: Responsiveness to user input.
  • Lighthouse Performance score weighting: 30%
  • What it measures: TBT measures the time between First Contentful Paint and Time to Interactive. TBT is the lab equivalent of First Input Delay (FID) – the field data used in the Chrome User Experience Report and Google’s upcoming Page Experience ranking signal.
  • How it’s measured: The total time in which the main thread is occupied by tasks taking more than 50ms to complete. If a task takes 80ms to run, 30ms of that time will be counted toward TBT. If a task takes 45ms to run, 0ms will be added to TBT.
  • Is Total Blocking Time a Core Web Vital? Yes! It’s the lab data equivalent of First Input Delay (FID).
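That 50ms rule is easy to express in code. A minimal sketch (our own function, not Lighthouse’s implementation), where each task duration is in milliseconds:

```javascript
// Only the portion of each main-thread task beyond 50ms counts as "blocking."
const BLOCKING_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurations) {
  return taskDurations.reduce(
    (tbt, duration) => tbt + Math.max(0, duration - BLOCKING_THRESHOLD_MS),
    0
  );
}
```

The examples above check out: an 80ms task contributes 30ms, and a 45ms task contributes nothing.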

TBT Scoring

  • Goal: Achieve TBT score of less than 300 milliseconds.
Created by author, January 2022

First Input Delay, the field data equivalent to TBT, has different thresholds.

FID time in milliseconds. Created by author, January 2022

Long Tasks And Total Blocking Time

TBT measures long tasks – those taking longer than 50ms.

When a browser loads your site, there is essentially a single line queue of scripts waiting to be executed.

Any input from the user has to go into that same queue.

When the browser can’t respond to user input because other tasks are executing, the user perceives this as lag.

Essentially, long tasks are like that person at your favorite coffee shop who takes far too long to order a drink.

Like someone ordering a 2% venti four-pump vanilla, five-pump mocha whole-fat froth, long tasks are a major source of bad experiences.

Short tasks vs. long tasks. Screenshot by author, January 2022

What Causes A High TBT On Your Page?

Heavy JavaScript.


That’s it.

How To See TBT Using Chrome DevTools

Screenshot from Chrome DevTools, January 2022

How To Fix Poor TBT

  • Break up Long Tasks.
  • Optimize your page for interaction readiness.
  • Use a web worker.
  • Reduce JavaScript execution time.

Resources For Improving TBT

First Contentful Paint (FCP)

  • What it represents: FCP marks the time at which the first text or image is painted (visible).
  • Lighthouse Performance score weighting: 10%
  • What it measures: The time when I can see the page I requested is responding. My thumb can stop hovering over the back button.
  • How it’s measured: Your FCP score in Lighthouse is measured by comparing your page’s FCP to FCP times for real website data stored by the HTTP Archive.
  • Your FCP score increases if your page’s FCP is faster than other pages in the HTTP Archive.
  • Is First Contentful Paint a Core Web Vital? No.

FCP Scoring

  • Goal: Achieve FCP in < 2 seconds.
FCP time. Created by author, January 2022

What Elements Can Be Part Of FCP?

The time it takes to render the first visible element to the DOM is the FCP.

Everything that happens before the first element renders non-white content to the page (excluding iframes) counts toward FCP.

Since iframes are not considered part of FCP, if they are the first content to render, FCP will continue counting until the first non-iframe content loads, but the iframe load time isn’t counted toward the FCP.

The documentation around FCP also calls out that it is often impacted by font load time, and offers tips for improving font loads.

FCP Using Chrome DevTools

  1. Open the page in Chrome.
  2. Navigate to the Performance panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Click on the FCP marker in the Timings section.
  4. The summary tab has a timestamp with the FCP in ms.

How To Improve FCP

In order for content to be displayed to the user, the browser must first download, parse, and process all external stylesheets it encounters before it can display or render any content to a user’s screen.


The fastest way to bypass the delay of external resources is to use in-line styles for above-the-fold content.

To keep your site sustainably scalable, use an automated tool like Penthouse or Apache’s mod_pagespeed.

These solutions will come with some restrictions to functionalities, require testing, and may not be for everyone.
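For example, mod_pagespeed can inline critical CSS via its prioritize_critical_css filter. A sketch of the Apache configuration, which you should verify against the directives supported by your installed version:

```apache
# Enable PageSpeed and the filter that inlines above-the-fold CSS.
ModPagespeed on
ModPagespeedEnableFilters prioritize_critical_css
```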

Universally, we can all improve our site’s time to First Contentful Paint by reducing the scope and complexity of style calculations.

If a style isn’t being used, remove it.

You can identify unused CSS with Chrome Dev Tool’s built-in Code Coverage functionality.

Use better data to make better decisions.

Similar to TTI, you can capture real user metrics for FCP using Google Analytics to correlate improvements with KPIs.
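One hedged sketch of that approach: pull FCP out of the browser’s paint entries, then forward it to your analytics. The helper name and the gtag event shape here are our assumptions, not a Google-documented recipe:

```javascript
// Find the first-contentful-paint entry among the browser's paint timings.
function extractFcp(paintEntries) {
  const entry = paintEntries.find((e) => e.name === 'first-contentful-paint');
  return entry ? entry.startTime : null;
}

// Browser-side wiring (commented out so the helper stays runnable anywhere):
// new PerformanceObserver((list) => {
//   const fcp = extractFcp(list.getEntries());
//   if (fcp !== null) {
//     gtag('event', 'fcp_ms', { value: Math.round(fcp) }); // assumed event name
//   }
// }).observe({ type: 'paint', buffered: true });
```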


Resources For Improving FCP

Speed Index

  • What it represents: How much is visible at a time during load.
  • Lighthouse Performance score weighting: 10%
  • What it measures: The Speed Index is the average time at which visible parts of the page are displayed.
  • How it’s measured: Lighthouse’s Speed Index measurement comes from a node module called Speedline.

You’ll have to ask the kindly wizards at webpagetest.org for the specifics but roughly, Speedline scores vary by the size of the viewport (read as device screen) and have an algorithm for calculating the completeness of each frame.

Speed Index measurements. Screenshot by author, January 2022
  • Is Speed Index a Core Web Vital? No.

SI Scoring

  • Goal: achieve SI in < 4.3 seconds.
Speed Index metrics. Created by author, January 2022

How To Improve SI

Your Speed Index score reflects your site’s Critical Rendering Path.

A “critical” resource means that the resource is required for the first paint or is crucial to the page’s core functionality.

The longer and denser the path, the slower your site will be to provide a visual page.

If your path is optimized, you’ll give users content faster and score higher on Speed Index.

How The Critical Path Affects Rendering

Optimized vs. unoptimized rendering times. Screenshot by author, January 2022

Lighthouse recommendations commonly associated with a slow Critical Rendering Path include:

  • Minimize main-thread work.
  • Reduce JavaScript execution time.
  • Minimize Critical Requests Depth.
  • Eliminate Render-Blocking Resources.
  • Defer offscreen images.

Resources For Improving SI

Time To Interactive

  • What it represents: Load responsiveness; identifying where a page looks responsive but isn’t yet.
  • Lighthouse Performance score weighting: 10%
  • What it measures: The time from when the page begins loading to when its main resources have loaded and are able to respond to user input.
  • How it’s measured: TTI measures how long it takes a page to become fully interactive. A page is considered fully interactive when:

1. The page displays useful content, which is measured by the First Contentful Paint.

2. Event handlers are registered for most visible page elements.

3. The page responds to user interactions within 50 milliseconds.

  • Is Time to Interactive a Core Web Vital? No.
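A simplified model of that definition in code. Lighthouse’s real algorithm also tracks in-flight network requests; this sketch (our own) only looks for the first five-second window after FCP that is free of long tasks:

```javascript
const QUIET_WINDOW_MS = 5000;

// fcp: timestamp in ms; longTasks: [{ start, end }] sorted by start time.
// Returns the earliest time after which a 5-second quiet window begins.
function timeToInteractive(fcp, longTasks) {
  let candidate = fcp;
  for (const task of longTasks) {
    if (task.start - candidate >= QUIET_WINDOW_MS) break; // quiet window found
    if (task.end > candidate) candidate = task.end;       // push TTI past the task
  }
  return candidate;
}
```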

TTI Scoring

Goal: achieve TTI score of less than 3.8 seconds.

TTI scoring system. Created by author, January 2022

Resources For Improving TTI

Cumulative Layout Shift (CLS)

  • What it represents: A user’s perception of a page’s visual stability.
  • Lighthouse Performance score weighting: 15%
  • What it measures: It quantifies shifting page elements through the end of page load.
  • How it’s measured: Unlike other metrics, CLS isn’t measured in time. Instead, it’s a calculated metric based on the number of frames in which elements move and the total distance in pixels the elements moved.
CLS Layout Score formula. Created by author, January 2022
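Per layout shift, the score is the impact fraction (the share of the viewport affected) multiplied by the distance fraction (how far the elements moved relative to the viewport). A sketch with our own helper names; note that newer Lighthouse versions group shifts into session windows rather than summing every shift across the whole load:

```javascript
// Each shift: impactFraction and distanceFraction are both values in 0..1.
function layoutShiftScore(shift) {
  return shift.impactFraction * shift.distanceFraction;
}

// Simplified: sum the individual shift scores.
function cumulativeLayoutShift(shifts) {
  return shifts.reduce((cls, shift) => cls + layoutShiftScore(shift), 0);
}
```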

CLS Scoring

  • Goal: achieve CLS score of less than 0.1.
CLS scoring system. Created by author, January 2022

What Elements Can Be Part Of CLS?

Any visual element that appears above the fold at some point in the load.

That’s right – if you’re loading your footer first and then the hero content of the page, your CLS is going to hurt.

Causes Of Poor CLS

  • Images without dimensions.
  • Ads, embeds, and iframes without dimensions.
  • Dynamically injected content.
  • Web Fonts causing FOIT/FOUT.
  • Actions waiting for a network response before updating DOM.

How To Define CLS Using Chrome DevTools

  1. Open the page in Chrome.
  2. Navigate to the Performance panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Hover and move from left to right over the screenshots of the load (make sure the screenshots checkbox is checked).
  4. Watch for elements bouncing around after the first paint to identify elements causing CLS.

How To Improve CLS

Once you identify the element(s) at fault, you’ll need to update them to be stable during the page load.

For example, if slow-loading ads are causing the high CLS score, you may want to use placeholder images of the same size to fill that space as the ad loads to prevent the page shifting.

Some common ways to improve CLS include:

  • Always include width and height size attributes on images and video elements.
  • Reserve space for ad slots (and don’t collapse it).
  • Avoid inserting new content above existing content.
  • Take care when placing non-sticky ads near the top of the viewport.
  • Preload fonts.

CLS Resources

How To Test Performance Using Lighthouse

Methodology Matters

Out of the box, Lighthouse audits a single page at a time.


A single page score doesn’t represent your site, and a fast homepage doesn’t mean a fast site.

Test multiple page types within your site.

Identify your major page types, templates, and goal conversion points (signup, subscribe, and checkout pages).

If 40% of your site is blog posts, make 40% of your testing URLs blog pages!

Example Page Testing Inventory

Example Page Testing Inventory. Created by author, January 2022

Before you begin optimizing, run Lighthouse on each of your sample pages and save the report data.

Record your scores and the to-do list of improvements.

Prevent data loss by saving the JSON results and utilizing Lighthouse Viewer when detailed result information is needed.

Get Your Backlog to Bite Back Using ROI


Getting development resources to action SEO recommendations is hard.

An in-house SEO professional could destroy their pancreas by having a birthday cake for every backlogged ticket’s birthday. Or at least learn to hate cake.

In my experience as an in-house enterprise SEO pro, the trick to getting performance initiatives prioritized is having the numbers to back the investment.

This starting data will become dollar signs that serve to justify and reward development efforts.

With Lighthouse testing, you can recommend specific and direct changes (Think preload this font file) and associate the change to a specific metric.

Chances are you’re going to have more than one area flagged during tests. That’s okay!

If you’re wondering which changes will have the most bang for the buck, check out the Lighthouse Scoring Calculator.

How To Run Lighthouse Tests

This is a case of many roads leading to Oz.


Sure, some scarecrow might be particularly loud about a certain shade of brick, but it’s about your goals.

Looking to test an entire staging site? Time to learn some NPM.

Have less than five minutes to prep for a prospective client meeting? A couple of one-off reports should do the trick.

Whichever way you execute, default to mobile unless you have a special use-case for desktop.

For One-Off Reports: PageSpeed Insights

Test one page at a time on PageSpeed Insights. Simply enter the URL.

Lab and field data available in PageSpeed Insights. Screenshot from PageSpeed Insights, January 2022

Pros Of Running Lighthouse From PageSpeed Insights

  • Detailed Lighthouse report is combined with URL-specific data from the Chrome User Experience Report.
  • Opportunities and Diagnostics can be filtered to specific metrics.  This is exceptionally useful when creating tickets for your engineers and tracking the resulting impact of the changes.
  • PageSpeed Insights is already running version 9.
    PageSpeed Insights opportunities and diagnostics filtered by metric. Screenshot from PageSpeed Insights, January 2022

Cons Of Running Lighthouse From PageSpeed Insights

  • One report at a time.
  • Only Performance tests are run (if you need SEO, Accessibility, or Best Practices, you’ll need to run those separately).
  • You can’t test local builds or authenticated pages.
  • Reports can’t be saved in JSON, HTML, or Gist format (save as PDF via browser functionality is an option).
  • Requires you to manually save results.

For Comparing Test Results: Chrome DevTools Or Web.dev

Because the report will be emulating a user’s experience using your browser, use an incognito instance with all extensions disabled and the browser’s cache disabled.

Pro-tip: Create a Chrome profile for testing. Keep it local (no sync enabled, password saving, or association to an existing Google account) and don’t install extensions for the user.

How To Run A Lighthouse Test Using Chrome DevTools

  1. Open an incognito instance of Chrome.
  2. Navigate to the Network panel of Chrome Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Tick the box to disable cache.
  4. Navigate to the Lighthouse panel.
  5. Click Generate Report.
  6. Click the dots to the right of the URL in the report.
  7. Save in your preferred format (JSON, HTML, or Gist).
    Save options for Lighthouse reports. Screenshot from Lighthouse Reports, January 2022

Note that your version of Lighthouse may change depending on what version of Chrome you’re using. v8.5 is used on Chrome 97.

Lighthouse v9 will ship with DevTools in Chrome 98.


How To Run A Lighthouse Test Using web.dev

It’s just like DevTools but you don’t have to remember to disable all those pesky extensions!

  1. Go to web.dev/measure.
  2. Enter your URL.
  3. Click Run Audit.
  4. Click View Report.
    web.dev View Report option. Screenshot by author, January 2022

Pros Of Running Lighthouse From DevTools/web.dev

  • You can test local builds or authenticated pages.
  • Saved reports can be compared using the Lighthouse CI Diff tool.
    Lighthouse CI Diff tool. Screenshot from Lighthouse CI Diff, January 2022

Cons Of Running Lighthouse From DevTools/web.dev

  • One report at a time.
  • Requires you to manually save results.

For Testing At Scale (and Sanity): Node Command Line

1. Install npm.
(Mac Pro-tip: Use Homebrew to avoid obnoxious dependency issues.)

2. Install the Lighthouse node module with npm:

npm install -g lighthouse

3. Run a single test with:

lighthouse <url>

4. Run tests on lists of URLs by running tests programmatically.
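For a sketch of what “programmatically” can look like, here is a small Node helper (our own; the output-path naming scheme is arbitrary) that builds one CLI invocation per URL using Lighthouse’s --output and --output-path flags. You could feed these commands to child_process.execSync or a task runner:

```javascript
// Build one lighthouse CLI command per URL, saving each report as JSON.
function lighthouseCommands(urls, outDir = './reports') {
  return urls.map(
    (url, i) => `lighthouse ${url} --output=json --output-path=${outDir}/report-${i}.json`
  );
}

// Example:
// lighthouseCommands(['https://example.com/', 'https://example.com/blog/']);
```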

Pros Of Running Lighthouse From Node

  • Many reports can be run at once.
  • Can be set to run automatically to track change over time.

Cons Of Running Lighthouse From Node

  • Requires some coding knowledge.
  • More time-intensive setup.

Conclusion

The complexity of performance metrics reflects the challenges facing all sites.

We use performance metrics as a proxy for user experience – that means factoring in some unicorns.

Tools like Google’s Test My Site and What Does My Site Cost? can help you make the conversion and customer-focused arguments for why performance matters.

Hopefully, once your project has traction, these definitions will help you translate Lighthouse’s single performance score into action tickets for a skilled and collaborative engineering team.


Track your data and shout it from the rooftops.

As much as Google struggles to quantify qualitative experiences, SEO professionals and devs must decode how to translate a concept into code.

Test, iterate, and share what you learn! I look forward to seeing what you’re capable of, you beautiful unicorn.

More resources:


Featured Image: Paulo Bobita/Search Engine Journal



B2B PPC Experts Give Their Take On Google Search On Announcements


Google hosted its 3rd annual Search On event on September 28th.

Google announced numerous Search updates revolving around these key areas:

  • Visualization
  • Personalization
  • Sustainability

After the event, Google’s Ads Liaison, Ginny Marvin, hosted a roundtable of PPC experts specifically in the B2B industry to give their thoughts on the announcements, as well as how they may affect B2B. I was able to participate in the roundtable and gained valuable feedback from the industry.

The roundtable of experts comprised Brad Geddes, Melissa Mackey, Michelle Morgan, Greg Finn, Steph Bin, Michael Henderson, Andrea Cruz Lopez, and myself (Brooke Osmundson).

The Struggle With Images

Some of the updates in Search include browsable search results, larger image assets, and business messages for conversational search.

Brad Geddes, Co-Founder of Adalysis, mentioned “Desktop was never mentioned once.” Others echoed the same sentiment, that many of their B2B clients rely on desktop searches and traffic. With images showing mainly on mobile devices, their B2B clients won’t benefit as much.

Another great point came up about the context of images. While images are great for a user experience, the question reiterated by multiple roundtable members:

  • How is a B2B product or B2B service supposed to portray what they do in an image?

Images in search are certainly valuable for verticals such as apparel, automotive, and general eCommerce businesses. But for B2B, they may be left at a disadvantage.

More Use Cases, Please

Ginny asked the group what they’d like to change or add to an event like Search On.


The overall consensus: both Search On and Google Marketing Live (GML) have become more consumer-focused.

Greg Finn said that the Search On event was about what he expected, but Google Marketing Live feels too broad now and that Google isn’t speaking to advertisers anymore.

Marvin acknowledged and then revealed that Google received feedback that after this year’s GML, the vision felt like it was geared towards a high-level investor.

The group gave a few potential solutions to help fill the current gap of what was announced, and then later how advertisers can take action.

  • 30-minute follow-up session on how these relate to advertisers
  • Focus less on verticals
  • Provide more use cases

Michelle Morgan and Melissa Mackey said that “even just screenshots of a B2B SaaS example” would help them immensely. Providing tangible action items on how to bring this information to clients is key.

Google Product Managers Weigh In

The second half of the roundtable included input from multiple Google Search Product Managers. I started off with a more broad question to Google:

  • It seems that Google is becoming a one-stop shop for a user to gather information and make purchases. How should advertisers prepare for this? Should we expect to see lower traffic and higher CPCs to compete for that coveted space?

Cecilia Wong, Global Product Lead of Search Formats, Google, mentioned that while they can’t comment directly on the overall direction, they do focus on Search. Their recommendation:

  • Manage assets and images and optimize for best user experience
  • For B2B, align your images as a sneak peek of what users can expect on the landing page

However, image assets have tight restrictions on what’s allowed. I followed up by asking if they would be loosening asset restrictions for B2B to use creativity in its image assets.

Google could not comment directly but acknowledged that looser restrictions on image content is a need for B2B advertisers.

Is Value-Based Bidding Worth The Hassle?

The topic of value-based bidding came up after Carlo Buchmann, Product Manager of Smart Bidding, said that they want advertisers to embrace and move towards value-based bidding. While the feedback seemed grim, it opened up for candid conversation.

Melissa Mackey said that while she’s talked to her clients about value-based bidding, none of her clients want to pull the trigger. For B2B, it’s difficult to assess the value of different conversion points.


Further, she stated that clients become fixated on their pipeline information and can end up making it too complicated. To sum up, they’re struggling to translate the value number input to what a sale is actually worth.

Geddes mentioned that some of his more sophisticated clients have moved back to manual bidding because Google doesn’t take all the values and signals to pass back and forth.

Finn closed the conversation with his experience. He emphasized that Google has not brought forth anything about best practices for value-based bidding. By having only one value, it seems like CPA bidding. And when a client has multiple value inputs, Google tends to optimize towards the lower-value conversions – ultimately affecting lead quality.

The Google Search Product Managers closed by providing additional resources to dig into overall best practices to leverage search in the world of automation.

Closing Thoughts

Google made it clear that the future of search is visual. For B2B companies, it may require extra creativity to succeed and compete with the visualization updates.

However, the PPC roundtable experts weighed in that if Google wants advertisers to adopt these features, they need to support advertisers more – especially B2B marketers. With limited time and resources, advertisers big and small are trying to do more with less.

Marketers are relying on Google to make these Search updates relevant to not only the user but the advertisers. Having clearer guides, use cases, and conversations is a great step to bringing back the Google and advertiser collaboration.

A special thank you to Ginny Marvin of Google for making space to hear B2B advertiser feedback, as well as all the PPC experts for weighing in.


Featured image: Shutterstock/T-K-M

