SEO
In a sea of signals, is your on-page on-point?
30-second summary:
- Content managers who want to assess on-page performance can feel lost at sea amid the sheer number of SEO signals and the varying perceptions of how each one works
- The problem grows bigger and more complex in industries with niche semantics
- The scenarios they present to the content planning process are highly specific, with unique lexicons and semantic relationships
- Sr. SEO Strategist at Brainlabs, Zach Wales, uses findings from a rigorous competitive analysis to shed light on how to evaluate your on-page game
Industries with niche terminology, like scientific or medical ecommerce brands, present a layer of complexity to SEO. The scenarios they present to the content planning process are highly specific, with unique lexicons and semantic relationships.
SEO has many layers to begin with, from technical to content. They all aim to optimize for numerous search engine ranking signals, some of which are moving targets.
So how does one approach on-page SEO in this challenging space? We recently had the privilege of conducting a lengthy competitive analysis for a client in one of these industries.
What we walked away with was a repeatable process for on-page analysis in a complicated semantic space.
The challenge: Turning findings into action
At the outset of any analysis, it’s important to define the challenge. In the most general sense, ours was to turn findings into meaningful on-page actions — with priorities.
And we would do this by comparing the keyword ranking performance of our client’s domain to that of its five chosen competitors.
Specifically, we needed to identify areas of the client’s website content that were losing to competitors in keyword rankings. And to prioritize things, we needed to show where those losses were having the greatest impact on our client’s potential for search traffic.
Adding to the complexity were two additional sub-challenges:
- Volume of keyword data. When people think of “niche markets,” the implication is usually a small number of keywords with low monthly search volumes (MSV). Scientific industries don’t fit that mold. They are “niche” in the sense that their semantics are not accessible to all—including keyword research tools—but their depth and breadth of keyword potential is vast.
- Our client already dominated the market. At first glance, using keyword gap analysis tools, there were no product categories where our client wasn’t dominating the market. Yet they were incurring traffic losses from these five competitors from a seemingly random, spread-out number of cases. Taken together incrementally, these losses had significant impacts on their web traffic.
If the needle-in-a-haystack analogy comes to mind, you see where this is going.
To put the details to our challenge, we had to:
- Identify where those incremental effects of keyword rank loss were being felt the most — knowing this would guide our prioritization;
- Map those keyword trends to their respective stage of the marketing funnel (from informational top-of-funnel to the transactional bottom-of-funnel)
- Rule out off-page factors like backlink equity, Core Web Vitals & page speed metrics, in order to…
- Isolate cases where competitor pages ranked higher than our client’s on the merits of their on-page techniques, and finally
- Identify what those successful on-page techniques were, in hopes that our client could adapt its content to a winning on-page formula.
How to spot trends in a sea of data
When the data sets you’re working with are large and no apparent trends stand out, it’s not because they don’t exist. It only means you have to adjust the way you look at the data.
As a disclaimer, we’re not purporting that ours is the only approach. It was one that made sense in response to another challenge at hand, one that’s common to this industry: the intent labels of SEO tools like Semrush and Ahrefs (“Informational,” “Navigational,” “Commercial,” and “Transactional,” or some combination thereof) are not very reliable.
Our approach to spotting these trends in a sea of data went like this:
Step 1. Break it down to short-tail vs. long-tail
Numbers don’t lie. Absent reliable intent data, we cut the dataset in half based on MSV ranges: Keywords with MSVs above 200 and those equal to/below 200. We even graphed these out, and indeed, it returned a classic short/long-tail curve.
This gave us a proxy for funnel mapping: Short-tail keywords, defined as high-MSV & broad focus, could be mostly associated with the upper funnel. This made long-tail keywords, being less searched but more specifically focused, a proxy for the lower funnel.
Doing this also helped us manage the million-plus keyword dataset our tools generated for the client and its five competitor websites. Even if you perform the export hack of downloading data in batches, neither Google Drive nor your device’s RAM want anything to do with that much data.
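The split itself is simple to script. Here is a minimal Python sketch, assuming the batch exports are CSVs with a `Volume` column (column names vary by tool and are placeholders here); it streams rows one at a time so the full million-keyword set never has to sit in memory:

```python
import csv
from pathlib import Path

MSV_THRESHOLD = 200  # the cutoff used in this analysis

def split_export(src: Path, short_out: Path, long_out: Path) -> tuple[int, int]:
    """Stream a keyword export and split rows into short-tail (MSV > 200)
    and long-tail (MSV <= 200) files, row by row."""
    short_n = long_n = 0
    with src.open(newline="") as f:
        reader = csv.DictReader(f)
        with short_out.open("w", newline="") as s, long_out.open("w", newline="") as l:
            short_w = csv.DictWriter(s, fieldnames=reader.fieldnames)
            long_w = csv.DictWriter(l, fieldnames=reader.fieldnames)
            short_w.writeheader()
            long_w.writeheader()
            for row in reader:
                if int(row["Volume"]) > MSV_THRESHOLD:
                    short_w.writerow(row)
                    short_n += 1
                else:
                    long_w.writerow(row)
                    long_n += 1
    return short_n, long_n
```

Run once per exported batch, and the two output files accumulate into the short-tail and long-tail spreadsheets used in the next step.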
Step 2. Establish a list of keyword-operative root words
The “keyword-operative root word” is the term we gave to root words that are common to many or all of the keywords under a certain topic or content type. For example, “dna” is a common root word to most of the keywords about DNA lab products, which our client and its competitors sell. And “protocols” is a root word for many keywords that exist in upper-funnel, informational content.
We established this list by placing our short- and long-tail data (exported from Semrush’s Keyword Gap analysis tool) into two spreadsheets, where we were able to view the shared keyword rankings of our client and the five competitors. We equipped these spreadsheets with data filters and formulas that scored each keyword with a competitive value, relative to the six web domains analyzed.
Separately, we took a list of our client’s product categories and brainstormed all possibilities for keyword-operative root words. Finally, we filtered the data for each root word and noted trends, such as the number of keywords that a website ranked for on Google page 1, and the sum of their MSVs.
Finally, we applied a calculation that incorporated average position, MSV, and industry click-through rates to quantify the significance of a trend. So if a competitor appeared to have a keyword ranking edge over our client in a certain subset of keywords, we could place a numerical value on that edge.
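That calculation can be sketched roughly as follows. The click-through-rate curve below is illustrative only (published industry CTR-by-position curves vary by study), and the field names are invented for the example:

```python
# Illustrative CTR-by-position curve; real industry curves vary by study.
CTR_BY_POSITION = {1: 0.28, 2: 0.15, 3: 0.11, 4: 0.08, 5: 0.07,
                   6: 0.05, 7: 0.04, 8: 0.03, 9: 0.03, 10: 0.02}

def est_traffic(msv: int, position: int) -> float:
    """Estimated monthly clicks for a keyword at a given organic position.
    Positions beyond the curve are treated as earning no clicks."""
    return msv * CTR_BY_POSITION.get(position, 0.0)

def ranking_edge(keywords: list[dict]) -> float:
    """Sum, over a keyword subset, of the competitor's estimated traffic
    minus the client's. A positive result quantifies the competitor's edge."""
    return sum(
        est_traffic(kw["msv"], kw["competitor_pos"])
        - est_traffic(kw["msv"], kw["client_pos"])
        for kw in keywords
    )
```

Summing this difference across every keyword sharing a root word turns a fuzzy “they seem to rank better” into a single number of estimated monthly clicks at stake.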
Step 3. Identify content templates
If one of your objectives is to map keyword trends to the marketing funnel, then it’s critical to understand the role of page templates. Why?
Page speed performance is a known ranking signal that should be considered. And ecommerce websites often have content templates that reflect each stage of the funnel.
In this case, all six competitors conveniently had distinct templates for top-, middle- and bottom-funnel content:
- Top-funnel templates: Text-heavy, informational content in what was commonly called “Learning Resources” or something similar;
- Middle-funnel templates: Also text-heavy, informational content about a product category, with links to products and visual content like diagrams and videos — the Product Landing Page (PLP), essentially;
- Bottom-funnel templates: Transactional, Product Detail Pages (PDP) with concise, conversion-oriented text and purchasing calls-to-action.
Step 4. Map keyword trends to the funnel
After cross-examining the root terms (Step 2), keyword ranking trends began to emerge. Now we just had to map them to their respective funnel stage.
Having identified content templates, and having the data divided by short- & long-tail made this a quicker process. Our primary focus was on trends where competitor webpages were outranking our client’s site.
Identifying content templates brought the added value of seeing where competitors, for example, outranked our client on a certain keyword because their winning webpage was built in a content-rich, optimized PLP, while our client’s lower-ranking page was a PDP.
Step 5. Rule out the off-page ranking factors
Since our goal was to identify & analyze on-page techniques, we had to rule out off-page factors like link equity and page speed. We sought cases where one page outranked another on a shared keyword, in spite of having inferior link equity, page speed scores, etc.
For all of Google’s developments in processing semantics (e.g., BERT, the Helpful Content Update) there are still cases where a page with thin text content outranks another page that has lengthier, optimized text content — by virtue of link equity.
To rule these factors out, we assigned an “SEO scorecard” to each webpage under investigation. The scorecard tallied the number of rank-signal-worthy attributes the page had in its SEO favor. This included things like Semrush’s page authority score, the number of internal vs. external inlinks, the presence and types of Schema markup, and Core Web Vitals stats.
The scorecards also included on-page factors, like the number of headers & subheaders (H1, H2, H3…), use of keywords in alt-tags, meta titles & their character counts, and even page word count. This helped give a high-level sense of on-page performance before diving into the content itself.
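A scorecard like this is straightforward to model. The attributes and weights below are illustrative placeholders, not the exact ones we used:

```python
from dataclasses import dataclass

@dataclass
class PageScorecard:
    """Tally of rank-signal-worthy attributes for one URL, with off-page
    signals grouped so they can be compared separately from on-page ones."""
    url: str
    # Off-page / technical signals
    authority_score: int = 0        # e.g., a tool's page authority metric
    internal_inlinks: int = 0
    external_inlinks: int = 0
    has_schema: bool = False
    passes_core_web_vitals: bool = False
    # On-page signals
    heading_count: int = 0          # H1 + H2 + H3 ...
    keyword_in_alt_tags: bool = False
    meta_title_length: int = 0
    word_count: int = 0

    def off_page_score(self) -> int:
        # Weights here are arbitrary for illustration.
        return (self.authority_score
                + self.internal_inlinks + self.external_inlinks
                + (10 if self.has_schema else 0)
                + (10 if self.passes_core_web_vitals else 0))

def on_page_win(winner: "PageScorecard", loser: "PageScorecard") -> bool:
    """True when the page that wins the keyword ranking has the *weaker*
    off-page scorecard -- the cases worth studying for on-page technique."""
    return winner.off_page_score() < loser.off_page_score()
```

Filtering keyword matchups through `on_page_win` leaves only the cases where on-page work, rather than link equity or page speed, plausibly explains the ranking.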
Our findings
When comparing the SEO scorecards of our client’s pages to its competitors, we only chose cases where the losing scorecard (in off-page factors) was the keyword ranking winner. Here are a few of the standout findings.
Adding H3 tags to product names really works
This month, OrangeValley’s Koen Leemans published a Semrush article titled “SEO Split Test Result: Adding H3 Tags to Products Names on Ecommerce Category Pages.” We found this study especially well-timed, as it validated what we saw in this competitive analysis.
To those versed in on-page SEO, placing keywords in <h3> HTML format (or any level of <h…> for that matter) is a wise move. Google crawls this text before it gets to the paragraph copy. It’s a known ranking signal.
When it comes to SEO-informed content planning, ecommerce clients have a tendency — coming from the best of intentions — to forsake the product name in pursuit of the perfect on-page recipe for a specific non-brand keyword. The value of the product name becomes a blind spot because the brand assumes it will outrank others on its own product names.
It’s somewhere in this thought process that an editor may, for example, decide to list product names on a PLP as bolded <p> copy, rather than as a <h3> or <h4>. This, apparently, is a missed opportunity.
More to this point, we found that this on-page tactic performed even better when the <h>-tagged product name was linked (index, follow) to its corresponding PDP, AND accompanied with a sentence description beneath the product name.
This is in contrast to the product landing page (PLP) which has ample supporting page copy, and only lists its products as hyperlinked names with no descriptive text.
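The winning pattern, versus the bolded-paragraph approach, might look like this minimal sketch (the product name, URL, and description are invented placeholders):

```html
<!-- Missed opportunity: product name as bolded paragraph copy -->
<p><strong>Example DNA Extraction Kit</strong></p>

<!-- Winning pattern: product name as a crawlable heading, linked to its
     PDP, with a one-sentence description beneath it -->
<h3><a href="/products/example-dna-extraction-kit">Example DNA Extraction Kit</a></h3>
<p>A spin-column kit for purifying genomic DNA from blood and tissue samples.</p>
```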
Word count probably matters, <h> count very likely matters
In the ecommerce space, it’s not uncommon to find PLPs that have not been visited by the content fairy. A storyless grid of images and product names.
Yet, in every case where two PLPs of this variety went toe-to-toe over the same keyword, the sheer number of <h> tags seemed to be the only on-page factor that ranked one PLP above its competitors’ PLPs, which themselves had higher link equity.
The takeaway here is that if you know you won’t have time to touch up your PLPs with landing copy, you should at least set all product names to <h> tags that are hyperlinked, and increase the number of them (e.g., set the page to load 6 rows of products instead of 4).
And word count? Although Google’s John Mueller confirmed that word count is not a ranking factor for the search algorithm, this topic is debated. We cannot venture anything conclusive about word count from our competitive analyses. What we can say is that it’s a component of our finding that…
Defining the entire topic with your content wins
Backlinko’s Brian Dean ventured and proved the radical notion that you can optimize a single webpage to rank for not the usual 2 or 3 target keywords, but hundreds of them. That is, if your copy encompasses everything about the topic that unites those hundreds of keywords.
That practice may work in long-form content marketing but is a little less applicable in ecommerce settings. The alternative to this is to create a body of pages that are all interlinked deliberately and logically (from a UX standpoint) and that cover every aspect of the topic at hand.
This content should address the questions that people have at each stage of the awareness-to-purchase cycle (i.e., the funnel). It should define niche terminology and spell out acronyms. It should be accessible.
In one stand-out case from our analysis, a competitor page held position 1 for a lucrative keyword, while our client’s site and that of the other competitors couldn’t even muster a page 1 ranking. All six websites were addressing the keyword head-on, arguably, in all the right ways. And they had superior link equity.
What did the winner have that the rest did not? It happened that in this lone instance, its product was being marketed to a high-school teacher/administrator audience, rather than a PhD-level, corporate, governmental or university scientist. By this virtue alone, their marketing copy was far more layman-accessible, and, apparently, Google approved too.
The takeaway is not to dumb-down the necessary jargon of a technical industry. But it highlights the need to tell every part of the story within a topic vertical.
Conclusion: Findings-to-action
SEO bloggers who specialize in biotech and scientific industries commonly emphasize a top-down, topical-takeover approach to content planning.
I came across these posts after completing this competitive analysis for our client. This topic-takeover emphasis was validating because the “Findings-To-Action” section of our study prescribed something similar:
Map topics to the funnel. Prior to keyword research, map broad topics & subtopics to their respective places in the informational & consumer funnel. Within each topic vertical, identify:
- Questions-to-ask & problems-to-solve at each funnel stage
- Keyword opportunities that roll up to those respective stages
- How many pages should be planned to rank for those keywords
- The website templates that best accommodate this content
- The header & internal linking strategy between those pages
The need to appeal to two audiences is especially pronounced in scientific industries, compared with more common-language ones. One is the AI-driven audience of search engine bots that scour this complex semantic terrain for symmetry of clues and meaning. The other is human, of course, but with a mind that has already mastered this symmetry and is highly capable of discerning it.
To make the most efficient use of time and user experience, content planning and delivery need to be highly organized. The age-old marketing funnel concept works especially well as an organizing model. The rest is the rigor of applying this full-topic-coverage, content approach.
Zach Wales is Sr. SEO Strategist at Brainlabs.
How To Stop Filter Results From Eating Crawl Budget
Today’s Ask An SEO question comes from Michal in Bratislava, who asks:
“I have a client who has a website with filters based on map locations. When the visitor makes a move on the map, a new URL with filters is created. They are not in the sitemap. However, there are over 700,000 URLs in the Search Console (not indexed) and eating crawl budget.
What would be the best way to get rid of these URLs? My idea is keep the base location ‘index, follow’ and newly created URLs of surrounded area with filters switch to ‘noindex, no follow’. Also mark surrounded areas with canonicals to the base location + disavow the unwanted links.”
Great question, Michal, and good news! The answer is an easy one to implement.
First, let’s look at what you’re trying to do and apply it to other situations, like ecommerce and publishers, so more people can benefit. Then we’ll go into your strategies above and end with the solution.
What Crawl Budget Is And How Parameters Are Created That Waste It
If you’re not sure what Michal is referring to with crawl budget, this is a term some SEO pros use for the fact that Google and other search engines will only crawl so many pages on your website before they stop.
If your crawl budget is used on low-value, thin, or non-indexable pages, your good pages and new pages may not be found in a crawl.
If they’re not found, they may not get indexed or refreshed. If they’re not indexed, they cannot bring you SEO traffic.
This is why optimizing a crawl budget for efficiency is important.
Michal shared an example of how “thin” URLs from an SEO point of view are created as customers use filters.
The experience for the user is value-adding, but from an SEO standpoint, a location-based page would be better. This applies to ecommerce and publishers, too.
Ecommerce stores will have searches for colors like red or green and products like t-shirts and potato chips.
These create URLs with parameters just like a filter search for locations. They could also be created by using filters for size, gender, color, price, variation, compatibility, etc. in the shopping process.
The filtered results help the end user but compete directly with the collection page, and the collection would be the “non-thin” version.
Publishers have the same. Someone might be on SEJ looking for SEO or PPC in the search box and get a filtered result. The filtered result will have articles, but the category of the publication is likely the best result for a search engine.
These filtered results can get indexed when they’re shared on social media, or when someone adds one as a comment on a blog or forum, creating a crawlable backlink. It might also be that a customer service employee responded to a question on the company blog, or any number of other ways.
The goal now is to make sure search engines don’t spend time crawling the “thin” versions so you can get the most from your crawl budget.
The Difference Between Indexing And Crawling
There’s one more thing to learn before we go into the proposed ideas and solutions – the difference between indexing and crawling.
- Crawling is the discovery of new pages within a website.
- Indexing is adding the pages worth showing to searchers into the search engine’s database of pages.
Pages can get crawled but not indexed. Indexed pages have likely been crawled and will likely get crawled again to look for updates and server responses.
But not all indexed pages will bring in traffic or hit the first page because they may not be the best possible answer for queries being searched.
Now, let’s go into making efficient use of crawl budgets for these types of solutions.
Using Meta Robots Or X Robots
The first solution Michal pointed out was an “index,follow” directive. This tells a search engine to index the page and follow the links on it. This is a good idea, but only if the filtered result is the ideal experience.
From what I can see, this would not be the case, so I would recommend making it “noindex,follow.”
Noindex would say, “This is not an official page, but hey, keep crawling my site, you’ll find good pages in here.”
And if you have your main menu and navigational internal links done correctly, the spider will hopefully keep crawling them.
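A minimal sketch of that “noindex,follow” recommendation looks like this; the HTTP-header variant assumes you can configure response headers on your server:

```html
<!-- In the <head> of each filtered-result URL -->
<meta name="robots" content="noindex, follow">

<!-- Or, for responses where you can't edit the HTML, send an
     HTTP header instead:  X-Robots-Tag: noindex, follow -->
```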
Canonicals To Solve Wasted Crawl Budget
Canonical links are used to help search engines know what the official page to index is.
If a product exists in three categories on three separate URLs, only one should be “the official” version, so the two duplicates should have a canonical pointing to the official version. The official one should have a canonical link that points to itself. This applies to the filtered locations.
If the location search would result in multiple city or neighborhood pages, the result would likely be a duplicate of the official one you have in your sitemap.
If the content on the page stays the same as the original category, have the filtered results point a canonical back to the main filtering page instead of being self-referencing.
If the content pulls in your localized page with the same locations, point the canonical to that page instead.
In most cases, the filtered version inherits the page you searched or filtered from, so that is where the canonical should point to.
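A minimal sketch of that canonical, using invented placeholder URLs for Michal’s map-filter scenario:

```html
<!-- On a filtered result such as /locations?area=north&radius=5km,
     point back to the base location page it was filtered from -->
<link rel="canonical" href="https://www.example.com/locations/bratislava">
```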
If you do both noindex and have a self-referencing canonical, which is overkill, it becomes a conflicting signal.
The same applies to when someone searches for a product by name on your website. The search result may compete with the actual product or service page.
With this solution, you’re telling the spider not to index this page because it isn’t worth indexing, but it is also the official version. It doesn’t make sense to do this.
Instead, use a canonical link, as I mentioned above, or noindex the result and point the canonical to the official version.
Disavow To Increase Crawl Efficiency
Disavowing doesn’t have anything to do with crawl efficiency unless the search engine spiders are finding your “thin” pages through spammy backlinks.
The disavow tool from Google is a way to say, “Hey, these backlinks are spammy, and we don’t want them to hurt us. Please don’t count them towards our site’s authority.”
In most cases, it doesn’t matter, as Google is good at detecting spammy links and ignoring them.
You do not want to add your own site and your own URLs to the disavow tool. You’re telling Google your own site is spammy and not worth anything.
Plus, submitting backlinks to disavow won’t prevent a spider from seeing what you want and do not want to be crawled, as it is only for saying a link from another site is spammy.
Disavowing won’t help with crawl efficiency or saving crawl budget.
How To Make Crawl Budgets More Efficient
The answer is robots.txt. This is how you tell specific search engines and spiders what to crawl.
You can include the folders you want them to crawl by marking them as “allow,” and you can say “disallow” on filtered results by disallowing the “?” or “&” symbol, or whichever you use.
If some of those parameters should be crawled, add the main word like “?filter=location” or a specific parameter.
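Put together, a robots.txt along these lines would block the filtered URLs while carving out the one parameter worth crawling (the `filter=location` parameter is a placeholder; for Google, the longer, more specific rule takes precedence over the shorter one):

```
User-agent: *
# Block any URL containing a query string (the filtered results)
Disallow: /*?

# Carve out a parameter that should still be crawled; the longer
# (more specific) Allow rule wins over the shorter Disallow
Allow: /*?filter=location
```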
Robots.txt is how you define crawl paths and work on crawl efficiency. Once you’ve optimized that, look at your internal links: the links from one page on your site to another.
These help spiders find your most important pages while learning what each is about.
Internal links include:
- Breadcrumbs.
- Menu navigation.
- Links within content to other pages.
- Sub-category menus.
- Footer links.
You can also use a sitemap if you have a large site, and the spiders are not finding the pages you want with priority.
I hope this helps answer your question. It is one I get a lot – you’re not the only one stuck in that situation.
Featured Image: Paulo Bobita/Search Engine Journal
Ad Copy Tactics Backed By Study Of Over 1 Million Google Ads
Mastering effective ad copy is crucial for achieving success with Google Ads.
Yet, the PPC landscape can make it challenging to discern which optimization techniques truly yield results.
Although various perspectives exist on optimizing ads, few are substantiated by comprehensive data. A recent study from Optmyzr attempted to address this.
The goal isn’t to promote or dissuade any specific method but to provide a clearer understanding of how different creative decisions impact your campaigns.
Use the data to help you identify higher profit probability opportunities.
Methodology And Data Scope
The Optmyzr study analyzed data from over 22,000 Google Ads accounts that have been active for at least 90 days with a minimum monthly spend of $1,500.
Across more than a million ads, we assessed Responsive Search Ads (RSAs), Expanded Text Ads (ETAs), and Demand Gen campaigns. Due to API limitations, we could not retrieve asset-level data for Performance Max campaigns.
Additionally, all monetary figures were converted to USD to standardize comparisons.
Key Questions Explored
To provide actionable insights, we focused on addressing the following questions:
- Is there a correlation between Ad Strength and performance?
- How do pinning assets impact ad performance?
- Do ads written in title case or sentence case perform better?
- How does creative length affect ad performance?
- Can ETA strategies effectively translate to RSAs and Demand Gen ads?
As we evaluated the results, it’s important to note that our data set represents advanced marketers.
This means there may be selection bias, and these insights might differ in a broader advertiser pool with varying levels of experience.
The Relationship Between Ad Strength And Performance
Google explicitly states that Ad Strength is a tool designed to guide ad optimization rather than act as a ranking factor.
Despite this, marketers often hold mixed opinions about its usefulness, as its role in ad performance appears inconsistent.
Our data corroborates this skepticism. Ads labeled with an “average” Ad Strength score outperformed those with “good” or “excellent” scores in key metrics like CPA, conversion rate, and ROAS.
This disparity is particularly evident in RSAs, where the ROAS tends to decrease sharply when moving from “average” to “good,” with only a marginal increase when advancing to “excellent.”
Interestingly, Demand Gen ads also showed a stronger performance with an “average” Ad Strength, except for ROAS.
The metrics for conversion rates in Demand Gen and RSAs were notably similar, which is surprising since Demand Gen ads are typically designed for awareness, while RSAs focus on driving transactions.
Key Takeaways:
- Ad Strength doesn’t reliably correlate with performance, so it shouldn’t be a primary metric for assessing your ads.
- Most ads with “poor” or “average” Ad Strength labels perform well by standard advertising KPIs.
- “Good” or “excellent” Ad Strength labels do not guarantee better performance.
How Does Pinning Affect Ad Performance?
Pinning refers to locking specific assets like headlines or descriptions in fixed positions within the ad. This technique became common with RSAs, but there’s ongoing debate about its efficacy.
Some advertisers advocate for pinning all assets to replicate the control offered by ETAs, while others prefer to let Google optimize placements automatically.
Our data suggests that pinning some, but not all, assets offers the most balanced results in terms of CPA, ROAS, and CPC. However, ads where all assets are pinned achieve the highest relevance in terms of CTR.
Still, this marginally higher CTR doesn’t necessarily translate into better conversion metrics. Ads with unpinned or partially pinned assets generally perform better in terms of conversion rates and cost-based metrics.
Key Takeaways:
- Selective pinning is optimal, offering a good balance between creative control and automation.
- Fully pinned ads may increase CTR but tend to underperform in metrics like CPA and ROAS.
- Advertisers should embrace RSAs, as they consistently outperform ETAs – even with fully pinned assets.
Title Case Vs. Sentence Case: Which Performs Better?
The choice between title case (“This Is a Title Case Sentence”) and sentence case (“This is a sentence case sentence”) is often a point of contention among advertisers.
Our analysis revealed a clear trend: Ads using sentence case generally outperformed those in title case, particularly in RSAs and Demand Gen campaigns.
(RSA Data)
(ETA Data)
(Demand Gen)
ROAS, in particular, showed a marked preference for sentence case across these ad types, suggesting that a more natural, conversational tone may resonate better with users.
Interestingly, many advertisers still use a mix of title and sentence case within the same account, which counters the traditional approach of maintaining consistency throughout the ad copy.
Key Takeaways:
- Sentence case outperforms title case in RSAs and Demand Gen ads on most KPIs.
- Including sentence case ads in your testing can improve performance, as it aligns more closely with organic results, which users perceive as higher quality.
- Although ETAs perform slightly better with title case, sentence case is increasingly the preferred choice in modern ad formats.
The Impact Of Ad Length On Performance
Ad copy, particularly for Google Ads, requires brevity without sacrificing impact.
We analyzed the effects of character count on ad performance, grouping ads by the length of headlines and descriptions.
(RSA Data)
(ETA Data)
(Demand Gen Data)
Interestingly, shorter headlines tend to outperform longer ones in CTR and conversion rates, while descriptions benefit from moderate length.
Ads that tried to maximize character counts by using dynamic keyword insertion (DKI) or customizers often saw no significant performance improvement.
Moreover, applying ETA strategies to RSAs proved largely ineffective.
In almost all cases, advertisers who carried over ETA tactics to RSAs saw a decline in performance, likely because of how Google dynamically assembles ad components for display.
Key Takeaways:
- Shorter headlines lead to better performance, especially in RSAs.
- Focus on concise, impactful messaging instead of trying to fill every available character.
- ETA tactics do not translate well to RSAs, and attempting to replicate them can hurt performance.
Final Thoughts On Ad Optimizations
In summary, several key insights emerge from this analysis.
First, Ad Strength should not be your primary focus when assessing performance. Instead, concentrate on creating relevant, engaging ad copy tailored to your target audience.
Additionally, pinning assets should be a strategic, creative decision rather than a hard rule, and advertisers should incorporate sentence case into their testing for RSAs and Demand Gen ads.
Finally, focus on quality over quantity in ad copy length, as longer ads do not always equate to better results.
By refining these elements of your ads, you can drive better ROI and adapt to the evolving landscape of Google Ads.
Read the full Ad Strength & Creative Study from Optmyzr.
Featured Image: Sammby/Shutterstock
Bing Expands Generative Search Capabilities For Complex Queries
Microsoft has announced an expansion of Bing’s generative search capabilities.
The update focuses on handling complex, informational queries.
Bing provides examples such as “how to effectively run a one-on-one” and “how can I remove background noise from my podcast recordings.”
Searchers in the United States can access the new features by typing “Bing generative search” into the search bar. This will present a carousel of sample queries.
A “Deep search” button on the results page activates the generative search function for other searches.
Beta Release and Potential Challenges
It’s important to note that this feature is in beta.
Bing acknowledges that you may experience longer loading times as the system works to ensure accuracy and relevance.
The announcement reads:
“While we’re excited to give you this opportunity to explore generative search firsthand, this experience is still being rolled out in beta. You may notice a bit of loading time as we work to ensure generative search results are shown when we’re confident in their accuracy and relevancy, and when it makes sense for the given query. You will generally see generative search results for informational and complex queries, and it will be indicated under the search box with the sentence “Results enhanced with Bing generative search” …”
This is the waiting screen you get after clicking on “Deep search.”
In practice, I found the wait was long and sometimes the searches would fail before completing.
The ideal way to utilize this search experience is to click on the suggestions provided after entering “Bing generative search” into the search bar.
Potential Impact
Bing’s generative search results include citations and links to original sources.
This approach is intended to drive traffic to publishers, but it remains to be seen how effective this will be in practice.
Bing encourages users to provide feedback on the new feature using thumbs up/down icons or the dedicated feedback button.
See also: Google AIO Is Ranking More Niche Specific Sites
Looking Ahead
This development comes as search engines increasingly use AI to enhance their capabilities.
As Bing rolls out this expanded generative search feature, remember the technology is still in beta, so performance and accuracy may vary.
Featured Image: JarTee/Shutterstock