Google Bard: Everything You Need To Know

Google has just released Bard, its answer to ChatGPT, and users are getting to know it to see how it compares to OpenAI’s artificial intelligence-powered chatbot.

The name ‘Bard’ is purely marketing-driven, as there are no algorithms named Bard, but we do know that the chatbot is powered by LaMDA.

Here is everything we know about Bard so far and some interesting research that may offer an idea of the kind of algorithms that may power Bard.

What Is Google Bard?

Bard is an experimental Google chatbot that is powered by the LaMDA large language model.

It’s a generative AI that accepts prompts and performs text-based tasks like providing answers and summaries and creating various forms of content.

Bard also assists in exploring topics by summarizing information found on the internet and providing links to websites with more information.

Why Did Google Release Bard?

Google released Bard after the wildly successful launch of OpenAI’s ChatGPT, which created the perception that Google was falling behind technologically.

ChatGPT was perceived as a revolutionary technology with the potential to disrupt the search industry and shift the balance of power away from Google search and the lucrative search advertising business.

On December 21, 2022, three weeks after the launch of ChatGPT, the New York Times reported that Google had declared a “code red” to quickly define its response to the threat posed to its business model.

Forty-seven days after the code red strategy adjustment, Google announced the launch of Bard on February 6, 2023.

What Was The Issue With Google Bard?

The announcement of Bard was a stunning failure because the demo that was meant to showcase Google’s chatbot AI contained a factual error.

The inaccuracy of Google’s AI turned what was meant to be a triumphant return to form into a humbling pie in the face.

Google’s shares subsequently lost a hundred billion dollars in market value in a single day, reflecting a loss of confidence in Google’s ability to navigate the looming era of AI.

How Does Google Bard Work?

Bard is powered by a “lightweight” version of LaMDA.

LaMDA is a large language model that is trained on datasets consisting of public dialogue and web data.

There are two important factors related to the training described in the associated research paper, which you can download as a PDF here: LaMDA: Language Models for Dialog Applications (read the abstract here).

  • A. Safety: The model achieves a level of safety by tuning it with data that was annotated by crowd workers.
  • B. Groundedness: LaMDA grounds itself factually with external knowledge sources (through information retrieval, which is search).

The LaMDA research paper states:

“…factual grounding, involves enabling the model to consult external knowledge sources, such as an information retrieval system, a language translator, and a calculator.

We quantify factuality using a groundedness metric, and we find that our approach enables the model to generate responses grounded in known sources, rather than responses that merely sound plausible.”

Google used three metrics to evaluate the LaMDA outputs:

  1. Sensibleness: A measurement of whether an answer makes sense or not.
  2. Specificity: Measures whether the answer is contextually specific rather than generic or vague.
  3. Interestingness: This metric measures if LaMDA’s answers are insightful or inspire curiosity.

All three metrics were judged by crowdsourced raters, and that data was fed back into the machine to keep improving it.
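As a rough illustration only (this is not Google's actual pipeline), the sketch below shows how crowd ratings on those three metrics might be aggregated before responses are fed back as training data. The data, function names, and the 0.5 threshold are all invented for the example.

from statistics import mean

# Hypothetical crowd ratings: (response_id, sensibleness, specificity, interestingness),
# one row per rater, each judgment recorded as 0 or 1.
ratings = [
    ("r1", 1, 1, 1),
    ("r1", 1, 1, 0),
    ("r2", 0, 0, 1),
]

def aggregate(rows):
    # Group the per-rater judgments by response and average each metric.
    by_response = {}
    for rid, sens, spec, inter in rows:
        by_response.setdefault(rid, []).append((sens, spec, inter))
    return {
        rid: {
            "sensibleness": mean(v[0] for v in votes),
            "specificity": mean(v[1] for v in votes),
            "interestingness": mean(v[2] for v in votes),
        }
        for rid, votes in by_response.items()
    }

# Keep only responses that most raters judged both sensible and specific,
# as candidate data for further tuning.
kept = {rid: s for rid, s in aggregate(ratings).items()
        if s["sensibleness"] > 0.5 and s["specificity"] > 0.5}
print(kept)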

The LaMDA research paper concludes by stating that crowdsourced reviews and the system’s ability to fact-check with a search engine were useful techniques.

Google’s researchers wrote:

“We find that crowd-annotated data is an effective tool for driving significant additional gains.

We also find that calling external APIs (such as an information retrieval system) offers a path towards significantly improving groundedness, which we define as the extent to which a generated response contains claims that can be referenced and checked against a known source.”

How Is Google Planning To Use Bard In Search?

The future of Bard is currently envisioned as a feature in search.

Google’s announcement in February was insufficiently specific on how Bard would be implemented.

The key details were buried in a single paragraph close to the end of the blog announcement of Bard, where it was described as an AI feature in search.

That lack of clarity fueled the perception that Bard would be integrated into search, which was never the case.

Google’s February 2023 announcement of Bard states that Google will at some point integrate AI features into search:

“Soon, you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web: whether that’s seeking out additional perspectives, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner.

These new AI features will begin rolling out on Google Search soon.”

It’s clear that Bard is not search. Rather, it is intended to be a feature in search, not a replacement for it.

What Is A Search Feature?

A feature is something like Google’s Knowledge Panel, which provides knowledge information about notable people, places, and things.

Google’s “How Search Works” webpage about features explains:

“Google’s search features ensure that you get the right information at the right time in the format that’s most useful to your query.

Sometimes it’s a webpage, and sometimes it’s real-world information like a map or inventory at a local store.”

In an internal meeting at Google (reported by CNBC), employees questioned the use of Bard in search.

One employee pointed out that large language models like ChatGPT and Bard are not fact-based sources of information.

The Google employee asked:

“Why do we think the big first application should be search, which at its heart is about finding true information?”

Jack Krawczyk, the product lead for Google Bard, answered:

“I just want to be very clear: Bard is not search.”

At the same internal event, Google’s Vice President of Engineering for Search, Elizabeth Reid, reiterated that Bard is not search.

She said:

“Bard is really separate from search…”

What we can confidently conclude is that Bard is not a new iteration of Google search. It is a feature.

Bard Is An Interactive Method For Exploring Topics

Google’s announcement of Bard was fairly explicit that Bard is not search. This means that, while search surfaces links to answers, Bard helps users investigate knowledge.

The announcement explains:

“When people think of Google, they often think of turning to us for quick factual answers, like ‘how many keys does a piano have?’

But increasingly, people are turning to Google for deeper insights and understanding – like, ‘is the piano or guitar easier to learn, and how much practice does each need?’

Learning about a topic like this can take a lot of effort to figure out what you really need to know, and people often want to explore a diverse range of opinions or perspectives.”

It may be helpful to think of Bard as an interactive method for accessing knowledge about topics.

Bard Samples Web Information

The problem with large language models is that they mimic answers, which can lead to factual errors.

The researchers who created LaMDA state that approaches like increasing the size of the model can help it gain more factual information.

But they noted that this approach fails in areas where facts change over time, which researchers refer to as the “temporal generalization problem.”

Freshness, in the sense of timely information, cannot be trained into a static language model.

The solution that LaMDA pursued was to query information retrieval systems. An information retrieval system is a search engine, so LaMDA checks search results.

This LaMDA capability appears to be a feature of Bard as well.

The Google Bard announcement explains:

“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of our large language models.

It draws on information from the web to provide fresh, high-quality responses.”

Screenshot of a Google Bard Chat, March 2023

LaMDA and (possibly by extension) Bard achieve this with what is called the toolset (TS).

The toolset is explained in the LaMDA research paper:

“We create a toolset (TS) that includes an information retrieval system, a calculator, and a translator.

TS takes a single string as input and outputs a list of one or more strings. Each tool in TS expects a string and returns a list of strings.

For example, the calculator takes “135+7721”, and outputs a list containing [“7856”]. Similarly, the translator can take “hello in French” and output [‘Bonjour’].

Finally, the information retrieval system can take ‘How old is Rafael Nadal?’, and output [‘Rafael Nadal / Age / 35’].

The information retrieval system is also capable of returning snippets of content from the open web, with their corresponding URLs.

The TS tries an input string on all of its tools, and produces a final output list of strings by concatenating the output lists from every tool in the following order: calculator, translator, and information retrieval system.

A tool will return an empty list of results if it can’t parse the input (e.g., the calculator cannot parse ‘How old is Rafael Nadal?’), and therefore does not contribute to the final output list.”
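To make that flow concrete, here is a minimal sketch of the dispatch logic described in the quote. This is not LaMDA's actual code: each tool below is a toy stand-in, and only the try-every-tool-then-concatenate behavior (in the order calculator, translator, information retrieval) follows the paper's description.

def calculator(query: str) -> list[str]:
    # Stand-in for a real calculator: accept simple arithmetic such as "135+7721".
    try:
        return [str(eval(query, {"__builtins__": {}}, {}))]
    except Exception:
        return []  # Cannot parse the input, so contribute nothing.

def translator(query: str) -> list[str]:
    # Stand-in for a real translation system.
    phrasebook = {"hello in French": ["Bonjour"]}
    return phrasebook.get(query, [])

def information_retrieval(query: str) -> list[str]:
    # Stand-in for a search system that returns facts or web snippets with URLs.
    index = {"How old is Rafael Nadal?": ["Rafael Nadal / Age / 35"]}
    return index.get(query, [])

def toolset(query: str) -> list[str]:
    # Try the input on every tool and concatenate the non-empty outputs
    # in the paper's order: calculator, translator, information retrieval.
    results = []
    for tool in (calculator, translator, information_retrieval):
        results.extend(tool(query))
    return results

print(toolset("135+7721"))                  # ['7856']
print(toolset("hello in French"))           # ['Bonjour']
print(toolset("How old is Rafael Nadal?"))  # ['Rafael Nadal / Age / 35']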

Here’s a Bard response with a snippet from the open web:

Screenshot of a Google Bard Chat, March 2023

Conversational Question-Answering Systems

There are no research papers that mention the name “Bard.”

However, there is quite a bit of recent research related to AI, including by scientists associated with LaMDA, that may have an impact on Bard.

The following doesn’t claim that Google is using these algorithms. We can’t say for certain that any of these technologies are used in Bard.

The value in knowing about these research papers is in knowing what is possible.

The following are algorithms relevant to AI-based question-answering systems.

One of the authors of LaMDA worked on a project that’s about creating training data for a conversational information retrieval system.

You can download the 2022 research paper as a PDF here: Dialog Inpainting: Turning Documents into Dialogs (and read the abstract here).

The problem with training a system like Bard is that question-and-answer datasets (such as those built from questions and answers posted on Reddit) are limited to reflecting how people on Reddit behave.

They don’t encompass how people outside that environment behave, the kinds of questions they would ask, or what the correct answers to those questions would be.

The researchers explored creating a system that reads webpages, then uses a “dialog inpainter” to predict what questions could be answered by any given passage within what the machine was reading.

A passage in a trustworthy Wikipedia webpage that says, “The sky is blue,” could be turned into the question, “What color is the sky?”
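As a rough, hypothetical sketch of that idea: treat each sentence of a document as an answer and ask a model to “inpaint” the question a reader might have asked just before it. In the paper the question generator is a dialog language model; in the sketch below it is a toy stub, and the passage is invented.

def generate_question(answer_sentence: str) -> str:
    # Stand-in for the dialog inpainter model.
    canned = {"The sky is blue.": "What color is the sky?"}
    return canned.get(answer_sentence, "Tell me more about this topic.")

def document_to_dialog(sentences: list[str]) -> list[tuple[str, str]]:
    # Pair each document sentence with an imagined question,
    # producing (question, answer) training examples.
    return [(generate_question(s), s) for s in sentences]

passage = ["The sky is blue.", "Its color comes from the scattering of sunlight."]
for question, answer in document_to_dialog(passage):
    print(question, "->", answer)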

The researchers created their own dataset of questions and answers using Wikipedia and other webpages. They called the datasets WikiDialog and WebDialog.

  • WikiDialog is a set of questions and answers derived from Wikipedia data.
  • WebDialog is a dataset derived from webpage dialog on the internet.

These new datasets are 1,000 times larger than existing datasets. That matters because it gives conversational language models an opportunity to learn more.

The researchers reported that this new dataset helped to improve conversational question-answering systems by over 40%.

The research paper describes the success of this approach:

“Importantly, we find that our inpainted datasets are powerful sources of training data for ConvQA systems…

When used to pre-train standard retriever and reranker architectures, they advance state-of-the-art across three different ConvQA retrieval benchmarks (QRECC, OR-QUAC, TREC-CAST), delivering up to 40% relative gains on standard evaluation metrics…

Remarkably, we find that just pre-training on WikiDialog enables strong zero-shot retrieval performance—up to 95% of a finetuned retriever’s performance—without using any in-domain ConvQA data.”

Is it possible that Google Bard was trained using the WikiDialog and WebDialog datasets?

It’s difficult to imagine a scenario where Google would pass on training a conversational AI on a dataset that is over 1,000 times larger.

But we don’t know for certain because Google doesn’t often comment on its underlying technologies in detail, except on rare occasions like for Bard or LaMDA.

Large Language Models That Link To Sources

Google recently published an interesting research paper about a way to make large language models cite the sources for their information. The initial version of the paper was published in December 2022, and the second version was updated in February 2023.

This technology is referred to as experimental as of December 2022.

You can download the PDF of the paper here: Attributed Question Answering: Evaluation and Modeling for Attributed Large Language Models (read the Google abstract here).

The research paper states the intent of the technology:

“Large language models (LLMs) have shown impressive results while requiring little or no direct supervision.

Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios.

We believe the ability of an LLM to attribute the text that it generates is likely to be crucial in this setting.

We formulate and study Attributed QA as a key first step in the development of attributed LLMs.

We propose a reproducible evaluation framework for the task and benchmark a broad set of architectures.

We take human annotations as a gold standard and show that a correlated automatic metric is suitable for development.

Our experimental work gives concrete answers to two key questions (How to measure attribution?, and How well do current state-of-the-art methods perform on attribution?), and give some hints as to how to address a third (How to build LLMs with attribution?).”

This kind of large language model can be trained to answer with supporting documentation that, theoretically, assures the response is based on a verifiable source.

The research paper explains:

“To explore these questions, we propose Attributed Question Answering (QA). In our formulation, the input to the model/system is a question, and the output is an (answer, attribution) pair where answer is an answer string, and attribution is a pointer into a fixed corpus, e.g., of paragraphs.

The returned attribution should give supporting evidence for the answer.”
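A minimal sketch of that output format is below; the field names and the tiny corpus are illustrative assumptions, not the paper's exact data structures.

from dataclasses import dataclass

@dataclass
class AttributedAnswer:
    question: str
    answer: str       # the answer string
    attribution: str  # pointer to a passage in a fixed corpus

# A toy "fixed corpus" of passages keyed by identifier.
corpus = {
    "wiki:piano#keys": "A standard modern piano has 88 keys.",
}

result = AttributedAnswer(
    question="How many keys does a piano have?",
    answer="88",
    attribution="wiki:piano#keys",
)

# The attribution lets a reader (or an automatic metric) check the claim
# against the cited passage.
print(result.answer, "- supported by:", corpus[result.attribution])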

This technology is specifically for question-answering tasks.

The goal is to create better answers – something that Google would understandably want for Bard.

  • Attribution allows users and developers to assess the “trustworthiness and nuance” of the answers.
  • Attribution allows developers to quickly review the quality of the answers since the sources are provided.

One interesting note is a new technology called AutoAIS that strongly correlates with human raters.

In other words, this technology can automate the work of human raters and scale the process of rating the answers given by a large language model (like Bard).

The researchers share:

“We consider human rating to be the gold standard for system evaluation, but find that AutoAIS correlates well with human judgment at the system level, offering promise as a development metric where human rating is infeasible, or even as a noisy training signal.”

This technology is experimental; it’s probably not in use. But it does show one of the directions that Google is exploring for producing trustworthy answers.

Research Paper On Editing Responses For Factuality

Lastly, there’s a remarkable technology developed at Cornell University (also dating from the end of 2022) that explores a different way to provide attribution for what a large language model outputs, and that can even edit an answer to correct itself.

Cornell University (like Stanford University) licenses technology related to search and other areas, earning millions of dollars per year.

It’s good to keep up with university research because it shows what is possible and what is cutting-edge.

You can download a PDF of the paper here: RARR: Researching and Revising What Language Models Say, Using Language Models (and read the abstract here).

The abstract explains the technology:

“Language models (LMs) now excel at many tasks such as few-shot learning, question answering, reasoning, and dialog.

However, they sometimes generate unsupported or misleading content.

A user cannot easily determine whether their outputs are trustworthy or not, because most LMs do not have any built-in mechanism for attribution to external evidence.

To enable attribution while still preserving all the powerful advantages of recent generation models, we propose RARR (Retrofit Attribution using Research and Revision), a system that 1) automatically finds attribution for the output of any text generation model and 2) post-edits the output to fix unsupported content while preserving the original output as much as possible.

…we find that RARR significantly improves attribution while otherwise preserving the original input to a much greater degree than previously explored edit models.

Furthermore, the implementation of RARR requires only a handful of training examples, a large language model, and standard web search.”
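Here is a conceptual sketch of that two-stage flow: research the generated text by searching for evidence, then revise unsupported claims while preserving as much of the original as possible. The search and revision steps below are toy stubs, not the RARR models, and the example claim is invented.

def find_evidence(claim: str) -> list[str]:
    # Stand-in for a web search over the claim.
    fake_results = {
        "Rafael Nadal is 40 years old.": ["Rafael Nadal / Age / 35"],
    }
    return fake_results.get(claim, [])

def revise(claim: str, evidence: list[str]) -> str:
    # Stand-in for the LM-based editor: change the claim only where
    # the evidence contradicts it, otherwise leave it untouched.
    if evidence and "35" in evidence[0] and "40" in claim:
        return claim.replace("40", "35")
    return claim

def rarr_sketch(generated_text: str) -> tuple[str, list[str]]:
    claims = [s.strip() + "." for s in generated_text.split(".") if s.strip()]
    attributions, revised = [], []
    for claim in claims:
        evidence = find_evidence(claim)
        attributions.extend(evidence)
        revised.append(revise(claim, evidence))
    return " ".join(revised), attributions

text, sources = rarr_sketch("Rafael Nadal is 40 years old.")
print(text)     # the post-edited claim
print(sources)  # the evidence used as attribution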

How Do I Get Access To Google Bard?

Google is currently accepting new users to test Bard, which is labeled as experimental. Google is rolling out access to Bard at bard.google.com.

Screenshot from bard.google.com, March 2023

Google is on the record saying that Bard is not search, which should reassure those who feel anxiety about the dawn of AI.

We are at a turning point that is unlike any we’ve seen in, perhaps, a decade.

Understanding Bard is helpful to anyone who publishes on the web or practices SEO because it shows the limits of what is possible and the direction the technology may take.


Featured Image: Whyredphotographor/Shutterstock




Top Priorities, Challenges, And Opportunities

The world of search has seen massive change recently. Whether you’re still in the planning stages for this year or underway with your 2024 strategy, you need to know the new SEO trends to stay ahead of seismic search industry shifts.

It’s time to chart a course for SEO success in this changing landscape.

Watch this on-demand webinar as we explore exclusive survey data from today’s top SEO professionals and digital marketers to inform your strategy this year. You’ll also learn how to navigate SEO in the era of AI, and how to gain an advantage with these new tools.

You’ll hear:

  • The top SEO priorities and challenges for 2024.
  • The role of AI in SEO – how to get ahead of the anticipated disruption of SGE and AI overall, plus SGE-specific SEO priorities.
  • Winning SEO resourcing strategies and reporting insights to fuel success.

With Shannon Vize and Ryan Maloney, we’ll take a deep dive into the top trends, priorities, and challenges shaping the future of SEO.

Discover timely insights and unlock new SEO growth potential in 2024.

View the slides below or check out the full webinar for all the details.

Join Us For Our Next Webinar!

10 Successful Ways To Improve Your SERP Rankings [With Ahrefs]

Reserve your spot and discover 10 quick and easy SEO wins to boost your site’s rankings.


E-E-A-T’s Google Ranking Influence Decoded

The idea that something is not a ranking factor yet nevertheless plays a role in ranking websites seems logically irreconcilable. It sounds like a paradox that cancels itself out, but SearchLiaison recently tweeted some comments that go a long way toward explaining how to think about E-E-A-T and apply it to SEO.

What A Googler Said About E-E-A-T

Marie Haynes published a video excerpt on YouTube from an event at which a Googler spoke, essentially doubling down on the importance of E-A-T.

This is what he said:

“You know this hasn’t always been there in Google and it’s something that we developed about ten to twelve or thirteen years ago. And it really is there to make sure that along the lines of what we talked about earlier is that it really is there to ensure that the content that people consume is going to be… it’s not going to be harmful and it’s going to be useful to the user. These are principles that we live by every single day.

And E-A-T, that template of how we rate an individual site based off of Expertise, Authoritativeness and Trustworthiness, we do it to every single query and every single result. So it’s actually very pervasive throughout everything that we do.

I will say that the YMYL queries, the Your Money or Your Life Queries, such as you know when I’m looking for a mortgage or when I’m looking for the local ER, those we have a particular eye on and we pay a bit more attention to those queries because clearly they’re some of the most important decisions that people can make.

So I would say that E-A-T has a bit more of an impact there but again, I will say that E-A-T applies to everything, every single query that we actually look at.”

How can something be a part of every single search query and not be a ranking factor, right?

Background, Experience & Expertise In Google Circa 2012

Something to consider is that in 2012, Google’s senior engineer at the time, Matt Cutts, said that experience and expertise bring a measure of quality to content and make it worthy of ranking.

Matt Cutts’ remarks on experience and expertise were made in an interview with Eric Enge.

They discussed whether the website of a hypothetical person named “Jane” deserves to rank with articles that are original variations of what’s already in the SERPs.

Matt Cutts observed:

“While they’re not duplicates they bring nothing new to the table.

Google would seek to detect that there is no real differentiation between these results and show only one of them so we could offer users different types of sites in the other search results.

They need to ask themselves what really is their value add? …they need to figure out what… makes them special.

…if Jane is just churning out 500 words about a topic where she doesn’t have any background, experience or expertise, a searcher might not be as interested in her opinion.”

Matt then cites the example of Pulitzer Prize-Winning movie reviewer Roger Ebert as a person with the background, experience and expertise that makes his opinion valuable to readers and the content worthy of ranking.

Matt didn’t say that a webpage author’s background, experience and expertise were ranking factors. But he did say that these are the kinds of things that can differentiate one webpage from another and align it to what Google wants to rank.

He specifically said that Google’s algorithm detects whether there is something different about a page that makes it stand out. That was in 2012, but not much has changed, because Google’s John Mueller says the same thing.

For example, in 2020 John Mueller said that differentiation and being compelling is important for getting Google to notice and rank a webpage.

“So with that in mind, if you’re focused on kind of this small amount of content that is the same as everyone else then I would try to find ways to significantly differentiate yourselves to really make it clear that what you have on your website is significantly different than all of those other millions of ringtone websites that have kind of the same content.

…And that’s the same recommendation I would have for any kind of website that offers essentially the same thing as lots of other web sites do.

You really need to make sure that what you’re providing is unique and compelling and high quality so that our systems and users in general will say, I want to go to this particular website because they offer me something that is unique on the web and I don’t just want to go to any random other website.”

In 2021, in regard to getting Google to index a webpage, Mueller also said:

“Is it something the web has been waiting for? Or is it just another red widget?”

This idea of being compelling and different from other sites has been a part of Google’s algorithm for a while, just as the Googler in the video said, just as Matt Cutts said, and exactly as Mueller has said as well.

Are they talking about signals?

E-E-A-T Algorithm Signals

We know there’s something in the algorithm that relates to someone’s expertise and background that Google’s looking for. The table is set and we can dig into the next step of what it all means.

A while back, I remember reading something that Marie Haynes said about E-A-T: she called it a framework. And I thought, now that’s an interesting thing she just did; she’s conceptualizing E-A-T.

When SEOs discussed E-A-T it was always in the context of what to do in order to demonstrate E-A-T. So they looked at the Quality Raters Guide for guidance, which kind of makes sense since it’s a guide, right?

But what I’m proposing is that the answer isn’t really in the guidelines or anything that the quality raters are looking for.

The best way to explain it is to ask you to think about the biggest part of Google’s algorithm, relevance.

What’s relevance? Is it something you have to do? It used to be about keywords and that’s easy for SEOs to understand. But it’s not about keywords anymore because Google’s algorithm has natural language understanding (NLU). NLU is what enables machines to understand language in the way that it’s actually spoken (natural language).

Relevance is just something that’s related or connected to something else. So, if I ask, “How do I satiate my thirst?”, the answer can be water, because water quenches thirst.

How is a site relevant to the search query: “how do I satiate my thirst?”

An SEO would answer the problem of relevance by saying that the webpage has to have the keywords that match the search query, which would be the words “satiate” and “thirst.”

The next step the SEO would take is to extract the related entities for “satiate” and “thirst” because every SEO “knows” they need to do entity research to understand how to make a webpage that answers the search query, “How do I satiate my thirst?”

Hypothetical Related entities:

  • Thirst: Water, dehydration, drink
  • Satiate: Food, satisfaction, quench, fulfillment, appease

Now that the SEO has their entities and their keywords, they put it all together and write a 600-word essay that uses all their keywords and entities so that their webpage is relevant for the search query, “How do I satiate my thirst?”

I think we can stop now and see how silly that is, right? If someone asked you, “How do I satiate my thirst?” You’d answer, “With water” or “a cold refreshing beer” because that’s what it means to be relevant.

Relevance is just a concept. It doesn’t have anything to do with entities or keywords in today’s search algorithms because the machine is understanding search queries as natural language, even more so with AI search engines.

Similarly, E-E-A-T is also just a concept. It doesn’t have anything to do with author bios or LinkedIn profiles, and it doesn’t have anything at all to do with making your content say that you handled the product that’s being reviewed.

Here’s what SearchLiaison recently said about E-E-A-T, SEO, and ranking:

“….just making a claim and talking about a ‘rigorous testing process’ and following an ‘E-E-A-T checklist’ doesn’t guarantee a top ranking or somehow automatically cause a page to do better.”

Here’s the part where SearchLiaison ties a bow around the gift of E-E-A-T knowledge:

“We talk about E-E-A-T because it’s a concept that aligns with how we try to rank good content.”

E-E-A-T Can’t Be Itemized On A Checklist

Remember how we established that relevance is a concept and not a bunch of keywords and entities? Relevance is just answering the question.

E-E-A-T is the same thing. It’s not something that you do. It’s closer to something that you are.

SearchLiaison elaborated:

“…our automated systems don’t look at a page and see a claim like “I tested this!” and think it’s better just because of that. Rather, the things we talk about with E-E-A-T are related to what people find useful in content. Doing things generally for people is what our automated systems seek to reward, using different signals.”

A Better Understanding Of E-E-A-T

I think it’s clear now that E-E-A-T isn’t something that’s added to a webpage or demonstrated on the webpage. It’s a concept, just like relevance.

A good way to think of it: if someone asks you a question about your family, you answer it. Most people are expert and experienced enough to answer that question. That’s what E-E-A-T is and how it should be treated when publishing content. Regardless of whether it’s YMYL content or a product review, the expertise is just like answering a question about your family; it’s just a concept.

Featured Image by Shutterstock/Roman Samborskyi


Google Announces A New Carousel Rich Result

Google announced a new carousel rich result that can be used for local businesses, products, and events, and that shows a scrolling horizontal carousel displaying all of the items in the list. It’s very flexible and can even be used to create a “top things to do in a city” list that combines hotels, restaurants, and events. This new feature is in beta, which means it’s being tested.

The new carousel rich result is for displaying lists in a carousel format. According to the announcement, the rich result is limited to the following types:

  • LocalBusiness and its subtypes, for example: Restaurant, Hotel, VacationRental
  • Product
  • Event

An example of a subtype is LodgingBusiness, which is a subset of LocalBusiness.

Here is the Schema.org hierarchical structure that shows the LodgingBusiness type as being a subset of the LocalBusiness type.

  • Thing > Organization > LocalBusiness > LodgingBusiness
  • Thing > Place > LocalBusiness > LodgingBusiness

ItemList Structured Data

The carousel displays “tiles” that contain information from the webpage, such as price, ratings, and images. The order of the items in the ItemList structured data is the order in which they will be displayed in the carousel.

Publishers must use the ItemList structured data in order to become eligible for the new rich result.

All information in the ItemList structured data must be on the webpage. Just like any other structured data, you can’t stuff the structured data with information that is not visible on the webpage itself.

There are two important rules when using this structured data:

  1. The ItemList type must be the top-level container for the structured data.
  2. All the URLs in the list must point to different webpages on the same domain.

The part about ItemList being the top-level container means that the structured data cannot be merged with other structured data whose top-level container is something other than ItemList.

For example, the structured data must begin like this:

<script type="application/ld+json"> { "@context": "https://schema.org", "@type": "ItemList", "itemListElement": [ { "@type": "ListItem", "position": 1,

A useful quality of this new carousel rich result is that publishers can mix and match the different entities as long as they’re within the eligible structured data types.

Eligible Structured Data Types

  • LocalBusiness and its subtypes
  • Product
  • Event

Google’s announcement explains how to mix and match the different structured data types:

“You can mix and match different types of entities (for example, hotels, restaurants), if needed for your scenario. For example, if you have a page that has both local events and local businesses.”

Here is an example of ItemList structured data that can be used on a webpage about things to do in Paris.

The following structured data is for two events and a local business (Notre-Dame Cathedral):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "item": {
        "@type": "Event",
        "name": "Paris Seine River Dinner Cruise",
        "image": [
          "https://example.com/photos/1x1/photo.jpg",
          "https://example.com/photos/4x3/photo.jpg",
          "https://example.com/photos/16x9/photo.jpg"
        ],
        "offers": {
          "@type": "Offer",
          "price": 45.00,
          "priceCurrency": "EUR"
        },
        "aggregateRating": {
          "@type": "AggregateRating",
          "ratingValue": 4.2,
          "reviewCount": 690
        },
        "url": "https://www.example.com/event-location1"
      }
    },
    {
      "@type": "ListItem",
      "position": 2,
      "item": {
        "@type": "LocalBusiness",
        "name": "Notre-Dame Cathedral",
        "image": [
          "https://example.com/photos/1x1/photo.jpg",
          "https://example.com/photos/4x3/photo.jpg",
          "https://example.com/photos/16x9/photo.jpg"
        ],
        "priceRange": "$",
        "aggregateRating": {
          "@type": "AggregateRating",
          "ratingValue": 4.8,
          "reviewCount": 4220
        },
        "url": "https://www.example.com/localbusiness-location"
      }
    },
    {
      "@type": "ListItem",
      "position": 3,
      "item": {
        "@type": "Event",
        "name": "Eiffel Tower With Host Summit Tour",
        "image": [
          "https://example.com/photos/1x1/photo.jpg",
          "https://example.com/photos/4x3/photo.jpg",
          "https://example.com/photos/16x9/photo.jpg"
        ],
        "offers": {
          "@type": "Offer",
          "price": 59.00,
          "priceCurrency": "EUR"
        },
        "aggregateRating": {
          "@type": "AggregateRating",
          "ratingValue": 4.9,
          "reviewCount": 652
        },
        "url": "https://www.example.com/event-location2"
      }
    }
  ]
}
</script>

Be As Specific As Possible

Google’s guidelines recommend being as specific as possible, but if there isn’t a structured data type that closely matches the type of business, it’s okay to use the more generic LocalBusiness structured data type.

“Depending on your scenario, you may choose the best type to use. For example, if you have a list of hotels and vacation rentals on your page, use both Hotel and VacationRental types. While it’s ideal to use the type that’s closest to your scenario, you can choose to use a more generic type (for example, LocalBusiness).”

Can Be Used For Products

A super interesting use case for this structured data is displaying a list of products in a carousel rich result.

The structured data for that begins as an ItemList type, like this:

<script type="application/ld+json"> { "@context": "https://schema.org", "@type": "ItemList", "itemListElement": [ { "@type": "ListItem", "position": 1, "item": { "@type": "Product",

The structured data can list images, ratings, reviewCount, and currency just like any other product listing, but doing it like this will make the webpage eligible for the carousel rich results.
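As a rough sketch of what a product carousel’s markup could contain, the Python snippet below assembles minimal ItemList JSON-LD for two products and prints it ready to paste into a script tag. The product names, prices, and URLs are invented for illustration, and a real page should only mark up data that is visible on the page.

import json

def product_tile(position, name, price, currency, url):
    # One ListItem "tile" wrapping a Product, mirroring the structure above.
    return {
        "@type": "ListItem",
        "position": position,
        "item": {
            "@type": "Product",
            "name": name,
            "offers": {"@type": "Offer", "price": price, "priceCurrency": currency},
            "url": url,
        },
    }

item_list = {
    "@context": "https://schema.org",
    "@type": "ItemList",
    "itemListElement": [
        product_tile(1, "Example Running Shoe", 89.00, "EUR",
                     "https://www.example.com/products/running-shoe"),
        product_tile(2, "Example Trail Shoe", 99.00, "EUR",
                     "https://www.example.com/products/trail-shoe"),
    ],
}

# Output JSON-LD for a <script type="application/ld+json"> block.
print(json.dumps(item_list, indent=2))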

Google has a list of recommended properties that can be used with the Product version, such as offers, offers.highPrice, and offers.lowPrice.

Good For Local Businesses and Merchants

This new structured data is a good opportunity for local businesses and publishers that list events, restaurants and lodgings to get in on a new kind of rich result.

Using this structured data doesn’t guarantee that the content will display as a rich result; it only makes the page eligible for it.

This new feature is in beta, meaning that it’s a test.

Read the new developer page for this new rich result type:

Structured data carousels (beta)

Featured Image by Shutterstock/RYO Alexandre
