
Is ChatGPT Use Of Web Content Fair?


Large Language Models (LLMs) like ChatGPT train on multiple sources of information, including web content. That data becomes the basis for summaries and articles produced without attribution or benefit to those who published the original content used to train ChatGPT.

Search engines download website content (a process called crawling and indexing) in order to provide answers in the form of links to websites.

Website publishers can opt out of having their content crawled and indexed by search engines through the Robots Exclusion Protocol, commonly referred to as Robots.txt.

The Robots Exclusion Protocol is not an official Internet standard, but it is one that legitimate web crawlers obey.
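For illustration, here is what a minimal robots.txt might look like; the bot name ExampleBot is hypothetical. The first rule grants all crawlers full access, while the second tells one specific crawler to stay away entirely:

User-agent: *
Disallow:

User-agent: ExampleBot
Disallow: /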

Should web publishers be able to use the Robots.txt protocol to prevent large language models from using their website content?

Large Language Models Use Website Content Without Attribution

Some who are involved with search marketing are uncomfortable with how website data is used to train machines without giving anything back, like an acknowledgement or traffic.

Hans Petter Blindheim, Senior Expert at Curamando, shared his opinions with me.

Hans commented:

“When an author writes something after having learned something from an article on your site, they will more often than not link to your original work because it offers credibility and as a professional courtesy.

It’s called a citation.

But the scale at which ChatGPT assimilates content and does not grant anything back differentiates it from both Google and people.

A website is generally created with a business directive in mind.

Google helps people find the content, providing traffic, which has a mutual benefit to it.

But it’s not like large language models asked your permission to use your content, they just use it in a broader sense than what was expected when your content was published.

And if the AI language models do not offer value in return – why should publishers allow them to crawl and use the content?

Does their use of your content meet the standards of fair use?

When ChatGPT and Google’s own ML/AI models trains on your content without permission, spins what it learns there and uses that while keeping people away from your websites – shouldn’t the industry and also lawmakers try to take back control over the Internet by forcing them to transition to an “opt-in” model?”

The concerns that Hans expresses are reasonable.

In light of how fast technology is evolving, should laws concerning fair use be reconsidered and updated?

I asked John Rizvi, a Registered Patent Attorney who is board certified in Intellectual Property Law, if Internet copyright laws are outdated.

John answered:

“Yes, without a doubt.

One major bone of contention in cases like this is the fact that the law inevitably evolves far more slowly than technology does.

In the 1800s, this maybe didn’t matter so much because advances were relatively slow and so legal machinery was more or less tooled to match.

Today, however, runaway technological advances have far outstripped the ability of the law to keep up.

There are simply too many advances and too many moving parts for the law to keep up.

As it is currently constituted and administered, largely by people who are hardly experts in the areas of technology we’re discussing here, the law is poorly equipped or structured to keep pace with technology…and we must consider that this isn’t an entirely bad thing.

So, in one regard, yes, Intellectual Property law does need to evolve if it even purports, let alone hopes, to keep pace with technological advances.

The primary problem is striking a balance between keeping up with the ways various forms of tech can be used while holding back from blatant overreach or outright censorship for political gain cloaked in benevolent intentions.

The law also has to take care not to legislate against possible uses of tech so broadly as to strangle any potential benefit that may derive from them.

You could easily run afoul of the First Amendment and any number of settled cases that circumscribe how, why, and to what degree intellectual property can be used and by whom.

And attempting to envision every conceivable usage of technology years or decades before the framework exists to make it viable or even possible would be an exceedingly dangerous fool’s errand.

In situations like this, the law really cannot help but be reactive to how technology is used…not necessarily how it was intended.

That’s not likely to change anytime soon, unless we hit a massive and unanticipated tech plateau that allows the law time to catch up to current events.”

So it appears that the issue of copyright law has many considerations to balance when it comes to how AI is trained; there is no simple answer.

OpenAI and Microsoft Sued

An interesting case that was recently filed is one in which OpenAI and Microsoft allegedly used open source code to create their Copilot product.

The problem with using open source code is that many open source licenses require attribution.

According to an article published in a scholarly journal:

“Plaintiffs allege that OpenAI and GitHub assembled and distributed a commercial product called Copilot to create generative code using publicly accessible code originally made available under various “open source”-style licenses, many of which include an attribution requirement.

As GitHub states, ‘…[t]rained on billions of lines of code, GitHub Copilot turns natural language prompts into coding suggestions across dozens of languages.’

The resulting product allegedly omitted any credit to the original creators.”

The author of that article, a legal expert on the subject of copyrights, wrote that many view open source licenses as a “free-for-all.”

Some may also consider free-for-all a fair description of how datasets comprised of Internet content are scraped and used to generate AI products like ChatGPT.

Background on LLMs and Datasets

Large language models train on multiple datasets of content. Datasets can consist of emails, books, government data, Wikipedia articles, and even datasets created from websites linked from Reddit posts with at least three upvotes.

Many of the datasets related to the content of the Internet have their origins in the crawl created by a non-profit organization called Common Crawl.

Their dataset, the Common Crawl dataset, is available free for download and use.

The Common Crawl dataset is the starting point for many other datasets that are created from it.

For example, GPT-3 used a filtered version of Common Crawl (Language Models are Few-Shot Learners PDF).

This is how the GPT-3 researchers used the website data contained within the Common Crawl dataset:

“Datasets for language models have rapidly expanded, culminating in the Common Crawl dataset… constituting nearly a trillion words.

This size of dataset is sufficient to train our largest models without ever updating on the same sequence twice.

However, we have found that unfiltered or lightly filtered versions of Common Crawl tend to have lower quality than more curated datasets.

Therefore, we took 3 steps to improve the average quality of our datasets:

(1) we downloaded and filtered a version of CommonCrawl based on similarity to a range of high-quality reference corpora,

(2) we performed fuzzy deduplication at the document level, within and across datasets, to prevent redundancy and preserve the integrity of our held-out validation set as an accurate measure of overfitting, and

(3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity.”

Google’s C4 dataset (Colossal Clean Crawled Corpus), which was used to create the Text-to-Text Transfer Transformer (T5), has its roots in the Common Crawl dataset, too.

Their research paper (Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer PDF) explains:

“Before presenting the results from our large-scale empirical study, we review the necessary background topics required to understand our results, including the Transformer model architecture and the downstream tasks we evaluate on.

We also introduce our approach for treating every problem as a text-to-text task and describe our “Colossal Clean Crawled Corpus” (C4), the Common Crawl-based data set we created as a source of unlabeled text data.

We refer to our model and framework as the ‘Text-to-Text Transfer Transformer’ (T5).”

Google published an article on their AI blog that further explains how Common Crawl data (which contains content scraped from the Internet) was used to create C4.

They wrote:

“An important ingredient for transfer learning is the unlabeled dataset used for pre-training.

To accurately measure the effect of scaling up the amount of pre-training, one needs a dataset that is not only high quality and diverse, but also massive.

Existing pre-training datasets don’t meet all three of these criteria — for example, text from Wikipedia is high quality, but uniform in style and relatively small for our purposes, while the Common Crawl web scrapes are enormous and highly diverse, but fairly low quality.

To satisfy these requirements, we developed the Colossal Clean Crawled Corpus (C4), a cleaned version of Common Crawl that is two orders of magnitude larger than Wikipedia.

Our cleaning process involved deduplication, discarding incomplete sentences, and removing offensive or noisy content.

This filtering led to better results on downstream tasks, while the additional size allowed the model size to increase without overfitting during pre-training.”

Google, OpenAI, and even Oracle’s Open Data use Internet content, your content, to create datasets that are then used to create AI applications like ChatGPT.

Common Crawl Can Be Blocked

It is possible to block Common Crawl and thereby opt out of all the datasets that are based on Common Crawl.

But if the site has already been crawled, then the website data is already in datasets. There is no way to remove your content from the Common Crawl dataset or from derivative datasets like C4.

Using the Robots.txt protocol will only block future crawls by Common Crawl; it won’t stop researchers from using content already in the dataset.

How to Block Common Crawl From Your Data

Blocking Common Crawl is possible through the use of the Robots.txt protocol, within the limitations discussed above.

The Common Crawl bot is called CCBot.

It is identified by the most up-to-date CCBot user-agent string: CCBot/2.0

Blocking CCBot with Robots.txt is accomplished the same as with any other bot.

Here is the code for blocking CCBot with Robots.txt.

User-agent: CCBot
Disallow: /

CCBot crawls from Amazon AWS IP addresses.

CCBot also follows the nofollow Robots meta tag:

<meta name="robots" content="nofollow">

What If You’re Not Blocking Common Crawl?

Web content can be downloaded without permission; that is how browsers work, they download content.

Neither Google nor anybody else needs permission to download and use content that is published publicly.

Website Publishers Have Limited Options

The consideration of whether it is ethical to train AI on web content doesn’t seem to be a part of any conversation about the ethics of how AI technology is developed.

It seems to be taken for granted that Internet content can be downloaded, summarized and transformed into a product called ChatGPT.

Does that seem fair? The answer is complicated.

Featured image by Shutterstock/Krakenimages.com





What Is Schema Markup & Why Is It Important For SEO?


Schema.org is a collection of vocabulary (or schemas) used to apply structured data markup to web pages and content. Correctly applying schema can improve SEO outcomes through rich snippets.

Structured data markup is translated by platforms such as Google and Microsoft to provide enhanced rich results (or rich snippets) in search engine results pages or emails. For example, you can mark up your ecommerce product pages with variants schema to help Google understand product variations.

Schema.org is an independent project that has helped establish structured data consistency across the internet. It began collaborating with search engines such as Google, Yahoo, Bing, and Yandex back in 2011.

The Schema vocabulary can be applied to pages through encodings such as RDFa, Microdata, and JSON-LD. JSON-LD schema is preferred by Google as it is the easiest to apply and maintain.

Schema is not a ranking factor.

However, your webpage becomes eligible for rich snippets in SERPs only when you use schema markup. This can enhance your search visibility and increase CTR on your webpage from search results.

Schema can also be used to build a knowledge graph of entities and topics. Using semantic markup in this way aligns your website with how AI algorithms categorize entities, assisting search engines in understanding your website and content.

This means that search engines should have additional information to help them figure out what the webpage is about.

You can even link your entities directly to sites like Wikipedia or Google’s knowledge graph to build explicit connections. Using Schema this way can have positive SEO results, according to Martha van Berkel, CEO of Schema App:

By helping search engines understand content, you are assisting them in saving resources (especially important when you have a large website with millions of pages) and increasing the chances for your content to be interpreted properly and ranked well. While this may not be a ranking factor directly, Schema helps your SEO efforts by giving search engines the best chance of interpreting your content correctly, giving users the best chance of discovering it.

Some of the most popular uses of schema are supported by Google and other search engines.

You may have an object type that has a schema.org definition but is not supported by search engines.

In such cases, it is advised to implement them, as search engines may start supporting them in the future, and you may benefit from them as you already have that implementation.

Google recommends JSON-LD as the preferred format for structured data. Microdata is still supported, but JSON-LD schema is recommended.

In certain circumstances, it isn’t possible to implement JSON-LD schema due to technical infrastructure limitations (such as old content management systems). In these cases, the only option is to mark up HTML via Microdata or RDFa.

You can now mix JSON-LD and Microdata formats by matching the @id attribute of JSON-LD schema with the itemid attribute of Microdata schema. This approach helps reduce the HTML size of your pages.

For example, in a FAQ section with extensive text, you can use Microdata for the content and JSON-LD for the structured data without duplicating the text, thus avoiding an increase in page size. We will dive deeper into this below in the article when discussing each type in detail.

1. JSON-LD Schema Format

JSON-LD encodes data using JSON, making it easy to integrate structured data into web pages. JSON-LD allows connecting different schema types using a graph with @ids, improving data integration and reducing redundancy.

Let’s look at an example. Let’s say that you own a store that sells high-quality routers. If you were to look at the source code of your homepage, you would likely see something like this:
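A minimal sketch of such homepage HTML, with invented store details:

<body>
  <h1>Example Router Store</h1>
  <p>We sell high-quality routers for homes and small offices.</p>
  <p>Visit us at 123 Example Street, Exampletown.</p>
  <p>Call us at +1-800-555-1212.</p>
  <p>Open Monday through Saturday, 9:00 a.m. to 6:00 p.m., and Sunday, 11:00 a.m. to 4:00 p.m.</p>
</body>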

Once you dive into the code, you’ll want to find the portion of your webpage that discusses what your business offers. In this example, that data can be found between the two <body> tags.

The following JSON-LD formatted text will mark up the information within that HTML fragment, and it belongs in your webpage’s <head> section.
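A minimal sketch of that markup, reusing the invented details from the HTML above:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Store",
  "name": "Example Router Store",
  "url": "https://www.example.com",
  "telephone": "+1-800-555-1212",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Street",
    "addressLocality": "Exampletown",
    "addressCountry": "US"
  },
  "openingHoursSpecification": [
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"],
      "opens": "09:00",
      "closes": "18:00"
    },
    {
      "@type": "OpeningHoursSpecification",
      "dayOfWeek": "Sunday",
      "opens": "11:00",
      "closes": "16:00"
    }
  ]
}
</script>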



This snippet of code defines your business as a store via the attribute "@type": "Store".

Then, it details its location, contact information, hours of operation from Monday to Saturday, and different operational hours for Sunday.

By structuring your webpage data this way, you provide critical information directly to search engines, which can improve how they index and display your site in search results. Just like adding tags in the initial HTML, inserting this JSON-LD script tells search engines specific aspects of your business.

Let’s review another example of WebPage schema connected with Organization and Author schemas via @id. JSON-LD is the format Google and other search engines recommend because it’s extremely flexible, and this is a great example.
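A condensed sketch of such a graph; every URL and value here is illustrative:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "WebSite",
      "@id": "https://www.example.com/#website",
      "url": "https://www.example.com/",
      "name": "Example Company",
      "publisher": { "@id": "https://www.example.com/#organization" }
    },
    {
      "@type": "Organization",
      "@id": "https://www.example.com/#organization",
      "name": "Example Company",
      "url": "https://www.example.com/",
      "logo": "https://www.example.com/logo.png",
      "sameAs": [
        "https://www.facebook.com/examplecompany",
        "https://twitter.com/examplecompany"
      ]
    },
    {
      "@type": "WebPage",
      "@id": "https://www.example.com/about/#webpage",
      "url": "https://www.example.com/about/",
      "name": "About Us",
      "isPartOf": { "@id": "https://www.example.com/#website" }
    },
    {
      "@type": "NewsArticle",
      "@id": "https://www.example.com/news/example/#article",
      "headline": "Example News Headline",
      "isPartOf": { "@id": "https://www.example.com/about/#webpage" },
      "mainEntityOfPage": { "@id": "https://www.example.com/about/#webpage" },
      "author": { "@id": "https://www.example.com/#author" }
    },
    {
      "@type": "Person",
      "@id": "https://www.example.com/#author",
      "name": "John Doe"
    }
  ]
}
</script>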



In the example:

  • Website links to the organization as the publisher with @id.
  • The organization is described with detailed properties.
  • WebPage links to the WebSite with isPartOf.
  • NewsArticle links to the WebPage with isPartOf, and back to the WebPage with mainEntityOfPage, and includes the author property via @id.

You can see how graph nodes are linked to each other using the "@id" attribute. This way, we inform Google that it is a webpage published by the publisher described in the schema.

The use of hashes (#) for IDs is optional. You only need to ensure that different schema types don’t accidentally share the same ID. Adding custom hashes (#) can be helpful, as it provides an extra layer of insurance that IDs will not be repeated.

You may wonder why we use "@id" to connect graph nodes. Can’t we just drop the organization, author, and webpage schemas onto the same page separately and trust that the connections are intuitive?

The issue is that Google and other search engines cannot reliably interpret these connections unless explicitly linked using @id.

Adding additional schema types to the graph is as easy as snapping Lego bricks together. Say we want to add an image to the schema:

{
   "@type": "ImageObject",
   "@id": "https://www.example.com/#post-image",
   "url": "https://www.example.com/example.png",
   "contentUrl": "https://www.example.com/example.png",
   "width": 2160,
   "height": 1215,
   "thumbnail": [
     {
        "@type": "ImageObject",
        "url": "https://example.com/4x3/photo.jpg",
        "width": 1620,
        "height": 1215
      },
      {
        "@type": "ImageObject",
        "url": "https://example.com/16x9/photo.jpg",
        "width": 1440,
        "height": 810
      },
      {
        "@type": "ImageObject",
        "url": "https://example.com/1x1/photo.jpg",
        "width": 1000,
        "height": 1000
      }
    ]
}

As you already know from the NewsArticle schema, you add this ImageObject as a node to the schema graph above and link to it from the NewsArticle via @id.

As you do that, it will have this structure:
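A sketch of the expanded NewsArticle node, reusing the illustrative @id values from above:

{
  "@type": "NewsArticle",
  "@id": "https://www.example.com/news/example/#article",
  "headline": "Example News Headline",
  "isPartOf": { "@id": "https://www.example.com/about/#webpage" },
  "mainEntityOfPage": { "@id": "https://www.example.com/about/#webpage" },
  "author": { "@id": "https://www.example.com/#author" },
  "image": { "@id": "https://www.example.com/#post-image" }
}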



Quite easy, isn’t it? Now that you understand the main principle, you can build your own schema based on the content you have on your website.

And since we live in the age of AI, you may also want to use ChatGPT or other chatbots to help you build any schema you want.

2. Microdata Schema Format

Microdata is a set of tags that aims to make annotating HTML elements with machine-readable tags much easier.

However, the one downside to using Microdata is that you have to mark every individual item within the body of your webpage. As you can imagine, this can quickly get messy.

Take a look at this sample HTML code, which corresponds to the above JSON schema with NewsArticle:

<h2>Our Company</h2>

<p>Example Company, also known as Example Co., is a leading innovator in the tech industry.</p>

<p>Founded in 2000, we have grown to a team of 200 dedicated employees.</p>

<p>Our slogan is: "Innovation at its best".</p>

<p>Contact us at +1-800-555-1212 for customer service.</p>

<h2>Our Founder</h2>

<p>Our founder, Jane Smith, is a pioneer in the tech industry.</p>

<p>Connect with Jane on <a href="https://twitter.com/janesmith">Twitter</a> and <a href="https://www.linkedin.com/in/janesmith">LinkedIn</a>.</p>

<h2>About Us</h2>

<p>This is the About Us page for Example Company.</p>

<h2>Example News Headline</h2>

<p>This is an example news article.</p>

<p>This is the full content of the example news article. It provides detailed information about the news event or topic covered in the article.</p>

<p>Author: John Doe. Connect with John on <a href="https://twitter.com/johndoe">Twitter</a> and <a href="https://www.linkedin.com/in/johndoe">LinkedIn</a>.</p>

If we convert the above JSON-LD schema into Microdata format, it will look like this:

<div itemscope itemtype="https://schema.org/Organization" itemid="https://www.example.com/#organization">

  <h2>Our Company</h2>

  <p><span itemprop="name">Example Company</span>, also known as <span itemprop="alternateName">Example Co.</span>, is a leading innovator in the tech industry.</p>

  <p>Founded in <span itemprop="foundingDate">2000-01-01</span>, we have grown to a team of <span itemprop="numberOfEmployees">200</span> dedicated employees.</p>

  <p>Our slogan is: <span itemprop="slogan">Innovation at its best</span>.</p>

  <p itemprop="contactPoint" itemscope itemtype="https://schema.org/ContactPoint">Contact us at <span itemprop="telephone">+1-800-555-1212</span> for <span itemprop="contactType">Customer Service</span>.</p>

  <img itemprop="logo" src="https://www.example.com/logo.png" alt="Example Company Logo"/>

  <p>Connect with us on: <a itemprop="sameAs" href="https://www.facebook.com/examplecompany">Facebook</a>, <a itemprop="sameAs" href="https://twitter.com/examplecompany">Twitter</a>, <a itemprop="sameAs" href="https://www.linkedin.com/company/examplecompany">LinkedIn</a></p>

  <div itemprop="founder" itemscope itemtype="https://schema.org/Person">
    <h2>Our Founder</h2>
    <p>Our founder, <span itemprop="name">Jane Smith</span>, is a pioneer in the tech industry.</p>
    <p>Connect with Jane on <a itemprop="sameAs" href="https://twitter.com/janesmith">Twitter</a> and <a itemprop="sameAs" href="https://www.linkedin.com/in/janesmith">LinkedIn</a>.</p>
  </div>

</div>

<div itemscope itemtype="https://schema.org/WebPage" itemid="https://www.example.com/about/#webpage">
  <h2>About Us</h2>
  <p itemprop="description">This is the About Us page for Example Company.</p>
</div>

<div itemscope itemtype="https://schema.org/NewsArticle" itemid="https://www.example.com/news/example/#article">
  <h2 itemprop="headline">Example News Headline</h2>
  <p itemprop="description">This is an example news article.</p>
  <p itemprop="articleBody">This is the full content of the example news article. It provides detailed information about the news event or topic covered in the article.</p>
  <p>Author: <span itemprop="author" itemscope itemtype="https://schema.org/Person"><span itemprop="name">John Doe</span></span></p>
  <img itemprop="image" src="https://www.example.com/example.png" alt="Example image"/>
</div>

This example shows how complicated it becomes compared to JSON-LD since the markup is spread over HTML. Let’s understand what is in the markup.

You can see <div itemscope> tags like:

<div itemscope>

By adding this tag, we’re stating that the HTML code contained between the <div> blocks identifies a specific item.

Next, we have to identify what that item is by using the ‘itemtype’ attribute to specify the type of item (Person):

<div itemscope itemtype="https://schema.org/Person">

An item type comes in the form of a URL (such as https://schema.org/Person). If, for example, you have a product, you may use https://schema.org/Product.

To make things easier, you can browse the list of item types on Schema.org and view extensions to identify the specific entity you’re looking for. Keep in mind that this list is not all-encompassing and only includes types supported by Google, so there is a possibility that you won’t find the item type for your specific niche.

It may look complicated, but Schema.org provides examples of how to use the different item types so you can see what the code is supposed to do.

Don’t worry; you won’t be left out in the cold trying to figure this out on your own!

If you’re still feeling a little intimidated by the code, Google’s Structured Data Markup Helper makes it super easy to tag your webpages.

To use this amazing tool, just select your item type, paste in the URL of the target page or the content you want to target, and then highlight the different elements so that you can tag them.

3. RDFa Schema Format

RDFa is an acronym for Resource Description Framework in Attributes. Essentially, RDFa is an extension to HTML5 designed to aid users in marking up structured data.

RDFa isn’t much different from Microdata. RDFa tags incorporate the preexisting HTML code in the body of your webpage. For familiarity, we’ll look at the same code above.

The HTML for the same JSON-LD news article will look like:

vocab="https://schema.org/" typeof="WebSite" resource="https://www.example.com/#website">

Our Company

Example Company, also known as Example Co., is a leading innovator in the tech industry.

Founded in 2000-01-01, we have grown to a team of 200 dedicated employees.

Our slogan is: Innovation at its best.

Contact us at +1-800-555-1212 for Customer Service.

https://www.example.com Example Company Logo

Connect with us on: Facebook, Twitter, LinkedIn

Our Founder

Our founder, Jane Smith, is a pioneer in the tech industry.

Connect with Jane on Twitter and LinkedIn.

About Us

This is the About Us page for Example Company.

https://www.example.com/about

Example News Headline

This is an example news article.

This is the full content of the example news article. It provides detailed information about the news event or topic covered in the article.

Author: John Doe Profile Twitter LinkedIn

Example image

Unlike Microdata, which uses a URL to identify types, RDFa uses one or more words to classify types.

<div vocab="https://schema.org/" typeof="WebPage">

If you wish to identify a property further, use the ‘typeof’ attribute.

Let’s compare JSON-LD, Microdata, and RDFa side by side. The @type attribute of JSON-LD is equivalent to the itemtype attribute in Microdata and the typeof attribute in RDFa. Likewise, a JSON-LD property name corresponds to the itemprop attribute in Microdata and the property attribute in RDFa.

Attribute Name | JSON-LD      | Microdata              | RDFa
Type           | @type        | itemtype               | typeof
ID             | @id          | itemid                 | resource
Property       | propertyName | itemprop               | property
Name           | name         | itemprop="name"        | property="name"
Description    | description  | itemprop="description" | property="description"

For further explanation, you can visit Schema.org to check lists and view examples. You can find which kinds of elements are defined as properties and which are defined as types.

To help, every page on Schema.org provides examples of how to apply tags properly. Of course, you can also fall back on Google’s Structured Data Testing Tool.

4. Mixing Different Formats Of Structured Data With JSON-LD

If you use JSON-LD schema but certain parts of pages aren’t compatible with it, you can mix schema formats by linking them via @id.

For example, if you have live blogging on the website and a JSON-LD schema, including all live blogging items in the JSON schema would mean having the same content twice on the page, which may increase HTML size and affect First Contentful Paint and Largest Contentful Paint page speed metrics.

You can solve this either by generating the JSON-LD dynamically with JavaScript when the page loads or by marking up the live blogging HTML via the Microdata format, then linking to your JSON-LD schema in the head section via "@id".

Here is an example of how to do it.

Say we have this HTML with Microdata markup with itemid="https://www.example.com/live-blog-page/#live-blog"

<div itemscope itemtype="https://schema.org/LiveBlogPosting" itemid="https://www.example.com/live-blog-page/#live-blog">

  <h2 itemprop="headline">Explore the biggest announcements from DevDay</h2>

  <div itemprop="liveBlogUpdate" itemscope itemtype="https://schema.org/BlogPosting">
    <time itemprop="datePublished" datetime="2023-11-06T13:45:00-05:00">1:45 PM ET Nov 6, 2023</time>
    <p itemprop="articleBody">OpenAI is taking the first step in gradual deployment of GPTs – tailored ChatGPT for a specific purpose – for safety purposes.</p>
  </div>

  <div itemprop="liveBlogUpdate" itemscope itemtype="https://schema.org/BlogPosting">
    <time itemprop="datePublished" datetime="2023-11-06T13:44:00-05:00">1:44 PM ET Nov 6, 2023</time>
    <p itemprop="articleBody">ChatGPT now uses GPT-4 turbo with current knowledge. It also knows which tool to choose for a task with GPT-4 All Tools.</p>
  </div>

  <div itemprop="liveBlogUpdate" itemscope itemtype="https://schema.org/BlogPosting">
    <time itemprop="datePublished" datetime="2023-11-06T13:43:00-05:00">1:43 PM ET Nov 6, 2023</time>
    <p itemprop="articleBody">Microsoft CEO Satya Nadella joined Altman to announce deeper partnership with OpenAI to help developers bring more AI advancements.</p>
  </div>

</div>

We can link to it from the sample JSON-LD example we had like this:
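Here is a minimal sketch of that link: a JSON-LD node placed in the head, whose reference matches the Microdata itemid (the URLs and the mainEntity property choice are illustrative):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "@id": "https://www.example.com/live-blog-page/#webpage",
  "url": "https://www.example.com/live-blog-page/",
  "mainEntity": { "@id": "https://www.example.com/live-blog-page/#live-blog" }
}
</script>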



If you copy and paste the HTML and JSON examples above into the schema validator tool, you will see that they validate properly.

The schema validator does validate the above example.

The SEO Impact Of Structured Data

This article explored the different schema encoding types and all the nuances regarding structured data implementation.

Schema is much easier to apply than it seems, and it’s a best practice you must incorporate into your webpages. While you won’t receive a direct boost in your SEO rankings for implementing Schema, it can:

  • Make your pages eligible to appear in rich results.
  • Ensure your pages get seen by the right users more often.
  • Avoid confusion and ambiguity.

The work may seem tedious. However, given time and effort, properly implementing Schema markup is good for your website and can lead to better user journeys through the accuracy of information you’re supplying to search engines.


Image Credits

Featured Image: Paulo Bobita
Screenshot taken by author

 



Gen Z Ditches Google, Turns To Reddit For Product Searches


A new report from Reddit, in collaboration with GWI and AmbassCo, sheds light on the evolving search behaviors of Generation Z consumers.

The study surveyed over 3,000 internet users across the UK, US, and Germany, highlighting significant changes in how young people discover and research products online.

Here’s an overview of key findings and the implications for marketers.

Decline In Traditional Search

The study found that Gen Z uses search engines to find new brands and products less often.

That’s because they shop online differently. They’re less interested in looking for expert reviews or spending much time searching for products.

There are also frustrations with mobile-friendliness and complex interfaces on traditional search platforms.

Because of this, traditional SEO strategies might not work well for reaching younger customers.

Takeaway

Companies trying to reach Gen Z might need to try new methods instead of just focusing on being visible on Google and other search engines.

Rise Of Social Media Discovery

Screenshot from Reddit study titled: “From search to research: How search marketers can keep up with Gen Z.”, June 2024.

Gen Z is increasingly using social media to find new brands and products.

The study shows that Gen Z has used social media for product discovery 36% more frequently since 2018.

This change is affecting how young people shop online. Instead of searching for products, they expect brands to appear in their social media feeds.

Screenshot from Reddit study titled: “From search to research: How search marketers can keep up with Gen Z.”, June 2024.

Because of this, companies trying to reach young customers need to pay more attention to how they present themselves on social media.

Takeaway

To succeed at marketing to Gen Z, businesses will likely need to focus on two main things:

  1. Ensure that your content appears more often in social media feeds.
  2. Create posts people want to share and interact with.

Trust Issues With Influencer Marketing

Even though more people are finding products through social media, the report shows that Gen Z is less likely to trust what social media influencers recommend.

These young shoppers often don’t believe in posts that influencers are paid to make or products they promote.

Instead, they prefer to get information from sources that feel more real and are driven by regular people in online communities.

Takeaway

Because of this lack of trust, companies must focus on being genuine and building trust when they try to get their websites to appear in search results or create ads.

Some good ways to connect with these young consumers might be to use content created by regular users, encourage honest product reviews, and create authentic conversations within online communities.

Challenges With Current Search Experiences

The research shows that many people are unhappy with how search engines work right now.

More than 60% of those surveyed want search results to be more trustworthy. Almost half of users don’t like looking through many search result pages.

Gen Z is particularly bothered by inaccurate information and unreliable reviews.

Screenshot from Reddit study titled: “From search to research: How search marketers can keep up with Gen Z.”, June 2024.

Takeaway

Given the frustration with search quality, marketers should prioritize creating accurate, trustworthy content.

This can help build brand credibility, leading to more direct visits.

Reddit: A Trusted Alternative

The report suggests that Gen Z trusts Reddit when looking up products—it’s their third most trusted source, after friends and family and review websites.

Screenshot from Reddit study titled: “From search to research: How search marketers can keep up with Gen Z.”, June 2024.

Young users like Reddit because it’s community-based and provides specific answers to users’ questions, making it feel more real.

It’s worth noting that this report comes from Reddit itself, which likely explains why it recommends its own platform.

Takeaway

Companies should focus more on being part of smaller, specific online groups frequented by Gen Z.

That could include Reddit or any other forum.

Why SEJ Cares

As young people change how they look for information online, this study gives businesses important clues about connecting with future customers.

Here’s what to remember:

  • Traditional search engine use is declining among Gen Z.
  • Social media is increasingly vital for product discovery.
  • There’s growing skepticism towards influencer marketing.
  • Current search experiences often fail to meet user expectations.
  • Community-based platforms like Reddit are gaining trust.

Featured Image: rafapress/Shutterstock



Google Clarifies Organization Merchant Returns Structured Data


Google quietly updated its Organization structured data documentation to clarify two points about merchant returns, in response to feedback about an ambiguity in the previous version.

Organization Structured Data and Merchant Returns

Google recently expanded their Organization structured data so that it can now accommodate a sitewide merchant return policy.

The original reason for adding this support:

“Adding support for Organization-level return policies

What: Added documentation on how to specify a general return policy for an Organization as a whole.

Why: This makes it easier to define and maintain general return policies for an entire site.”

However, that change left unanswered what happens if a site has a sitewide return policy but also a different policy for individual products.

The clarification addresses the specific scenario in which a site’s structured data declares both a sitewide return policy and a separate policy for specific products.

What Takes Precedence?

What happens if a merchant uses both sitewide and product-level return policy structured data? Google’s new documentation states that Google will ignore the sitewide return policy in favor of the more granular product-level policy in the structured data.

The clarification states:

“If you choose to provide both organization-level and product-level return policy markup, Google defaults to the product-level return policy markup.”
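For context, here is a minimal sketch of what organization-level return policy markup can look like; the values are illustrative:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Store",
  "url": "https://www.example.com",
  "hasMerchantReturnPolicy": {
    "@type": "MerchantReturnPolicy",
    "applicableCountry": "US",
    "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
    "merchantReturnDays": 30,
    "returnMethod": "https://schema.org/ReturnByMail",
    "returnFees": "https://schema.org/FreeReturn"
  }
}
</script>

A product-level MerchantReturnPolicy attached to an offer follows the same pattern; per the clarification above, that product-level markup is the one Google will use.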

Change Reflected Elsewhere

Google also updated another section of the documentation, which discusses whether structured data or merchant feed data takes precedence, to reflect the use of two levels of merchant return policies. There is no change to the policy; Merchant Center data still takes precedence.

This is the old documentation:

“If you choose to use both markup and settings in Merchant Center, Google will only use the information provided in Merchant Center for any products submitted in your Merchant Center product feeds, including automated feeds.”

This is the same section but updated with additional wording:

“If you choose to use both markup (whether at the organization-level or product-level, or both) and settings in Merchant Center, Google will only use the information provided in Merchant Center for any products submitted in your Merchant Center product feeds, including automated feeds.”

Read the newly updated Organization structured data documentation:

Organization (Organization) structured data – MerchantReturnPolicy

Featured Image by Shutterstock/sutlafk
