

8 of the Most Important HTML Tags for SEO


Are you using HTML tags in your SEO process?

HTML tags are code elements within the back-end of all web pages, and certain HTML tag types also provide search engines with key info for SERP display. Essentially, these elements highlight the parts of your content that are relevant for search and describe those elements for search crawlers.

That said, you don’t have to use all of these extra code tools. Search engines are getting smarter, and there’s much less use for HTML tags these days than there was in times past. But a few tags are still holding on – and some have even gained in SEO value.

In this post, we’ll go over some of the key SEO HTML tags that still make sense in 2020.

1. Title tag

Title tags are used to set up those clickable headlines that you see in the SERP:

Generally speaking, it’s up to Google to create a SERP headline for your page, and it could use any of the section headings from within the page – or it may even create a new headline altogether.

But the first place Google is going to check for headline ideas is the title tag, and where a title tag is present, Google will very likely make it the main headline in the relevant listing. As such, optimizing the title tag gives you some control over the way your page is represented in the SERP.

Best practices

On the one hand, your title should contain the keywords that will help it appear in search results. On the other, your title should be attractive enough for users to actually click through, so a balance is required between search optimization and user experience:

  • Watch the length – Google will only display the first 50-60 characters of your title, and cut the rest. It’s not a problem to have a title that’s longer than 60 characters, so long as you fit the important information before the cut-off point.
  • Include a reasonable number of keywords – Keyword stuffing is likely to get penalized, but one or two keywords will be fine. Just make sure that your title forms a coherent sentence.
  • Write good copy – Don’t be salesy, and don’t be generic. Create descriptive titles that highlight the value of your content, and set up proper expectations, so that users aren’t let down when they visit the page.
  • Add your brand name – If you have a recognizable brand that’s likely to increase your click-through, then feel free to add it to the title as well.

HTML code

Take, for example, a BBC article on coronavirus statistics: in its code, a properly set up title tag sits right on top of the meta description tag – which is what we’re going to discuss next. Below is a minimal sketch of what that markup looks like (the headline text is a placeholder, not the actual BBC title):
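
  <!-- Illustrative sketch – placeholder title, not the actual BBC markup -->
  <head>
    <title>Coronavirus statistics: Tracking cases and deaths worldwide - Example News</title>
  </head>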

2. Meta description tag

Meta description tags are used to set up descriptions within search result snippets:

Google doesn’t always use meta description tags to create these snippets, but if the meta tag is there, then there’s a good chance that your meta description will make it onto the SERP.

Keep in mind, however, that sometimes Google will ignore the meta description tag, and instead quote a bit of copy from the page. This normally happens when the quote is a better match for a particular query than the meta description would have been.

Basically, Google will choose the best option for increasing your chances of click-through.

Best practices

The rules for meta descriptions are not overly strict – after all, if you fail to write a good one, or fail to write one altogether, Google will write one for you.

  • Watch the length – Same as with headlines, Google will keep the first 150-160 characters of your meta description, and cut the rest. Ensure that the important aspects are included early on to maximize searcher interest.
  • Write good copy – While the meta description is not used for ranking, it’s still important to optimize it for search intent. The more relevant your description is to the respective query, the more likely a user will visit your page.
  • Consider skipping the meta description – It can be difficult to create good copy for long-tail keywords, or for pages that target a variety of keywords. In those cases, consider leaving the meta description out – Google will scrape your page and populate your snippet with a few relevant quotes either way.

HTML code

In the same BBC article on coronavirus statistics, the title tag is followed by a meta description tag, which provides a brief summary of what the article is about. A minimal sketch (placeholder copy):
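
  <!-- Illustrative sketch – placeholder copy -->
  <head>
    <title>Coronavirus statistics: Tracking cases and deaths worldwide - Example News</title>
    <meta name="description" content="A visual guide to the latest coronavirus case numbers, deaths and vaccination rates around the world.">
  </head>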

3. Heading (H1-H6) tags

Heading tags are used to structure your pages for both the reader and search engines:

It’s no secret that barely anyone reads an article all the way through anymore – instead, we scan it until we find a section we like, read that one section, and bounce. And if the article isn’t split into sections, many will bounce right away, because an unbroken wall of text is just too much. So, from a user perspective, headings are handy reading aids.

From the perspective of the search engine, however, heading tags form the core of the content, and help search crawler bots understand what the page is about. 

Best practices

The rules for headings are derived from the general copywriting practices – break your copy into bite-sized pieces and maintain consistent formatting:

  • Don’t use more than one H1 – The H1 heading stands apart from other headings because search engines treat it as the title of the page. Not to be confused with the title tag – the title tag is displayed in search results, while the H1 tag is displayed on your website.
  • Maintain a shallow structure – There’s rarely a need to go below H3. Use H1 for the title, H2 for section headings, and H3 for subsections. Anything deeper tends to get confusing.
  • Form query-like headings – Treat each heading as an additional opportunity to rank in search. To this end, each heading should sound either like a query or an answer to a query – keywords included.
  • Be consistent with all headings – All of your headings should be written in such a way that if you were to remove all the text and keep only the headings, they would read like a list.

HTML code

In the same BBC article on coronavirus statistics, a properly set up H2 heading is followed by two paragraphs. A minimal sketch of that structure (placeholder copy):
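
  <!-- Illustrative sketch – placeholder copy -->
  <h2>How are coronavirus cases counted?</h2>
  <p>Most countries report daily case numbers based on positive test results.</p>
  <p>Reporting practices differ between countries, so figures are not always directly comparable.</p>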

4. Image alt text

While the main goal of alt text is web accessibility, the SEO goal of the alt attribute is image indexing.

The key goal of image alt text is to help users understand the image when it cannot be viewed – say, by a visitor who is vision impaired. In these cases, as well as when an image fails to load, the alt text describes what’s in the image in place of the image itself.

From an SEO perspective, alt text is a big part of how images are indexed in Google search. So if there is a visual component to what you do – be it the images of your products, your work, your stock images, your art – then you should definitely consider using image alt texts.

Best practices

A prerequisite to adding alt text is finding all the images that lack it.

You can use a tool like WebSite Auditor to crawl your website and compile a list of images with missing alt text. 

Once you’ve created your list, apply these guidelines:

  • Be concise, but descriptive – Good alt text is about a line or two of text that would help a visually impaired person understand what’s pictured.
  • Don’t be too concise – One word, or even a few words, are probably not going to cut it – there would be no way to differentiate the image from other images. Think of all possible properties displayed: object type, color, material, shape, finish, lighting, etc.
  • Don’t do keyword stuffing – There’s no place left where keyword stuffing still works, and alt text is no exception.

HTML code

Here’s what an alt text snippet might look like for an image of disease cells (the filename and description below are placeholders):
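
  <!-- Illustrative sketch – placeholder filename and description -->
  <img src="disease-cells.jpg" alt="Microscopic view of clustered purple virus cells on a grey background">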

5. Schema markup

Schema markup is used to enhance regular SERP snippets with rich snippet features:

Schema.org hosts a collection of tags developed jointly by Google, Bing, Yahoo!, and Yandex, and the tags are used by webmasters to provide search engines with additional information about different types of pages. In turn, search engines use this information to enhance their SERP snippets with various rich features.

There is no certainty as to whether using Schema markup improves one’s chances of ranking – there’s no question, however, that the resulting snippets look much more attractive than regular snippets, and thus improve one’s standing in search.

Best practices

The only best practice is to visit schema.org and see whether they’ve got any tags that can be applied to your types of pages. There are hundreds, if not thousands, of tags, so there’s likely going to be an option that applies, and may help improve your website listings.

HTML code

Here’s a sample sketch of JSON-LD code specifying nutrition facts for a recipe (the recipe name and values are placeholders). You can visit schema.org for a full list of items available for markup:
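
  <!-- Illustrative sketch – placeholder recipe name and values -->
  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Classic Banana Bread",
    "nutrition": {
      "@type": "NutritionInformation",
      "calories": "270 calories",
      "fatContent": "9 grams",
      "carbohydrateContent": "45 grams"
    }
  }
  </script>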

6. HTML5 semantic tags

HTML5 elements are used for better descriptions of various page components:

Before the introduction of HTML5 elements, we mostly used div tags to split HTML code into separate components, and then we used classes and ids to further specify those components. Each webmaster specified the components in their own custom way, and as such, it ended up being a bit of a mess, and a challenge for search engines to understand what was what on each page.

With the introduction of semantic HTML5 elements, we’ve got a set of intuitive tags, each describing a separate page component. So, instead of tagging our content with a bunch of confusing divs, we now have a way of describing the components in an easy-to-understand, standardized way.

As you can imagine, search engines are very enthusiastic about semantic HTML5.

HTML code

Here are some of the handiest semantic HTML5 elements – use them to improve your communication with search engines (a minimal page skeleton using them follows the list):

  • article – isolates a post from the rest of the code, makes it portable
  • section – isolates a group of posts within a blog or a group of headings within a post
  • aside – isolates supplementary content that is not part of the main content
  • header – isolates the top part of the document, article, section, may contain navigation
  • footer – isolates the bottom of the document, article, section, contains meta information
  • nav – isolates navigation menus, groups of navigational elements
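
A minimal page skeleton using these elements (placeholder content):

  <!-- Illustrative sketch – placeholder content -->
  <body>
    <header>
      <nav>
        <a href="/">Home</a>
        <a href="/blog/">Blog</a>
      </nav>
    </header>
    <article>
      <section>
        <h2>Section heading</h2>
        <p>Section copy goes here.</p>
      </section>
    </article>
    <aside>Related links and supplementary notes.</aside>
    <footer>Copyright and contact information.</footer>
  </body>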

7. Meta robots tag

The robots meta tag is all about the rules of engagement between websites and search engines.

This is where website owners get to outline a set of rules for crawling and indexing their pages. Some of these rules are obligatory, while others are more like suggestions – not all crawlers will respect robots meta tags, but mainstream search engines often will. And if there is no meta robots tag, then the crawlers will do as they please.

Best practices

The meta robots tag should be placed in the head section of the page code, and it should specify which crawlers it addresses and which instructions apply:

  • Address robots by name – Use robots if your instructions are for all crawlers, but use specific names to address individual crawlers. Google’s standard web crawler, for example, is called Googlebot. Addressing individual robots is usually done to ban malicious crawlers from the page while allowing well-intentioned crawlers to carry on.
  • Match instructions to your goals – You’d normally want to use robots meta tags to keep search engines from indexing documents, internal search results, duplicate pages, staging areas, and whatever else you don’t want to show up in search.

HTML code

Below are some of the parameters most commonly used with robots meta tags. You can use any number of them in a single meta robots tag, separated by commas (a sample tag follows the list):

  • noindex — page should not be indexed
  • nofollow — links on the page should not be followed
  • follow — links on the page should be followed, even if the page is not to be indexed
  • noimageindex — images on the page should not be indexed
  • noarchive — search results should not show a cached version of the page
  • unavailable_after — page should not be indexed beyond a certain date.
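
For example, a tag that blocks indexing and link following for all crawlers would look like this (a minimal sketch):

  <!-- Illustrative sketch – combine parameters to suit your goals -->
  <head>
    <meta name="robots" content="noindex, nofollow">
  </head>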

8. Canonical tag

The canonical tag spares you the risk of duplicate content issues:

The gist of it is that any given page, through no fault of your own, can have several addresses. Sometimes these result from technical artifacts – like http/https variants and tracking tags – and other times they result from the sorting and customization options available in product catalogs.

It’s not a big problem, to be honest, except that all those addresses can be taxing on your crawl budget and page authority, and can also mess with your performance tracking. The solution is to use a canonical tag to tell search engines which of those page addresses is the main one.

Best practices

To avoid potential SEO complications, apply the canonical tag to the following pages (a sample tag follows the list):

  • Pages available via different URLs
  • Pages with very similar content
  • Dynamic pages that create their own URL parameters
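
The tag sits in the head section of the non-canonical variants and points to the preferred URL (a minimal sketch; the URL below is a placeholder):

  <!-- Illustrative sketch – placeholder URL -->
  <link rel="canonical" href="https://www.example.com/products/blue-widget/">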

Final thoughts

These are my top HTML tags to still worry about in 2020, though I believe some of them are firmly on their way out. As noted, with search engines getting ever smarter, there’s less and less need for HTML tag optimization, because most things can now be deduced algorithmically. Also, most modern CMS platforms automatically add these elements, at least in some capacity.

Still, I wouldn’t leave it entirely up to Google to interpret my content – it’s best to meet it halfway where you can.

Source: Socialmediatoday.com


Google SEO Tips For News Articles: Lastmod Tag, Separate Sitemaps


Google Search Advocate John Mueller and Analyst Gary Illyes share SEO tips for news publishers during a recent office-hours Q&A recording.

Taking turns answering questions, Mueller addresses the correct use of the lastmod tag, while Illyes discusses the benefits of separate sitemaps.

When To Use The Lastmod Tag?

In an XML sitemap file, lastmod is a tag that stores information about the last time a webpage was modified.

Its intended use is to help search engines track and index significant changes to webpages.
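
For reference, a minimal sketch of a sitemap entry with lastmod (the URL and date are placeholders):

  <!-- Illustrative sketch – placeholder URL and date -->
  <url>
    <loc>https://www.example.com/news/example-article/</loc>
    <lastmod>2023-01-15T09:30:00+00:00</lastmod>
  </url>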

Google provides guidelines for using the lastmod tag, and the tag can influence how your search snippets are displayed.

The presence of the lastmod tag may prompt Googlebot to change the publication date in search results, making the content appear more recent and more attractive to click on.

As a result, there may be an inclination to use the lastmod tag even for minor changes to an article so that it appears as if it was recently published.

A news publisher asks whether they should use the lastmod tag to indicate the date of the latest article update or the date of the most recent comment.

Mueller says the date in the lastmod field should reflect the date when the page’s content has changed significantly enough to require re-crawling.

However, using the last comment date is acceptable if comments are a critical part of the page.

He also reminds the publisher to use structured data and ensure the page date is consistent with the lastmod tag.

“Since the sitemap file is all about finding the right moment to crawl a page based on its changes, the lastmod date should reflect the date when the content has changed significantly enough to merit being re-crawled.

If comments are a critical part of your page, then using that date is fine. Ultimately, this is a decision that you can make. For the date of the article itself, I’d recommend looking at our guidelines on using dates on a page.

In particular, make sure that you use the dates on a page consistently and that you use structured data, including the time zone, within the markup.”

Separate Sitemap For News?

A publisher inquires about Google’s stance on having both a news sitemap and a general sitemap on the same website.

They also ask if it’s acceptable for both sitemaps to include duplicate URLs.

Illyes explained that it’s possible to have just one sitemap with the news extension added to the URLs that need it, but it’s simpler to have separate sitemaps for news and general content. URLs older than 30 days should be removed from the news sitemap.

As for the two sitemaps sharing duplicate URLs – it’s not recommended, but it won’t cause any problems.

Illyes states:

“You can have just one sitemap, a traditional web sitemap as defined by sitemaps.org, and then add the news extension to the URLs that need it. Just keep in mind that you’ll need to remove the news extension from URLs that are older than 30 days. For this reason, it’s usually simpler to have separate sitemaps for news and for web.

Just remove the URLs altogether from the news sitemap when they become too old for news. Including the URLs in both sitemaps, while not very nice, will not cause any issues for you.”
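
For reference, here is a minimal sketch of a news sitemap entry using the news extension (the publication name, URL, and dates are placeholders):

  <!-- Illustrative sketch – placeholder publication, URL and dates -->
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
          xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
    <url>
      <loc>https://www.example.com/news/example-article/</loc>
      <news:news>
        <news:publication>
          <news:name>Example News</news:name>
          <news:language>en</news:language>
        </news:publication>
        <news:publication_date>2023-01-15T09:30:00+00:00</news:publication_date>
        <news:title>Example Headline</news:title>
      </news:news>
    </url>
  </urlset>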

These tips from Mueller and Illyes can help news publishers optimize their websites for search engines and improve the visibility and engagement of their articles.


Source: Google Search Central

Featured Image: Rawpixel.com/Shutterstock




Google Business Profile Optimization For The Financial Vertical


The financial vertical is a dynamic, challenging, and highly regulated space.

As such, for businesses in this vertical, optimizing local search presence and, specifically, Google Business Profile listings requires a greater level of sensitivity and specialization than industries like retail or restaurant.

The inherent challenges stem from a host of considerations, such as internal branding guidelines, accessibility considerations, regulatory measures, and governance considerations among lines of business within the financial organization, among others.

This means that local listings in this vertical are not “one size fits all” but rather vary based on function, falling into one of several listing types: branches, loan officers, financial advisors, and ATMs (which may include walk-up ATMs, drive-through ATMs, and “smart ATMs”).

Each of these types of listings requires a unique set of hours, categories, hyper-local content, attributes, and a unique overall optimization strategy.

The goal of this article is to dive deeper into why having a unique optimization strategy matters for businesses in the financial vertical and share financial brand-specific best practices for listing optimization strategy.

Financial Brand Listing Type Considerations

One reason listing optimization is so nuanced in the financial vertical is that, in addition to all the listing features that vary by business function as mentioned above, Google also defines different classifications (or types) of listings – each with its own set of guidelines (read: rules) that apply according to the listing scenario.

This includes the distinction between a listing for an organization (e.g., for a bank branch) vs. that of an individual practitioner (used to represent a loan officer that may or may not sit at the branch, which has a separate listing).

Somewhere between those two main divisions, there may be a need for a department listing (e.g., for consumer banking vs. mortgages).

Again, each listing classification has rules and criteria around how (and how many) listings can be established for a given address and how they are represented.

Disregarding Google’s guidelines here carries the risk of disabled listings or even account-level penalties.

While that outcome is relatively rare, taking those risks is ill-advised – and potentially catastrophic to revenue and reputation in such a tightly regulated and competitive industry.

Editor’s note: If you have 10+ locations, you can request bulk verification.

Google Business Profile Category Selection

Category selection in Google Business Profile (GBP) is one of the most influential, and thus important, activities involved in creating and optimizing listings – in the context of ranking, visibility, and traffic attributable to the listing.

Keep in mind you can’t “keyword optimize” a GBP listing (unless you choose to violate Business Title guidelines), and this is by design on Google’s part.

Because of this, the primary and secondary categories that you select are collectively one of the strongest cues that you can send to Google around who should see your listing in the local search engine results pages (SERPs), and for what queries (think relevancy).

Suffice it to say this is a case where quality and specificity are more important than quantity.

This is partly because Google only allows one primary category to be selected – but also because spamming the secondary category field with as many entries as Google will allow (especially with categories that are only tangentially relevant to the listing) can have consequences that are both unintuitive and unintended.

The point is that too many categories can (and often do) muddy the signal for Google’s algorithm regarding surfacing listings for appropriate queries and audiences.

This can lead to poor alignment with users’ needs and experiences and drive the wrong traffic.

It can also cause confusion for the algorithm around relevancy, resulting in the listing being suppressed or ranking poorly, thus driving less traffic.

Governance Vs. Cannibalization

As discussed above, the choice of classification type and the practice of targeting categories appropriately according to the business functions and objectives of a given listing play together to help frame a governance strategy within the organic local search channel.

The idea here is to create separation between lines of business (LOBs) to prevent internal competition over rankings and visibility for search terms that are misaligned for one or more LOB, such that they inappropriately cannibalize each other.

In simpler terms, users searching for a financial advisor or loan officer should not be served a listing for a consumer bank branch, and vice versa.

This creates a poor user experience that will ultimately result in frustrated users, complaints, and potential loss of revenue.

The Importance Of Category Selection

To illustrate this, see the example below.

A large investment bank might have the following recommended categories for Branches and Advisors, respectively (an asterisk denotes the primary category):

Branch Categories

  • *Investment Service.
  • Investment Company.
  • Financial Institution.

Advisor Categories

  • *Financial Consultant.
  • Financial Planner.
  • Financial Broker.

Notice the Branch categories signal relevance for the institution as a whole, whereas the Advisor categories align with Advisors (i.e., individual practitioners.) Obviously, these listings serve separate but complementary functions.

When optimized strategically, their visibility will align with the needs of users seeking out information about those functions accordingly.

Category selection is not the only factor involved in crafting a proper governance strategy, albeit an important one.

That said, all the other available data fields and content within the listings should be similarly planned and optimized in alignment with appropriate governance considerations, in addition to the overall relevancy and content strategy as applicable for the associated LOBs.

Specialized Financial Brand Listing Attributes

GBP attributes are data points about a listing that help communicate details about the business being represented.

They vary by primary category and are a great opportunity to serve users’ needs while boosting performance – differentiating the listing from the competition and feeding Google’s algorithm more relevant information about it.

This is often referred to as the “listing completeness” aspect of Google’s local algorithm, which translates to “the more information Google has about a listing, the more precisely it can provide that listing to users according to the localized queries they use.”

The following is a list of attributes that are helpful for the financial vertical:

  • Online Appointments.
  • Black-Owned.
  • Family-Led.
  • Veteran-Led.
  • Women-Led.
  • Appointment Links.
  • Wheelchair Accessible Elevator.
  • Wheelchair Accessible Entrance.
  • Wheelchair Accessible Parking Lot.

The following chart helps to illustrate which attributes are best suited for listing based on listing/LOB/ORG type:

Image from Rio SEO, December 2022

Managing Hours Of Operation

This is an important and often overlooked aspect of listings management in the financial space and in general.

Hours of operation, first and foremost, should be present in the listings, not left out. While providing hours is not mandatory, not doing so will impact user experience and visibility.

Like most of the previous items, hours for a bank branch (e.g., 10 am to 5 pm) will differ from those of a drive-through ATM (open 24 hours), and from those of a mortgage loan officer and a financial advisor who both have offices at the same address.

Each of these services and LOBs can best be represented by separate listings, each with its own hours of operation.

Leaving these details out, or using the same set of operating hours across all of these LOBs and listing types, sets users up for frustration and prevents Google from properly serving and messaging users around a given location’s availability (such as “open now,” “closing soon,” or “closed,” as applicable.)

All of this leads to either missed opportunities when hours are omitted, allowing a competitor (that Google knows is open) to rank higher in the SERPs, or frustrated customers that arrive at an investment banking office expecting to make a consumer deposit or use an ATM.

Appointment URL With Local Attribution Tracking

This is especially relevant for individual practitioner listings such as financial advisors, mortgage loan officers, and insurance agents.

Appointment URLs allow brands to publish a link where clients can book appointments with the individual whose listing the user finds and interacts with in search.

This is a low-hanging fruit tactic that can make an immediate and significant impact on lead generation and revenue.

Taking this a step further, these links can be tagged with UTM parameters (for brands using Google Analytics; similar tagging applies to other analytics platforms) to track conversion events, leads, and revenue associated with this listing feature.

Editorial note: Here is an example of a link with UTM parameters: https://www.domain.com/?utm_source=source&utm_medium=medium&utm_campaign=campaign
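
As a sketch, an appointment link for an advisor listing might be tagged like this (the domain and parameter values are hypothetical):

  https://www.examplebank.com/advisors/jane-doe/appointments?utm_source=google&utm_medium=organic&utm_campaign=gbp_appointment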

 

Financial vertical appointment booking example (Image from Google, December 2022)

Leveraging Services

Services can be added to a listing to let potential customers know what services are available at a given location.

Adding services in Google Business Profile (Screenshot from Google, January 2023)

Services in GBP are subject to availability by primary category, another reason category selection is so important, as discussed above.

Specifically, once services are added, they will be prominently displayed within the mobile SERPs under the listing’s “Services” tab.

Financial brand services in Google Business Profile on mobile (Screenshot from Google, January 2023)

This not only feeds data completeness, which benefits both mobile and desktop performance, but also increases engagement in the mobile SERPs (clicks to website, calls, driving directions) – bottom-funnel key performance indicators (KPIs) that drive revenue.

Google Posts

Google Posts represent a content marketing opportunity that is valuable on multiple levels.

An organization can post relevant, evergreen content that is strategically optimized for key localized phrases, services, and product offerings.

While there is no clear evidence or admission from Google that relevant Post content has a direct impact on a listing’s rankings overall, observation shows that listings with well-optimized Posts do appear in the local SERPs for keyword queries that align with that content.

This happens in the form of “related to your search” snippets and has been widely observed since 2019.

This has a few different implications, reinforcing the benefits of leveraging Google Posts in your local search strategy.

First, given that Post snippets are triggered by matching queries, it is fair to infer that without the relevant Post, a given listing may not have surfaced in the SERPs at all. Thus, we can infer a visibility benefit, which leads to more traffic.

Second, it is well-documented that featured snippets are associated with boosts in click-through rate (CTR), which amplifies the traffic increases that result from the increased visibility alone.

Additional Post Benefits

Beyond these two very obvious benefits of Google Posts, they also provide many benefits around messaging potential visitors and clients with relevant information about the location, including products, services, promotions, events, limited-time offers, and potentially many others.

Use cases for this can include consumer banks that feature free checking or direct deposit or financial advisors that offer a free 60-minute initial consultation.

Taking the time to publish posts that highlight these differentiators could have a measurable impact on traffic, CTR, and revenue.

Another notable aspect of Google Posts is that they were designed to be visible for specific date ranges – and, at one time, would “expire” and fall out of the SERPs once the time period passed.

Now, however, certain Post types will surface long after the Post’s expiration date if there is a relevancy match between the user’s query and the content.

Concluding Thoughts

To summarize, the financial vertical requires a highly specialized, precise GBP optimization strategy, which is well-vetted for the needs of users, LOBs, and regulatory compliance.

Considerations like primary and secondary categories, hours, attributes, services, and content (in the form of Google Posts) all play a critical role in defining that overall strategy, including setting up and maintaining crucial governance boundaries between complementary LOBs.

Undertaking all these available listing features holistically and strategically allows financial institutions and practitioners to maximize visibility, engagement, traffic, revenue, and overall performance from local search, while minimizing cannibalization, complaints, and poor user experience.



Featured Image: Andrey_Popov/Shutterstock




11 Disadvantages Of ChatGPT Content


ChatGPT produces content that is comprehensive and plausibly accurate.

But researchers, artists, and professors warn of shortcomings to be aware of which degrade the quality of the content.

In this article, we’ll look at 11 disadvantages of ChatGPT content. Let’s dive in.

1. Phrase Usage Makes It Detectable As Non-Human

Researchers studying how to detect machine-generated content have discovered patterns that make it sound unnatural.

One of these quirks is how AI struggles with idioms.

An idiom is a phrase or saying with a figurative meaning attached to it, for example, “every cloud has a silver lining.” 

A lack of idioms within a piece of content can be a signal that the content is machine-generated – and this can be part of a detection algorithm.

This is what the 2022 research paper Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers says about this quirk in machine-generated content:

“Complex phrasal features are based on the frequency of specific words and phrases within the analyzed text that occur more frequently in human text.

…Of these complex phrasal features, idiom features retain the most predictive power in detection of current generative models.”

This inability to use idioms contributes to making ChatGPT output sound and read unnaturally.

2. ChatGPT Lacks Ability For Expression

An artist commented on how the output of ChatGPT mimics what art is, but lacks the actual qualities of artistic expression.

Expression is the act of communicating thoughts or feelings.

ChatGPT output doesn’t contain expressions, only words.

It cannot produce content that touches people emotionally on the same level as a human can – because it has no actual thoughts or feelings.

Musical artist Nick Cave, in an article posted to his Red Hand Files newsletter, commented on a ChatGPT-generated lyric that was sent to him, written in the style of Nick Cave.

He wrote:

“What makes a great song great is not its close resemblance to a recognizable work.

…it is the breathless confrontation with one’s vulnerability, one’s perilousness, one’s smallness, pitted against a sense of sudden shocking discovery; it is the redemptive artistic act that stirs the heart of the listener, where the listener recognizes in the inner workings of the song their own blood, their own struggle, their own suffering.”

Cave called the ChatGPT lyrics a mockery.

This is the ChatGPT lyric that resembles a Nick Cave lyric:

“I’ve got the blood of angels, on my hands
I’ve got the fire of hell, in my eyes
I’m the king of the abyss, I’m the ruler of the dark
I’m the one that they fear, in the shadows they hark”

And this is an actual Nick Cave lyric (Brother, My Cup Is Empty):

“Well I’ve been sliding down on rainbows
I’ve been swinging from the stars
Now this wretch in beggar’s clothing
Bangs his cup across the bars
Look, this cup of mine is empty!
Seems I’ve misplaced my desires
Seems I’m sweeping up the ashes
Of all my former fires”

It’s easy to see that the machine-generated lyric resembles the artist’s lyric, but it doesn’t really communicate anything.

Nick Cave’s lyrics tell a story that resonates with the pathos, desire, shame, and willful deception of the person speaking in the song. It expresses thoughts and feelings.

It’s easy to see why Nick Cave calls it a mockery.

3. ChatGPT Does Not Produce Insights

An article published in The Insider quoted an academic who noted that academic essays generated by ChatGPT lack insights about the topic.

ChatGPT summarizes the topic but does not offer a unique insight into the topic.

Humans create through knowledge, but also through their personal experience and subjective perceptions.

Professor Christopher Bartel of Appalachian State University is quoted by The Insider as saying that, while a ChatGPT essay may exhibit high grammar qualities and sophisticated ideas, it still lacked insight.

Bartel said:

“They are really fluffy. There’s no context, there’s no depth or insight.”

Insight is the hallmark of a well-done essay and it’s something that ChatGPT is not particularly good at.

This lack of insight is something to keep in mind when evaluating machine-generated content.

4. ChatGPT Is Too Wordy

A research paper published in January 2023 discovered patterns in ChatGPT content that makes it less suitable for critical applications.

The paper is titled, How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection.

The research showed that humans preferred ChatGPT’s answers to more than 50% of the questions related to finance and psychology.

But ChatGPT failed at answering medical questions because humans preferred direct answers – something the AI didn’t provide.

The researchers wrote:

“…ChatGPT performs poorly in terms of helpfulness for the medical domain in both English and Chinese.

The ChatGPT often gives lengthy answers to medical consulting in our collected dataset, while human experts may directly give straightforward answers or suggestions, which may partly explain why volunteers consider human answers to be more helpful in the medical domain.”

ChatGPT tends to cover a topic from different angles, which makes it inappropriate when the best answer is a direct one.

Marketers using ChatGPT must take note of this because site visitors requiring a direct answer will not be satisfied with a verbose webpage.

And good luck ranking an overly wordy page in Google’s featured snippets, where a succinct and clearly expressed answer that can work well in Google Voice may have a better chance to rank than a long-winded answer.

OpenAI, the makers of ChatGPT, acknowledges that giving verbose answers is a known limitation.

The announcement article by OpenAI states:

“The model is often excessively verbose…”

The ChatGPT bias toward providing long-winded answers is something to be mindful of when using ChatGPT output, as you may encounter situations where shorter and more direct answers are better.

5. ChatGPT Content Is Highly Organized With Clear Logic

ChatGPT has a writing style that is not only verbose but also tends to follow a template that gives the content a unique style that isn’t human.

This inhuman quality is revealed in the differences between how humans and machines answer questions.

The movie Blade Runner has a scene featuring a series of questions designed to reveal whether the subject answering the questions is a human or an android.

These questions were part of a fictional test called the “Voight-Kampff test.”

One of the questions is:

“You’re watching television. Suddenly you realize there’s a wasp crawling on your arm. What do you do?”

A normal human response would be to say something like they would scream, walk outside and swat it, and so on.

But when I posed this question to ChatGPT, it offered a meticulously organized answer that summarized the question and then walked through multiple possible outcomes – failing to answer the actual question.

Screenshot Of ChatGPT Answering A Voight-Kampff Test Question

Screenshot from ChatGPT, January 2023

The answer is highly organized and logical, giving it a highly unnatural feel, which is undesirable.

6. ChatGPT Is Overly Detailed And Comprehensive

ChatGPT was trained in a way that rewarded the machine when humans were happy with the answer.

The human raters tended to prefer answers that had more details.

But sometimes, such as in a medical context, a direct answer is better than a comprehensive one.

What that means is that the machine needs to be prompted to be less comprehensive and more direct when those qualities are important.

From OpenAI:

“These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.”

7. ChatGPT Lies (Hallucinates Facts)

The above-cited research paper, How Close is ChatGPT to Human Experts?, noted that ChatGPT has a tendency to lie.

It reports:

“When answering a question that requires professional knowledge from a particular field, ChatGPT may fabricate facts in order to give an answer…

For example, in legal questions, ChatGPT may invent some non-existent legal provisions to answer the question.

…Additionally, when a user poses a question that has no existing answer, ChatGPT may also fabricate facts in order to provide a response.”

The Futurism website documented instances where machine-generated content published on CNET was wrong and full of “dumb errors.”

CNET should have had an idea this could happen, because OpenAI published a warning about incorrect output:

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

CNET claims to have submitted the machine-generated articles to human review prior to publication.

A problem with human review is that ChatGPT content is designed to sound persuasively correct, which may fool a reviewer who is not a topic expert.

8. ChatGPT Is Unnatural Because It’s Not Divergent

The research paper, How Close is ChatGPT to Human Experts? also noted that human communication can have indirect meaning, which requires a shift in topic to understand it.

ChatGPT is too literal, which causes the answers to sometimes miss the mark because the AI overlooks the actual topic.

The researchers wrote:

“ChatGPT’s responses are generally strictly focused on the given question, whereas humans’ are divergent and easily shift to other topics.

In terms of the richness of content, humans are more divergent in different aspects, while ChatGPT prefers focusing on the question itself.

Humans can answer the hidden meaning under the question based on their own common sense and knowledge, but the ChatGPT relies on the literal words of the question at hand…”

Humans are better able to diverge from the literal question, which is important for answering “what about” type questions.

For example, if I ask:

“Horses are too big to be a house pet. What about raccoons?”

The above question is not asking if a raccoon is an appropriate pet. The question is about the size of the animal.

ChatGPT focuses on the appropriateness of the raccoon as a pet instead of focusing on the size.

Screenshot of an Overly Literal ChatGPT Answer

Screenshot from ChatGPT, January 2023

9. ChatGPT Contains A Bias Towards Being Neutral

The output of ChatGPT is generally neutral and informative. It’s a bias in the output that can appear helpful but isn’t always.

The research paper we just discussed noted that neutrality is an unwanted quality when it comes to legal, medical, and technical questions.

Humans tend to pick a side when offering these kinds of opinions.

10. ChatGPT Is Biased To Be Formal

ChatGPT output has a bias that prevents it from loosening up and answering with ordinary expressions. Instead, its answers tend to be formal.

Humans, on the other hand, tend to answer questions with a more colloquial style, using everyday language and slang – the opposite of formal.

ChatGPT doesn’t use abbreviations like GOAT or TL;DR.

The answers also lack instances of irony, metaphors, and humor, which can make ChatGPT content overly formal for some content types.

The researchers write:

“…ChatGPT likes to use conjunctions and adverbs to convey a logical flow of thought, such as “In general”, “on the other hand”, “Firstly,…, Secondly,…, Finally” and so on.”

11. ChatGPT Is Still In Training

ChatGPT is still in the process of being trained and improved.

OpenAI recommends that all content generated by ChatGPT should be reviewed by a human, listing this as a best practice.

OpenAI suggests keeping humans in the loop:

“Wherever possible, we recommend having a human review outputs before they are used in practice.

This is especially critical in high-stakes domains, and for code generation.

Humans should be aware of the limitations of the system, and have access to any information needed to verify the outputs (for example, if the application summarizes notes, a human should have easy access to the original notes to refer back).”

Unwanted Qualities Of ChatGPT

It’s clear that there are many issues with ChatGPT that make it unfit for unsupervised content generation. It contains biases and fails to create content that feels natural or contains genuine insights.

Further, its inability to feel or author original thoughts makes it a poor choice for generating artistic expressions.

Users should apply detailed prompts in order to generate content that is better than the default content it tends to output.

Lastly, human review of machine-generated content is not always enough, because ChatGPT content is designed to appear correct, even when it’s not.

That means it’s important that human reviewers are subject-matter experts who can discern between correct and incorrect content on a specific topic.



Featured image by Shutterstock/fizkes


