Drupal Warns of Two Critical Vulnerabilities

Drupal announced two vulnerabilities affecting versions 9.2 and 9.3 that could allow an attacker to upload malicious files and take control of a site. The threat levels of the two vulnerabilities are rated as Moderately Critical.

The United States Cybersecurity & Infrastructure Security Agency (CISA) warned that the exploits could lead to an attacker taking control of a vulnerable Drupal-based website.

CISA stated:

“Drupal has released security updates to address vulnerabilities affecting Drupal 9.2 and 9.3.

An attacker could exploit these vulnerabilities to take control of an affected system.”

Drupal

Drupal is a popular open source content management system written in the PHP programming language.

Many major organizations, such as the Smithsonian Institution, Universal Music Group, Pfizer, Johnson & Johnson, Princeton University, and Columbia University, use Drupal for their websites.

Form API – Improper Input Validation

The first vulnerability affects Drupal’s form API. It is an improper input validation flaw, meaning that what is submitted through the form API is not checked against what is actually allowed.

Validating what is uploaded or input into a form is a common best practice. In general, input validation is done with an allow-list approach, where the form expects specific inputs and rejects anything that does not correspond with the expected input or upload.

When a form fails to validate an input then that leaves the website open to the upload of files that can trigger unwanted behavior in the web application.
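As a generic illustration of the allow-list idea (a hypothetical Python sketch, not Drupal’s actual form API), an upload check might reject anything whose extension is not explicitly permitted:

```python
# Allow-list validation: only explicitly permitted extensions pass.
ALLOWED_EXTENSIONS = {"png", "jpg", "jpeg", "gif", "pdf"}

def is_allowed_upload(filename: str) -> bool:
    """Return True only if the filename has an explicitly allowed extension."""
    if "." not in filename:
        return False
    extension = filename.rsplit(".", 1)[-1].lower()
    return extension in ALLOWED_EXTENSIONS
```

With this approach, a file like `shell.php` is rejected by default because PHP is simply not on the list, rather than relying on a deny list that an attacker might work around.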

Drupal’s announcement explained the specific issue:

“Drupal core’s form API has a vulnerability where certain contributed or custom modules’ forms may be vulnerable to improper input validation. This could allow an attacker to inject disallowed values or overwrite data. Affected forms are uncommon, but in certain cases an attacker could alter critical or sensitive data.”

Drupal Core – Access Bypass

Access bypass is a class of vulnerability where part of the site can be reached through a path that is missing an access control check, in some cases allowing a user to gain access to levels they don’t have permissions for.

Drupal’s announcement described the vulnerability:

“Drupal 9.3 implemented a generic entity access API for entity revisions. However, this API was not completely integrated with existing permissions, resulting in some possible access bypass for users who have access to use revisions of content generally, but who do not have access to individual items of node and media content.”

Publishers Encouraged to Review Security Advisories and Apply Updates

The United States Cybersecurity and Infrastructure Security Agency (CISA) and Drupal encourage publishers to review the security advisories and update to the latest versions.

Citations

Read the Official CISA Drupal Vulnerability Bulletin

Drupal Releases Security Updates

Read the Two Drupal Security Announcements

Drupal core – Moderately critical – Improper input validation – SA-CORE-2022-008

Drupal core – Moderately critical – Access bypass – SA-CORE-2022-009



Google SEO Tips For News Articles: Lastmod Tag, Separate Sitemaps


Google Search Advocate John Mueller and Analyst Gary Illyes share SEO tips for news publishers during a recent office-hours Q&A recording.

Taking turns answering questions, Mueller addresses the correct use of the lastmod tag, while Illyes discusses the benefits of separate sitemaps.

When To Use The Lastmod Tag?

In an XML sitemap file, lastmod is a tag that stores information about the last time a webpage was modified.

Its intended use is to help search engines track and index significant changes to webpages.

Google provides guidelines for using the lastmod tag, since its value can influence how dates appear in search snippets.

The presence of the lastmod tag may prompt Googlebot to change the publication date in search results, making the content appear more recent and more attractive to click on.

As a result, there may be an inclination to use the lastmod tag even for minor changes to an article so that it appears as if it was recently published.

A news publisher asks whether they should use the lastmod tag to indicate the date of the latest article update or the date of the most recent comment.

Mueller says the date in the lastmod field should reflect the date when the page’s content has changed significantly enough to require re-crawling.

However, using the last comment date is acceptable if comments are a critical part of the page.

He also reminds the publisher to use structured data and ensure the page date is consistent with the lastmod tag.

“Since the site map file is all about finding the right moment to crawl a page based on its changes, the lastmod date should reflect the date when the content has significantly changed enough to merit being re-crawled.

If comments are a critical part of your page, then using that date is fine. Ultimately, this is a decision that you can make. For the date of the article itself, I’d recommend looking at our guidelines on using dates on a page.

In particular, make sure that you use the dates on a page consistently and that you use structured data, including the time zone, within the markup.”
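For reference, a lastmod entry in an XML sitemap follows the sitemaps.org protocol (the URL and date below are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/article</loc>
    <!-- Date of the last significant content change, not minor tweaks -->
    <lastmod>2023-01-15T14:30:00+00:00</lastmod>
  </url>
</urlset>
```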

Separate Sitemap For News?

A publisher inquires about Google’s stance on having both a news sitemap and a general sitemap on the same website.

They also ask if it’s acceptable for both sitemaps to include duplicate URLs.

Illyes explained that it’s possible to have just one sitemap with the news extension added to the URLs that need it, but it’s simpler to have separate sitemaps for news and general content. URLs older than 30 days should be removed from the news sitemap.

Regarding sitemaps sharing duplicate URLs, it’s not recommended, but it won’t cause any problems.

Illyes states:

“You can have just one site map, a traditional web sitemap as defined by sitemaps.org, and then add the news extension to the URLs that need it. Just keep in mind that, you’ll need to remove the news extension from URLs that are older than 30 days. For this reason it’s usually simpler to have separate site map for news and for web.

Just remove the URLs altogether from the news site map when they become too old for news. Including the URLs in both site maps, while not very nice, but it will not cause any issues for you.”
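Concretely, a news sitemap entry adds the Google News extension to a standard sitemap URL (the publication name, URL, and dates below are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
  <url>
    <loc>https://www.example.com/breaking-story</loc>
    <news:news>
      <news:publication>
        <news:name>Example News</news:name>
        <news:language>en</news:language>
      </news:publication>
      <!-- Remove this URL from the news sitemap once it is older than 30 days -->
      <news:publication_date>2023-01-15</news:publication_date>
      <news:title>Breaking Story Headline</news:title>
    </news:news>
  </url>
</urlset>
```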

These tips from Mueller and Illyes can help news publishers optimize their websites for search engines and improve the visibility and engagement of their articles.


Source: Google Search Central

Featured Image: Rawpixel.com/Shutterstock



Google Business Profile Optimization For The Financial Vertical


The financial vertical is a dynamic, challenging, and highly regulated space.

As such, for businesses in this vertical, optimizing local search presence and, specifically, Google Business Profile listings requires a greater level of sensitivity and specialization than industries like retail or restaurant.

The inherent challenges stem from a host of considerations, such as internal branding guidelines, accessibility considerations, regulatory measures, and governance considerations among lines of business within the financial organization, among others.

This means that local listings in this vertical are not “one size fits all” but rather vary based on function, falling into one of several listing types, including branches, loan officers, financial advisors, and ATMs (which may include walk-up ATMs, drive-through ATMs, and “smart ATMs”).

Each of these types of listings requires a unique set of hours, categories, hyper-local content, attributes, and a unique overall optimization strategy.

The goal of this article is to dive deeper into why having a unique optimization strategy matters for businesses in the financial vertical and share financial brand-specific best practices for listing optimization strategy.

Financial Brand Listing Type Considerations

One reason listing optimization is so nuanced in the financial vertical is that, in addition to all the listing features that vary by business function as mentioned above, Google also has essentially different classifications (or types) of listings by definition – each with its own set of guidelines (read “rules”) that apply according to a listing scenario.

This includes the distinction between a listing for an organization (e.g., for a bank branch) vs. that of an individual practitioner (used to represent a loan officer that may or may not sit at the branch, which has a separate listing).

Somewhere between those two main divisions, there may be a need for a department listing (e.g., for consumer banking vs. mortgages).

Again, each listing classification has rules and criteria around how (and how many) listings can be established for a given address and how they are represented.

Disregarding Google’s guidelines here carries the risk of disabled listings or even account-level penalties.

While that outcome is relatively rare, taking those risks is ill-advised; the consequences could be catastrophic to revenue and reputation in such a tightly regulated and competitive industry.

Editor’s note: If you have 10+ locations, you can request bulk verification.

Google Business Profile Category Selection

Category selection in Google Business Profile (GBP) is one of the most influential, and thus important, activities involved in creating and optimizing listings – in the context of ranking, visibility, and traffic attributable to the listing.

Keep in mind you can’t “keyword optimize” a GBP listing (unless you choose to violate Business Title guidelines), and this is by design on Google’s part.

Because of this, the primary and secondary categories that you select are collectively one of the strongest cues that you can send to Google around who should see your listing in the local search engine results pages (SERPs), and for what queries (think relevancy).

Suffice it to say this is a case where quality and specificity are more important than quantity.

This is, in part, because Google only allows one primary category to be selected – but also because the practice of spamming the secondary category field with as many entries as Google will allow (especially with categories that are only tangentially relevant to the listing) can have consequences that are both unintuitive and unintended.

The point is too many categories can (and often do) muddy the signal for Google’s algorithm regarding surfacing listings for appropriate queries and audiences.

This can lead to poor alignment with users’ needs and experiences and drive the wrong traffic.

It can also cause confusion for the algorithm around relevancy, resulting in the listing being suppressed or ranking poorly, thus driving less traffic.

Governance Vs. Cannibalization

The distinction between classification types, combined with the practice of targeting categories appropriately for the business functions and objectives a given listing represents, helps frame a governance strategy within the organic local search channel.

The idea here is to create separation between lines of business (LOBs) to prevent internal competition over rankings and visibility for search terms that are misaligned for one or more LOB, such that they inappropriately cannibalize each other.

In simpler terms, users searching for a financial advisor or loan officer should not be served a listing for a consumer bank branch, and vice versa.

This creates a poor user experience that will ultimately result in frustrated users, complaints, and potential loss of revenue.

The Importance Of Category Selection

To illustrate this, see the example below.

A large investment bank might have the following recommended categories for Branches and Advisors, respectively (an asterisk refers to the primary category):

Branch Categories

  • *Investment Service.
  • Investment Company.
  • Financial Institution.

Advisor Categories

  • *Financial Consultant.
  • Financial Planner.
  • Financial Broker.

Notice the Branch categories signal relevance for the institution as a whole, whereas the Advisor categories align with Advisors (i.e., individual practitioners). Obviously, these listings serve separate but complementary functions.

When optimized strategically, their visibility will align with the needs of users seeking out information about those functions accordingly.

Category selection is not the only factor involved in crafting a proper governance strategy, albeit an important one.

That said, all the other available data fields and content within the listings should be similarly planned and optimized in alignment with appropriate governance considerations, in addition to the overall relevancy and content strategy as applicable for the associated LOBs.

Specialized Financial Brand Listing Attributes

GBP attributes are data points about a listing that help communicate details about the business being represented.

They vary by primary category and are a great opportunity to serve users’ needs while boosting performance by differentiating against the competition, and feeding Google’s algorithm more relevant information about a given listing.

This is often referred to as the “listing completeness” aspect of Google’s local algorithm, which translates to “the more information Google has about a listing, the more precisely it can provide that listing to users according to the localized queries they use.”

The following is a list of attributes that are helpful for the financial vertical:

  • Online Appointments.
  • Black-Owned.
  • Family-Led.
  • Veteran-Led.
  • Women-Led.
  • Appointment Links.
  • Wheelchair Accessible Elevator.
  • Wheelchair Accessible Entrance.
  • Wheelchair Accessible Parking Lot.

The following chart helps to illustrate which attributes are best suited for listing based on listing/LOB/ORG type:

Image from Rio SEO, December 2022

Managing Hours Of Operation

This is an important and often overlooked aspect of listings management in the financial space and in general.

Hours of operation, first and foremost, should be present in the listings, not left out. While providing hours is not mandatory, not doing so will impact user experience and visibility.

Like most of the previous items, hours for a bank branch (e.g., 10 a.m. to 5 p.m.) will differ from those of a drive-through ATM (open 24 hours), and from those of a mortgage loan officer and a financial advisor who both have offices at the same address.

Each of these services and LOBs can best be represented by separate listings, each with its own hours of operation.

Leaving these details out, or using the same set of operating hours across all of these LOBs and listing types, sets users up for frustration and prevents Google from properly serving and messaging users around a given location’s availability (such as “open now,” “closing soon,” or “closed,” as applicable.)

All of this leads to either missed opportunities when hours are omitted, allowing a competitor (that Google knows is open) to rank higher in the SERPs, or frustrated customers that arrive at an investment banking office expecting to make a consumer deposit or use an ATM.

Appointment URL With Local Attribution Tracking

This is especially relevant for individual practitioner listings such as financial advisors, mortgage loan officers, and insurance agents.

Appointment URLs allow brands to publish a link where clients can book appointments with the individual whose listing the user finds and interacts with in search.

This is a low-hanging fruit tactic that can make an immediate and significant impact on lead generation and revenue.

Taking this another step, these links can be tagged with UTM parameters (for brands using Google Analytics and similarly tagged for other analytic platforms) to track conversion events, leads, and revenue associated with this listing feature.

Editorial note: Here is an example of a link with UTM parameters: https://www.domain.com/?utm_source=source&utm_medium=medium&utm_campaign=campaign
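As a sketch of how such tagging could be automated (the function name and parameter values here are illustrative, not a specific analytics product’s API):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_utm(url: str, source: str, medium: str, campaign: str) -> str:
    """Append UTM parameters to a URL, preserving any existing query string."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunparse(parts._replace(query=urlencode(query)))

# Example: tag an advisor's appointment link for Google Business Profile traffic
tagged = add_utm("https://www.example.com/book", "google", "organic", "gbp-appointments")
```

Tagging every appointment link this way keeps the parameters consistent across listings, so conversions can be segmented by source and campaign in the analytics platform.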

 

Financial vertical appointment booking example. Image from Google, December 2022

Leveraging Services

Services can be added to a listing to let potential customers know what services are available at a given location.

Screenshot from Google, January 2023

Services in GBP are subject to availability by primary category, another reason category selection is so important, as discussed above.

Specifically, once services are added to a listing, they will be prominently displayed on the listing within the mobile SERPs under the “Services” tab of the listing.

Screenshot from Google, January 2023

This not only feeds more data completeness, which benefits both mobile and desktop performance, but also increases engagement in the mobile SERPs (clicks to the website, calls, driving directions) – bottom-funnel key performance indicators (KPIs) that drive revenue.

Google Posts

Google Posts represent a content marketing opportunity that is valuable on multiple levels.

An organization can post relevant, evergreen content that is strategically optimized for key localized phrases, services, and product offerings.

While there is no clear evidence or admission by Google that relevant content will have a direct impact on rankings overall for that listing, what we can say for certain from observation is that listings with well-optimized posts do present in the local SERPs landscape for keyword queries that align with that content.

This happens in the form of “related to your search” snippets and has been widely observed since 2019.

This has a few different implications, reinforcing the benefits of leveraging Google Posts in your local search strategy.

First, given that Post snippets are triggered, it is fair to infer that if a given listing did not have the relevant post, that listing may not have surfaced at all in the SERPs. Thus, we can infer a benefit around visibility, which leads to more traffic.

Second, it is well-documented that featured snippets are associated with boosts in click-through rate (CTR), which amplifies the traffic increases that result from the increased visibility alone.

Additional Post Benefits

Beyond these two very obvious benefits of Google Posts, they also provide many benefits around messaging potential visitors and clients with relevant information about the location, including products, services, promotions, events, limited-time offers, and potentially many others.

Use cases for this can include consumer banks that feature free checking or direct deposit or financial advisors that offer a free 60-minute initial consultation.

Taking the time to publish posts that highlight these differentiators could have a measurable impact on traffic, CTR, and revenue.

Another noteworthy aspect of Google Posts is that they were designed to be visible for specific date ranges – and, at one time, would “expire” or fall out of the SERPs once the time period passed.

Certain post types will surface long after the expiration date of the post if there is a relevancy match between the user’s query and the content.

Concluding Thoughts

To summarize, the financial vertical requires a highly specialized, precise GBP optimization strategy, which is well-vetted for the needs of users, LOBs, and regulatory compliance.

Considerations like primary and secondary categories, hours, attributes, services, and content (in the form of Google Posts) all play a critical role in defining that overall strategy, including setting up and maintaining crucial governance boundaries between complementary LOBs.

Undertaking all these available listing features holistically and strategically allows financial institutions and practitioners to maximize visibility, engagement, traffic, revenue, and overall performance from local search while minimizing cannibalization, complaints, and poor user experience.



Featured Image: Andrey_Popov/Shutterstock



11 Disadvantages Of ChatGPT Content


ChatGPT produces content that is comprehensive and plausibly accurate.

But researchers, artists, and professors warn of shortcomings that degrade the quality of the content.

In this article, we’ll look at 11 disadvantages of ChatGPT content. Let’s dive in.

1. Phrase Usage Makes It Detectable As Non-Human

Researchers studying how to detect machine-generated content have discovered patterns that make it sound unnatural.

One of these quirks is how AI struggles with idioms.

An idiom is a phrase or saying with a figurative meaning attached to it, for example, “every cloud has a silver lining.” 

A lack of idioms within a piece of content can be a signal that the content is machine-generated – and this can be part of a detection algorithm.

This is what the 2022 research paper Adversarial Robustness of Neural-Statistical Features in Detection of Generative Transformers says about this quirk in machine-generated content:

“Complex phrasal features are based on the frequency of specific words and phrases within the analyzed text that occur more frequently in human text.

…Of these complex phrasal features, idiom features retain the most predictive power in detection of current generative models.”

This inability to use idioms contributes to making ChatGPT output sound and read unnaturally.
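As a toy illustration of that idiom-frequency signal (the idiom list and scoring below are invented for the example, not the features from the paper):

```python
# Toy sketch: count how often known idioms appear per 1,000 words.
# A low idiom density is one weak hint (among many) of machine-generated text.
IDIOMS = [
    "every cloud has a silver lining",
    "the ball is in your court",
    "bite the bullet",
]

def idiom_density(text: str) -> float:
    """Return idiom occurrences per 1,000 words of the input text."""
    lowered = text.lower()
    words = lowered.split()
    if not words:
        return 0.0
    hits = sum(lowered.count(idiom) for idiom in IDIOMS)
    return 1000 * hits / len(words)
```

A real detector would combine many such statistical features rather than rely on a single hand-built list, but the sketch shows why idiom-poor prose can stand out to a classifier.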

2. ChatGPT Lacks Ability For Expression

An artist commented on how the output of ChatGPT mimics what art is, but lacks the actual qualities of artistic expression.

Expression is the act of communicating thoughts or feelings.

ChatGPT output doesn’t contain expressions, only words.

It cannot produce content that touches people emotionally on the same level as a human can – because it has no actual thoughts or feelings.

Musical artist Nick Cave, in an article posted to his Red Hand Files newsletter, commented on a ChatGPT lyric that was sent to him, which was created in the style of Nick Cave.

He wrote:

“What makes a great song great is not its close resemblance to a recognizable work.

…it is the breathless confrontation with one’s vulnerability, one’s perilousness, one’s smallness, pitted against a sense of sudden shocking discovery; it is the redemptive artistic act that stirs the heart of the listener, where the listener recognizes in the inner workings of the song their own blood, their own struggle, their own suffering.”

Cave called the ChatGPT lyrics a mockery.

This is the ChatGPT lyric that resembles a Nick Cave lyric:

“I’ve got the blood of angels, on my hands
I’ve got the fire of hell, in my eyes
I’m the king of the abyss, I’m the ruler of the dark
I’m the one that they fear, in the shadows they hark”

And this is an actual Nick Cave lyric (Brother, My Cup Is Empty):

“Well I’ve been sliding down on rainbows
I’ve been swinging from the stars
Now this wretch in beggar’s clothing
Bangs his cup across the bars
Look, this cup of mine is empty!
Seems I’ve misplaced my desires
Seems I’m sweeping up the ashes
Of all my former fires”

It’s easy to see that the machine-generated lyric resembles the artist’s lyric, but it doesn’t really communicate anything.

Nick Cave’s lyrics tell a story that resonates with the pathos, desire, shame, and willful deception of the person speaking in the song. It expresses thoughts and feelings.

It’s easy to see why Nick Cave calls it a mockery.

3. ChatGPT Does Not Produce Insights

An article published in The Insider quoted an academic who noted that academic essays generated by ChatGPT lack insights about the topic.

ChatGPT summarizes the topic but does not offer a unique insight into the topic.

Humans create through knowledge, but also through their personal experience and subjective perceptions.

Professor Christopher Bartel of Appalachian State University is quoted by The Insider as saying that, while a ChatGPT essay may exhibit high grammar qualities and sophisticated ideas, it still lacked insight.

Bartel said:

“They are really fluffy. There’s no context, there’s no depth or insight.”

Insight is the hallmark of a well-done essay and it’s something that ChatGPT is not particularly good at.

This lack of insight is something to keep in mind when evaluating machine-generated content.

4. ChatGPT Is Too Wordy

A research paper published in January 2023 discovered patterns in ChatGPT content that makes it less suitable for critical applications.

The paper is titled, How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection.

The research showed that humans preferred ChatGPT’s answers for more than 50% of the questions related to finance and psychology.

But ChatGPT failed at answering medical questions because humans preferred direct answers – something the AI didn’t provide.

The researchers wrote:

“…ChatGPT performs poorly in terms of helpfulness for the medical domain in both English and Chinese.

The ChatGPT often gives lengthy answers to medical consulting in our collected dataset, while human experts may directly give straightforward answers or suggestions, which may partly explain why volunteers consider human answers to be more helpful in the medical domain.”

ChatGPT tends to cover a topic from different angles, which makes it inappropriate when the best answer is a direct one.

Marketers using ChatGPT must take note of this because site visitors requiring a direct answer will not be satisfied with a verbose webpage.

And good luck ranking an overly wordy page in Google’s featured snippets, where a succinct and clearly expressed answer that can work well in Google Voice may have a better chance to rank than a long-winded answer.

OpenAI, the makers of ChatGPT, acknowledges that giving verbose answers is a known limitation.

The announcement article by OpenAI states:

“The model is often excessively verbose…”

The ChatGPT bias toward providing long-winded answers is something to be mindful of when using ChatGPT output, as you may encounter situations where shorter and more direct answers are better.

5. ChatGPT Content Is Highly Organized With Clear Logic

ChatGPT has a writing style that is not only verbose but also tends to follow a template that gives the content a unique style that isn’t human.

This inhuman quality is revealed in the differences between how humans and machines answer questions.

The movie Blade Runner has a scene featuring a series of questions designed to reveal whether the subject answering the questions is a human or an android.

These questions were part of a fictional test called the “Voight-Kampff test.”

One of the questions is:

“You’re watching television. Suddenly you realize there’s a wasp crawling on your arm. What do you do?”

A normal human response would be to say something like they would scream, walk outside and swat it, and so on.

But when I posed this question to ChatGPT, it offered a meticulously organized answer that summarized the question and then offered logical multiple possible outcomes – failing to answer the actual question.

Screenshot Of ChatGPT Answering A Voight-Kampff Test Question

Screenshot from ChatGPT, January 2023

The answer is highly organized and logical, giving it a highly unnatural feel, which is undesirable.

6. ChatGPT Is Overly Detailed And Comprehensive

ChatGPT was trained in a way that rewarded the machine when humans were happy with the answer.

The human raters tended to prefer answers that had more details.

But sometimes, such as in a medical context, a direct answer is better than a comprehensive one.

What that means is that the machine needs to be prompted to be less comprehensive and more direct when those qualities are important.

From OpenAI:

“These issues arise from biases in the training data (trainers prefer longer answers that look more comprehensive) and well-known over-optimization issues.”

7. ChatGPT Lies (Hallucinates Facts)

The above-cited research paper, How Close is ChatGPT to Human Experts?, noted that ChatGPT has a tendency to lie.

It reports:

“When answering a question that requires professional knowledge from a particular field, ChatGPT may fabricate facts in order to give an answer…

For example, in legal questions, ChatGPT may invent some non-existent legal provisions to answer the question.

…Additionally, when a user poses a question that has no existing answer, ChatGPT may also fabricate facts in order to provide a response.”

The Futurism website documented instances where machine-generated content published on CNET was wrong and full of “dumb errors.”

CNET should have had an idea this could happen, because OpenAI published a warning about incorrect output:

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

CNET claims to have submitted the machine-generated articles to human review prior to publication.

A problem with human review is that ChatGPT content is designed to sound persuasively correct, which may fool a reviewer who is not a topic expert.

8. ChatGPT Is Unnatural Because It’s Not Divergent

The research paper, How Close is ChatGPT to Human Experts? also noted that human communication can have indirect meaning, which requires a shift in topic to understand it.

ChatGPT is too literal, which causes some answers to miss the mark because the AI overlooks the actual topic being asked about.

The researchers wrote:

“ChatGPT’s responses are generally strictly focused on the given question, whereas humans’ are divergent and easily shift to other topics.

In terms of richness of content, humans are more divergent in different aspects, while ChatGPT prefers focusing on the question itself.

Humans can answer the hidden meaning under the question based on their own common sense and knowledge, but ChatGPT relies on the literal words of the question at hand…”

Humans are better able to diverge from the literal question, which is important for answering “what about” type questions.

For example, if I ask:

“Horses are too big to be a house pet. What about raccoons?”

The above question is not asking whether a raccoon is an appropriate pet. The question is about the size of the animal.

ChatGPT focuses on the appropriateness of the raccoon as a pet instead of focusing on the size.

Screenshot of an overly literal ChatGPT answer

Screenshot from ChatGPT, January 2023

9. ChatGPT Contains A Bias Toward Being Neutral

The output of ChatGPT is generally neutral and informative. That neutrality is a skew in the output that can appear helpful but isn’t always so.

The research paper we just discussed noted that neutrality is an undesirable quality when it comes to legal, medical, and technical questions.

Humans tend to pick a side when offering these kinds of opinions.

10. ChatGPT Is Biased To Be Formal

ChatGPT’s output has a bias that prevents it from loosening up and answering with ordinary expressions. Instead, its answers tend to be formal.

Humans, on the other hand, tend to answer questions with a more colloquial style, using everyday language and slang – the opposite of formal.

ChatGPT doesn’t use abbreviations like GOAT or TL;DR.

The answers also lack instances of irony, metaphors, and humor, which can make ChatGPT content overly formal for some content types.

The researchers write:

“…ChatGPT likes to use conjunctions and adverbs to convey a logical flow of thought, such as “In general”, “on the other hand”, “Firstly,…, Secondly,…, Finally” and so on.”

11. ChatGPT Is Still In Training

ChatGPT is currently in the process of being trained and improved.

OpenAI recommends that all content generated by ChatGPT be reviewed by a human, listing this as a best practice.

OpenAI suggests keeping humans in the loop:

“Where possible, we recommend having a human review outputs before they are used in practice.

This is especially critical in high-stakes domains, and for code generation.

Humans should be aware of the limitations of the system, and have access to any information needed to verify the outputs (for example, if the application summarizes notes, a human should have easy access to the original notes to refer back to).”

Undesirable Qualities Of ChatGPT

It’s clear that there are many problems with ChatGPT that make it unsuitable for unsupervised content generation. It contains biases and fails to create content that feels natural or contains genuine insight.

Further, its inability to feel or to create original thoughts makes it a poor choice for generating artistic expressions.

Users should apply detailed prompts in order to generate content that is better than the default content it tends to output.

Lastly, human review of machine-generated content is not always enough, because ChatGPT content is designed to appear correct, even when it’s not.

That means it’s important that human reviewers are subject-matter experts who can distinguish between correct and incorrect content about a specific topic.



Featured Image: Shutterstock/fizkes


