How NLP & NLU Work For Semantic Search

Natural language processing (NLP) and natural language understanding (NLU) are two often-confused technologies that make search more intelligent and ensure people can search and find what they want.
This intelligence is a core component of semantic search.
NLP and NLU are why you can type “dresses” and find that long-sought-after “NYE Party Dress” and why you can type “Matthew McConnahey” and get Mr. McConaughey back.
With these two technologies, searchers can find what they want without having to type their query exactly as it’s found on a page or in a product.
NLP is one of those things that has built up such a large meaning that it’s easy to look past the fact that it tells you exactly what it is: NLP processes natural language, specifically into a format that computers can understand.
This kind of processing can include tasks like normalization, spelling correction, or stemming, each of which we’ll look at in more detail.
NLU, on the other hand, aims to “understand” what a block of natural language is communicating.
It performs tasks that can, for example, identify verbs and nouns in sentences or important items within a text. People or programs can then use this information to complete other tasks.
Computers seem advanced because they can perform many actions in a short period of time. However, in a lot of ways, computers are quite daft.
They need the information to be structured in specific ways to build upon it. For natural language data, that’s where NLP comes in.
It takes messy data (and natural language can be very messy) and processes it into something that computers can work with.
Text Normalization
When searchers type text into a search bar, they are trying to find a good match, not play “guess the format.”
For example, to require a user to type a query in exactly the same format as the matching words in a record is unfair and unproductive.
We use text normalization to do away with this requirement so that the text will be in a standard format no matter where it’s coming from.
As we go through different normalization steps, we’ll see that there is no approach that everyone follows. Each normalization step generally increases recall and decreases precision.
A quick aside: “recall” means a search engine finds results that are known to be good.
Precision means a search engine finds only good results.
Search results could have 100% recall by returning every document in an index, but precision would be poor.
Conversely, a search engine could have 100% precision by returning only documents that it knows to be a perfect fit, but it will likely miss some good results.
Again, normalization generally increases recall and decreases precision.
Whether that movement toward one end of the recall-precision spectrum is valuable depends on the use case and the search technology. It isn’t a question of applying all normalization techniques but deciding which ones provide the best balance of precision and recall.
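To make the trade-off concrete, here is a minimal sketch in Python, using an illustrative toy index, of how precision and recall are computed and how the two extremes described above play out:

```python
# Toy example: "relevant" is the set of documents known to be good for a query,
# and "returned" is what the engine actually gives back.
def precision(returned, relevant):
    return len(returned & relevant) / len(returned) if returned else 0.0

def recall(returned, relevant):
    return len(returned & relevant) / len(relevant) if relevant else 0.0

index = {"doc1", "doc2", "doc3", "doc4", "doc5"}
relevant = {"doc1", "doc2", "doc3"}

# Return every document: perfect recall, poor precision.
print(precision(index, relevant), recall(index, relevant))        # 0.6 1.0

# Return only the one document we are certain about: perfect precision, poor recall.
print(precision({"doc1"}, relevant), recall({"doc1"}, relevant))  # 1.0 0.333...
```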
Letter Normalization
The simplest normalization you could imagine would be the handling of letter case.
In English, at least, words are generally capitalized at the beginning of sentences, occasionally in titles, and when they are proper nouns. (There are other rules, too, depending on whom you ask.)
But in German, all nouns are capitalized. Other languages have their own rules.
These rules are useful. Otherwise, we wouldn’t follow them.
For example, capitalizing the first words of sentences helps us quickly see where sentences begin.
That usefulness, however, is diminished in an information retrieval context.
The meanings of words don’t change simply because they are in a title and have their first letter capitalized.
Even trickier is that there are rules, and then there is how people actually write.
If I text my wife, “SOMEONE HIT OUR CAR!” we all know that I’m talking about a car and not something different simply because the word is capitalized.
We can see this clearly by reflecting on how many people don’t use capitalization at all when communicating informally – which, incidentally, is how most case normalization works: everything is lowercased.
Of course, we know that sometimes capitalization does change the meaning of a word or phrase. We can see that “cats” are animals, and “Cats” is a musical.
In most cases, though, the increased precision that comes with not normalizing case is offset by far too great a decrease in recall.
The difference between the two is easy to tell via context, too, which we’ll be able to leverage through natural language understanding.
While less common in English, handling diacritics is also a form of letter normalization.
Diacritics are the marks, or “glyphs,” attached to letters, as in á, ë, or ç.
Words can otherwise be spelled the same, but added diacritics can change the meaning. In French, “élève” means “student,” while “élevé” means “elevated.”
Nonetheless, many people will not include the diacritics when searching, and so another form of normalization is to strip all diacritics, leaving behind the simple (and now ambiguous) “eleve.”
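As a rough illustration, here is a minimal sketch of both forms of letter normalization using only Python’s standard library (whether you apply either step is, as above, a recall-versus-precision decision):

```python
import unicodedata

def normalize_letters(text: str) -> str:
    # Case-fold so "Cats", "CATS", and "cats" all become the same token text.
    text = text.casefold()
    # Decompose accented characters (é -> e + combining accent),
    # then drop the combining marks to strip diacritics.
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(normalize_letters("SOMEONE HIT OUR CAR!"))               # someone hit our car!
print(normalize_letters("élève"), normalize_letters("élevé"))  # eleve eleve
```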
Tokenization
The next normalization challenge is breaking down the text the searcher has typed in the search bar and the text in the document.
This step is necessary because word order does not need to be exactly the same between the query and the document text, except when a searcher wraps the query in quotes.
Breaking queries, phrases, and sentences into words may seem like a simple task: Just break up the text at each space.
Problems show up quickly with this approach. Again, let’s start with English.
Separating on spaces alone means that the phrase “Let’s break up this phrase!” yields us let’s, break, up, this, and phrase! as words.
For search, we almost surely don’t want the exclamation point at the end of the word “phrase.”
Whether we want to keep the contracted word “let’s” together is not as clear.
Some software will break the word down even further (“let” and “‘s”) and some won’t.
Some will not break down “let’s” while breaking down “don’t” into two pieces.
This process is called “tokenization.”
We call it tokenization for reasons that should now be clear: What we end up with are not words but discrete groups of characters. This is even more true for languages other than English.
German speakers, for example, can merge words (more accurately “morphemes,” but close enough) together to form a larger word. The German word for “dog house” is “Hundehütte,” which contains the words for both “dog” (“Hund”) and “house” (“Hütte”).
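To see the difference in practice, here is a minimal sketch comparing a naive whitespace split with NLTK’s word_tokenize (this assumes NLTK is installed and its punkt tokenizer data has been downloaded; other tokenizers will make different choices about contractions and punctuation):

```python
import nltk
nltk.download("punkt", quiet=True)  # tokenizer data used by word_tokenize
from nltk.tokenize import word_tokenize

text = "Let's break up this phrase!"

# Naive approach: split on spaces, so punctuation stays glued to the last word.
print(text.split())         # ["Let's", 'break', 'up', 'this', 'phrase!']

# NLTK separates the punctuation and splits the contraction into "Let" and "'s".
print(word_tokenize(text))  # ['Let', "'s", 'break', 'up', 'this', 'phrase', '!']
```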
Nearly all search engines tokenize text, but there are further steps an engine can take to normalize the tokens. Two related approaches are stemming and lemmatization.
Stemming And Lemmatization
Stemming and lemmatization take different forms of tokens and break them down for comparison.
For example, take the words “calculator” and “calculation,” or “slowing” and “slowly.”
We can see there are some clear similarities.
Stemming breaks a word down to its “stem,” or other variants of the word it is based on. Stemming is fairly straightforward; you could do it on your own.
What’s the stem of “stemming?”
You can probably guess that it’s “stem.” Often stemming means removing prefixes or suffixes, as in this case.
There are multiple stemming algorithms, and the most popular is the Porter Stemming Algorithm, which has been around since the 1980s. It is a series of steps applied to a token to get to the stem.
Stemming can sometimes lead to results that you wouldn’t foresee.
Looking at the words “carry” and “carries,” you might expect that the stem of each of these is “carry.”
The actual stem, at least according to the Porter Stemming Algorithm, is “carri.”
This is because stemming attempts to compare related words and break down words into their smallest possible parts, even if that part is not a word itself.
On the other hand, if you want an output that will always be a recognizable word, you want lemmatization. Again, there are different lemmatizers, such as NLTK’s WordNet-based lemmatizer.
Lemmatization breaks a token down to its “lemma,” or the word which is considered the base for its derivations. The lemma from Wordnet for “carry” and “carries,” then, is what we expected before: “carry.”
Lemmatization will generally not break down words as much as stemming, nor will as many different word forms be considered the same after the operation.
The stems for “say,” “says,” and “saying” are all “say,” while the lemmas from WordNet are “say,” “say,” and “saying.” To get these lemmas, lemmatizers are generally corpus-based.
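Here is a minimal sketch of both approaches using NLTK (assuming NLTK and its WordNet data are available); note how the stems are not always real words, while the lemmas are:

```python
import nltk
nltk.download("wordnet", quiet=True)  # lexical data used by the lemmatizer
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["carry", "carries", "calculator", "calculation"]:
    print(word, "->", stemmer.stem(word))
# carry -> carri, carries -> carri, calculator -> calcul, calculation -> calcul

# The lemmatizer returns recognizable words; passing the part of speech helps it.
print(lemmatizer.lemmatize("carries", pos="v"))  # carry
print(lemmatizer.lemmatize("saying", pos="n"))   # saying
```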
If you want the broadest recall possible, you’ll want to use stemming. If you want the best possible precision, use neither stemming nor lemmatization.
Which you go with ultimately depends on your goals, but most search engines can perform very well with neither stemming nor lemmatization, retrieving the right results without introducing noise.
Plurals
If you decide not to include lemmatization or stemming in your search engine, there is still one normalization technique that you should consider.
That is the normalization of plurals to their singular form.
Generally, ignoring plurals is done through the use of dictionaries.
Even if “de-pluralization” seems as simple as chopping off an “-s,” that’s not always the case. The first problem is with irregular plurals, such as “deer,” “oxen,” and “mice.”
A second problem is pluralization with an “-es” suffix, such as “potato.” Finally, there are simply the words that end in an “s” but aren’t plural, like “always.”
A dictionary-based approach ensures that you increase recall without introducing incorrect matches.
Just as with lemmatization and stemming, whether you normalize plurals is dependent on your goals.
Cast a wider net by normalizing plurals, a more precise one by avoiding normalization.
Usually, normalizing plurals is the right choice, and you can remove normalization pairs from your dictionary when you find them causing problems.
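A minimal sketch of what dictionary-based plural normalization can look like (the dictionary entries here are illustrative; a real one would be far larger and curated over time):

```python
# Plural forms that simple suffix-stripping would get wrong, mapped explicitly.
PLURAL_TO_SINGULAR = {
    "mice": "mouse",
    "oxen": "ox",
    "deer": "deer",
    "potatoes": "potato",
    "dresses": "dress",
}

def singularize(token: str) -> str:
    # Only normalize tokens we know about, so words like "always" stay untouched.
    return PLURAL_TO_SINGULAR.get(token, token)

print([singularize(t) for t in ["mice", "potatoes", "always"]])
# ['mouse', 'potato', 'always']
```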
One area, however, where you will almost always want to introduce increased recall is when handling typos.
Typo Tolerance And Spell Check
We have all encountered typo tolerance and spell check within search, but it’s useful to think about why it’s present.
Sometimes, there are typos because fingers slip and hit the wrong key.
Other times, the searcher thinks a word is spelled differently than it is.
Increasingly, “typos” can also result from poor speech-to-text understanding.
Finally, words can seem like they have typos but really don’t, such as in comparing “scream” and “cream.”
The simplest way to handle these typos, misspellings, and variations is not to try to correct them at all, but instead to compare tokens with algorithms that tolerate small differences.
One of these is the Damerau-Levenshtein Distance algorithm.
This measure looks at how many edits are needed to go from one token to another.
You can then filter out all tokens with a distance that is too high.
(Two is generally a good threshold, but you will probably want to adjust this based on the length of the token.)
After filtering, you can use the distance for sorting results or feeding into a ranking algorithm.
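Here is a minimal sketch of the restricted Damerau-Levenshtein distance (the “optimal string alignment” variant), which counts insertions, deletions, substitutions, and adjacent transpositions as one edit each:

```python
def damerau_levenshtein(a: str, b: str) -> int:
    # d[i][j] = edits to turn the first i chars of a into the first j chars of b.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # adjacent transposition
    return d[len(a)][len(b)]

print(damerau_levenshtein("teh", "the"))                 # 1
print(damerau_levenshtein("scream", "cream"))            # 1
print(damerau_levenshtein("mcconnahey", "mcconaughey"))  # 3
```

With a threshold of two, the first two pairs would be treated as matches while the misspelled name would not, which is one reason to allow a more forgiving threshold for longer tokens.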
Many times, context can matter when determining if a word is misspelled or not. The word “scream” is probably correct after “I,” but not after “ice.”
Machine learning can be a solution for this by bringing context to this NLP task.
Such spell check software can use the context around a word to identify whether it is likely to be misspelled and what its most likely correction is.
Typos In Documents
One thing that we skipped over before is that words may have typos not only when a user types them into a search bar.
Words may also have typos inside a document.
This is especially true when the documents are made of user-generated content.
This detail is relevant because if a search engine is only looking at the query for typos, it is missing half of the information.
The best typo tolerance should work across both query and document, which is why edit distance generally works best for retrieving and ranking results.
Spell check can be used to craft a better query or provide feedback to the searcher, but it is often unnecessary and should never stand alone.
Natural Language Understanding
While NLP is all about processing text and natural language, NLU is about understanding that text.
Named Entity Recognition
A task that can aid in search is that of named entity recognition, or NER. NER identifies key items, or “entities,” inside of text.
While some people will call NER natural language processing and others will call it natural language understanding, what’s clear is that it can find what’s important within a text.
For the query “NYE party dress” you would perhaps get back an entity of “dress” that is mapped to a type of “category.”
NER will always map an entity to a type, from as generic as “place” or “person,” to as specific as your own facets.
NER can also use context to identify entities.
A query of “white house” may refer to a place, while “white house paint” might refer to a color of “white” and a product category of “paint.”
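As an illustration, here is a minimal sketch using spaCy’s off-the-shelf English model (an assumption on my part; a product search engine would typically map entities to its own facet types, such as color or category, rather than generic ones like PERSON):

```python
import spacy

# Assumes the small English model has been installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Matthew McConaughey gave a speech at the White House in January.")
for ent in doc.ents:
    print(ent.text, "->", ent.label_)
# Matthew McConaughey -> PERSON
# the White House -> ORG (exact label can vary by model version)
# January -> DATE
```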
Query Categorization
Named entity recognition is valuable in search because it can be used in conjunction with facet values to provide better search results.
Recalling the “white house paint” example, you can use the “white” color and the “paint” product category to filter down your results to only show those that match those two values.
This would give you high precision.
If you don’t want to go that far, you can simply boost all products that match one of the two values.
Query categorization can also help with recall.
For searches with few results, you can use the entities to include related products.
Imagine that there are no products that match the keywords “white house paint.”
In this case, leveraging the product category of “paint” can return other paints that might be a decent alternative, such as that nice eggshell color.
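A minimal sketch of the filter-versus-boost choice described above (the product data and the extracted entities are illustrative, not from any real catalog):

```python
# Products and the entities extracted from the query "white house paint".
products = [
    {"name": "Eggshell interior paint", "category": "paint", "color": "eggshell"},
    {"name": "Bright white interior paint", "category": "paint", "color": "white"},
    {"name": "White house numbers", "category": "hardware", "color": "white"},
]
entities = {"color": "white", "category": "paint"}

# High precision: require every entity to match (filtering).
filtered = [p for p in products if all(p.get(k) == v for k, v in entities.items())]

# Higher recall: rank by how many entities match instead of filtering (boosting).
boosted = sorted(products,
                 key=lambda p: sum(p.get(k) == v for k, v in entities.items()),
                 reverse=True)

print([p["name"] for p in filtered])  # exact matches only
print([p["name"] for p in boosted])   # best matches first, alternatives still included
```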
Document Tagging
Another way that named entity recognition can help with search quality is by moving the task from query time to ingestion time (when the document is added to the search index).
When ingesting documents, NER can use the text to tag those documents automatically.
These documents will then be easier to find for the searchers.
Searchers can then go directly to the right products using facet values, either by filtering explicitly or through automatic query-categorization filtering applied by the search engine.
Intent Detection
Related to entity recognition is intent detection, or determining the action a user wants to take.
Intent detection is not the same as what we talk about when we say “identifying searcher intent.”
Identifying searcher intent is getting people to the right content at the right time.
Intent detection maps a request to a specific, pre-defined intent.
It then takes action based on that intent. A user searching for “how to make returns” might trigger the “help” intent, while “red shoes” might trigger the “product” intent.
In the first case, you could route the search to your help desk search.
In the second one, you could route it to the product search. This isn’t so different from what you see when you search for the weather on Google.
Notice that you get a weather box at the very top of the page. (The newly launched web search engine Andi takes this concept to the extreme, bundling search into a chatbot.)
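A minimal sketch of what rule-based intent detection and routing might look like (the intent terms and routing targets are illustrative; many real systems use a trained classifier instead):

```python
# Terms that suggest the searcher wants help content rather than products.
HELP_TERMS = {"return", "returns", "refund", "shipping", "order"}

def detect_intent(query: str) -> str:
    words = set(query.lower().split())
    return "help" if words & HELP_TERMS else "product"

def route(query: str) -> str:
    # Send the query to the index that matches the detected intent.
    if detect_intent(query) == "help":
        return f"help-desk search: {query}"
    return f"product search: {query}"

print(route("how to make returns"))  # routed to the help desk index
print(route("red shoes"))            # routed to the product index
```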
For most search engines, intent detection, as outlined here, isn’t necessary.
Most search engines only have a single content type on which to search at a time.
When there are multiple content types, federated search can perform admirably by showing multiple search results in a single UI at the same time.
Other NLP And NLU tasks
There are plenty of other NLP and NLU tasks, but these are usually less relevant to search.
Tasks like sentiment analysis can be useful in some contexts, but search isn’t one of them.
You could imagine using translation to search multilingual corpora, but it rarely happens in practice and is just as rarely needed.
Question answering is an NLU task that is increasingly implemented in search, especially in search engines that expect natural language queries.
Once again, you can see this on major web search engines.
Google, Bing, and Kagi will all immediately answer the question “how old is the Queen of England?” without needing to click through to any results.
Some search engine technologies have explored implementing question answering for more limited search indices, but outside of help desks or long, action-oriented content, the usage is limited.
Few searchers are going to an online clothing store and asking questions to a search bar.
Summarization is an NLU task that is more useful for search.
Much like with the use of NER for document tagging, automatic summarization can enrich documents. Summaries can be used to match documents to queries, or to provide a better display of the search results.
This better display can help searchers be confident that they have gotten good results and get them to the right answers more quickly.
Even including newer search technologies using images and audio, the vast, vast majority of searches happen with text. To get the right results, it’s important to make sure the search is processing and understanding both the query and the documents.
Semantic search brings intelligence to search engines, and natural language processing and understanding are important components.
NLP and NLU tasks like tokenization, normalization, tagging, typo tolerance, and others can help make sure that searchers don’t need to be search experts.
Instead, they can go from need to solution “naturally” and quickly.
A Year Of AI Developments From OpenAI

Today, ChatGPT celebrates one year since its launch in research preview.
Try talking with ChatGPT, our new AI system which is optimized for dialogue. Your feedback will help us improve it. https://t.co/sHDm57g3Kr
— OpenAI (@OpenAI) November 30, 2022
From its humble beginnings, ChatGPT has continually pushed the boundaries of what we perceive as possible with generative AI for almost any task.
a year ago tonight we were probably just sitting around the office putting the finishing touches on chatgpt before the next morning’s launch.
what a year it’s been…
— Sam Altman (@sama) November 30, 2023
In this article, we take a journey through the past year, highlighting the significant milestones and updates that have shaped ChatGPT into the versatile and powerful tool it is today.
a year ago tonight we were placing bets on how many total users we’d get by sunday
20k, 80k, 250k… i jokingly said “8B”.
little did we know… https://t.co/8YtO8GbLPy — rapha gontijo lopes (@rapha_gl) November 30, 2023
ChatGPT: From Research Preview To Customizable GPTs
This story unfolds over the course of nearly a year, beginning on November 30, when OpenAI announced the launch of its research preview of ChatGPT.
As users began to offer feedback, improvements began to arrive.
Before the holiday, on December 15, 2022, ChatGPT received general performance enhancements and new features for managing conversation history.

As the calendar turned to January 9, 2023, ChatGPT saw improvements in factuality, and a notable feature was added to halt response generation mid-conversation, addressing user feedback and enhancing control.
Just a few weeks later, on January 30, the model was further upgraded for enhanced factuality and mathematical capabilities, broadening its scope of expertise.
February 2023 was a landmark month. On February 9, ChatGPT Plus was introduced, bringing new features and a faster ‘Turbo’ version to Plus users.
This was followed closely on February 13 with updates to the free plan’s performance and the international availability of ChatGPT Plus, featuring a faster version for Plus users.
March 14, 2023, marked a pivotal moment with the introduction of GPT-4 to ChatGPT Plus subscribers.


This new model featured advanced reasoning, complex instruction handling, and increased creativity.
Less than ten days later, on March 23, experimental AI plugins, including browsing and Code Interpreter capabilities, were made available to selected users.
On May 3, users gained the ability to turn off chat history and export data.
Plus users received early access to experimental web browsing and third-party plugins on May 12.
On May 24, the iOS app expanded to more countries with new features like shared links, Bing web browsing, and the option to turn off chat history on iOS.
June and July 2023 were filled with updates enhancing mobile app experiences and introducing new features.
The mobile app was updated with browsing features on June 22, and the browsing feature itself underwent temporary removal for improvements on July 3.
The Code Interpreter feature rolled out in beta to Plus users on July 6.
Plus customers enjoyed increased message limits for GPT-4 from July 19, and custom instructions became available in beta to Plus users the next day.
July 25 saw the Android version of the ChatGPT app launch in selected countries.
As summer progressed, August 3 brought several small updates enhancing the user experience.
Custom instructions were extended to free users in most regions by August 21.
The month concluded with the launch of ChatGPT Enterprise on August 28, offering advanced features and security for enterprise users.
Entering autumn, limited language support was added to the web interface on September 11.
Voice and image input capabilities in beta were introduced on September 25, further expanding ChatGPT’s interactive abilities.
An updated version of web browsing rolled out to Plus users on September 27.
The fourth quarter of 2023 began with integrating DALL·E 3 in beta on October 16, allowing for image generation from text prompts.
The browsing feature moved out of beta for Plus and Enterprise users on October 17.
Customizable versions of ChatGPT, called GPTs, were introduced for specific tasks on November 6 at OpenAI’s DevDay.


On November 21, the voice feature in ChatGPT was made available to all users, rounding off a year of significant advancements and broadening the horizons of AI interaction.
And here, we have ChatGPT today, with a sidebar full of GPTs.


Looking Ahead: What’s Next For ChatGPT
The past year has been a testament to continuous innovation, but it is merely the prologue to a future rich with potential.
The upcoming year promises incremental improvements and leaps in AI capabilities, user experience, and integrative technologies that could redefine our interaction with digital assistants.
With a community of users and developers growing stronger and more diverse, the evolution of ChatGPT is poised to surpass expectations and challenge the boundaries of today’s AI landscape.
As we step into this next chapter, the possibilities seem limitless as generative AI continues to advance.
Is AI Going To E-E-A-T Your Experience For Breakfast? The LinkedIn Example

Are LinkedIn’s collaborative articles part of SEO strategies nowadays?
More to the point, should they be?
The search landscape has changed dramatically in recent years, blurring the lines between search engines and where searches occur.
Following the explosive adoption of AI in content marketing and the most recent Google HCU, core, and spam updates, we’re looking at a very different picture now in search versus 12 months ago.
User-generated and community-led content seems to be met with renewed favourability by the algorithm (theoretically, mirroring what people reward, too).
LinkedIn’s freshly launched “collaborative articles” seem to be a perfect sign of our times: content that combines authority (thanks to LinkedIn’s authority), AI-generated content, and user-generated content.
What could go wrong?
In this article, we’ll cover:
- What are “collaborative articles” on LinkedIn?
- Why am I discussing them in the context of SEO?
- The main issues with collaborative articles.
- How is Google treating them?
- How they can impact your organic performance.
What Are LinkedIn Collaborative Articles?
First launched in March 2023, LinkedIn says about collaborative articles:
“These articles begin as AI-powered conversation starters, developed with our editorial team, but they aren’t complete without insights from our members. A select group of experts have been invited to contribute their own ideas, examples and experiences within the articles.”
Essentially, each of these articles starts as a collection of AI-generated answers to FAQs/prompts around any given topic. Under each of these sections, community members can add their own perspectives, insights, and advice.
What’s in it for contributors? To earn, ultimately, a “Top Voice” badge on their profile.
The articles are indexable and are all placed under the same folder (https://www.linkedin.com/advice/).
They look like this:

On the left-hand side, there are always FAQs relevant to the topic answered by AI.
On the right-hand side is where the contributions by community members get posted. Users can react to each contribution in the same way as to any LinkedIn post on their feed.
How Easy Is It To Contribute And Earn A Badge For Your Insights?
Pretty easy.
I first got invited to contribute on September 19, 2023 – though I had already found a way to contribute a few weeks before this.


My notifications included updates from connections who had contributed to an article.
By clicking on these, I was transferred to the article and was able to contribute to it, too (as well as additional articles, linked at the bottom).
I wanted to test how hard it was to earn a Top SEO Voice badge. Eight article contributions later (around three to four hours of my time), I had earned three badges.


How? Apparently, simply by earning likes for my contributions.
A Mix Of Brilliance, Fuzzy Editorial Rules, And Weird Uncle Bob
Collaborative articles sound great in principle – a win-win for both sides.
- LinkedIn struck a bullseye: creating and scaling content (theoretically) oozing with E-E-A-T, with minimal investment.
- Users benefit from building their personal brand (and their company’s) for a fragment of the effort and cost this usually takes. The smartest ones complement their on-site content strategy with this off-site golden ticket.
What isn’t clear from LinkedIn’s Help Center is what this editorial mix of AI and human input looks like.
Things like:
- How much involvement do the editors have before the topic is put to the community?
- Are they only determining and refining the prompts?
- Are they editing the AI-generated responses?
- More importantly, what involvement (if any) do they have after they unleash the original AI-generated piece into the world?
- And more.
I think of this content like weird Uncle Bob, always joining the family gatherings with his usual, unoriginal conversation starters. Only, this time, he’s come bearing gifts.
Do you engage? Or do you proceed to consume as many canapés as possible, pretending you haven’t seen him yet?
Why Am I Talking About LinkedIn Articles And SEO?
When I first posted about LinkedIn’s articles, it was the end of September. Semrush showed clear evidence of their impact and potential in Search. (Disclosure: I work for Semrush.)
Only six months after their launch, LinkedIn articles were on a visible, consistent upward trend.
- They were already driving 792.5K organic visits a month (a 75% jump from August).
- They ranked for 811,700 keywords.
- Their pages were ranking in the top 10 for 78,000 of them.
- For 123,700 of them, they appeared in a SERP feature, such as People Also Ask and Featured Snippets.
- Almost 72% of the keywords had informational intent, followed by commercial keywords (22%).
Here’s a screenshot with some of the top keywords for which these pages ranked at the top:


Now, take the page that held the Featured Snippet for competitive queries like “how to enter bios” (monthly search volume of 5,400 and keyword difficulty of 84, based on Semrush data).
It came in ahead of pages on Tom’s Hardware, Hewlett-Packard, or Reddit.


See anything weird? Even at the time of writing this post, this collaborative article had precisely zero (0) contributions.
This means a page with 100% AI-generated content (and unclear interference of human editors) was rewarded with the Featured Snippet against highly authoritative and relevant domains and pages.
A Sea Of Opportunity Or A Storm Ready To Break Out?
Let’s consider these articles in the context of Google’s guidelines for creating helpful, reliable, people-first content and its Search Quality Rater Guidelines.
Of particular importance here, I believe, is the most recently added “E” in “E-E-A-T,” which takes experience into account, alongside expertise, authoritativeness, and trustworthiness.
For so many of these articles to have been ranking so well, they must have been meeting the guidelines and proving helpful and reliable for content consumers.
After all, they rely on “a select group of experts to contribute their own ideas, examples and experiences within the articles,” so they must be worthy of strong organic performances, right?
Possibly. (I’ve yet to see such an example, but I want to believe somewhere in the thousands of pages these do exist).
But, based on what I’ve seen, there are too many examples of poor-quality content to justify such big rewards in the search engine results pages (SERPs).
The common issues I’ve spotted:
1. Misinformation
I can’t tell how much vetting or editing there is going on behind the scenes, but the amount of misinformation in some collaborative articles is alarming. This goes for AI-generated content and community contributions alike.
I don’t really envy the task of fact-checking what LinkedIn describes as “thousands of collaborative articles on 2,500+ skills.” Still, if it’s quality and helpfulness we’re concerned with here, I’d start brewing my coffee a little stronger if I were LinkedIn.
At the moment, it feels a little too much like a free-for-all.
Here are some examples of topics like SEO or content marketing.


2. Thin Content
To a degree, some contributions seem to do nothing more than mirror the points made in the original AI-generated piece.
For example, are these contributions enough to warrant a high level of “experience” in these articles?


The irony is that some of these contributions may have also been generated by AI…
3. Missing Information
While many examples don’t provide new or unique perspectives, some articles simply don’t provide…any perspectives at all.
This piece about analytical reasoning ranked in the top 10 for 128 keywords when I first looked into it last September (down to 80 in October).


It even held the Featured Snippet for competitive keywords like “inductive reasoning examples” for a while (5.4K monthly searches in the US), although it had no contributions on this subsection.
Most of its sections remain empty, so we’re talking about mainly AI-generated content.
Does this mean that Google really doesn’t care whether your content comes from humans or AI?
I’m not convinced.
How Have The Recent Google Updates Impacted This Content?
After August and October 2023 Google core updates (at the time of writing, the November 2023 Google core update is rolling out), the September 2023 helpful content update, and the October 2023 spam update, the performance of this section seems to be declining.
According to Semrush data:


- Organic traffic to these pages was down to 453,000 (a 43% drop from September, bringing their performance close to August levels).
- They ranked for 465,100 keywords (down by 43% MoM).
- Keywords in the Top 10 dropped by 33% (51,900 vs 78,000 in September).
- Keywords in the top 10 accounted for 161,800 visits (vs 287,200 in September, down by 44% MoM).
The LinkedIn domain doesn’t seem to have been impacted negatively overall.


Is this a sign that Google has already picked up the weaknesses in this content and has started balancing actual usefulness versus the overall domain authority that might have propelled it originally?
Will we see it declining further in the coming months? Or are there better things to come for this feature?
Should You Already Be On The Bandwagon If You’re In SEO?
I was on the side of caution before the Google algorithm updates of the past couple of months.
Now, I’d be even more hesitant to invest a substantial part of my resources towards baking this content into my strategy.
As with any other new, third-party feature (or platform – does anyone remember Threads?), it’s always a case of balancing being an early adopter against over-investing, at least while the benefits remain unclear.
Collaborative articles are a relatively fresh, experimental, external feature you have minimal control over as part of your SEO strategy.
Now, we also have signs from Google that this content may not be as “cool” as we initially thought.
This Is What I’d Do
That’s not to say it’s not worth trying some small-scale experiments.
Or, maybe, use it as part of promoting your own personal brand (but I’ve yet to see any data around the impact of the “Top Voice” badges on perceived value).
Treat this content as you would any other owned content.
- Follow Google’s guidelines.
- Add genuine value for your audience.
- Add your own unique perspective.
- Highlight gaps and misinformation.
Experience shows us that when tactics get abused, and the user experience suffers, Google eventually steps in (from guest blogging to parasite SEO, most recently).
It might make algorithmic tweaks when launching updates, launch a new system, or hand out manual actions – the point is that you don’t know how things will progress. Only LinkedIn and Google have control over that.
As things stand, I can easily see any of the below potential outcomes:
- This content becomes the AI equivalent of the content farms of the pre-Panda age, leading to Google clamping down on its search performance.
- LinkedIn’s editors step in more for quality control (provided LinkedIn deems the investment worthwhile).
- LinkedIn starts pushing its initiative much more to encourage participation and engagement. (This could be what makes the difference between a dead content farm and Reddit-like value.)
Anything could happen. I believe the next few months will give us a clearer picture.
What’s Next For AI And Its Role In SEO And Social Media?
When it comes to content creation, I think it’s safe to say that AI isn’t quite ready to E-E-A-T your experience for breakfast. Yet.
We can probably expect more of these kinds of movements from social media platforms and forums in the coming months, moving more toward mixing AI with human experience.
What do you think is next for LinkedIn’s collaborative articles? Let me know on LinkedIn!
Personal Brand: What It Really Is & How to Build One

Building a personal brand is undeniably hard work, but it isn’t as tricky as you might think.
I spoke with two influencers—Wes Kao and Matt Diggity—for their best tips on establishing a name for yourself online.
A personal brand is how people perceive you and what you’re known for. It’s the skills, experience, and values that give you an edge over others.
Neuroscientist Andrew Huberman is one example. He helms and hosts the science/health podcast Huberman Lab, lectures at Stanford Medicine, and has earned media mentions from the likes of BBC, TIME, and more.
Andrew’s personal brand is built on his credibility and areas of expertise. Many of his posts attract thousands of likes and hundreds of comments on X and LinkedIn.
If we want to dig deeper, Maven and altMBA co-founder Wes Kao has a somewhat alternative take on the definition:
In my opinion, it’s better to reframe ‘personal branding’ into ‘personal credibility.’ Personal branding has a superficial undertone. It assumes you have your work, then you tack on an artificial layer of ‘branding’ to shape perceptions.
She suggests that personal credibility is about substance: Showing people what you do, how you think, and how you can contribute. Wes adds:
In this way, you build deeper connections with people who believe in your work—which means stronger relationships, more control, and more opportunities.
In this podcast interview snippet with Nick Bennett, SparkToro’s Amanda Natividad echoes Wes’ sentiment:
People generally don’t like the term [personal brand] because it sounds disingenuous and icky. Acknowledging the existence of your personal brand is admitting that you care what others think about you, and that you find ways to manage those expectations at scale.
Wild as it sounds, building a solid personal brand gives you more control over your life.
A strong following could:
- Expand your realm of influence, particularly in your area of expertise (i.e., be viewed as a subject matter expert).
- Boost your credibility, in turn allowing you to promote your company/product better.
- Build a loyal following independent of the company you’re working for (or if you own that company, create more positive sentiment towards it).
- Open doors to job, networking, and investment opportunities.
Chiangmai SEO conference founder Matt Diggity shares some excellent points in his Facebook post on the topic, too.


There’s no linear path to building your personal brand.
As a precursor to the below steps, let’s first talk about finding your “voice.”
Wes and Matt both emphasize the importance of staying true to yourself. That means not crafting an online persona of who you think you should be. As Wes puts it:
I try to write like how I sound in person. Talking and writing are different media, so you shouldn’t try to match the two in a literal sense, but you want to capture your overall spirit. For example, I have a hint of snark in my writing because that’s how I sound in person.
Matt echoes this sentiment:
How I talk on the internet is how I talk IRL. If I’m not having a f**king blast on my YouTube videos, I won’t do them. It has to be fun.
Keep this idea in mind as you go through the steps below.
Step 1: Position yourself
Think of yourself as a product: What are your strengths, obsessions, and areas of expertise?
If you’re well-versed in technical SEO or a seasoned entrepreneur, these might be your unique selling points.
From there, double down on something you would be excited to think, write, and talk about for years—because “it will likely take years to get to where you want to go,” says Wes.
As an (optional) next step, consider solidifying your position with a spiky POV—a term coined by Wes, and which she cautions should be used with care.
A spiky POV is not about a contrarian hot take for the sake of it. In 2023, social platforms are flooded with hot takes and generic advice. I think about respecting the intelligence of my audience and teaching them something they don’t already know. A true spiky POV is rooted in deep expertise, including recognizing the limitations and counterpoints of your idea. This builds your reputation as someone who is rigorous and worth the time to engage with.
Here’s a LinkedIn post by Wes that combines all of the above: a unique perspective backed by her personal experiences, with a takeaway for the audience too. In other words—a spiky, worthy POV.
Step 2: Start sharing publicly
You already knew this, but social media platforms are one of the best ways to get growth and build your name. It’s your chance to build your reputation in a public arena.
Wes, Amanda, and Matt each utilized a combination of online channels to promote their voice and content. It’s one of the first things you should do—because your content is really only as good as its reach.
This doesn’t mean cross-posting your content across more platforms than you can manage, of course.
Study where your target audience spends most of their time, then hone in on those platforms (ideally, stick to no more than 2-3).
In Matt’s case, his followers are primarily on Twitter, Facebook, and YouTube—and that’s where his SEO-led content thrives.


If creating whole posts from scratch seems daunting, start by commenting thoughtfully in relevant online communities. Obviously, do it with heart:
This is the first thing I did to build a personal brand and authority in the SEO industry, and I still do it to this day…
Take an hour a week, go to SEO social media hangouts (SEO Facebook groups, Twitter, LinkedIn, etc) and go from top-to-bottom answering people’s questions.…
— Matt Diggity (@mattdiggityseo) September 27, 2023
Here are some simple ways to start.
LinkedIn: Contribute to a collaborative article
You might have seen these articles floating around LinkedIn—perhaps even been invited to add your insights to them.
These blog posts are similar to Wikipedia pages: LinkedIn users build on each AI-generated article with their perspectives, and readers can choose to react to these additions or engage with the content.


Here’s an example of what a contribution looks like:


Reddit: Weigh in on discussions
- Go to a relevant subreddit, e.g. r/bigSEO
- Sort by “Top” and “This Week”
- Browse the questions or discussions and offer your two cents where relevant.


Ride on trending topics
Found an interesting insight on X or someplace else? Turn it into a poll, question, or post. (Be sure to also tag and credit the author!)
Bring it all together
If some of your responses or posts get traction, repurpose those answers into new content: a blog post, video, or series of social posts.
(PSST: Learn more about my process behind curating and repurposing content for Ahrefs’ X account.)
This segues into our next and final step:
Step 3: Double down on what works
By now, you should have an idea of which topics you’re most comfortable discussing at length—and what resonates most with your target audience.
You can further maximize your reach by doubling down on the things that have brought you success. Or, more specifically, by repurposing popular content in other formats and creating more content about similar things.
For instance, we turned this popular video on how to use ChatGPT for SEO into a Twitter thread and LinkedIn post—and later, a blog post.




Wes has also done this plenty with her “eaten the bear” analogy over the years. She first wrote about it in this 2019 blog post, rewrote it in 2023, and shares variations of the analogy on LinkedIn and X every few months.


Each time, these posts garner hundreds or thousands of likes:
Too much backstory is one of the biggest killers of good stories.
Backstory scope creep is real. We’ve all been there: Long-winded, stream of conscious explanations—all in the name of “giving context.“
I’ve been guilty of it myself.
The solution?
Minimum viable backstory pic.twitter.com/XFe2wAJysg
— Wes Kao 🏛 (@wes_kao) October 3, 2023
Don’t let your success die there, though. You can find more content ideas that will resonate with your audience by doing some keyword research around your topic. Here’s how:
- Plug your target topic into Ahrefs’ Keywords Explorer
- Go to the Matching terms report
For example, if we enter “chatgpt seo,” we see that people are searching for ChatGPT prompts for SEO and ChatGPT SEO extensions:


Given how our audience is interested in ChatGPT and SEO, these would be great topics to create content about—whether that be social media posts, videos, blog posts, or something else.
If you don’t have a paid account with us, you can plug your topic into our free keyword generator tool to view related phrases/questions.
Extra tips to build your personal brand
We mentioned some of these in some shape or form earlier, but they’re worth expanding on.
Maintain human connections
Who are you without the people who consume your content? Engage consistently with your followers and others’ content. Human connections are worth their weight in gold when you’re trying to get your personal brand off the ground.
Maintain consistency across your social media profiles
This means using the same profile picture across all platforms, and a standardized bio so others can quickly get a sense of who you are and what you often post about.
Jack Appleby is a great example. The creator/consultant is behind Future Social, an independent social strategy newsletter with 56,000+ subscribers.
Notice how he maintains consistency on X and LinkedIn:




Ahrefs’ Tim Soulo further explains the importance of your profile picture in personal branding here:
Your profile pic is your “personal branding” tool. (duh!)
My journey so far:
2009 – “I have no idea what I’m doing;“
2014 – I want to stand out & be memorable;
2018 – I want to look provocative;
2020 – I want to look professional.
I can expand this into a thread if you want 😉 pic.twitter.com/W7FtZTcYGO
— Tim Soulo 🇺🇦 (@timsoulo) September 14, 2020
Be yourself
Remember how Wes and Matt shared the importance of staying true to yourself? We couldn’t emphasize that enough.
Final thoughts
These steps aren’t exhaustive, obviously. To truly stand out online, Wes suggests having a combination of these things: social proof, good design sense, strong writing, interesting insights, and a track record of contribution.
As she puts it:
All these things will make people think, ‘This person knows their craft.’
Have a thought about this blog post? Ping me on X.