SEO
Can You Work Faster & Smarter?
AI for SEO is at a tipping point where the technology used by big corporations is increasingly within reach for smaller businesses.
The increasing use of this new technology is permanently changing the practice of SEO today.
But is it right for your business? These are the surprising facts.
What Is AI For SEO
AI, or artificial intelligence, is already a part of our daily lives. Anyone who uses Alexa or Google Maps is using AI software to make their lives better in some way.
The popular writing assistant Grammarly is AI software that illustrates the power of AI to improve performance.
It takes a so-so piece of content and makes it better by fixing grammar and spelling mistakes and catching repetitive use of words.
AI for SEO works similarly to improve performance and, to a certain degree, democratize SEO by making scale and sophisticated data analyses within reach for everybody.
How Can AI Be Used In SEO
Mainstream AI SEO platforms automate data analysis, providing high-level views that identify patterns and trends that are not otherwise visible.
Mark Traphagen of seoClarity describes why AI SEO automation is essential:
“A decade ago, the best SEOs were great excel jockeys, downloading and correlating data from different sources and parts of the SEO lifecycle, all by hand.
If SEOs were doing that today, they’d be left in the dust.
By the time humans can process – results have changed, algorithms updated, SERPs shifted, etc.
And that’s not to mention the access and depth of data available in this decade, fast-paced changes in search engine algorithms, varying ranking factors that are different for every query, intent-based results that change seasonally, and the immense complexity of modern enterprise websites.
These realities have made utilizing AI now essential at the enterprise level.”
AI In Onsite Optimization
AI SEO automation platform WordLift helps publishers automate structured data, internal linking, and other on-page-related factors.
Andrea Volpini, CEO of WordLift, comments:
“WordLift automatically ingests the latest version of the schema vocabulary to support all possible entity types.
We can reuse this data to build internal links, render context cards on web pages, and recommend similar content.
Much like Google, a publisher can use this network of entities to let the readers discover related content.
WordLift enables many SEO workflows as the knowledge graph of the website gets built.
Some use WordLift’s NLP to manage internal links to their important pages; others use the data in the knowledge graph to instruct the internal search engine or to fine-tune a language model for content generation.
By automating structured data, publishing entities, and adding internal links, it’s not uncommon to see substantial growth in organic traffic for content creators.”
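As a generic illustration (not WordLift's actual implementation), here is a minimal sketch of the kind of schema.org JSON-LD a plugin might generate for an article and the entities it covers:

```python
# A generic illustration (not WordLift's actual code) of generating
# schema.org JSON-LD for an article and the entities it covers.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "AI For SEO: Can You Work Faster & Smarter?",
    # Entities the article is about; a knowledge-graph tool would pull these
    # from its entity database rather than hard-coding them.
    "about": [
        {"@type": "Thing", "name": "Artificial intelligence"},
        {"@type": "Thing", "name": "Search engine optimization"},
    ],
    "mentions": [{"@type": "Organization", "name": "WordLift"}],
}

# The JSON-LD block a CMS plugin would inject into the page <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(article, indent=2)
    + "\n</script>"
)
print(snippet)
```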
AI For SEO At Scale
AI for SEO can be applied to a wide range of activities that minimize repetitive tasks and improve productivity.
A partial list includes:
- Content planning.
- Content analysis.
- Data analysis.
- Creation of local knowledge graphs.
- Automated creation of Schema structured data.
- Optimization of interlinking.
- Page-by-page content optimization.
- Automatically optimized meta descriptions (see the sketch after this list).
- Programmatic title elements.
- Optimized headings at scale.
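As promised above, here is a template-based sketch of generating meta descriptions and title elements at scale. The product fields are hypothetical, and a real AI platform would typically layer language-model rewriting on top of a simple pass like this:

```python
# A template-based sketch of programmatic title elements and meta descriptions.
# The product fields are hypothetical examples, not data from any real catalog.
products = [
    {"name": "Trail Running Shoes", "brand": "Acme", "category": "Footwear"},
    {"name": "Waterproof Hiking Jacket", "brand": "Acme", "category": "Outerwear"},
]

def title_element(product: dict) -> str:
    # Keep titles to roughly 60 characters so they are less likely to truncate.
    return f"{product['name']} | {product['brand']} {product['category']}"[:60]

def meta_description(product: dict) -> str:
    # Keep descriptions to roughly 155 characters.
    text = (f"Shop the {product['brand']} {product['name'].lower()} in our "
            f"{product['category'].lower()} range. Free shipping and easy returns.")
    return text[:155]

for product in products:
    print(title_element(product))
    print(meta_description(product))
```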
AI In Content Creation
Content creation consists of multiple subjective choices. What one writer feels is relevant to a topic might be different from what users think it is.
A writer may assume a topic is only about X, while the search engine identifies that users prefer content covering X, Y, and Z. Consequently, the content may experience poor search performance.
AI content tools help content developers form tighter relationships between content and what users are looking for by providing an objective profile of what a given piece of content is about.
AI tools allow search marketers to work with content in a way that is light years ahead of the decades-old practice of first identifying high-traffic keywords and then building content around them.
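To make that concrete, here is an illustrative sketch (not any particular vendor's method) of building a crude profile of a draft from its most frequent meaningful terms and comparing them against the terms users actually search for:

```python
# An illustrative sketch of profiling a draft by its most frequent meaningful
# terms and comparing them to the terms users search for. Not a vendor's method.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on", "with"}

def term_profile(text: str, top_n: int = 10) -> set:
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    return {term for term, _ in Counter(words).most_common(top_n)}

draft = "AI for SEO helps automate structured data, internal links, and content analysis."
user_queries = ["ai seo tools", "automate structured data", "internal linking seo"]

draft_terms = term_profile(draft)
query_terms = {word for query in user_queries for word in query.split()}

print("Draft already covers:", sorted(draft_terms & query_terms))
print("Missing from the draft:", sorted(query_terms - draft_terms))
```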
AI In Content Optimization
Search engines understand search queries and content better by identifying what users mean and what webpages are about.
Today’s AI content tools do the same for SEO across the entire content development workflow.
There’s more to this as well.
In 2018, Google developed what it referred to as the Topic Layer, which helps it understand content and how topics and subtopics relate to each other.
Google described it like this:
“So we’ve taken our existing Knowledge Graph—which understands connections between people, places, things and facts about them—and added a new layer, called the Topic Layer, engineered to deeply understand a topic space and how interests can develop over time as familiarity and expertise grow.
The Topic Layer is built by analyzing all the content that exists on the web for a given topic and develops hundreds and thousands of subtopics.
For these subtopics, we can identify the most relevant articles and videos—the ones that have shown themselves to be evergreen and continually useful, as well as fresh content on the topic.
We then look at patterns to understand how these subtopics relate to each other, so we can more intelligently surface the type of content you might want to explore next.”
AI content tools help search marketers align their activities with the reality of how search engines work.
AI In Keyword Research
Beyond keyword research, AI tools introduce content workflow efficiency by enabling the entire process to scale and reducing the time between research and publishing.
Mark Traphagen of seoClarity emphasized that AI tools take over the tedious parts of SEO.
Mark explained:
“seoClarity long ago moved from being a data provider to leveraging AI in every part of the SEO lifecycle to move clients quickly from data to insights to execution.
We use:
AI in surfacing insights and recommendations from different data sources (rankings -> SERP opportunities -> technical issues)
AI in delivering the most accurate data possible in search demand, keyword difficulty, and topic intent — all in real-time and trended views
AI in content optimization and analysis
AI-assisted automation in instant execution of SEO enables changes at massive scale.
The future of AI in SEO isn’t AI “doing SEO” for us, but rather AI taking over the most time-consuming tasks freeing SEOs to be directors implementing the best-informed actions at scale at unheard of speeds.”
A key value of using AI for SEO is increasing productivity and efficiency while also increasing expertise, authoritativeness, and content relevance.
Jeff Coyle of MarketMuse outlines AI’s benefits as helping justify how much is budgeted for content and demonstrating the value it brings to the bottom line.
Jeff commented:
“When more of the content strategy you budget for turns into a success, it becomes immediately apparent that using AI to predict content budget needs and drive efficiency rates is the most important thing one can invest in for a content organization.
For operations, human resource efficiency is the top priority. Where do you have humans performing manual tasks for research, planning, prioritizing, briefing, writing, editing, production, and optimization? How much time is lost, and how many feedback or rework loops exist?
Data-driven, predictive, defendable content creation and optimization plans that yield single sources of truth in the form of content briefs and project plans are the foundation of a team focused on using technology to improve human resource efficiencies.
For optimization, picking the content to update, understanding how to update it and whether it needs to be parlayed with creation, repurposing, and transformation are the critical advantages of using AI for content analysis.
Knowing if a page is high quality, exhibits expertise, appeals to the right target intent, and is integrated into the site correctly gives a team the best chance to succeed.”
Drawbacks And Ethical Considerations
Publishing content that is entirely created by AI can result in a negative outcome because Google explicitly prohibits spammy autogenerated content.
Google’s spam guidelines warn that publishing autogenerated content may result in a manual action, removing the content from Google’s search results.
The guidelines explain:
“To be eligible to appear in Google web search results (web pages, images, videos, news content, or other material that Google finds from across the web), content shouldn’t violate Google Search’s overall policies or the spam policies listed on this page.
…Spammy automatically generated (or “auto-generated”) content is content that’s been generated programmatically without producing anything original or adding sufficient value; instead, it’s been generated for the primary purpose of manipulating search rankings and not helping users.”
There’s no ban on publishing autogenerated content and no law against it. Google even suggests ways to exclude that kind of content from its search engine if you choose to publish it.
But using automatically generated content is not viable if the goal is to rank well in Google’s search engine.
Can Google Identify AI-Generated Content?
Yes, Google and other search engines can likely identify content that is entirely generated by AI.
Human-written and AI-generated content each contain distinctive word usage patterns, and statistical analysis can reveal which is which.
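For illustration only, here is a toy sketch of the kind of surface signals, such as sentence-length variance and vocabulary diversity, that statistical detectors can build on. It is not how Google identifies AI content; that method is not public.

```python
# A toy illustration of statistical text analysis. This is NOT how Google
# detects AI content; it only shows surface signals that detectors can use.
import re
from statistics import pvariance

def text_signals(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "sentence_length_variance": pvariance(lengths) if len(lengths) > 1 else 0.0,
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
    }

print(text_signals("Short sentence. Then a much longer, meandering sentence follows it. Tiny."))
```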
The Future of Tools Is Now
Many AI-based tools are available that are appropriate for different levels of users.
Not every business needs to scale its SEO for hundreds of thousands of products.
But even a small to medium online business can benefit from the streamlined and efficient workflow that an AI-based content tool offers.
Featured image by Shutterstock/Master1305
SEO
8% Of Automattic Employees Choose To Resign
WordPress co-founder and Automattic CEO Matt Mullenweg announced today that he offered Automattic employees severance pay if they chose to resign, and 8.4 percent of them accepted. Mullenweg offered $30,000 or six months of salary, whichever was higher, with a total of 159 people taking the offer.
Reactions Of Automattic Employees
Given the recent controversies surrounding Mullenweg, one might be tempted to view the walkout as a vote of no confidence. But that would be a mistake: some of the employees announcing their resignations praised Mullenweg or simply announced their departure, while many others tweeted how happy they are to stay at Automattic.
One former employee tweeted that he was sad about recent developments but also praised Mullenweg and Automattic as an employer.
He shared:
“Today was my last day at Automattic. I spent the last 2 years building large scale ML and generative AI infra and products, and a lot of time on robotics at night and on weekends.
I’m going to spend the next month taking a break, getting married, and visiting family in Australia.
I have some really fun ideas of things to build that I’ve been storing up for a while. Now I get to build them. Get in touch if you’d like to build AI products together.”
Another former employee, Naoko Takano, a 14-year veteran of the company, an organizer of WordCamp conferences in Asia, a full-time WordPress contributor, and Open Source Project Manager at Automattic, announced on X (formerly Twitter) that today was her last day at Automattic, with no additional comment.
She tweeted:
“Today was my last day at Automattic.
I’m actively exploring new career opportunities. If you know of any positions that align with my skills and experience!”
Naoko’s role at WordPress was working with the global WordPress community to improve contributor experiences through the Five for the Future and Mentorship programs. Five for the Future is an important WordPress program that encourages organizations to donate 5% of their resources back to WordPress. It was also one of the issues Mullenweg raised against WP Engine, asserting that the company didn’t contribute enough back to the community.
Mullenweg himself expressed bittersweet feelings about seeing those employees go, writing in a blog post:
“It was an emotional roller coaster of a week. The day you hire someone you aren’t expecting them to resign or be fired, you’re hoping for a long and mutually beneficial relationship. Every resignation stings a bit.
However now, I feel much lighter. I’m grateful and thankful for all the people who took the offer, and even more excited to work with those who turned down $126M to stay. As the kids say, LFG!”
Read the entire announcement on Mullenweg’s blog:
Featured Image by Shutterstock/sdx15
SEO
YouTube Extends Shorts To 3 Minutes, Adds New Features
YouTube expands Shorts to 3 minutes, adds templates, AI tools, and the option to show fewer Shorts on the homepage.
- YouTube Shorts will allow 3-minute videos.
- New features include templates, enhanced remixing, and AI-generated video backgrounds.
- YouTube is adding a Shorts trends page and comment previews.
SEO
How To Stop Filter Results From Eating Crawl Budget
Today’s Ask An SEO question comes from Michal in Bratislava, who asks:
“I have a client who has a website with filters based on a map locations. When the visitor makes a move on the map, a new URL with filters is created. They are not in the sitemap. However, there are over 700,000 URLs in the Search Console (not indexed) and eating crawl budget.
What would be the best way to get rid of these URLs? My idea is keep the base location ‘index, follow’ and newly created URLs of surrounded area with filters switch to ‘noindex, no follow’. Also mark surrounded areas with canonicals to the base location + disavow the unwanted links.”
Great question, Michal, and good news! The answer is an easy one to implement.
First, let’s look at what you’re trying to do and apply it to other situations, like ecommerce and publishers, so more people can benefit. Then, we’ll go over the strategies you mention above and end with the solution.
What Crawl Budget Is And How Parameters Are Created That Waste It
If you’re not sure what Michal means by crawl budget, this is a term some SEO pros use to explain that Google and other search engines will only crawl so many pages on your website before they stop.
If your crawl budget is used on low-value, thin, or non-indexable pages, your good pages and new pages may not be found in a crawl.
If they’re not found, they may not get indexed or refreshed. If they’re not indexed, they cannot bring you SEO traffic.
This is why optimizing a crawl budget for efficiency is important.
Michal shared an example of how URLs that are “thin” from an SEO point of view are created as customers use filters.
The experience for the user is value-adding, but from an SEO standpoint, a location-based page would be better. This applies to ecommerce and publishers, too.
Ecommerce stores will have searches for colors like red or green and products like t-shirts and potato chips.
These create URLs with parameters just like a filter search for locations. They could also be created by using filters for size, gender, color, price, variation, compatibility, etc. in the shopping process.
The filtered results help the end user but compete directly with the collection page, and the collection would be the “non-thin” version.
Publishers have the same issue. Someone might search SEJ for SEO or PPC in the search box and get a filtered result. The filtered result will have articles, but the category of the publication is likely the best result for a search engine.
These filtered results can get indexed because they get shared on social media or someone adds them as a comment on a blog or forum, creating a crawlable backlink. They might also be created when an employee in customer service responds to a question on the company blog, or in any number of other ways.
The goal now is to make sure search engines don’t spend time crawling the “thin” versions so you can get the most from your crawl budget.
The Difference Between Indexing And Crawling
There’s one more thing to learn before we go into the proposed ideas and solutions – the difference between indexing and crawling.
- Crawling is the discovery of new pages within a website.
- Indexing is adding the pages worth showing to searchers into the search engine’s database of pages.
Pages can get crawled but not indexed. Indexed pages have likely been crawled and will likely get crawled again to look for updates and server responses.
But not all indexed pages will bring in traffic or hit the first page because they may not be the best possible answer for queries being searched.
Now, let’s go into making efficient use of crawl budgets for these types of solutions.
Using Meta Robots Or X Robots
The first solution Michal pointed out was an “index,follow” directive. This tells a search engine to index the page and follow the links on it. This is a good idea, but only if the filtered result is the ideal experience.
From what I can see, this would not be the case, so I would recommend making it “noindex,follow.”
Noindex would say, “This is not an official page, but hey, keep crawling my site, you’ll find good pages in here.”
And if you have your main menu and navigational internal links done correctly, the spider will hopefully keep crawling them.
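The directive can be sent either as a meta robots tag in the HTML or as an X-Robots-Tag HTTP header. Here is a minimal sketch that assumes a Flask app and a hypothetical /locations route, which are illustration choices rather than Michal’s actual stack:

```python
# A minimal sketch of serving "noindex, follow" only for filtered results.
# Flask and the /locations route are assumptions made for illustration.
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/locations")
def locations():
    resp = make_response("<!doctype html><title>Locations</title><p>Listings</p>")
    # Filtered URLs carry query parameters and get noindex, follow;
    # the base location page stays indexable.
    if request.args:
        resp.headers["X-Robots-Tag"] = "noindex, follow"
    return resp

# The equivalent tag in the HTML itself:
# <meta name="robots" content="noindex, follow">
```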
Canonicals To Solve Wasted Crawl Budget
Canonical links are used to help search engines know what the official page to index is.
If a product exists in three categories on three separate URLs, only one should be “the official” version, so the two duplicates should have a canonical pointing to the official version. The official one should have a canonical link that points to itself. This applies to the filtered locations.
If the location search would result in multiple city or neighborhood pages, the result would likely be a duplicate of the official one you have in your sitemap.
Have the filtered results point a canonical back to the main page of filtering instead of being self-referencing if the content on the page stays the same as the original category.
If the content pulls in your localized page with the same locations, point the canonical to that page instead.
In most cases, the filtered version inherits the page you searched or filtered from, so that is where the canonical should point to.
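Here is a small sketch of deriving that canonical target by stripping the filter parameters from the URL; the example URLs are hypothetical:

```python
# A small sketch of deriving the canonical target for a filtered URL by
# stripping its query parameters. The example URLs are hypothetical.
from urllib.parse import urlsplit, urlunsplit

def canonical_for(url: str) -> str:
    parts = urlsplit(url)
    # Drop the query and fragment so the canonical points at the page
    # the visitor filtered from.
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

filtered = "https://example.com/locations/bratislava?radius=10&category=cafes"
print(f'<link rel="canonical" href="{canonical_for(filtered)}" />')
# -> <link rel="canonical" href="https://example.com/locations/bratislava" />
```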
If you do both noindex and have a self-referencing canonical, which is overkill, it becomes a conflicting signal.
The same applies when someone searches for a product by name on your website. The search result may compete with the actual product or service page.
With this solution, you’re telling the spider not to index this page because it isn’t worth indexing, but it is also the official version. It doesn’t make sense to do this.
Instead, use a canonical link, as I mentioned above, or noindex the result and point the canonical to the official version.
Disavow To Increase Crawl Efficiency
Disavowing doesn’t have anything to do with crawl efficiency unless the search engine spiders are finding your “thin” pages through spammy backlinks.
The disavow tool from Google is a way to say, “Hey, these backlinks are spammy, and we don’t want them to hurt us. Please don’t count them towards our site’s authority.”
In most cases, it doesn’t matter, as Google is good at detecting spammy links and ignoring them.
You do not want to add your own site and your own URLs to the disavow tool. You’re telling Google your own site is spammy and not worth anything.
Plus, submitting backlinks to disavow won’t prevent a spider from seeing what you want and do not want to be crawled, as it is only for saying a link from another site is spammy.
Disavowing won’t help with crawl efficiency or saving crawl budget.
How To Make Crawl Budgets More Efficient
The answer is robots.txt. This is how you tell specific search engines and spiders what to crawl.
You can include the folders you want them to crawl by marking them as “allow,” and you can say “disallow” on filtered results by disallowing the “?” or “&” symbol or whichever you use.
If some of those parameters should be crawled, add the main word like “?filter=location” or a specific parameter.
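Here is a sketch of what that could look like, checked with Python’s built-in robots.txt parser. The paths are hypothetical, and this parser only does simple prefix matching, while Google’s crawler additionally supports “*” wildcard patterns such as “Disallow: /*?filter=”:

```python
# Hypothetical robots.txt rules that keep crawlers out of parameter-driven
# filter URLs, sanity-checked with Python's built-in parser.
from urllib import robotparser

rules = """\
User-agent: *
Disallow: /locations/map?
Allow: /locations/
""".splitlines()

parser = robotparser.RobotFileParser()
parser.parse(rules)

# The base location page stays crawlable; the filtered map URL does not.
print(parser.can_fetch("*", "/locations/bratislava/"))             # True
print(parser.can_fetch("*", "/locations/map?lat=48.15&lng=17.11"))  # False
```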
Robots.txt is how you define crawl paths and work on crawl efficiency. Once you’ve optimized that, look at your internal links – the links from one page on your site to another.
These help spiders find your most important pages while learning what each is about.
Internal links include:
- Breadcrumbs.
- Menu navigation.
- Links within content to other pages.
- Sub-category menus.
- Footer links.
You can also use a sitemap if you have a large site, and the spiders are not finding the pages you want with priority.
I hope this helps answer your question. It is one I get a lot – you’re not the only one stuck in that situation.
Featured Image: Paulo Bobita/Search Engine Journal