How I Rank #1 on Google

SEO content is content optimized to rank high on search engines.

I’ve created plenty of it in my career. In fact, I’ve written 111 articles for this blog, of which roughly 80–90% are SEO content.

Altogether, they receive an estimated 121,000 monthly search visits from Google.

Amount of organic search traffic I'm acquiring from my articles

Suffice it to say, I know a little something about writing SEO content. Follow along as I show you how I put together this article on how to create SEO content (meta, I know).

If we want to acquire search traffic, we need to target topics that people are searching for on Google. In this case, I’m targeting the keyword “seo content creation”.

How do I know people are searching for this keyword? Well, according to Ahrefs’ Keywords Explorer, this keyword has a search volume of 500 and a traffic potential of 1,100.

Search volume and traffic potential for "seo content creation"

A search volume of 500 means that, on average, this keyword gets 500 searches per month on Google. And a Traffic Potential (TP) of 1,100 means I could potentially acquire 1,100 monthly search visits by targeting this keyword, if I manage to rank #1 on Google.

Sidenote.

Why the discrepancy? That’s because there are many ways to search for the same thing. Google understands that and ranks nearly the same pages for all variations. Therefore, your page could potentially rank for these different keywords and generate search traffic from all of them.

How did I find this keyword? I found it by analyzing what our competitors were ranking for. After all, if they rank for it, it’s likely relevant to us and something we can potentially rank for too.

To find what our competitors rank for, I entered our competitor’s website into Ahrefs’ Site Explorer and went to the Top pages report.

Our competitor's top pages report

This report shows which pages on our competitor’s website get the most organic search traffic. For example, SEMRush’s page on competitive analysis ranks for ~3,000 keywords and gets an estimated total of 24,000 monthly search visits. The #1 keyword sending them the most search traffic is “analyse competitors”, which they rank for in position one.

I went through the report and that’s where I found this keyword:

How I found the keyword "seo content creation"

We want to rank high on Google, but not for any random topic. We want to make sure we only target topics that can eventually generate sales for us.

We do this by assigning a business potential score to every relevant topic we find. The business potential score is simply how easy it will be to pitch your product while covering a certain topic.

Business potential chart

We want to prioritize topics that score at least a “2”.

In this case, I scored “seo content creation” a “2”—Our product isn’t essential, but boy is it a yuge timesaver.

To know what type of content I need to create, I need to figure out why searchers are searching for “seo content creation”. This is known as matching search intent.

Since Google’s aim is to rank relevant content, I can look at the SERPs to figure out search intent. I did this by entering “seo content creation” into Keywords Explorer, scrolling down to the SERP Overview, and clicking Identify intents.

Identify intents feature in Ahrefs' Keywords Explorer

I can see that searchers want a step-by-step guide on how to create SEO content, and that the main audience for this topic is beginners.

With the outline approved, it’s time to move on to the next step. This depends on what you’ve pitched.

For example, the unique angle for my post on the best marketing books was to get recommendations from other marketers. So, rather than diving right into drafting, the bulk of the work involved reaching out to people on LinkedIn or via email.

My outreach to marketers asking for their book recommendation

For this post, I’m writing from my lived experience, so it was more of a key-bashing-and-backspacing-session on Google Docs. (You can’t see it, but I backspaced a lot.)

Unfortunately, I’m no Anthony Trollope and don’t have a fixed routine for you to copy.

Anthony Trollope's routine

My one non-negotiable is a cup of coffee. I’m sure most people who write will agree with me. Otherwise, I’m all over the place. If I feel like Charles Darwin, I’ll set a 30-minute timer and start writing. Or I’ll go for a walk.

Charles Darwin ranting

Sorry to disappoint you, but none of ChatGPT, Claude, Gemini, or Llama feature here. Call me trad, but I still prefer to write, not generate, content. Writing is thinking, after all. I often surprise myself by discovering things I never knew simply by writing.

Beyond the productivity advice, the things I try to do in my drafts (after getting whipped into shape by Ahrefs over the past five years) are:

  • Ensuring I include use cases of Ahrefs naturally within the narrative (this post is an example of how I do this).
  • Making sure every statement is as accurate as possible. No hype, no lying; qualifiers like “could”, “perhaps”, or “may” are welcome.
  • Being clear. No fluff, jargon only when needed, and images wherever possible.

Once I’m sufficiently satisfied with the draft, I tag Ryan on Basecamp (where we track the progress of drafts) for his feedback. Here’s his feedback for this post:

Ryan's feedback for my draft

Since there are no major changes, it’s ready to be uploaded and published (after making the edits).

Hol’ up, not so fast. Before we actually publish, I need to make sure the on-page SEO for this post is done. Matching search intent gets you 80% of the way there, but there’s no harm in ensuring that Google clearly understands what your page is about.

Think of it like the icing on a cake. The cake is already edible, but the icing just makes it better and prettier.

On-page SEO is really a simple checklist, like:

  • Including the target keyword in the title, URL, H1, and the intro paragraph.
  • Writing an engaging meta description.
  • Linking to other useful pages on our website.
  • Adding alt text to all our images.
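
If you like to sanity-check these basics programmatically, here’s a minimal sketch in Python using requests and BeautifulSoup. The URL, keyword, and checks are illustrative assumptions, not the exact process we use, but they mirror the checklist above:

```python
# Rough on-page SEO spot-check: keyword in title/URL/H1/intro, meta description
# present, internal links counted, images missing alt text flagged.
# Assumes the "requests" and "beautifulsoup4" packages are installed.
import requests
from bs4 import BeautifulSoup


def check_on_page_seo(url: str, keyword: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    title = soup.title.get_text(strip=True) if soup.title else ""
    h1 = soup.find("h1")
    intro = soup.find("p")
    meta_desc = soup.find("meta", attrs={"name": "description"})
    kw = keyword.lower()

    return {
        "keyword_in_title": kw in title.lower(),
        "keyword_in_url": kw.replace(" ", "-") in url.lower(),
        "keyword_in_h1": bool(h1) and kw in h1.get_text().lower(),
        "keyword_in_intro": bool(intro) and kw in intro.get_text().lower(),
        "has_meta_description": bool(meta_desc and meta_desc.get("content")),
        "internal_links": len([a for a in soup.find_all("a", href=True) if a["href"].startswith("/")]),
        "images_missing_alt": len([img for img in soup.find_all("img") if not img.get("alt")]),
    }


# Hypothetical example usage:
print(check_on_page_seo("https://example.com/seo-content-creation/", "seo content creation"))
```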

In my opinion, getting the title right is the most important. Beyond the SEO benefits, it’s the first thing any human sees, so it must do the job of convincing them to click.

The title is the first thing a searcher sees

I follow Ryan’s advice when it comes to titles.

I try to brainstorm at least ten titles in varying styles for every blog post I write. This takes up a lot of brain juice, so this is also where I introduce my best friend, ChatGPT:

Using ChatGPT to generate title ideas

I eventually stuck with my original title, but it’s a good exercise to get your ideas going.

Final thoughts

You might have been expecting some secret SEO tricks I use to rank, but unfortunately, there’s none of that. It’s really just a simple process of keyword research, matching search intent, making something unique, and adding that final touch of on-page SEO.

As this meme explains:

Midwit meme on how not to complicate SEO

My process isn’t fancy, but it works.



8% Of Automattic Employees Choose To Resign

WordPress co-founder and Automattic CEO Matt Mullenweg announced today that he offered Automattic employees the chance to resign with severance pay, and 8.4 percent of them took him up on it. Mullenweg offered $30,000 or six months of salary, whichever is higher, with a total of 159 people taking the offer.

Reactions Of Automattic Employees

Given the recent controversies created by Mullenweg, one might be tempted to view the walkout as a vote of no confidence in him. But that would be a mistake: some of the employees announcing their resignations either praised Mullenweg or simply announced their departure, while many others tweeted how happy they were to stay at Automattic.

One former employee tweeted that he was sad about recent developments but also praised Mullenweg and Automattic as an employer.

He shared:

“Today was my last day at Automattic. I spent the last 2 years building large scale ML and generative AI infra and products, and a lot of time on robotics at night and on weekends.

I’m going to spend the next month taking a break, getting married, and visiting family in Australia.

I have some really fun ideas of things to build that I’ve been storing up for a while. Now I get to build them. Get in touch if you’d like to build AI products together.”

Another former employee, Naoko Takano, a 14-year employee, organizer of WordCamp conferences in Asia, full-time WordPress contributor, and Open Source Project Manager at Automattic, announced on X (formerly Twitter) that today was her last day at Automattic.

She tweeted:

“Today was my last day at Automattic.

I’m actively exploring new career opportunities. If you know of any positions that align with my skills and experience!”

Naoko’s role at WordPress was working with the global WordPress community to improve contributor experiences through the Five for the Future and Mentorship programs. Five for the Future is an important WordPress program that encourages organizations to donate 5% of their resources back into WordPress. It is also one of the issues Mullenweg raised against WP Engine, asserting that the company didn’t donate enough back to the community.

Mullenweg himself found it bittersweet to see those employees go, writing in a blog post:

“It was an emotional roller coaster of a week. The day you hire someone you aren’t expecting them to resign or be fired, you’re hoping for a long and mutually beneficial relationship. Every resignation stings a bit.

However now, I feel much lighter. I’m grateful and thankful for all the people who took the offer, and even more excited to work with those who turned down $126M to stay. As the kids say, LFG!”

Read the entire announcement on Mullenweg’s blog:

Automattic Alignment

Featured Image by Shutterstock/sdx15

YouTube Extends Shorts To 3 Minutes, Adds New Features

YouTube expands Shorts to 3 minutes, adds templates, AI tools, and the option to show fewer Shorts on the homepage.

  • YouTube Shorts will allow 3-minute videos.
  • New features include templates, enhanced remixing, and AI-generated video backgrounds.
  • YouTube is adding a Shorts trends page and comment previews.

How To Stop Filter Results From Eating Crawl Budget

Today’s Ask An SEO question comes from Michal in Bratislava, who asks:

“I have a client who has a website with filters based on a map locations. When the visitor makes a move on the map, a new URL with filters is created. They are not in the sitemap. However, there are over 700,000 URLs in the Search Console (not indexed) and eating crawl budget.

What would be the best way to get rid of these URLs? My idea is keep the base location ‘index, follow’ and newly created URLs of surrounded area with filters switch to ‘noindex, no follow’. Also mark surrounded areas with canonicals to the base location + disavow the unwanted links.”

Great question, Michal, and good news! The answer is an easy one to implement.

First, let’s look at what you’re trying to do and apply it to other situations, like ecommerce and publishers, so more people can benefit. Then we’ll go through the strategies you proposed above and end with the solution.

What Crawl Budget Is And How Parameters Are Created That Waste It

If you’re not sure what Michal is referring to with crawl budget, this is a term some SEO pros use to explain that Google and other search engines will only crawl so many pages on your website before they stop.

If your crawl budget is used on low-value, thin, or non-indexable pages, your good pages and new pages may not be found in a crawl.

If they’re not found, they may not get indexed or refreshed. If they’re not indexed, they cannot bring you SEO traffic.

This is why optimizing a crawl budget for efficiency is important.

Michal shared an example of how URLs that are “thin” from an SEO point of view get created as customers use filters.

The experience for the user is value-adding, but from an SEO standpoint, a location-based page would be better. This applies to ecommerce and publishers, too.

Ecommerce stores will have searches for colors like red or green and products like t-shirts and potato chips.

These create URLs with parameters just like a filter search for locations. They could also be created by using filters for size, gender, color, price, variation, compatibility, etc. in the shopping process.

The filtered results help the end user but compete directly with the collection page, and the collection would be the “non-thin” version.

Publishers have the same issue. Someone might be on SEJ looking for SEO or PPC in the search box and get a filtered result. The filtered result will have articles, but the category of the publication is likely the best result for a search engine.

These filtered results can get indexed because they get shared on social media or someone adds them as a comment on a blog or forum, creating a crawlable backlink. It might also be that an employee in customer service responded to a question on the company blog, or any number of other ways.

The goal now is to make sure search engines don’t spend time crawling the “thin” versions so you can get the most from your crawl budget.

The Difference Between Indexing And Crawling

There’s one more thing to learn before we go into the proposed ideas and solutions – the difference between indexing and crawling.

  • Crawling is the discovery of new pages within a website.
  • Indexing is adding the pages that are worthy of showing to a person using the search engine to the database of pages.

Pages can get crawled but not indexed. Indexed pages have likely been crawled and will likely get crawled again to look for updates and server responses.

But not all indexed pages will bring in traffic or hit the first page because they may not be the best possible answer for queries being searched.

Now, let’s go into making efficient use of crawl budgets for these types of solutions.

Using Meta Robots Or X-Robots-Tag

The first solution Michal pointed out was an “index,follow” directive. This tells a search engine to index the page and follow the links on it. This is a good idea, but only if the filtered result is the ideal experience.

From what I can see, this would not be the case, so I would recommend making it “noindex,follow.”

Noindex would say, “This is not an official page, but hey, keep crawling my site, you’ll find good pages in here.”

And if you have your main menu and navigational internal links done correctly, the spider will hopefully keep crawling them.
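
If you would rather apply that directive in bulk than edit templates, one option is the X-Robots-Tag response header. Here’s a minimal sketch assuming a Flask app where the filtered pages are the ones carrying query parameters; the route and that detection rule are illustrative assumptions, not a universal setup:

```python
# Sketch: send "noindex, follow" for filtered URLs via the X-Robots-Tag header,
# while base pages stay indexable. Assumes Flask and that filter pages are the
# ones carrying query parameters (an assumption for illustration).
from flask import Flask, request

app = Flask(__name__)


@app.route("/locations/<city>")
def locations(city):
    return f"Listings for {city}"  # placeholder page body


@app.after_request
def add_robots_header(response):
    if request.args:  # e.g. /locations/bratislava?lat=48.15&lng=17.11
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response
```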

Canonicals To Solve Wasted Crawl Budget

Canonical links are used to help search engines know what the official page to index is.

If a product exists in three categories on three separate URLs, only one should be “the official” version, so the two duplicates should have a canonical pointing to the official version. The official one should have a canonical link that points to itself. This applies to the filtered locations.

If the location search would result in multiple city or neighborhood pages, the result would likely be a duplicate of the official one you have in your sitemap.

If the content on the filtered page stays the same as the original category, have the filtered result point its canonical back to the main filtering page instead of being self-referencing.

If the content pulls in your localized page with the same locations, point the canonical to that page instead.

In most cases, the filtered version inherits the page you searched or filtered from, so that is where the canonical should point to.
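
As a rough illustration of that rule, the canonical target can usually be derived by stripping the filter parameters from the filtered URL. This is only a sketch; the parameter names are hypothetical and would need to match whatever your filters actually use:

```python
# Sketch: derive the canonical URL for a filtered page by dropping known
# filter parameters. The parameter names below are hypothetical examples.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

FILTER_PARAMS = {"filter", "lat", "lng", "color", "size", "price"}


def canonical_for(filtered_url: str) -> str:
    parts = urlsplit(filtered_url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in FILTER_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))


print(canonical_for("https://example.com/locations/bratislava/?lat=48.15&lng=17.11"))
# -> https://example.com/locations/bratislava/
```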

If you use both noindex and a self-referencing canonical, that’s overkill, and it becomes a conflicting signal.

The same applies to when someone searches for a product by name on your website. The search result may compete with the actual product or service page.

With this solution, you’re telling the spider not to index this page because it isn’t worth indexing, but it is also the official version. It doesn’t make sense to do this.

Instead, use a canonical link, as I mentioned above, or noindex the result and point the canonical to the official version.

Disavow To Increase Crawl Efficiency

Disavowing doesn’t have anything to do with crawl efficiency unless the search engine spiders are finding your “thin” pages through spammy backlinks.

The disavow tool from Google is a way to say, “Hey, these backlinks are spammy, and we don’t want them to hurt us. Please don’t count them towards our site’s authority.”

In most cases, it doesn’t matter, as Google is good at detecting spammy links and ignoring them.

You do not want to add your own site and your own URLs to the disavow tool. You’re telling Google your own site is spammy and not worth anything.

Plus, submitting backlinks to disavow won’t tell a spider what you do and don’t want crawled, as the tool is only for saying a link from another site is spammy.

Disavowing won’t help with crawl efficiency or saving crawl budget.

How To Make Crawl Budgets More Efficient

The answer is robots.txt. This is how you tell specific search engines and spiders what to crawl.

You can include the folders you want them to crawl by marking them as “allow,” and you can say “disallow” on filtered results by disallowing the “?” or “&” symbol, or whichever you use.

If some of those parameters should be crawled, add an explicit rule for the specific parameter, like “?filter=location.”
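
Here’s a sketch of what that robots.txt could look like, written out from Python so the rules are easy to version and regenerate. The paths, the wildcard rules, and the “filter=location” parameter are illustrative assumptions; Google supports the “*” wildcard syntax, but test your own rules before deploying:

```python
# Sketch: generate a robots.txt that blocks parameterized filter URLs while
# keeping base pages and one whitelisted parameter crawlable. The specific
# paths and parameter names are illustrative assumptions.
robots_rules = """User-agent: *
Allow: /locations/
Disallow: /*?*
Allow: /*?filter=location
Sitemap: https://example.com/sitemap.xml
"""
# Google treats the longest matching rule as the most specific, so the
# Allow for "?filter=location" overrides the broader Disallow on "?".

with open("robots.txt", "w", encoding="utf-8") as f:
    f.write(robots_rules)
```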

Robots.txt is how you define crawl paths and work on crawl efficiency. Once you’ve optimized that, look at your internal links: the links from one page on your site to another.

These help spiders find your most important pages while learning what each is about.

Internal links include:

  • Breadcrumbs.
  • Menu navigation.
  • Links within content to other pages.
  • Sub-category menus.
  • Footer links.

You can also use a sitemap if you have a large site and the spiders are not finding the pages you want them to prioritize.

I hope this helps answer your question. It is one I get a lot – you’re not the only one stuck in that situation.



Featured Image: Paulo Bobita/Search Engine Journal
