
How To Identify And Reduce Render-Blocking Resources


Identifying and reducing resources responsible for blocking the rendering of your web page is a critical optimization point that can make or break your page speed.

It's so critical that it can pay dividends to your site's page experience metrics (and your users' satisfaction) as a result.

In 2021, the average time it took to fully render a mobile web page was 22 seconds. In 2018, it was 15 seconds.

Clearly, this is a substantially higher number than Google’s recommended time of 2-3 seconds. It’s also substantially higher than it used to be.

What could be causing these issues with render-blocking resources?

What is driving this increase in overall page render speed?

One interesting trend to note is that there has been an increasing reliance on third-party fonts compared to system fonts. Using third-party fonts as a resource tends to interfere with the processing and rendering of a page.

With system fonts, the browser does not have to load anything extra, so it doesn’t have that additional processing step as a result.

Screenshot from Web Almanac, January 2022

This reliance on third-party fonts across industries likely contributes to longer rendering times. Of course, it is not the only cause of render-blocking resource issues.

In addition, third-party tracking services, such as Google Analytics or a Facebook pixel, tend to have a significant impact on rendering time.

The desire to rely on such technologies is not necessarily terrible from a marketing perspective.

But, from a render-blocking resources perspective, it can cause significant increases in page load time and how Google (and users) perceives your page.

The ideal solution is to make sure that your page loads for user interaction as quickly as possible.

Poor web development practices may also be to blame.

Either way, this is something every website project should address as part of its Core Web Vitals audits.

Page experience, however, is not just about how fast the entire page loads.

Instead, it’s more about the overall experience of the page as measured by Google’s page experience framework, or Core Web Vitals.

This is why you want to work on improving and optimizing your page speed for the critical rendering path throughout the DOM, or document object model.

What Is The Critical Rendering Path?

The critical rendering path refers to all of the steps a browser takes to render the entire page, from when it first begins receiving data to when it finally compiles the page at the final render.

If you optimize it right, this process can take only a few milliseconds.

Optimizing for the critical rendering path means making sure that rendering performs well across many different devices.

This is accomplished by optimizing the critical rendering path to get to your first paint as quickly as possible.

Basically, you're reducing the amount of time users spend looking at a blank white screen so that visual content displays ASAP (see 0.0s below).

An example of optimized vs. unoptimized rendering from Google. Screenshot from Google Web Fundamentals, January 2022

There’s a whole process on how to do this, outlined in Google’s developer guide documentation, but I will be focusing on one heavy hitter in particular: reducing render-blocking resources.

How Does The Critical Rendering Path Work?

The critical rendering path refers to the series of steps a browser takes on its journey to render a page, by converting the HTML, CSS, and JavaScript to actual pixels on the screen.

An example of the critical rendering path. Screenshot from Medium, January 2022

Essentially, the browser needs to request, receive, and parse all HTML and CSS files (plus some additional work) before it will start to render any visual content.

This process occurs within a fraction of a second (in most cases). Until the browser completes these steps, users will see a blank white page.

The following is an example of how users may experience how a page loads according to the different stages of the page load process:

How users perceive page rendering. Screenshot from web.dev, January 2022

Improving the critical rendering path can thus improve the overall page experience, which can help contribute to better performance on Core Web Vitals metrics.

How Do I Optimize The Critical Rendering Path?

In order to improve the critical rendering path, you have to analyze your render-blocking resources.

Any render-blocking resources may delay the initial rendering of the page and negatively impact your Core Web Vitals scores as a result.

This involves an optimization process of:

  • Reducing the number of resources that are critical to the rendering path. This can be done by using a defer method for any possible render-blocking resources.
  • Prioritizing above-the-fold content and downloading important media assets as early as you possibly can (a short markup sketch follows this list).
  • Compressing the file size of any remaining critical resources.
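
For the second point, one simple way to prioritize important above-the-fold assets is a resource hint in the HTML <head>. Here is a minimal, hypothetical sketch; the file paths are placeholders for your own assets:

<!-- Ask the browser to fetch the above-the-fold hero image early -->
<link rel="preload" as="image" href="/images/hero.webp">

<!-- Fonts used above the fold can be preloaded the same way -->
<link rel="preload" as="font" type="font/woff2" href="/fonts/heading.woff2" crossorigin>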

By doing this, it’s possible to improve both Core Web Vitals and how your page physically renders to the user.

Why Should I Care?

Google’s user behavior data reports that most users abandon a slow site after about 3 seconds.

In addition to studies that show that reducing page load time and improving the page experience leads to greater user satisfaction, there are also several major Google updates on the horizon that you will want to prepare for.

Identifying and optimizing render-blocking resources will be critical to stay on top of the game when these updates hit.

Google will be implementing page experience on desktop in 2022, beginning the rollout in February and finishing up in March.

According to Google, the same three Core Web Vitals metrics (LCP, FID, and CLS) along with their associated thresholds will now be linked to desktop ranking.

Also, Google is working on a brand-new, possibly experimental Core Web Vitals metric that takes into account maximum event duration and total event duration.

Their explanation of the factors they are considering is:

Maximum event duration: the interaction latency is equal to the largest single event duration from any event in the interaction group.
Total event duration: the interaction latency is the sum of all event durations, ignoring any overlap.

With many studies linking reductions in page load times to improvements in valuable KPIs (conversions, bounce rate, time on site), improving site latency has become a top-of-mind business goal for many organizations.

SEO professionals are in a unique position to guide this effort, as our role is often to bridge the gap between business goals and web developers’ priorities.

Having the ability to audit a site, analyze results, and identify areas for improvement helps us to work with developers to improve performance and translate results to key stakeholders.

The Goals Of Optimizing Render-Blocking Resources

One of the primary goals of optimizing the critical rendering path is to make sure that the resources that are needed to render that important, above-the-fold content are loaded as quickly as is humanly possible.

Any render-blocking resources must be deprioritized, along with any resources that prevent the page from rendering quickly.

Each optimization point will contribute to the overall improvement of your page speed, page experience, and Core Web Vitals scores.

Why Improve Render-Blocking CSS?

Google has said many times that coding is not necessarily important for ranking.

But, by the same token, page speed optimization improvements can potentially deliver a ranking benefit, depending on the query.

CSS files are considered render-blocking resources.

Why is this?

Even though this happens within milliseconds in most cases, the browser won't start to render any page content until it is able to request, receive, and process all CSS styles.

If a browser rendered content before its styles arrived, all you would get is a bunch of plain text and unstyled links.

This means that your page would basically be "naked," for lack of a better term.

Removing the CSS styles would leave a page that is practically unusable, and the majority of content would need repainting to look the least bit palatable for a user.

Example of CSS enabled vs CSS disabled.

If we examine the page rendering process, the gray box below represents the time the browser needs to fetch all CSS resources so it can begin constructing the CSS object model (the CSSOM tree).

This could take anywhere from a millisecond to several seconds, depending on what your server needs to do in order to load these resources.

It can also vary depending on the size and quantity of those CSS files.

The following render tree shows an example of a browser rendering all the files along with CSS within the DOM:

DOM CSSOM Render Tree. Screenshot from Medium, January 2022

In addition, the following shows an example of a page's rendering sequence, from the construction of the DOM to the final painting and compositing of the page, which is known as the critical rendering path.

Because CSS is a render-blocking resource by default, it makes sense to improve CSS to the point where it doesn’t have any negative impact on the page rendering process at all.

The Official Google Recommendation States The Following:

“CSS is a render-blocking resource. Get it to the client as soon and as quickly as possible to optimize the time to first render.”

The HTML must be converted into something the browser can work with (the DOM), and CSS works the same way: it must be converted into the CSSOM.

By optimizing the CSS files within the DOM and CSSOM, you can help decrease the time it takes for a browser to render everything, which greatly contributes to an enhanced page experience.
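
To illustrate getting CSS to the client quickly, here is a minimal, hypothetical sketch of a common pattern: inline the handful of critical, above-the-fold rules and load the full stylesheet in a non-blocking way. The file path and rules are placeholders, not an official Google snippet:

<head>
  <!-- Inline only the critical, above-the-fold rules so the first paint is not blocked -->
  <style>
    .hero { margin: 0 auto; max-width: 960px; font-family: Arial, sans-serif; }
  </style>

  <!-- Load the full stylesheet without blocking rendering:
       media="print" is non-blocking, and it is switched to "all" once the file arrives -->
  <link rel="stylesheet" href="/css/styles.css" media="print" onload="this.media='all'">

  <!-- Fallback for visitors with JavaScript disabled -->
  <noscript><link rel="stylesheet" href="/css/styles.css"></noscript>
</head>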

Why Improve Render-Blocking JavaScript?

Did you know that loading JavaScript is not always required?

Downloading and parsing all JavaScript resources is not a necessary step for fully rendering a page, so JavaScript isn't technically a required part of the page render.

But, the caveat to this is: Most modern sites are coded in such a way that JavaScript (for example the Bootstrap JS framework) is required in order to render the above-the-fold experience.

But, if a browser finds JavaScript files before the first render of a page, rendering is paused until those JavaScript files are downloaded and fully executed.

You can change this behavior by deferring JavaScript files so they execute later.

One example: an alert() call built into the HTML will stop page rendering until that JavaScript code has executed.

JavaScript has the sole power to modify both HTML and CSS styles, so this makes sense.

Parsing and execution of JavaScript is delayed because JavaScript can potentially change the entire page content. This delay is built into the browser by default, as a "just in case" safeguard.
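
To make the difference concrete, here is a minimal sketch of the three ways a script can be loaded; the file names are hypothetical:

<!-- Render-blocking: HTML parsing stops here until the script downloads and executes -->
<script src="/js/app.js"></script>

<!-- defer: downloads in parallel, executes only after the HTML is parsed, in document order -->
<script src="/js/app.js" defer></script>

<!-- async: downloads in parallel, executes as soon as it arrives
     (suited to independent scripts such as analytics) -->
<script src="/js/analytics.js" async></script>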

Official Google Recommendation:

“JavaScript can also block DOM construction and delay when the page is rendered. To deliver optimal performance … eliminate any unnecessary JavaScript from the critical rendering path.”

How To Identify Render-Blocking Resources

To identify the critical rendering path and analyze critical resources:

  • Run a test using webpagetest.org and click on the “waterfall” image.
  • Focus on all resources requested and downloaded before the green “Start Render” line.

Analyze your waterfall view; look for CSS or JavaScript files that are requested before the green “start render” line but are not critical for loading above-the-fold content.

Example of start render. Screenshot from WebPageTest, January 2022

After identifying a (potentially) render-blocking resource, test removing it to see if above-the-fold content is affected.

In my example, I noticed some JavaScript requests that may be critical.

Even though they are critical, it's sometimes a good idea to remove these scripts temporarily to see how shifting elements on the site affect the experience.

Example of web page test results showing render-blocking resources. Screenshot from WebPageTest, January 2022

There are also other ways to improve such resources.

For non-critical JavaScript files, you may want to look into combining the files and deferring them by including these files at the bottom of your page.

For non-critical CSS files, you can also reduce how many CSS files you have by combining them into one file and compressing them.

Improving your coding techniques can also result in a file that’s faster to download and causes less impact on the rendering speed of your page.

Ways To Reduce Render-Blocking Elements On The Page

Once you determine that a render-blocking resource is not critical for rendering above-the-fold content, you will want to explore the many methods available to improve the rendering of your page and defer non-critical resources.

There are many solutions to this problem, from deferring JavaScript and CSS files to reducing the impact that CSS can have.

One possible solution is to not add CSS using the @import rule.

Make Sure Not To Add CSS Using The @Import Rule

Even though @import appears to keep your HTML file cleaner, it can actually create performance issues.

The @import declaration causes the browser to process a CSS file more slowly. Why? Because the imported files can only be discovered, and then downloaded, after the parent stylesheet has been fetched and parsed.

Rendering will be entirely blocked until the process completes.

Indeed, the best solution is to use the standard method of including a CSS stylesheet using the <link rel="stylesheet"> declaration in the HTML.
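
A quick, hypothetical comparison of the two approaches (the file names are placeholders):

Inside main.css, the slower approach; theme.css is only discovered after main.css has been downloaded and parsed:

@import url("theme.css");

In the HTML, the faster approach; both stylesheets are visible up front and can download in parallel:

<link rel="stylesheet" href="/css/main.css">
<link rel="stylesheet" href="/css/theme.css">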

Minify Your CSS And JavaScript Files

If you are on WordPress, using a plugin to minify your CSS and JavaScript files can have a tremendous impact.

The process of minification strips unnecessary spaces, line breaks, and comments from a file, compressing it further, so you can end up with a nice performance boost.

Also, even if you are not on WordPress, you can use the services of a well-qualified developer in order to complete the process manually.

This will take more time but can be well worth it.

Minified files are usually much lighter than their unminified counterparts, which means that initial rendering will complete much faster.

In addition, after the minification process, you can also expect the download process to be faster, because less time is necessary to download non-render-blocking resources.
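
To illustrate what minification does, here is a small, hypothetical style rule before and after; the rule is identical, only the bytes change:

<!-- Before minification: readable, but every space and line break costs bytes -->
<style>
  .site-header {
    margin: 0 auto;
    padding: 24px 16px;
    color: #333333;
  }
</style>

<!-- After minification: the same rule in far fewer bytes -->
<style>.site-header{margin:0 auto;padding:24px 16px;color:#333}</style>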

Use System Fonts Instead Of Third-Party Fonts

Third-party fonts may appear to make a site "prettier," but that appearance comes at a cost.

These third-party font files often take longer to load and can contribute to your render-blocking resources problem.

Because the files are external, the browser has to make additional requests to download them before it can render your page, which may result in significantly higher load times.

If you're on a team with less-than-ideal development practices, it could stand to reason that you have many third-party font files that are not necessary for rendering your site.

In that case, removing all of these unnecessary files can reduce your render-blocking resources significantly and contribute to your overall improvement in Core Web Vitals.

Using system fonts, on the other hand, keeps all processing within the browser, with no external requests.

Also, there are likely system fonts that may be very similar to the third-party fonts you are using.
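
As a minimal sketch, a so-called system font stack tells the browser to use fonts the operating system already ships with, so no external font files are requested at all:

<style>
  /* No external font files: each platform uses the first font in the list it has installed */
  body {
    font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto,
                 "Helvetica Neue", Arial, sans-serif;
  }
</style>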

Improve Your Coding Techniques And Combine Files

If you’re working with code yourself, you may (or may not … no one is judging here) find that techniques are less than optimal.

One example: you are using inline CSS everywhere, and this is causing processing and rendering glitches within the browser.

The easy solution is to take all of that inline CSS and code it properly within the external CSS stylesheet file.
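
Here is a hypothetical before-and-after of that clean-up (the class name and rule are placeholders):

<!-- Before: the same inline style repeated on every element -->
<p style="color: #c0392b; font-weight: bold;">Sale ends Friday.</p>

<!-- After: the element references one rule that lives in the external stylesheet -->
<p class="notice">Sale ends Friday.</p>

/* In the stylesheet */
.notice { color: #c0392b; font-weight: bold; }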

If another developer’s code is not up to par, this can create major issues with page rendering.

For example: Say that you have a page that’s coded using older techniques rather than modern and leaner ones.

Older techniques could include significant code bloat, resulting in slower rendering of the page.

To eliminate this, you can improve your coding techniques by creating leaner and less bloated code, resulting in a much better page rendering experience.

Combining files can also improve the situation.

For example: If you have eight or 10 JavaScript files that all contribute to the same task, you can hire the services of a developer who can then combine all of these files for you.

And, if they are less critical JavaScript files, they can also be deferred by adding them to the end of the HTML code on the page, further reducing page rendering problems.
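
A minimal sketch of that combination, assuming hypothetical file names:

<!-- Before: several small scripts, each a separate request in the <head> -->
<script src="/js/menu.js"></script>
<script src="/js/slider.js"></script>
<script src="/js/forms.js"></script>

<!-- After: one combined, minified bundle, deferred and placed just before the closing </body> tag -->
<script src="/js/bundle.min.js" defer></script>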

By combining files and improving your coding techniques, you can contribute significantly to better page rendering experiences.

Key Takeaways

Finding solutions to reduce render-blocking resources has been an SEO audit staple for a while now. It's important for several reasons:

By reducing render-blocking resources, you make your site faster. You can also reverse engineer your site to take advantage of elements that will play into Google’s overall page experience update.

You also put yourself in a position to take advantage of a boost you will get from Google’s Core Web Vitals metrics.

Core Web Vitals are not going away. They have become a critical optimization point in order to facilitate the fastest potential rendering times possible within your current framework.

With new Core Web Vitals metrics being introduced in the future, making sure that you’re up-to-date on existing metrics is always a great idea.

Finding and repairing render-blocking resources also ensures that you keep your website visitors happy and that your site is always in top shape for prime time.

Featured Image: Naumova Marina/Shutterstock


Is ChatGPT Use Of Web Content Fair?

Large Language Models (LLMs) like ChatGPT train on multiple sources of information, including web content. This data forms the basis of summaries of that content, in the form of articles produced without attribution or benefit to those who published the original content used to train ChatGPT.

Search engines download website content (called crawling and indexing) to provide answers in the form of links to the websites.

Website publishers have the ability to opt-out of having their content crawled and indexed by search engines through the Robots Exclusion Protocol, commonly referred to as Robots.txt.

The Robots Exclusion Protocol is not an official Internet standard, but it's one that legitimate web crawlers obey.

Should web publishers be able to use the Robots.txt protocol to prevent large language models from using their website content?

Large Language Models Use Website Content Without Attribution

Some who are involved with search marketing are uncomfortable with how website data is used to train machines without giving anything back, like an acknowledgement or traffic.

Hans Petter Blindheim (LinkedIn profile), Senior Expert at Curamando shared his opinions with me.

Hans commented:

“When an author writes something after having learned something from an article on your site, they will more often than not link to your original work because it offers credibility and as a professional courtesy.

It’s called a citation.

But the scale at which ChatGPT assimilates content and does not grant anything back differentiates it from both Google and people.

A website is generally created with a business directive in mind.

Google helps people find the content, providing traffic, which has a mutual benefit to it.

But it’s not like large language models asked your permission to use your content, they just use it in a broader sense than what was expected when your content was published.

And if the AI language models do not offer value in return – why should publishers allow them to crawl and use the content?

Does their use of your content meet the standards of fair use?

When ChatGPT and Google’s own ML/AI models trains on your content without permission, spins what it learns there and uses that while keeping people away from your websites – shouldn’t the industry and also lawmakers try to take back control over the Internet by forcing them to transition to an “opt-in” model?”

The concerns that Hans expresses are reasonable.

In light of how fast technology is evolving, should laws concerning fair use be reconsidered and updated?

I asked John Rizvi, a Registered Patent Attorney (LinkedIn profile) who is board certified in Intellectual Property Law, if Internet copyright laws are outdated.

John answered:

“Yes, without a doubt.

One major bone of contention in cases like this is the fact that the law inevitably evolves far more slowly than technology does.

In the 1800s, this maybe didn’t matter so much because advances were relatively slow and so legal machinery was more or less tooled to match.

Today, however, runaway technological advances have far outstripped the ability of the law to keep up.

There are simply too many advances and too many moving parts for the law to keep up.

As it is currently constituted and administered, largely by people who are hardly experts in the areas of technology we’re discussing here, the law is poorly equipped or structured to keep pace with technology…and we must consider that this isn’t an entirely bad thing.

So, in one regard, yes, Intellectual Property law does need to evolve if it even purports, let alone hopes, to keep pace with technological advances.

The primary problem is striking a balance between keeping up with the ways various forms of tech can be used while holding back from blatant overreach or outright censorship for political gain cloaked in benevolent intentions.

The law also has to take care not to legislate against possible uses of tech so broadly as to strangle any potential benefit that may derive from them.

You could easily run afoul of the First Amendment and any number of settled cases that circumscribe how, why, and to what degree intellectual property can be used and by whom.

And attempting to envision every conceivable usage of technology years or decades before the framework exists to make it viable or even possible would be an exceedingly dangerous fool’s errand.

In situations like this, the law really cannot help but be reactive to how technology is used…not necessarily how it was intended.

That’s not likely to change anytime soon, unless we hit a massive and unanticipated tech plateau that allows the law time to catch up to current events.”

So it appears that the issue of copyright law involves many considerations to balance when it comes to how AI is trained; there is no simple answer.

OpenAI and Microsoft Sued

An interesting case that was recently filed is one in which OpenAI and Microsoft are accused of using open source code to create their Copilot product.

The problem with using open source code is that many open source licenses require attribution.

According to an article published in a scholarly journal:

“Plaintiffs allege that OpenAI and GitHub assembled and distributed a commercial product called Copilot to create generative code using publicly accessible code originally made available under various “open source”-style licenses, many of which include an attribution requirement.

As GitHub states, ‘…[t]rained on billions of lines of code, GitHub Copilot turns natural language prompts into coding suggestions across dozens of languages.’

The resulting product allegedly omitted any credit to the original creators.”

The author of that article, who is a legal expert on the subject of copyrights, wrote that many view open source Creative Commons licenses as a “free-for-all.”

Some may also consider the phrase free-for-all a fair description of how datasets comprised of Internet content are scraped and used to generate AI products like ChatGPT.

Background on LLMs and Datasets

Large language models train on multiple data sets of content. Datasets can consist of emails, books, government data, Wikipedia articles, and even datasets created of websites linked from posts on Reddit that have at least three upvotes.

Many of the datasets related to the content of the Internet have their origins in the crawl created by a non-profit organization called Common Crawl.

Their dataset, the Common Crawl dataset, is available free for download and use.

The Common Crawl dataset is the starting point for many other datasets that are created from it.

For example, GPT-3 used a filtered version of Common Crawl (Language Models are Few-Shot Learners PDF).

This is how GPT-3 researchers used the website data contained within the Common Crawl dataset:

“Datasets for language models have rapidly expanded, culminating in the Common Crawl dataset… constituting nearly a trillion words.

This size of dataset is sufficient to train our largest models without ever updating on the same sequence twice.

However, we have found that unfiltered or lightly filtered versions of Common Crawl tend to have lower quality than more curated datasets.

Therefore, we took 3 steps to improve the average quality of our datasets:

(1) we downloaded and filtered a version of CommonCrawl based on similarity to a range of high-quality reference corpora,

(2) we performed fuzzy deduplication at the document level, within and across datasets, to prevent redundancy and preserve the integrity of our held-out validation set as an accurate measure of overfitting, and

(3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity.”

Google's C4 dataset (Colossal Clean Crawled Corpus), which was used to create the Text-to-Text Transfer Transformer (T5), has its roots in the Common Crawl dataset, too.

Their research paper (Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer PDF) explains:

“Before presenting the results from our large-scale empirical study, we review the necessary background topics required to understand our results, including the Transformer model architecture and the downstream tasks we evaluate on.

We also introduce our approach for treating every problem as a text-to-text task and describe our “Colossal Clean Crawled Corpus” (C4), the Common Crawl-based data set we created as a source of unlabeled text data.

We refer to our model and framework as the ‘Text-to-Text Transfer Transformer’ (T5).”

Google published an article on their AI blog that further explains how Common Crawl data (which contains content scraped from the Internet) was used to create C4.

They wrote:

“An important ingredient for transfer learning is the unlabeled dataset used for pre-training.

To accurately measure the effect of scaling up the amount of pre-training, one needs a dataset that is not only high quality and diverse, but also massive.

Existing pre-training datasets don’t meet all three of these criteria — for example, text from Wikipedia is high quality, but uniform in style and relatively small for our purposes, while the Common Crawl web scrapes are enormous and highly diverse, but fairly low quality.

To satisfy these requirements, we developed the Colossal Clean Crawled Corpus (C4), a cleaned version of Common Crawl that is two orders of magnitude larger than Wikipedia.

Our cleaning process involved deduplication, discarding incomplete sentences, and removing offensive or noisy content.

This filtering led to better results on downstream tasks, while the additional size allowed the model size to increase without overfitting during pre-training.”

Google, OpenAI, even Oracle’s Open Data are using Internet content, your content, to create datasets that are then used to create AI applications like ChatGPT.

Common Crawl Can Be Blocked

It is possible to block Common Crawl and subsequently opt-out of all the datasets that are based on Common Crawl.

But if the site has already been crawled, then the website data is already in datasets. There is no way to remove your content from the Common Crawl dataset or from any of the derivative datasets like C4.

Using the Robots.txt protocol will only block future crawls by Common Crawl; it won't stop researchers from using content already in the dataset.

How to Block Common Crawl From Your Data

Blocking Common Crawl is possible through the use of the Robots.txt protocol, within the above discussed limitations.

The Common Crawl bot is called CCBot.

It is identified using the most up-to-date CCBot user-agent string: CCBot/2.0

Blocking CCBot with Robots.txt is accomplished the same as with any other bot.

Here is the code for blocking CCBot with Robots.txt.

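# Block Common Crawl's CCBot from crawling the entire site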
User-agent: CCBot
Disallow: /

CCBot crawls from Amazon AWS IP addresses.

CCBot also follows the nofollow Robots meta tag:

<meta name="robots" content="nofollow">

What If You’re Not Blocking Common Crawl?

Web content can be downloaded without permission; that is how browsers work: they download content.

Neither Google nor anyone else needs permission to download and use content that is published publicly.

Website Publishers Have Limited Options

The consideration of whether it is ethical to train AI on web content doesn’t seem to be a part of any conversation about the ethics of how AI technology is developed.

It seems to be taken for granted that Internet content can be downloaded, summarized and transformed into a product called ChatGPT.

Does that seem fair? The answer is complicated.

Featured image by Shutterstock/Krakenimages.com

Google Updates Discover Follow Feed Guidelines

Google updated their Google Discover feed guidelines to emphasize the most important elements to include in the feed in order for it to be properly optimized.

Google Discover Feed

The Google Discover Follow feed feature offers relevant content to Chrome Android users and represents an important source of traffic that is matched to user interests.

The Google Discover Follow feature is a component of Google Discover, a way to capture a steady stream of traffic apart from Google News and Google Search.

Google’s Discover Follow feature works by allowing users to choose to receive updates about the latest content on a site they are interested in.

The way to participate in Discover Follow is through an optimized RSS or Atom feed.

If the feed is properly optimized on a website, users can choose to follow a website or a specific category of a website, depending on how the publisher configures their RSS/Atom feeds.

Audiences that follow a website will see the new content populate their Discover Follow feed which in turn brings fresh waves of traffic to participating websites that are properly optimized.

According to Google:

“The Follow feature lets people follow a website and get the latest updates from that website in the Following tab within Discover in Chrome.

Currently, the Follow button is a feature that’s available to signed-in users in English in the US, New Zealand, South Africa, UK, Canada, and Australia that are using Chrome Android.”

Receiving traffic from the Discover Follow feature only happens for sites with properly optimized feeds that follow the Discover Follow feature guidelines.

Updated Guidance for Google Discover Follow Feature

Google updated their guidelines for the Discover Follow feature to stress the importance of the feed <title> element and per-item <link> elements, and to make clear that the feed must contain them.

The new guidance states:

“The most important content for the Follow feature is your feed <title> element and your per item <link> elements. Make sure your feed includes these elements.”

Presumably, the absence of these two elements may leave Google unable to understand the feed and display it for users, which would mean a loss of traffic.

Site publishers who participate in the Google Discover Follow feature should verify that their RSS or Atom feeds properly display the <title> and <link> elements.
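
As a rough sketch, a minimal RSS feed with both elements in place might look like this (the URLs, titles, and dates are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <!-- Feed-level <title>: the name shown to people who follow the site -->
    <title>Example Site Blog</title>
    <link>https://www.example.com/blog/</link>
    <description>Latest articles from Example Site</description>
    <item>
      <title>Example Post</title>
      <!-- Per-item <link>: the URL surfaced in the Following tab -->
      <link>https://www.example.com/blog/example-post/</link>
      <pubDate>Mon, 09 Jan 2023 08:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>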

Google Discover Optimization

Publishers and SEOs are familiar with optimizing for Google Search.

But many content publishers may be unaware of how to optimize for Google Discover in order to enjoy the loads of traffic that results from properly optimizing for Google Discover and the Google Discover Follow feature.

The Follow Feed feature, a component of Google Discover, is a way to help ensure that the website obtains a steady stream of relevant traffic beyond organic search.

This is why it’s important to make sure that your RSS/Atom feeds are properly optimized.

Read Google’s announcement of the updated guidance and read the complete Follow Feature feed guidelines here.

Featured image by Shutterstock/fizkes

Is Wix Good for SEO? Here's Everything to Know About Wix SEO

As of 2023, Wix provides solid options for a basic SEO setup that will cover most needs but still lacks the flexibility of more advanced and granular settings.

In this article, I go through all the nooks and crannies of Wix SEO options so you can decide if Wix is right for your needs. I’ll also share some tips on how to make your Wix website more search-friendly if you’re already a Wix user.

Does Wix have everything you need for SEO?

Wix has a bit of a bad reputation for SEO. This is because from its launch in 2006 until its first big update in 2016, it lacked many basic SEO functionalities like adding alt text and being able to change URL structures.

However, that is no longer the case. You can now do virtually every on-page SEO task using the Wix platform. It even hired expert SEOs, such as Mordy Oberstein and Crystal Carter, who have pushed for more SEO features and better communication.

Back in 2019, we ran a study comparing Wix SEO to WordPress SEO analyzing over 6.4M websites. We found that, on average, far more WordPress websites get organic traffic than Wix websites. 

WordPress vs. Wix organic traffic

However, we believe that this is due to the website owners, not the platforms themselves. On average, Wix website owners are less tech-savvy (and less educated on SEO) than WordPress users, simply because of the extra learning curve that comes with using WordPress.

Editor’s Note

The study and its methodology weren’t great. Given that there are so many variables involved, we didn’t see a way to rerun it properly. We decided to replace the study with this guide, providing more value to readers and being more fair to Wix.

That said, we had Wix's SEO team provide feedback on this article as part of the editing process to ensure accuracy and increase objectivity.

Michal Pecánek

Let’s also see what Googlers have to say about Wix.

Here’s a quote by John Mueller, Google’s senior search analyst, on the topic of Wix SEO:

Wix is fine for SEO. A few years back it was pretty bad in terms of SEO, but they’ve made fantastic progress, and are now a fine platform for businesses. The reputation from back then lingers on, but don’t be swayed by it.

What they’ve done in recent years is really good stuff, including making it trivial to have a really fast site (as you see in the Lighthouse scores — admittedly, speed is only a tiny part of SEO).

If Wix works for them, and they don’t need more, there’s no reason to switch.

John Mueller

So overall, Wix has the majority of features most website users would need to manage SEO. But there’s more to the story…

While Wix has no major SEO issues, it does have three minor issues that may stop you from wanting to use it if you’re serious about search:

  1. Website builders will typically load slower than custom code – Wix inevitably has code bloat from features you will never use. This is true even if you use WordPress and install a theme builder like Elementor or Thrive Architect, so this isn’t exclusive to Wix. That said, it’s only a minor issue, and it already has great Core Web Vitals compared to other CMS types.
  2. Less-than-ideal multilingual support – If you plan on publishing your blog posts in multiple languages, you may want to skip Wix. For example, you don’t have full control over the URLs for different language versions of your site. However, some of these aspects are in its feature requests and may be available soon.
  3. Limited advanced SEO control – Wix lacks some advanced SEO features. For example, it’s difficult to edit the auto-generated sitemap. Additionally, Wix generates cryptic file names for images (e.g., 09a0ab7~mv2.jpg/), which is not good for ranking on Google Images. 

Ultimately, Wix’s SEO features will work for most website owners out there. 

If you are a business owner who wants to focus more time on running your business and less time on learning how to build the perfect website with the best features, Wix is an excellent choice.

To help you decide if Wix is right for you, we made this helpful table of who should and shouldn’t use Wix to build their website:

  • Personal website: Yes. Wix provides all you need for small websites.
  • Local business: Yes. Wix provides all you need to create a quick and easy local business website and rank in local search results.
  • Affiliate website: Maybe. Wix can handle your needs for affiliate marketing and SEO well. But if you're aiming to create a big, complex website, it may be worth your time to learn WordPress instead.
  • Content website: Maybe. Wix can handle your needs for content used to show display ads. But if you're aiming to create a big, complex website, it may be worth your time to learn WordPress instead.
  • Services website: Maybe. If you offer a service such as SaaS, banking, etc., then Wix may be a good choice depending on the specific features needed. You'll have to do your own research.
  • E-commerce website: Maybe. There's no perfect out-of-the-box CMS for this. The most common choices are Shopify or WooCommerce, but Wix is a solid option for e-commerce SMBs too. It can handle even some of the more complex e-commerce SEO stuff.

I personally would never build a website on Wix over WordPress for myself. That’s because WordPress has more features and customizability. And even though it comes with a much steeper learning curve, that is something I’ve overcome. (I’ve been building WordPress websites for over a decade.)

That said, I built my dad a website for his remodeling business using Wix. I did this because it’s much easier for him to go in and edit things himself than it is with WordPress. And he’s able to rank for local keywords just fine on the Wix platform. The website is fairly new, and I will come back in a few months to update this page with the progress of his rankings.

One last thing to keep in mind is that switching your content management system (CMS) can be a massive pain. So whichever tool you choose, be ready to stick to it for a long time.

Five tips to make your Wix website SEO-friendly

Deciding to stick with Wix? Here are five Wix-specific tips to help you make sure your website is search-optimized:

1. Complete the Wix SEO Setup Checklist

Wix has a really easy-to-use SEO Setup Checklist built in its platform. To use it, navigate to the Marketing & SEO page, then click Get Found on Google.

Wix SEO Setup Checklist

From there, you’ll be asked a few questions to get started, such as your business name and the top three to five keywords you want your website to rank for. If you’re not sure which keywords to target, I highly recommend reading our guide to keyword research.

Once you answer the questions, you’ll see a screen with steps you can take to optimize your website for search engines, starting with your homepage.

Wix's steps to optimize site for search engines

SEO Setup Checklist will guide you through the process of updating your pages’ meta tags, making your website mobile-friendly, and more.

Go through each of these steps, and you’ll be well on your way to a search-optimized website.

2. Set up Google Search Console and Analytics

You'll notice one of the steps is to connect your site to Google Search Console (GSC). This is Google's suite of tools designed for website owners like you to more easily monitor your search rankings and find issues preventing your pages from being indexed by Googlebot.

Performance report, via GSC

You can set up GSC with the click of a button using the SEO Setup Checklist. If you want to learn more, check out our complete guide to Google Search Console.

Wix SEO Wiz connecting Google Search Console

Once GSC is set up, you can connect Google Analytics (GA) to your website to get more insights into where your traffic is coming from and which pages your visitors are going to.

To connect GA, navigate to the Marketing Integrations tab under Marketing & SEO. It’s the first box that appears—click Connect.

Wix marketing integration with Google Analytics

Wix will instruct you on how to create a Google Analytics Property ID and connect that ID with your Wix website. If you need more help, we also have a guide on Google Analytics 4.

Once it’s set up and your website starts getting traffic, you’ll be able to see traffic and webpage reports. This can help you identify which pages may need improvements or how many conversions you get from organic traffic.

Google Analytics traffic report

That’s it—you’re done with step #2.

3. Create search-optimized content

If your website just has the basic homepage, as well as “about” and “contact” pages, chances are you won’t be able to rank well for much (if anything).

A crucial step in SEO is creating content that can be crawled and indexed by Googlebot. That means creating service pages if you’re a local business and possibly also creating blog content targeting relevant keywords to your industry.

Rather than making this whole article about content, I will leave you with a resource. Go check out our guide to SEO content to learn more.

4. Add internal links

Backlinks—links from another website pointing to your website—are one of the most important ranking factors in Google’s algorithm. However, they can be difficult to obtain.

Internal links from one page on your site to another on your site are almost as important as backlinks. But they are much easier to add. You just highlight some text and add the link in.

If you have pages on your website that you want to rank better, simply add more internal links to that page and you’re already on the path to higher rankings. Obviously, just adding some internal links won’t suddenly make you rank #1 for a keyword. But it’s an important—and often overlooked—step on the road to better rankings. 

To add an internal link with Wix, simply highlight the text you want to add a link to, click the “chain link” icon, then choose the page you want the link to point to.

Wix internal link settings

Check out our internal linking guide to learn more about this important SEO task.

5. Schedule regular SEO audits

Once your Wix website is set up and optimized, it’s important to schedule regular SEO audits to keep tabs on your rankings and make sure nothing gets broken.

While you can do this manually, it is time-consuming and easy to overlook something.

For example, you may not realize one of your pages broke and is now a 404 page, or that a certain blog post isn’t showing up in your sitemap, or that you’re missing metadata on a certain page… the list goes on.

Instead, you can use Ahrefs Webmaster Tools to automatically run weekly or monthly audits of your website. This free tool will give you a health score from 0 to 100 on how “healthy” your website is from an SEO perspective.

Health Score overview, via Ahrefs' Site Audit

You can then see specific tasks you need to do in order to fix these issues on your site. Go to the All issues report and check the issues we found while crawling your website.

All issues report, via Ahrefs' Site Audit

You can click the error and see exactly what it means and how to fix it.

Issue details, via Ahrefs' Site Audit

From there, you can click “View affected URLs” and go to those pages to fix the issues. Easy peasy.

Final thoughts

Overall, Wix is a perfectly capable website builder for SEO. While it isn’t as advanced and capable as more complex CMSs like WordPress, it’s plenty good for people who just want to build a website and don’t have the time for or interest in a giant learning curve.

I still use Wix for certain client sites and to build sites for friends and family who want a website where they can still make small edits themselves. It’s my favorite website builder compared to other tools like Squarespace or WordPress.com (not to be confused with WordPress.org, which I use all the time).

One more benefit to using a website builder like Wix is that it’s a complete solution and takes care of the hosting and security. In fact, John doesn’t recommend self-hosting your websites. 

That said, if you want more advanced features and to dive deeper in SEO, I suggest learning WordPress.
