How To Identify And Reduce Render-Blocking Resources


Identifying and reducing the resources that block the rendering of your web page is a critical optimization that can make or break your page speed.

Getting it right pays dividends for your site’s page experience metrics (and your users’ satisfaction).

In 2021, the average time it took to fully render a mobile web page was 22 seconds. In 2018, it was 15 seconds.

Clearly, this is substantially higher than Google’s recommended load time of two to three seconds, and the trend is moving in the wrong direction.

What could be causing these issues with render-blocking resources?

What is driving this increase in overall page render speed?

One interesting trend to note is that there has been an increasing reliance on third-party fonts compared to system fonts. Using third-party fonts as a resource tends to interfere with the processing and rendering of a page.

With system fonts, the browser does not have to load anything extra, so it doesn’t have that additional processing step as a result.

Screenshot from Web Almanac, January 2022

This growing reliance on third-party fonts across industries is likely contributing to longer rendering times, though it is not the only cause of render-blocking issues.

In addition, tracking scripts, such as Google’s own Analytics or a third-party Facebook pixel, tend to have a significant impact on rendering time.

The desire to rely on such technologies is not necessarily terrible from a marketing perspective.

But, from a render-blocking perspective, these scripts can significantly increase page load time and hurt how Google (and users) perceive your page.

The ideal solution is to make sure that your page loads for user interaction as quickly as possible.

It’s also possible that poor web development practices are partly to blame.

Either way, render-blocking resources are something every website project should address as part of its Core Web Vitals audits.

Page experience, however, is not just about how fast the entire page loads.

Instead, it’s more about the overall experience of the page as measured by Google’s page experience framework, or Core Web Vitals.

This is why you want to optimize page speed along the critical rendering path through the DOM, or document object model.

What Is The Critical Rendering Path?

The critical rendering path refers to all of the steps a browser takes to render the entire page, from when it first begins receiving data to the final render.

If optimized well, this process can take only a few milliseconds.

Optimizing for the critical rendering path means making sure that you optimize for the performance of rendering on many different devices.

This is accomplished by optimizing the critical rendering path to get to your first paint as quickly as possible.

Basically, you’re reducing the amount of time users spend looking at a blank white screen to display visual content ASAP (see 0.0s below).

An example of optimized vs. unoptimized rendering from Google. Screenshot from Google Web Fundamentals, January 2022

There’s a whole process on how to do this, outlined in Google’s developer guide documentation, but I will be focusing on one heavy hitter in particular: reducing render-blocking resources.

How Does The Critical Rendering Path Work?

The critical rendering path refers to the series of steps a browser takes on its journey to render a page, by converting the HTML, CSS, and JavaScript to actual pixels on the screen.

An example of the critical rendering path. Screenshot from Medium, January 2022

Essentially, the browser needs to request, receive, and parse all HTML and CSS files (plus some additional work) before it will start to render any visual content.

This process occurs within a fraction of a second (in most cases). Until the browser completes these steps, users will see a blank white page.

The following is an example of how users experience a page at different stages of the load process:

How users perceive page rendering. Screenshot from web.dev, January 2022

Improving the critical rendering path can thus improve the overall page experience, which can contribute to better performance on Core Web Vitals metrics.

How Do I Optimize The Critical Rendering Path?

In order to improve the critical rendering path, you have to analyze your render-blocking resources.

Render-blocking resources delay the initial rendering of the page and can negatively impact your Core Web Vitals scores.

This involves an optimization process of:

  • Reducing the number of resources that are critical to the rendering path, for example by deferring any render-blocking resources that can wait.
  • Prioritizing above-the-fold content and downloading important media assets as early as you possibly can (see the sketch below).
  • Compressing the file size of any remaining critical resources.
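
As a rough illustration of the second point, a resource hint such as rel="preload" can ask the browser to fetch an above-the-fold asset early. This is only a sketch; the file names (hero.webp, critical.css) are hypothetical:

```html
<head>
  <!-- Ask the browser to fetch the above-the-fold hero image early.
       "hero.webp" is a hypothetical file name used for illustration only. -->
  <link rel="preload" as="image" href="/images/hero.webp">

  <!-- Critical styles are kept small and loaded normally so the first paint isn't delayed. -->
  <link rel="stylesheet" href="/css/critical.css">
</head>
```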

By doing this, it’s possible to improve both Core Web Vitals and how your page physically renders to the user.

Why Should I Care?

Google’s user behavior data reports that most users abandon a slow site after about 3 seconds.

In addition to studies that show that reducing page load time and improving the page experience leads to greater user satisfaction, there are also several major Google updates on the horizon that you will want to prepare for.

Identifying and optimizing render-blocking resources will be critical to stay on top of the game when these updates hit.

Google will roll out the page experience update to desktop in 2022, beginning in February and finishing in March.

According to Google, the same three Core Web Vitals metrics (LCP, FID, and CLS) along with their associated thresholds will now be linked to desktop ranking.

Also, Google is working on a new (possibly experimental) Core Web Vitals metric that takes into account maximum event duration and total event duration.

Their explanation of the factors under consideration is:

Maximum event duration: the interaction latency is equal to the largest single event duration from any event in the interaction group.
Total event duration: the interaction latency is the sum of all event durations, ignoring any overlap.

With many studies linking reductions in page load times to improvements in valuable KPIs (conversions, bounce rate, time on site), improving site latency has become a top-of-mind business goal for many organizations.

SEO professionals are in a unique position to guide this effort, as our role is often to bridge the gap between business goals and web developers’ priorities.

Having the ability to audit a site, analyze results, and identify areas for improvement helps us to work with developers to improve performance and translate results to key stakeholders.

The Goals Of Optimizing Render-Blocking Resources

One of the primary goals of optimizing the critical rendering path is to make sure that the resources that are needed to render that important, above-the-fold content are loaded as quickly as is humanly possible.

Render-blocking resources, and any other resources that prevent the page from rendering quickly, must be deprioritized or deferred.

Each optimization point will contribute to the overall improvement of your page speed, page experience, and Core Web Vitals scores.

Why Improve Render-Blocking CSS?

Google has said many times that coding is not necessarily important for ranking.

But, by the same token, page speed improvements can potentially provide a ranking benefit, depending on the query.

CSS files are considered render-blocking resources.

Why is this?

Even though this usually happens within milliseconds, the browser won’t start to render any page content until it has requested, received, and processed all CSS styles.

If the browser rendered content before the styles arrived, all you would get is a bunch of plain, unstyled text and links.

This means that your page will basically be “naked” for lack of a better term.

Removing the CSS styles will result in a page that is literally unusable.

Most of the content would need repainting to look even remotely presentable to a user.

Example of CSS enabled vs CSS disabled.

If we examine the page rendering process, the gray box below represents the time the browser needs to fetch all CSS resources so it can begin constructing the CSSOM (the CSS object model).

This could take anywhere from a millisecond to several seconds, depending on what your server needs to do in order to load these resources.

It can also vary depending on the size and quantity of these CSS files.

The following render tree shows an example of a browser rendering all the files along with CSS within the DOM:

DOM, CSSOM, and render tree. Screenshot from Medium, January 2022

This rendering sequence, from the construction of the DOM to the final painting and compositing of the page, is known as the critical rendering path.

Because CSS is a render-blocking resource by default, it makes sense to improve CSS to the point where it doesn’t have any negative impact on the page rendering process at all.

The Official Google Recommendation States The Following:

“CSS is a render-blocking resource. Get it to the client as soon and as quickly as possible to optimize the time to first render.”

The HTML must be converted into something the browser can work with: the DOM. The same goes for CSS, which must be converted into the CSSOM.

By optimizing the CSS files within the DOM and CSSOM, you can help decrease the time it takes for a browser to render everything, which greatly contributes to an enhanced page experience.

Why Improve Render-Blocking JavaScript?

Did you know that loading JavaScript is not always required?

Downloading and parsing all JavaScript resources is not a technically required step for fully rendering a page.

But, the caveat to this is: Most modern sites are coded in such a way that JavaScript (for example the Bootstrap JS framework) is required in order to render the above-the-fold experience.

But if a browser encounters JavaScript files before the first render of a page, rendering can be paused until those files are fully downloaded and executed.

This default behavior can be changed by deferring JavaScript files so they load later.

One example is an inline JS function, such as an alert() built into the HTML, which can stop page rendering until the JavaScript code finishes executing.

This makes sense when you remember that JavaScript has the power to modify both HTML and CSS styles.

Because JavaScript can potentially change the entire page content, the browser pauses parsing and rendering by default until scripts have been handled – a built-in “just in case” behavior.

Official Google Recommendation:

“JavaScript can also block DOM construction and delay when the page is rendered. To deliver optimal performance … eliminate any unnecessary JavaScript from the critical rendering path.”
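
As a minimal sketch of what deferring looks like in practice (the script names are hypothetical), compare a default, render-blocking script tag with its defer and async variants:

```html
<!-- Render-blocking: HTML parsing pauses until this script is downloaded and executed. -->
<script src="/js/app.js"></script>

<!-- Deferred: downloads in parallel and executes only after the HTML has been parsed. -->
<script src="/js/app.js" defer></script>

<!-- Async: downloads in parallel and executes as soon as it arrives (order not guaranteed). -->
<script src="/js/analytics.js" async></script>
```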

How To Identify Render-Blocking Resources

To identify the critical rendering path and analyze critical resources:

  • Run a test using webpagetest.org and click on the “waterfall” image.
  • Focus on all resources requested and downloaded before the green “Start Render” line.

Analyze your waterfall view; look for CSS or JavaScript files that are requested before the green “start render” line but are not critical for loading above-the-fold content.

Example of start render. Screenshot from WebPageTest, January 2022
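
As a complement to the WebPageTest waterfall, Chromium-based browsers also expose a renderBlockingStatus field on Resource Timing entries. The snippet below is a quick, illustrative check you can run on a page you are auditing:

```html
<script>
  // List every resource the browser itself classified as render-blocking.
  // renderBlockingStatus is currently supported in Chromium-based browsers;
  // the inner lines can also be pasted directly into the DevTools console.
  performance.getEntriesByType("resource")
    .filter((entry) => entry.renderBlockingStatus === "blocking")
    .forEach((entry) => console.log("Render-blocking:", entry.name));
</script>
```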

After identifying a (potentially) render-blocking resource, test removing it to see if above-the-fold content is affected.

In my example, I noticed some JavaScript requests that may be critical.

Even so, it’s sometimes a good idea to test removing these scripts to see how shifting elements on the site affects the experience.

Example of WebPageTest results showing render-blocking resources. Screenshot from WebPageTest, January 2022

There are also other ways to improve such resources.

For non-critical JavaScript files, you may want to look into combining the files and deferring them by including these files at the bottom of your page.

For non-critical CSS files, you can also reduce how many CSS files you have by combining them into one file and compressing them.

Improving your coding techniques can also result in a file that’s faster to download and causes less impact on the rendering speed of your page.

Ways To Reduce Render-Blocking Elements On The Page

Once you determine that a render-blocking resource is not critical for rendering above-the-fold content, you can explore the methods available to defer it and improve the rendering of your page.

There are many solutions to this problem, from deferring JavaScript and CSS files to reducing the impact that CSS can have.

One possible solution is to not add CSS using the @import rule.

Make Sure Not To Add CSS Using The @Import Rule

Even though @import appears to keep your HTML file cleaner, it can actually create performance problems.

The @import rule causes the browser to process CSS more slowly, because the imported files can only be discovered and downloaded after the stylesheet that contains the rule has arrived.

Rendering will be entirely blocked until the process completes.

The best solution is to use the standard method of including a CSS stylesheet: a <link rel="stylesheet"> declaration in the HTML.
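
To make the difference concrete, here is a simplified before-and-after (the stylesheet names are hypothetical):

```html
<!-- Slower: main.css contains "@import url('fonts.css');", so fonts.css can only be
     requested after main.css has downloaded and been parsed – two sequential,
     render-blocking round trips. -->
<link rel="stylesheet" href="/css/main.css">

<!-- Faster: both stylesheets are listed in the HTML, discovered immediately,
     and downloaded in parallel. -->
<link rel="stylesheet" href="/css/main.css">
<link rel="stylesheet" href="/css/fonts.css">
```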

Minify Your CSS And JavaScript Files

If you are on WordPress, using a plugin to minify your CSS and JavaScript files can have a tremendous impact.

Minification strips unnecessary spaces, line breaks, and comments from a file, shrinking it further so you end up with a nice performance boost.
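
Here is a tiny illustration of what minification does to a stylesheet (the selector and values are made up):

```html
<!-- Before minification: readable, but full of whitespace and comments. -->
<style>
  /* Hero section */
  .hero {
    margin: 0 auto;
    padding: 16px;
  }
</style>

<!-- After minification: the same rules, far fewer bytes to download and parse. -->
<style>.hero{margin:0 auto;padding:16px}</style>
```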

Also, even if you are not on WordPress, a well-qualified developer can complete the process manually for you.

This will take more time but can be well worth it.

Minified files are usually much lighter than their unminified counterparts, which means the initial rendering will complete much faster.

After minification, you can also expect downloads to complete faster, because less time is needed to fetch non-render-blocking resources.

Use System Fonts Instead Of Third-Party Fonts

While third-party fonts may make a site look “prettier” on the surface, those font files often take longer to load and can contribute to your render-blocking resource problem.

The browser has to make external requests to download these font files before it can render your page, which may result in significantly longer download times.

If you’re on a team that has less than ideal development best practices, then it could stand to reason that you have many third-party font files that are not necessary for rendering your site.

In that case, removing the unnecessary font files can significantly reduce your render-blocking resources and contribute to your overall improvement in Core Web Vitals.

Using system fonts, on the other hand, only keeps the processing within the browser, without external requests.

Also, there are likely system fonts that look very similar to the third-party fonts you are using.
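
A common approach is a system font stack, which tells the browser to fall back to whatever font the operating system already provides, so no font file is downloaded at all. This is a general-purpose sketch, not a specific recommendation:

```html
<style>
  /* System font stack: the browser uses whichever of these fonts the user's OS
     already has installed, so no third-party font file is requested. */
  body {
    font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto,
                 "Helvetica Neue", Arial, sans-serif;
  }
</style>
```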

Improve Your Coding Techniques And Combine Files

If you’re working with code yourself, you may (or may not … no one is judging here) find that your techniques are less than optimal.

One example: you are using inline CSS everywhere, and this is causing processing and rendering glitches within the browser.

The easy solution is to move all of that inline CSS into the CSS stylesheet file, where it belongs.
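
For example, a repeated inline style attribute can be replaced with a single class defined once in the stylesheet (the class name and values here are arbitrary):

```html
<!-- Before: the same inline style repeated on every element. -->
<p style="color: #333; font-size: 18px; line-height: 1.6;">First paragraph</p>
<p style="color: #333; font-size: 18px; line-height: 1.6;">Second paragraph</p>

<!-- After: one rule in the stylesheet, referenced by class. -->
<style>
  .body-text { color: #333; font-size: 18px; line-height: 1.6; }
</style>
<p class="body-text">First paragraph</p>
<p class="body-text">Second paragraph</p>
```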

If another developer’s code is not up to par, this can create major issues with page rendering.

For example: Say that you have a page that’s coded using older techniques rather than modern and leaner ones.

Older techniques often carry significant code bloat, which slows down the rendering of the page.

To eliminate this, you can improve your coding techniques by creating leaner and less bloated code, resulting in a much better page rendering experience.

Combining files can also improve the situation.

For example: If you have eight or ten JavaScript files that all contribute to the same task, a developer can combine them into a single file for you.

And if those JavaScript files are less critical, they can also be deferred by moving them to the end of the HTML on the page, further reducing rendering delays.

By combining files and improving your coding techniques, you can contribute significantly to better page rendering experiences.
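
Putting those two ideas together might look something like this (the file names are hypothetical):

```html
<!-- Before: several small, render-blocking scripts scattered through the <head>. -->
<script src="/js/slider.js"></script>
<script src="/js/tabs.js"></script>
<script src="/js/accordion.js"></script>

<!-- After: one combined, minified bundle, deferred and placed at the end of the <body>,
     so it no longer blocks the first render. -->
<script src="/js/bundle.min.js" defer></script>
```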

Key Takeaways

Finding solutions to reduce render-blocking resources has been an SEO audit staple for a while now. It’s important for several reasons:

By reducing render-blocking resources, you make your site faster. You also position your site to take advantage of the elements that feed into Google’s overall page experience update.

You also put yourself in a position to benefit from any boost associated with Google’s Core Web Vitals metrics.

Core Web Vitals are not going away. They have become a critical optimization point for achieving the fastest rendering times possible within your current framework.

With new Core Web Vitals metrics being introduced in the future, making sure that you’re up-to-date on existing metrics is always a great idea.

Finding and repairing render-blocking resources also helps keep your website visitors happy and your site in top shape for prime time.

Featured Image: Naumova Marina/Shutterstock

How Compression Can Be Used To Detect Low Quality Pages


Search engines can use web page compressibility to identify duplicate pages, doorway pages with similar content, and pages stuffed with repetitive keywords. The concept of compressibility as a quality signal is not widely known, but it is useful foundational knowledge for SEO.

Although the following research paper demonstrates a successful use of on-page features for detecting spam, the deliberate lack of transparency by search engines makes it difficult to say with certainty if search engines are applying this or similar techniques.

What Is Compressibility?

In computing, compressibility refers to how much a file (data) can be reduced in size while retaining essential information, typically to maximize storage space or to allow more data to be transmitted over the Internet.

TL/DR Of Compression

Compression replaces repeated words and phrases with shorter references, reducing the file size by significant margins. Search engines typically compress indexed web pages to maximize storage space, reduce bandwidth, and improve retrieval speed, among other reasons.

This is a simplified explanation of how compression works:

  • Identify Patterns:
    A compression algorithm scans the text to find repeated words, patterns, and phrases.
  • Shorter Codes Take Up Less Space:
    Repeated words and phrases are replaced with codes and symbols that use less storage space than the originals, which results in a smaller file size.
  • Shorter References Use Fewer Bits:
    The “code” that stands in for the replaced words and phrases uses less data than the originals.

A bonus effect of using compression is that it can also be used to identify duplicate pages, doorway pages with similar content, and pages with repetitive keywords.

Research Paper About Detecting Spam

This research paper is significant because it was authored by distinguished computer scientists known for breakthroughs in AI, distributed computing, information retrieval, and other fields.

Marc Najork

One of the co-authors of the research paper is Marc Najork, a prominent research scientist who currently holds the title of Distinguished Research Scientist at Google DeepMind. He’s a co-author of the papers for TW-BERT, has contributed research for increasing the accuracy of using implicit user feedback like clicks, and worked on creating improved AI-based information retrieval (DSI++: Updating Transformer Memory with New Documents), among many other major breakthroughs in information retrieval.

Dennis Fetterly

Another of the co-authors is Dennis Fetterly, currently a software engineer at Google. He is listed as a co-inventor in a patent for a ranking algorithm that uses links, and is known for his research in distributed computing and information retrieval.

Those are just two of the distinguished researchers listed as co-authors of the 2006 Microsoft research paper about identifying spam through on-page content features. Among the several on-page content features the research paper analyzes is compressibility, which they discovered can be used as a classifier for indicating that a web page is spammy.

Detecting Spam Web Pages Through Content Analysis

Although the research paper was authored in 2006, its findings remain relevant today.

Then, as now, people attempted to rank hundreds or thousands of location-based web pages that were essentially duplicate content aside from city, region, or state names. Then, as now, SEOs often created web pages for search engines by excessively repeating keywords within titles, meta descriptions, headings, internal anchor text, and within the content to improve rankings.

Section 4.6 of the research paper explains:

“Some search engines give higher weight to pages containing the query keywords several times. For example, for a given query term, a page that contains it ten times may be higher ranked than a page that contains it only once. To take advantage of such engines, some spam pages replicate their content several times in an attempt to rank higher.”

The research paper explains that search engines compress web pages and use the compressed version to reference the original web page. They note that excessive amounts of redundant words result in a higher level of compressibility, so they set about testing whether there is a correlation between a high level of compressibility and spam.

They write:

“Our approach in this section to locating redundant content within a page is to compress the page; to save space and disk time, search engines often compress web pages after indexing them, but before adding them to a page cache.

…We measure the redundancy of web pages by the compression ratio, the size of the uncompressed page divided by the size of the compressed page. We used GZIP … to compress pages, a fast and effective compression algorithm.”
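
As an illustration only (the researchers ran GZIP over stored page bytes; this is just a quick browser-side approximation using the standard CompressionStream API), the compression ratio of a chunk of page text can be computed like this:

```html
<script>
  // Rough sketch: compression ratio = uncompressed size / gzip-compressed size.
  async function gzipCompressionRatio(html) {
    const original = new Blob([html]);
    const compressed = await new Response(
      original.stream().pipeThrough(new CompressionStream("gzip"))
    ).blob();
    return original.size / compressed.size;
  }

  // A page that repeats the same phrase many times compresses far better
  // (a higher ratio) than ordinary varied prose – the redundancy signal
  // the researchers measured.
  gzipCompressionRatio("best plumber in springfield ".repeat(200))
    .then((ratio) => console.log("Compression ratio:", ratio.toFixed(1)));
</script>
```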

High Compressibility Correlates To Spam

The results of the research showed that web pages with at least a compression ratio of 4.0 tended to be low quality web pages, spam. However, the highest rates of compressibility became less consistent because there were fewer data points, making it harder to interpret.

Figure 9: Prevalence of spam relative to compressibility of page.

The researchers concluded:

“70% of all sampled pages with a compression ratio of at least 4.0 were judged to be spam.”

But they also discovered that using the compression ratio by itself still resulted in false positives, where non-spam pages were incorrectly identified as spam:

“The compression ratio heuristic described in Section 4.6 fared best, correctly identifying 660 (27.9%) of the spam pages in our collection, while misidentifying 2,068 (12.0%) of all judged pages.

Using all of the aforementioned features, the classification accuracy after the ten-fold cross validation process is encouraging:

95.4% of our judged pages were classified correctly, while 4.6% were classified incorrectly.

More specifically, for the spam class 1,940 out of the 2,364 pages were classified correctly. For the non-spam class, 14,440 out of the 14,804 pages were classified correctly. Consequently, 788 pages were classified incorrectly.”

The next section describes an interesting discovery about how to increase the accuracy of using on-page signals for identifying spam.

Insight Into Quality Rankings

The research paper examined multiple on-page signals, including compressibility. They discovered that each individual signal (classifier) was able to find some spam, but that relying on any one signal on its own resulted in flagging non-spam pages as spam, which are commonly referred to as false positives.

The researchers made an important discovery that everyone interested in SEO should know, which is that using multiple classifiers increased the accuracy of detecting spam and decreased the likelihood of false positives. Just as important, the compressibility signal only identifies one kind of spam but not the full range of spam.

The takeaway is that compressibility is a good way to identify one kind of spam, but other kinds of spam are not caught by this single signal.

This is the part that every SEO and publisher should be aware of:

“In the previous section, we presented a number of heuristics for assaying spam web pages. That is, we measured several characteristics of web pages, and found ranges of those characteristics which correlated with a page being spam. Nevertheless, when used individually, no technique uncovers most of the spam in our data set without flagging many non-spam pages as spam.

For example, considering the compression ratio heuristic described in Section 4.6, one of our most promising methods, the average probability of spam for ratios of 4.2 and higher is 72%. But only about 1.5% of all pages fall in this range. This number is far below the 13.8% of spam pages that we identified in our data set.”

So, even though compressibility was one of the better signals for identifying spam, it still was unable to uncover the full range of spam within the dataset the researchers used to test the signals.

Combining Multiple Signals

The above results indicated that individual signals of low quality are less accurate. So they tested using multiple signals, and discovered that combining multiple on-page signals for detecting spam resulted in better accuracy with fewer pages misclassified as spam.

The researchers explained that they tested the use of multiple signals:

“One way of combining our heuristic methods is to view the spam detection problem as a classification problem. In this case, we want to create a classification model (or classifier) which, given a web page, will use the page’s features jointly in order to (correctly, we hope) classify it in one of two classes: spam and non-spam.”

These are their conclusions about using multiple signals:

“We have studied various aspects of content-based spam on the web using a real-world data set from the MSNSearch crawler. We have presented a number of heuristic methods for detecting content based spam. Some of our spam detection methods are more effective than others, however when used in isolation our methods may not identify all of the spam pages. For this reason, we combined our spam-detection methods to create a highly accurate C4.5 classifier. Our classifier can correctly identify 86.2% of all spam pages, while flagging very few legitimate pages as spam.”

Key Insight:

Misidentifying “very few legitimate pages as spam” was a significant breakthrough. The important insight that everyone involved with SEO should take away from this is that one signal by itself can result in false positives. Using multiple signals increases the accuracy.

What this means is that SEO tests of isolated ranking or quality signals will not yield reliable results that can be trusted for making strategy or business decisions.

Takeaways

We don’t know for certain whether search engines use compressibility, but it’s an easy-to-use signal that, combined with others, could catch simple kinds of spam, like thousands of city-name doorway pages with similar content. Even if search engines don’t use this signal, it shows how easy it is to catch that kind of search engine manipulation, and that it’s something search engines are well able to handle today.

Here are the key points of this article to keep in mind:

  • Doorway pages with duplicate content are easy to catch because they compress at a higher ratio than normal web pages.
  • Groups of web pages with a compression ratio above 4.0 were predominantly spam.
  • Negative quality signals used by themselves to catch spam can lead to false positives.
  • In this particular test, they discovered that on-page negative quality signals only catch specific types of spam.
  • When used alone, the compressibility signal only catches redundancy-type spam, fails to detect other forms of spam, and leads to false positives.
  • Combining quality signals improves spam detection accuracy and reduces false positives.
  • Search engines today have a higher accuracy of spam detection with the use of AI like Spam Brain.

Read the research paper, which is linked from the Google Scholar page of Marc Najork:

Detecting spam web pages through content analysis

Featured Image by Shutterstock/pathdoc

New Google Trends SEO Documentation


Google Search Central published new documentation on Google Trends, explaining how to use it for search marketing. This guide serves as an easy to understand introduction for newcomers and a helpful refresher for experienced search marketers and publishers.

The new guide has six sections:

  1. About Google Trends
  2. Tutorial on monitoring trends
  3. How to do keyword research with the tool
  4. How to prioritize content with Trends data
  5. How to use Google Trends for competitor research
  6. How to use Google Trends for analyzing brand awareness and sentiment

The section about monitoring trends explains that there are two kinds of rising trends, general and specific, both of which can be useful for developing content to publish on a site.

Using the Explore tool, you can leave the search box empty and view the current rising trends worldwide or use a drop down menu to focus on trends in a specific country. Users can further filter rising trends by time periods, categories and the type of search. The results show rising trends by topic and by keywords.

To search for specific trends, users just need to enter their queries and then filter them by country, time, category, and type of search.

The section called Content Calendar describes how to use Google Trends to understand which content topics to prioritize.

Google explains:

“Google Trends can be helpful not only to get ideas on what to write, but also to prioritize when to publish it. To help you better prioritize which topics to focus on, try to find seasonal trends in the data. With that information, you can plan ahead to have high quality content available on your site a little before people are searching for it, so that when they do, your content is ready for them.”

Read the new Google Trends documentation:

Get started with Google Trends

Featured Image by Shutterstock/Luis Molinero

All the best things about Ahrefs Evolve 2024


Hey all, I’m Rebekah and I am your Chosen One to “do a blog post for Ahrefs Evolve 2024”.

What does that entail exactly? I don’t know. In fact, Sam Oh asked me yesterday what the title of this post would be. “Is it like…Ahrefs Evolve 2024: Recap of day 1 and day 2…?” 

Even as I nodded, I couldn’t get over how absolutely boring that sounded. So I’m going to do THIS instead: a curation of all the best things YOU loved about Ahrefs’ first conference, lifted directly from X.

Let’s go!

OUR HUGE SCREEN

CONFERENCE VENUE ITSELF

It was recently named the best new skyscraper in the world, by the way.

 

OUR AMAZING SPEAKER LINEUP – SUPER INFORMATIVE, USEFUL TALKS!

 

GREAT MUSIC

 

AMAZING GOODIES

 

SELFIE BATTLE

Some background: Tim and Sam have a challenge going on to see who can take the most selfies with all of you. Last I heard, Sam was winning – but there is room for a comeback yet!

 

THAT BELL

Everybody’s just waiting for this one.

 

STICKER WALL

AND, OF COURSE…ALL OF YOU!

 

There’s a TON more content on LinkedIn – click here – but I have limited time to get this post up and can’t quite figure out how to embed LinkedIn posts so…let’s stop here for now. I’ll keep updating as we go along!


