10 Steps To Boost Your Site’s Crawlability And Indexability
Keywords and content may be the twin pillars upon which most search engine optimization strategies are built, but they’re far from the only ones that matter.
Less commonly discussed but equally important – not just to users but to search bots – is your website’s discoverability.
There are roughly 50 billion webpages spread across 1.93 billion websites on the internet. That is far too many for any human team to explore, so search engine bots, also called spiders, play a significant role.
These bots determine each page’s content by following links from website to website and page to page. This information is compiled into a vast database, or index, of URLs, which are then put through the search engine’s algorithm for ranking.
This two-step process of navigating and understanding your site is called crawling and indexing.
As an SEO professional, you’ve undoubtedly heard these terms before, but let’s define them just for clarity’s sake:
- Crawlability refers to how well search engine bots can access and crawl your webpages.
- Indexability measures the search engine’s ability to analyze your webpages and add them to its index.
As you can probably imagine, these are both essential parts of SEO.
If your site suffers from poor crawlability (for example, many broken links and dead ends), search engine crawlers won’t be able to access all your content, which will exclude it from the index.
Indexability, on the other hand, is vital because pages that are not indexed will not appear in search results. How can Google rank a page it hasn’t included in its database?
The crawling and indexing process is a bit more complicated than we’ve discussed here, but that’s the basic overview.
If you’re looking for a more in-depth discussion of how they work, Dave Davies has an excellent piece on crawling and indexing.
How To Improve Crawling And Indexing
Now that we’ve covered just how important these two processes are, let’s look at some elements of your website that affect crawling and indexing – and discuss ways to optimize your site for them.
1. Improve Page Loading Speed
With billions of webpages to catalog, web spiders can only spend a limited amount of time and resources on your site before moving on. This allocation is sometimes referred to as a crawl budget.
If your pages are slow to load, crawlers will get through fewer of them before that budget runs out, which means parts of your site may remain uncrawled and unindexed. And as you can imagine, this is not good for SEO purposes.
Thus, it’s a good idea to regularly evaluate your page speed and improve it wherever you can.
You can use Google Search Console or tools like Screaming Frog to check your website’s speed.
If your site is running slow, take steps to alleviate the problem. This could include upgrading your server or hosting platform, enabling compression, minifying CSS, JavaScript, and HTML, and eliminating or reducing redirects.
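For example, enabling compression is usually a small server configuration change. Here is a minimal sketch for nginx, assuming you run nginx (Apache and most hosting platforms have equivalent settings):

```nginx
# Minimal sketch: enable gzip compression for text-based assets in nginx.
# Exact settings and the right place to put them vary by server setup.
gzip on;
gzip_comp_level 5;
gzip_types text/css application/javascript application/json image/svg+xml;
```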
Figure out what’s slowing down your load time by checking your Core Web Vitals report. If you want more refined information about your goals, particularly from a user-centric view, Google Lighthouse is an open-source tool you may find very useful.
2. Strengthen Internal Link Structure
A good site structure and internal linking are foundational elements of a successful SEO strategy. A disorganized website is difficult for search engines to crawl, which makes strengthening your internal linking one of the most important things you can do for your site.
But don’t just take our word for it. Here’s what Google’s search advocate John Mueller had to say about it:
“Internal linking is super critical for SEO. I think it’s one of the biggest things that you can do on a website to kind of guide Google and guide visitors to the pages that you think are important.”
If your internal linking is poor, you also risk orphaned pages: pages that no other page on your website links to. Because no internal links point to these pages, the only way for search engines to find them is through your sitemap.
To eliminate this problem and others caused by poor structure, create a logical internal structure for your site.
Your homepage should link to subpages supported by pages further down the pyramid. These subpages should then have contextual links where it feels natural.
Another thing to keep an eye on is broken links, including those caused by typos in the URL. A mistyped URL leads to the dreaded 404 error: page not found.
The problem is that broken links don’t just fail to help your crawlability; they actively harm it.
Double-check your URLs, particularly if you’ve recently undergone a site migration, bulk delete, or structure change. And make sure you’re not linking to old or deleted URLs.
Other best practices for internal linking include having a good amount of linkable content (content is always king), using anchor text instead of linked images, and using a “reasonable number” of links on a page (whatever that means).
Oh yeah, and ensure you’re using follow links for internal links.
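In practice, a crawlable internal link is just a standard HTML anchor with descriptive anchor text and no nofollow attribute. A quick sketch (the URL and wording are placeholders):

```html
<!-- Descriptive anchor text, no rel="nofollow", so crawlers can follow it -->
<a href="/blog/crawl-budget-guide/">Learn how crawl budget affects large sites</a>

<!-- Avoid relying on image-only links; if you must, give the image alt text -->
<a href="/blog/crawl-budget-guide/"><img src="/img/crawl.png" alt="Crawl budget guide"></a>
```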
3. Submit Your Sitemap To Google
Given enough time, and assuming you haven’t told it not to, Google will crawl your site. And that’s great, but it’s not helping your search ranking while you’re waiting.
If you’ve recently made changes to your content and want Google to know about it immediately, it’s a good idea to submit a sitemap to Google Search Console.
A sitemap is a file that lives in your root directory. It serves as a roadmap for search engines, with direct links to every page on your site.
This is beneficial for indexability because it allows Google to learn about multiple pages simultaneously. Whereas a crawler may have to follow five internal links to discover a deep page, by submitting an XML sitemap, it can find all of your pages with a single visit to your sitemap file.
Submitting your sitemap to Google is particularly useful if you have a deep website, frequently add new pages or content, or your site does not have good internal linking.
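If you don’t already have one, a bare-bones XML sitemap looks something like this (the URLs and dates are placeholders; most CMS platforms and SEO plugins can generate the file for you):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-09-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/deep-page/</loc>
    <lastmod>2024-08-15</lastmod>
  </url>
</urlset>
```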
4. Update Robots.txt Files
You probably want to have a robots.txt file for your website. While it’s not required, the vast majority of websites use one. If you’re unfamiliar with it, it’s a plain text file that lives in your website’s root directory.
It tells search engine crawlers how you would like them to crawl your site. Its primary use is to manage bot traffic and keep your site from being overloaded with requests.
Where this comes in handy in terms of crawlability is limiting which pages Google crawls and indexes. For example, you probably don’t want pages like directories, shopping carts, and tags in Google’s index.
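As a minimal sketch (the paths are placeholders for your own site structure), a robots.txt that keeps cart and tag pages out of the crawl while pointing bots at your sitemap might look like this:

```
# Hypothetical example - adjust the paths to match your own site
User-agent: *
Disallow: /cart/
Disallow: /tag/

Sitemap: https://www.example.com/sitemap.xml
```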
Of course, this helpful text file can also negatively impact your crawlability. It’s well worth looking at your robots.txt file (or having an expert do it if you’re not confident in your abilities) to see if you’re inadvertently blocking crawler access to your pages.
Some common mistakes in robots.txt files include:
- Robots.txt is not in the root directory.
- Poor use of wildcards.
- Noindex in robots.txt.
- Blocked scripts, stylesheets and images.
- No sitemap URL.
For an in-depth examination of each of these issues – and tips for resolving them – read this article.
5. Check Your Canonicalization
Canonical tags consolidate signals from multiple URLs into a single canonical URL. This can be a helpful way to tell Google to index the pages you want while skipping duplicates and outdated versions.
But this opens the door for rogue canonical tags. These refer to older versions of a page that no longer exist, leading to search engines indexing the wrong pages and leaving your preferred pages invisible.
To eliminate this problem, use a URL inspection tool to scan for rogue tags and remove them.
If your website is geared towards international traffic, i.e., if you direct users in different countries to different canonical pages, you need to have canonical tags for each language. This ensures your pages are being indexed in each language your site is using.
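As a sketch (the URLs are placeholders), each language version of a page should point to itself as canonical and reference its alternates with hreflang:

```html
<!-- On the English page: self-referencing canonical plus hreflang alternates -->
<link rel="canonical" href="https://www.example.com/en/pricing/">
<link rel="alternate" hreflang="en" href="https://www.example.com/en/pricing/">
<link rel="alternate" hreflang="de" href="https://www.example.com/de/preise/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/en/pricing/">
```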
6. Perform A Site Audit
Now that you’ve performed all these other steps, there’s still one final thing you need to do to ensure your site is optimized for crawling and indexing: a site audit. And that starts with checking the percentage of pages Google has indexed for your site.
Check Your Indexability Rate
Your indexability rate is the number of pages in Google’s index divided by the number of pages on your website. For example, if 460 of your 500 pages are indexed, your indexability rate is 92%.
You can find out how many pages are in the Google index in Google Search Console by going to the “Pages” report under Indexing, and you can check the total number of pages on your website from your CMS admin panel.
There’s a good chance your site will have some pages you don’t want indexed, so this number likely won’t be 100%. But if the indexability rate is below 90%, then you have issues that need to be investigated.
You can get the list of non-indexed URLs from Search Console and run an audit on them. This could help you understand what is causing the issue.
Another useful site auditing tool included in Google Search Console is the URL Inspection Tool. This allows you to see what Google spiders see, which you can then compare to real webpages to understand what Google is unable to render.
Audit Newly Published Pages
Any time you publish new pages to your website or update your most important pages, you should make sure they’re being indexed. Go into Google Search Console and make sure they’re all showing up.
If you’re still having issues, an audit can also give you insight into which other parts of your SEO strategy are falling short, so it’s a double win. You can scale your audit process with site crawling tools like Screaming Frog.
7. Check For Low-Quality Or Duplicate Content
If Google doesn’t view your content as valuable to searchers, it may decide it’s not worthy to index. This thin content, as it’s known, could be poorly written content (e.g., filled with grammar mistakes and spelling errors), boilerplate content that’s not unique to your site, or content with no external signals about its value and authority.
To find this, determine which pages on your site are not being indexed, and then review the target queries for them. Are they providing high-quality answers to the questions of searchers? If not, replace or refresh them.
Duplicate content is another reason bots can get hung up while crawling your site. Basically, what happens is that your coding structure has confused the crawler, and it doesn’t know which version to index. This could be caused by things like session IDs, redundant content elements, and pagination issues.
Sometimes, this will trigger an alert in Google Search Console, telling you Google is encountering more URLs than it thinks it should. If you haven’t received one, check your crawl results for things like duplicate or missing tags, or URLs with extra characters that could be creating extra work for bots.
Correct these issues by fixing tags, removing pages or adjusting Google’s access.
8. Eliminate Redirect Chains And Internal Redirects
As websites evolve, redirects are a natural byproduct, directing visitors from one page to a newer or more relevant one. But while they’re common on most sites, if you’re mishandling them, you could be inadvertently sabotaging your own indexing.
There are several mistakes you can make when creating redirects, but one of the most common is redirect chains. These occur when there’s more than one redirect between the link clicked on and the destination. Google doesn’t look on this as a positive signal.
In more extreme cases, you may create a redirect loop, in which a page redirects to another page, which redirects to another page, and so on, until it eventually links back to the very first page. In other words, you’ve created a never-ending loop that goes nowhere.
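As a simple illustration with hypothetical URLs:

```
Redirect chain (avoid):    /old-page -> /newer-page -> /final-page
Single redirect (better):  /old-page -> /final-page
Redirect loop (broken):    /page-a -> /page-b -> /page-a
```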
Check your site’s redirects using Screaming Frog, Redirect-Checker.org or a similar tool.
9. Fix Broken Links
In a similar vein, broken links can wreak havoc on your site’s crawlability. You should regularly be checking your site to ensure you don’t have broken links, as this will not only hurt your SEO results, but will frustrate human users.
There are a number of ways you can find broken links on your site, including manually evaluating each and every link on your site (header, footer, navigation, in-text, etc.), or you can use Google Search Console, Analytics or Screaming Frog to find 404 errors.
Once you’ve found broken links, you have three options for fixing them: redirecting them (see the section above for caveats), updating them or removing them.
10. IndexNow
IndexNow is a relatively new protocol that allows URLs to be submitted simultaneously to multiple search engines via an API. It works like a super-charged version of submitting an XML sitemap by alerting search engines about new URLs and changes to your website.
Basically, what it does is provide crawlers with a roadmap to your site upfront. They enter your site with the information they need, so there’s no need to constantly recheck the sitemap. And unlike XML sitemaps, it allows you to inform search engines about non-200 status code pages.
Implementing it is easy and only requires you to generate an API key, host it in your root directory or another specified location, and submit your URLs in the recommended format.
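As a rough sketch, a submission is a single POST request to an IndexNow endpoint with your key and the changed URLs. The key, key location, and URLs below are placeholders, and the snippet assumes a modern JavaScript environment (Node 18+ or a browser module) where fetch and top-level await are available:

```javascript
// Minimal sketch of an IndexNow submission (all values are placeholders).
const res = await fetch("https://api.indexnow.org/indexnow", {
  method: "POST",
  headers: { "Content-Type": "application/json; charset=utf-8" },
  body: JSON.stringify({
    host: "www.example.com",
    key: "your-indexnow-key",
    keyLocation: "https://www.example.com/your-indexnow-key.txt",
    urlList: ["https://www.example.com/updated-page/"],
  }),
});
// A 200 or 202 response generally means the submission was accepted.
console.log(res.status);
```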
Wrapping Up
By now, you should have a good understanding of your website’s indexability and crawlability. You should also understand just how important these two factors are to your search rankings.
If Google’s spiders can’t crawl and index your site, it doesn’t matter how many keywords, backlinks, and tags you use – you won’t appear in search results.
And that’s why it’s essential to regularly check your site for anything that could be waylaying, misleading, or misdirecting bots.
So, get yourself a good set of tools and get started. Be diligent and mindful of the details, and you’ll soon have Google’s spiders swarming all over your site.
Featured Image: Roman Samborskyi/Shutterstock
Google Revamps Entire Crawler Documentation
Google has launched a major revamp of its Crawler documentation, shrinking the main overview page and splitting content into three new, more focused pages. Although the changelog downplays the changes, there is an entirely new section and basically a rewrite of the entire crawler overview page. The additional pages allow Google to increase the information density of all the crawler pages and improve topical coverage.
What Changed?
Google’s documentation changelog notes two changes, but there are actually many more.
Here are some of the changes:
- Added an updated user agent string for the GoogleProducer crawler
- Added content encoding information
- Added a new section about technical properties
The technical properties section contains entirely new information that didn’t previously exist. There are no changes to the crawler behavior, but by creating three topically specific pages Google is able to add more information to the crawler overview page while simultaneously making it smaller.
This is the new information about content encoding (compression):
“Google’s crawlers and fetchers support the following content encodings (compressions): gzip, deflate, and Brotli (br). The content encodings supported by each Google user agent is advertised in the Accept-Encoding header of each request they make. For example, Accept-Encoding: gzip, deflate, br.”
There is additional information about crawling over HTTP/1.1 and HTTP/2, plus a statement about their goal being to crawl as many pages as possible without impacting the website server.
What Is The Goal Of The Revamp?
The change to the documentation was made because the overview page had become large, and additional crawler information would have made it even larger. A decision was made to break the page into three subtopics so that the specific crawler content could continue to grow while making room for more general information on the overview page. Spinning off subtopics into their own pages is a brilliant solution to the problem of how best to serve users.
This is how the documentation changelog explains the change:
“The documentation grew very long which limited our ability to extend the content about our crawlers and user-triggered fetchers.
…Reorganized the documentation for Google’s crawlers and user-triggered fetchers. We also added explicit notes about what product each crawler affects, and added a robots.txt snippet for each crawler to demonstrate how to use the user agent tokens. There were no meaningful changes to the content otherwise.”
The changelog downplays the changes by describing them as a reorganization, but the crawler overview is substantially rewritten, in addition to the creation of three brand-new pages.
While the content remains substantially the same, the division of it into sub-topics makes it easier for Google to add more content to the new pages without continuing to grow the original page. The original page, called Overview of Google crawlers and fetchers (user agents), is now truly an overview with more granular content moved to standalone pages.
Google published three new pages:
- Common crawlers
- Special-case crawlers
- User-triggered fetchers
1. Common Crawlers
As the title says, these are common crawlers, some of which are associated with GoogleBot, including the Google-InspectionTool, which uses the GoogleBot user agent. All of the bots listed on this page obey the robots.txt rules.
These are the documented Google crawlers:
- Googlebot
- Googlebot Image
- Googlebot Video
- Googlebot News
- Google StoreBot
- Google-InspectionTool
- GoogleOther
- GoogleOther-Image
- GoogleOther-Video
- Google-CloudVertexBot
- Google-Extended
2. Special-Case Crawlers
These are crawlers that are associated with specific products, crawl by agreement with users of those products, and operate from IP addresses that are distinct from the GoogleBot crawler IP addresses.
List of Special-Case Crawlers:
- AdSense (user agent for robots.txt: Mediapartners-Google)
- AdsBot (user agent for robots.txt: AdsBot-Google)
- AdsBot Mobile Web (user agent for robots.txt: AdsBot-Google-Mobile)
- APIs-Google (user agent for robots.txt: APIs-Google)
- Google-Safety (user agent for robots.txt: Google-Safety)
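As a sketch of how those user agent tokens are used in practice (the path is a placeholder), a robots.txt rule targeting one of these crawlers might look like this:

```
# Hypothetical example: per Google's documentation, AdsBot ignores the
# global * rules, so it must be named explicitly to be restricted.
User-agent: AdsBot-Google
Disallow: /internal-tools/
```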
3. User-Triggered Fetchers
The User-triggered Fetchers page covers bots that are activated by user request, explained like this:
“User-triggered fetchers are initiated by users to perform a fetching function within a Google product. For example, Google Site Verifier acts on a user’s request, or a site hosted on Google Cloud (GCP) has a feature that allows the site’s users to retrieve an external RSS feed. Because the fetch was requested by a user, these fetchers generally ignore robots.txt rules. The general technical properties of Google’s crawlers also apply to the user-triggered fetchers.”
The documentation covers the following bots:
- Feedfetcher
- Google Publisher Center
- Google Read Aloud
- Google Site Verifier
Takeaway:
Google’s crawler overview page became overly comprehensive and possibly less useful because people don’t always need a comprehensive page; they’re often just interested in specific information. The overview page is now less specific but also easier to understand. It serves as an entry point where users can drill down to more specific subtopics related to the three kinds of crawlers.
This change offers insights into how to freshen up a page that might be underperforming because it has become too comprehensive. Breaking a comprehensive page out into standalone pages allows the subtopics to address specific users’ needs and possibly makes them more useful should they rank in the search results.
I would not say that the change reflects anything in Google’s algorithm; it only reflects how Google updated its documentation to make it more useful and set it up for adding even more information.
Read Google’s New Documentation
Overview of Google crawlers and fetchers (user agents)
List of Google’s common crawlers
List of Google’s special-case crawlers
List of Google user-triggered fetchers
Featured Image by Shutterstock/Cast Of Thousands
Client-Side Vs. Server-Side Rendering
Faster webpage loading times play a big part in user experience and SEO, with page load speed being a key factor in Google’s algorithm.
A front-end web developer must decide the best way to render a website so it delivers a fast experience and dynamic content.
Two popular rendering methods include client-side rendering (CSR) and server-side rendering (SSR).
All websites have different requirements, so understanding the difference between client-side and server-side rendering can help you render your website to match your business goals.
Google & JavaScript
Google has extensive documentation on how it handles JavaScript, and Googlers offer insights and answer JavaScript questions regularly through various formats – both official and unofficial.
For example, in a Search Off The Record podcast, it was discussed that Google renders all pages for Search, including JavaScript-heavy ones.
This sparked a substantial conversation on LinkedIn, and a couple of additional takeaways from both the podcast and the ensuing discussions are that:
- Google doesn’t track how expensive it is to render specific pages.
- Google renders all pages to see the content – regardless of whether they use JavaScript or not.
The conversation as a whole has helped to dispel many myths and misconceptions about how Google might have approached JavaScript and allocated resources.
Martin Splitt’s full comment on LinkedIn covering this was:
“We don’t keep track of “how expensive was this page for us?” or something. We know that a substantial part of the web uses JavaScript to add, remove, change content on web pages. We just have to render, to see it all. It doesn’t really matter if a page does or does not use JavaScript, because we can only be reasonably sure to see all content once it’s rendered.”
Martin also confirmed that there is a queue and a potential delay between crawling and indexing, but the delay isn’t simply a function of whether a page uses JavaScript, and JavaScript is not some “opaque” root cause of URLs not being indexed.
General JavaScript Best Practices
Before we get into the client-side versus server-side debate, it’s important that we also follow general best practices for either of these approaches to work:
- Don’t block JavaScript resources through Robots.txt or server rules.
- Avoid render blocking.
- Avoid injecting JavaScript in the DOM.
What Is Client-Side Rendering, And How Does It Work?
Client-side rendering is a relatively new approach to rendering websites.
It became popular when JavaScript libraries started integrating it, with Angular and React.js being some of the best examples of libraries used in this type of rendering.
It works by rendering a website’s JavaScript in your browser rather than on the server.
Instead of getting all the content from the HTML document, the browser receives a bare-bones HTML document that references the JavaScript files.
While the initial load time is a bit slow, subsequent page loads will be rapid, as they aren’t reliant on a different HTML page per route.
From managing logic to retrieving data from an API, client-rendered sites do everything “independently.” The page is available after the code is executed because every page the user visits and its corresponding URL are created dynamically.
The CSR process is as follows:
- The user enters the URL they wish to visit in the address bar.
- A data request is sent to the server at the specified URL.
- On the client’s first request for the site, the server delivers the static files (CSS and HTML) to the client’s browser.
- The client browser will download the HTML content first, followed by the JavaScript. The HTML connects to the JavaScript files, which start the loading process by displaying any loading indicators the developer has defined. At this stage, the website is still not visible to the user.
- After the JavaScript is downloaded, content is dynamically generated on the client’s browser.
- The web content becomes visible as the client navigates and interacts with the website.
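To make that concrete, here is a minimal, framework-free sketch of client-side rendering; the API endpoint and markup are placeholders, and a real app would typically use a library like React instead:

```html
<!-- Bare-bones HTML shell the server returns -->
<div id="app">Loading…</div>

<script>
  // The browser, not the server, fetches the data and builds the content.
  fetch("/api/products") // placeholder API endpoint
    .then((response) => response.json())
    .then((products) => {
      document.getElementById("app").innerHTML = products
        .map((p) => `<h2>${p.name}</h2><p>${p.price}</p>`)
        .join("");
    });
</script>
```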
What Is Server-Side Rendering, And How Does It Work?
Server-side rendering is the more common technique for displaying information on a screen.
The web browser submits a request to the server, which fetches any user-specific data needed to populate the page and sends a fully rendered HTML page back to the client.
Every time the user visits a new page on the site, the server will repeat the entire process.
Here’s how the SSR process goes step-by-step:
- The user enters the URL they wish to visit in the address bar.
- The server serves a ready-to-be-rendered HTML response to the browser.
- The browser renders the page (now viewable) and downloads JavaScript.
- The browser executes the JavaScript (React, for example), making the page interactive.
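For comparison, here is a minimal server-side rendering sketch using Node.js with Express (an assumption for illustration; the same idea applies to Next.js, PHP, or any other server stack). The data helper is hypothetical:

```javascript
// Minimal SSR sketch with Express (placeholder data source and markup).
const express = require("express");
const app = express();

app.get("/products", async (req, res) => {
  const products = await getProductsFromDatabase(); // hypothetical helper
  const listItems = products
    .map((p) => `<li>${p.name} - ${p.price}</li>`)
    .join("");
  // The browser (and crawler) receives fully rendered, crawlable HTML.
  res.send(`<!DOCTYPE html>
    <html><body><ul>${listItems}</ul></body></html>`);
});

app.listen(3000);
```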
What Are The Differences Between Client-Side And Server-Side Rendering?
The main difference between these two rendering approaches is in the algorithms of their operation. CSR shows an empty page before loading, while SSR displays a fully-rendered HTML page on the first load.
This gives server-side rendering a speed advantage over client-side rendering, as the browser doesn’t need to process large JavaScript files before content appears. Content is often visible almost immediately.
Search engines can crawl the fully rendered content directly, making it easier to index your webpages for better SEO. The text crawlers receive is precisely what users see in the browser.
However, client-side rendering is a cheaper option for website owners.
It relieves the load on your servers, passing the responsibility of rendering to the client (the bot or user trying to view your page). It also offers rich site interactions by providing fast website interaction after the initial load.
Fewer HTTP requests are made to the server with CSR, unlike in SSR, where each page is rendered from scratch, resulting in a slower transition between pages.
SSR can also buckle under a high server load if the server receives many simultaneous requests from different users.
The drawback of CSR is the longer initial loading time. This can impact SEO; crawlers might not wait for the content to load and exit the site.
Google’s two-phased approach (crawling and indexing the HTML first, rendering the JavaScript later) raises the possibility of crawlers seeing empty content or missing JavaScript-injected content on your page. Remember that, in most cases, CSR requires an external library.
When To Use Server-Side Rendering
If you want to improve your Google visibility and rank high in the search engine results pages (SERPs), server-side rendering is the number one choice.
E-learning websites, online marketplaces, and applications with a straightforward user interface with fewer pages, features, and dynamic data all benefit from this type of rendering.
When To Use Client-Side Rendering
Client-side rendering is usually paired with dynamic web apps like social networks or online messengers. This is because these apps’ information constantly changes and must deal with large and dynamic data to perform fast updates to meet user demand.
The focus here is on a rich site with many users, prioritizing the user experience over SEO.
Which Is Better: Server-Side Or Client-Side Rendering?
When determining which approach is best, you need to not only take into consideration your SEO needs but also how the website works for users and delivers value.
Think about your project and how your chosen rendering will impact your position in the SERPs and your website’s user experience.
Generally, CSR is better for dynamic websites, while SSR is best suited for static websites.
Content Refresh Frequency
Websites that feature highly dynamic information, such as gambling or FOREX websites, update their content every second, meaning you’d likely choose CSR over SSR in this scenario – or choose to use CSR for specific landing pages and not all pages, depending on your user acquisition strategy.
SSR is more effective if your site’s content doesn’t require much user interaction. It positively influences accessibility, page load times, SEO, and social media support.
On the other hand, CSR is excellent for providing cost-effective rendering for web applications, and it’s easier to build and maintain; it’s better for First Input Delay (FID).
Another CSR consideration is that meta tags (description, title), canonical URLs, and Hreflang tags should be rendered server-side or presented in the initial HTML response for the crawlers to identify them as soon as possible, and not only appear in the rendered HTML.
Platform Considerations
CSR technology tends to be more expensive to maintain because the hourly rate for developers skilled in React.js or Node.js is generally higher than that for PHP or WordPress developers.
Additionally, there are fewer ready-made plugins or out-of-the-box solutions available for CSR frameworks compared to the larger plugin ecosystem that WordPress users have access to.
For those considering a headless WordPress setup, such as using Frontity, it’s important to note that you’ll need to hire both React.js developers and PHP developers.
This is because headless WordPress relies on React.js for the front end while still requiring PHP for the back end.
It’s important to remember that not all WordPress plugins are compatible with headless setups, which could limit functionality or require additional custom development.
Website Functionality & Purpose
Sometimes, you don’t have to choose between the two as hybrid solutions are available. Both SSR and CSR can be implemented within a single website or webpage.
For example, in an online marketplace, pages with product descriptions can be rendered on the server, as they are static and need to be easily indexed by search engines.
Staying with ecommerce, if you have high levels of personalization for users on a number of pages, you won’t be able to server-side render that personalized content for bots, so you will need to define some form of default content for Googlebot, which crawls cookieless and stateless.
Pages like user accounts don’t need to be ranked in the search engine results pages (SERPs), so a CSR approach might be better for UX.
Both CSR and SSR are popular approaches to rendering websites. You and your team need to make this decision at the initial stage of product development.
Featured Image: TippaPatt/Shutterstock
HubSpot Rolls Out AI-Powered Marketing Tools
HubSpot announced a push into AI this week at its annual Inbound marketing conference, launching “Breeze.”
Breeze is an artificial intelligence layer integrated across the company’s marketing, sales, and customer service software.
According to HubSpot, the goal is to provide marketers with easier, faster, and more unified solutions as digital channels become oversaturated.
Karen Ng, VP of Product at HubSpot, tells Search Engine Journal in an interview:
“We’re trying to create really powerful tools for marketers to rise above the noise that’s happening now with a lot of this AI-generated content. We might help you generate titles or a blog content…but we do expect kind of a human there to be a co-assist in that.”
Breeze AI Covers Copilot, Workflow Agents, Data Enrichment
The Breeze layer includes three main components.
Breeze Copilot
An AI assistant that provides personalized recommendations and suggestions based on data in HubSpot’s CRM.
Ng explained:
“It’s a chat-based AI companion that assists with tasks everywhere – in HubSpot, the browser, and mobile.”
Breeze Agents
A set of four agents that can automate entire workflows like content generation, social media campaigns, prospecting, and customer support without human input.
Ng added the following context:
“Agents allow you to automate a lot of those workflows. But it’s still, you know, we might generate for you a content backlog. But taking a look at that content backlog, and knowing what you publish is still a really important key of it right now.”
Breeze Intelligence
Combines HubSpot customer data with third-party sources to build richer profiles.
Ng stated:
“It’s really important that we’re bringing together data that can be trusted. We know your AI is really only as good as the data that it’s actually trained on.”
Addressing AI Content Quality
While prioritizing AI-driven productivity, Ng acknowledged the need for human oversight of AI content:
“We really do need eyes on it still…We think of that content generation as still human-assisted.”
Marketing Hub Updates
Beyond Breeze, HubSpot is updating Marketing Hub with tools like:
- Content Remix to repurpose videos into clips, audio, blogs, and more.
- AI video creation via integration with HeyGen
- YouTube and Instagram Reels publishing
- Improved marketing analytics and attribution
The announcements signal HubSpot’s AI-driven vision for unifying customer data.
But as Ng tells us, “We definitely think a lot about the data sources…and then also understand your business.”
HubSpot’s updates are rolling out now, with some in public beta.
Featured Image: Poetra.RH/Shutterstock