

Beyond Title Tags: 5 Worthwhile SEO Tests that Seem “Untestworthy”




The author’s views are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.

Is it testworthy, or is it untestworthy?

There’s a fine line between optimizations and experiments. Testing something is an exercise in curiosity, whereas optimizing something is an act of certainty.

If we know the outcome of a given activity before we perform that activity, we’re in optimization territory. For example, if you’ve discovered a load of orphaned pages, then the act of internally linking to those pages is highly likely to result in a positive outcome. We can deem this scenario “untestworthy” (no, that’s not actually a word).

But, as we’ll discuss here, SEO includes a vast array of activities where the outcomes of our work are either uncertain or difficult to predict. Think about the last time you experienced a site migration. Were you certain that the new site would perform better than the original? This might be a scenario that we’ll deem, “testworthy.”

In short, a testworthy activity is one where we don’t know the ending until we measure our outcomes with data.

Measuring our SEO tests

The step-by-step measurement processes and techniques for conducting SEO experiments fall outside the scope of this article. If you’re reading this and asking yourself, “how exactly do I run an SEO experiment from start to finish?”, here are a couple of resources that can help you learn the nitty-gritty specifics of setting up and measuring SEO experiments:

For each of the experiments below, I will assume a time-based measurement technique. Although some of the ideas here can be tested using an A/B split testing technique, not all of them can.

Curious about time-based techniques? I cover them in detail in this guide.

A word on statistical significance

One final note to remember. Statistical significance, i.e., the point at which your results can be confidently attributed to your testing criteria, is a sexy concept, but one sobering reality of SEO testing is that statistical significance can only be achieved through rigorous, advanced split testing.

Time-based SEO experiments provide us with directional learnings, not absolute conclusions. Advantages of experimenting in this way include the ability to react more quickly, consume fewer resources, and experiment in nearly all search environments where split testing cannot be run.

Here’s one way to visualize how non-significant tests remain valuable. On the left end of the spectrum are the crapshoot experiments: low-confidence, low-investment initiatives that provide less reliable insights. Further to the right, we can begin categorizing experiments according to higher confidence and higher resource investments. Somewhere in the middle, there are a great many SEO tests that provide directional insights, even when those insights don’t come with the promise of scientific certainty.

With this in mind, I’ve put together a list of five inconspicuous SEO tests that appear “untestworthy,” but are actually SEO tests disguised as optimizations.

Test in disguise #1: URL switching

Two handwritten URLs showing an example change.

A URL switch test is very similar to SEO title testing. The idea behind URL switch tests is simple: like page titles, URLs are heavily weighted ranking factors, so if we find URLs that look under-optimized or misaligned with our target terms and search intents, then we can build a hypothesis for testing a new URL and redirecting the original URL.

Some of you might be silently blowing a fuse right about now, and for good reason. URL switch tests can be very risky. If your original URL has already generated a substantial number of links (internal or external), I would exercise extreme caution before running a URL switch test.

As you probably know by now, redirects have the potential to backfire, and if your test fails, it cannot be rolled back to the original URL as easily as a title test can.

But this shouldn’t scare you away from running a URL switch test in lower-risk scenarios. I have seen many successful URL switch tests where the target URL was freshly launched, had few links pointing to it, or was performing so poorly that an experiment was justifiably worth the risk.

How to run a URL switch test

  1. Check the URL’s current traffic levels. Higher traffic levels = higher risk.

  2. Check the URL’s internal and external links. Internal links can be updated, but external links can still lose strength when passed through a 301/302 redirect.

  3. If the risk is within your tolerance, clarify your hypothesis and your URL test variation.
  4. Change the URL from the control URL to the variation URL.

  5. Add a 302 temporary redirect from the control to the variation, submit the URL for re-indexation in Google Search Console (GSC), and record the date this is completed.

  6. Wait 2-6 weeks to measure the clicks before vs. clicks after for equal time durations and days of the week in GSC.
    • For example: If your measurement period (the “after” data) begins on a Thursday and ends on a Sunday, then I recommend comparing with an equivalent time duration in GSC that also begins on a Thursday and ends on a Sunday just prior to the experiment launch date (the “before” data). For most websites, the click patterns on weekends will be lower than on weekdays. Using the same days of the week and time durations allows you to control for these differences in daily click patterns.

    • The optimal time range is situation-dependent. Pages that generate high click volumes can be measured closer to the two-week timeframe, while pages that generate lower click volumes will need to run longer.

    • Caution: If the risk to this page is high, you may want to check in periodically during the first few days to make sure that performance doesn’t drop unexpectedly.

  7. When measuring performance, use the “compare URLs” feature in GSC. This lets you check both the control URL and the variation URL simultaneously.
  8. After you’ve gathered enough data to make a directionally-sound judgment call about which URL performs better, do one of the following:
    • If the new variation performed better: Change the 302 temporary redirect to a 301 permanent redirect and update all internal links to reflect the new URL.

    • If the original control URL performed better: Remove the 302 redirect. [Optional: you may want to add a new redirect from the failed variation URL back to the original control URL to speed up the re-indexation process.]

  9. Resubmit the final URL in Google Search Console and periodically monitor the performance after the test has ended to ensure that performance remains positive.
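For those who like to script step 6’s comparison, the before-vs.-after math can be sketched in a few lines of Python. This is only an illustration, not a Search Console API client: it assumes you’ve already exported two equal-length series of daily clicks from GSC, each beginning on the same day of the week.

```python
def compare_periods(before_clicks, after_clicks):
    """Compare total clicks across two equal-length daily series.

    Both series should start on the same day of the week so that
    weekday/weekend click patterns line up. Returns the percent
    change from the "before" period to the "after" period.
    """
    if len(before_clicks) != len(after_clicks):
        raise ValueError("Periods must cover equal durations")
    before_total = sum(before_clicks)
    if before_total == 0:
        raise ValueError("No clicks recorded in the before period")
    return 100.0 * (sum(after_clicks) - before_total) / before_total


# Two weeks of daily clicks, both series starting on a Monday:
before = [120, 130, 125, 118, 110, 60, 55, 122, 128, 119, 121, 108, 58, 52]
after = [140, 150, 138, 135, 128, 70, 66, 145, 142, 139, 133, 125, 68, 61]
change = compare_periods(before, after)  # ≈ +15.0%
```

A positive change suggests the variation is outperforming the control, though, as discussed above, this is a directional signal rather than a statistically significant result.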

Test in disguise #2: Content refreshes

Illustration of two pages, one the control and one with a variance.

Isn’t a content refresh a given? We know that refreshing content is good for SEO, so why does it need testing? 

Yes, content refreshes are incredibly important and this is an activity that has been proven successful time and time again. However, not every content refresh yields positive results.

Even though it isn’t the norm, content refresh projects can occasionally result in traffic losses, and perhaps equally frustrating, many refresh projects can turn out neutral results. This means that all of that precious time and energy that we spent rewriting and republishing a piece of content failed to produce the outcome that we intended.

For these reasons, it’s important to figure out if our investments in these projects have achieved their desired positive outcomes or not. That’s where SEO testing comes into play.   

How to run a content refresh SEO test

  1. Perform your content refresh project exactly as you otherwise would, according to your own content team’s workflow. Make sure to save all of the original files, in case you need to revert back to the original content.

  2. On the date of republication, submit the page URL to Google Search Console to be re-indexed and benchmark the date.

  3. Wait 2-6 weeks to measure the clicks before vs. clicks after in GSC.
  4. After you’ve gathered enough data to make a directionally-sound judgment call about which URL performs better, do one of the following:
    • If the variation performed better: Congrats! Report the results to your team and keep the change.

    • If the control performed better: Reinstate the original content and files. Then, re-index the page and continue monitoring performance to look for rebounding traffic.
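As a small aid for step 1, here is one way to sketch the “save the original files” step in code. The slug and folder names here are hypothetical; the point is simply to keep a dated snapshot of the control content so it can be reinstated if the refresh underperforms.

```python
import datetime
import pathlib


def backup_page(url_slug, html, backup_dir="content_backups"):
    """Save a dated copy of the control page's HTML before republishing,
    so the original can be reinstated if the refresh underperforms."""
    stamp = datetime.date.today().isoformat()
    folder = pathlib.Path(backup_dir)
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{url_slug}--{stamp}.html"
    path.write_text(html, encoding="utf-8")
    return path


# Snapshot the control version before launching the refresh:
saved = backup_page("email-ideas", "<html>...original content...</html>")
```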

Test in disguise #3: Section rearrangement

Illustration of two pages, one the control and one with rearranged features.

A section rearrange test is just what it sounds like. The hypothesis for these experiments is that if we reprioritize some of the on-page content, elements, or components, then we might be able to influence the page’s rankings and incoming traffic.

This can work particularly well if the page section that addresses our audience’s main search intent is either buried deep below the fold, or requires extra steps for the user to access.

For simplicity’s sake, let’s use the example keyword: “email ideas for cold outreach.”

This keyword appears to have a lot of demand from users who are looking for specific email templates and phrasings that they can use in their outreach campaigns.

Now, let’s assume that you’ve got a blog post on this exact topic, but the exact email templates and scripts that users are searching for are buried at the end of your post, well past a dozen other sections of content that don’t satisfy their search demand. This might be a great case for running a section rearrange test.

The idea is, if you can reprioritize those pieces of information that users are looking for from the bottom of your page to the top of your page, Google is likely to recognize the reprioritized content as a better match for users seeking quick access to the information they want. Thus, rankings and traffic may improve in the same way they might improve with a content refresh project.

Added bonus: it’s faster than rewriting new content!

How to run a section rearrange SEO test

  1. Look for pages that are underperforming, and that address a user’s primary search intent somewhere deep within the page.

  2. Rearrange the page sections in a way that might create a better experience or flow for the readers.

  3. Launch the new page (but remember to save the original control page files), re-index in Google Search Console, and benchmark the date.

  4. Wait 2-6 weeks to measure the clicks before vs. clicks after in GSC.
  5. After you’ve gathered enough data to make a directionally-sound judgment call about which URL performs better, do one of the following:
    • If the variation performed better: Congrats! Report the results to your team and keep the changes.

    • If the control performed better: Reinstate the original content and files. Then, re-index the page and continue monitoring performance to look for rebounding traffic.

Test in disguise #4: Content removal

Illustration of two pages, one the control and one with a content feature removed.

This test is the SEO-equivalent of what CRO professionals call “a takeaway test.”

In digital marketing, there are times when less really is more, so the idea for this experiment is, if we just trim out certain items — whether those might be page elements, or less-helpful content sections — then the removal process could lend itself to creating a tighter, stronger webpage.

In a CRO-driven takeaway experiment, a CRO professional might notice certain elements that distract users or get in the way of a conversion path.

This concept works just a little bit differently for SEO if our goal is to improve rankings and traffic performance. For SEO, content removal experiments are just a matter of “trimming the fat” from our content and page elements.

When analyzing your top pages, ask yourself if you see any sections, paragraphs, or sentences which deviate from the information that the search audience really came for. You might be surprised to see how much of the content we create is actually worthless for our users.

How to run a content removal SEO test

First, scan for high-value pages and posts that may be hitting a wall with rankings and traffic performance. Then:

  1. Make sure to analyze the top keywords and SERPs so that you can get very clear on which primary and secondary search intents the users predominantly wish to see and read about.

  2. Scan your page’s content with a dose of radical honesty to look for content that diverges from the information that you might want to see if you were a reader.

  3. If your investigation turns up content and/or elements that don’t help the users, remove them and make sure to save the original control page files, just in case the experiment results are negative.

  4. Launch the new page, re-index in Google Search Console, and benchmark the date.

  5. Wait 2-6 weeks to measure the clicks before vs. clicks after in GSC.
  6. After you’ve gathered enough data to make a directionally-sound judgment call about which URL performs better, do one of the following:
    • If the variation performed better: Congrats! Report the results to your team and keep the changes.

    • If the control performed better: Reinstate the original content and files. Then, re-index the page and continue monitoring performance to look for rebounding traffic.

Test in disguise #5: Featured snippets

This activity is one of my all-time favorites.

Treating our featured snippet answers like an SEO test is one of the ways that my teams have been able to accrue competitively high volumes of traffic and clicks in recent years.

When our team began to treat our featured snippets as experiments, rather than optimizations, we were able to learn much more about how to write better answers, and we were able to create processes for scaling up to higher quantities of featured snippet experiments. This meant more “at bats” for acquiring the answer box rankings, which meant faster traffic growth.

Much has already been covered about how to optimize for featured snippets. I’ll simply add a process for testing your featured snippet copy.

What’s more, featured snippet tests are one of the rare instances where statistical significance is undeniably attainable because the success measurement is binary. Either your experiment resulted in acquiring the featured snippet, or it did not. (Caveat: Some longer tail featured snippets may also be impacted by your experiments, but the impacts are generally negligible if you are targeting a strong primary keyword.)

How to run featured snippet tests

  1. Identify opportunities where featured snippets are appearing in the SERPs, and where one of your pages ranks within the top 5 positions but is not occupying the answer box. (Tip: some of the current rank tracking solutions such as STAT make featured snippet identification much easier.)

  2. Sort and prioritize featured snippet opportunities according to the opportunities that represent the highest value to your website. I recommend considering the traffic’s audience and conversion potential alongside the potential search volume.

  3. Rewrite the portion of your article where the featured snippet is being targeted. The full scope of featured snippet practices spans beyond this article, so you may want to seek out additional resources if you’re not already familiar with featured snippet rewriting.

  4. Periodically check in on your target answer box(es) and traffic over the next several weeks.

  5. If at first you don’t succeed, test again! The great part about answer box testing is that you rarely need to revert to your control, and you can keep swinging until you hit a home run. In some cases, we’ve had to make ten or more rewrite attempts before successfully capturing the featured snippet.

  6. Repeat this process to run more experiments on the remaining featured snippet opportunities that were identified in step one.
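If your rank tracker can export keyword data, step 1’s opportunity-spotting can be partly automated. The sketch below is illustrative only: the field names (`rank`, `serp_has_snippet`, and so on) are assumptions for this example, not actual STAT export columns.

```python
def snippet_opportunities(rows, max_rank=5):
    """Filter a rank-tracking export for featured snippet test candidates.

    Keeps keywords where a featured snippet appears in the SERP, our page
    ranks within the top `max_rank` positions, but we don't own the answer
    box; results are sorted by search volume, highest value first.
    """
    hits = [
        row for row in rows
        if row["serp_has_snippet"]
        and row["rank"] <= max_rank
        and not row["we_own_snippet"]
    ]
    return sorted(hits, key=lambda row: row["search_volume"], reverse=True)


rows = [
    {"keyword": "email ideas for cold outreach", "rank": 3,
     "serp_has_snippet": True, "we_own_snippet": False, "search_volume": 900},
    {"keyword": "cold email subject lines", "rank": 2,
     "serp_has_snippet": True, "we_own_snippet": True, "search_volume": 1200},
    {"keyword": "outreach tools", "rank": 9,
     "serp_has_snippet": True, "we_own_snippet": False, "search_volume": 700},
]
targets = snippet_opportunities(rows)  # only the first keyword qualifies
```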

More SEO tests in disguise

This list is far from exhaustive.

As I alluded to earlier in the piece, I think that just about any activity which requires measurement is a form of testing to some degree, regardless of whether it can be measured to true statistical significance.

If your team is investing any serious resources into activities like core web vitals, internal linking, E-A-T enhancements, site migrations, Schema markup, or UX changes, it’s usually wise to do a retrospective before-and-after analysis on whether or not that investment yielded a positive payoff.

Stacking up those experiments to figure out where your bets are paying off, and where they are not, will start to steer your strategy and SEO knowledge toward more profitable outcomes.






Building Customer Trust: Comparing Credibility of Custom Chatbots & Live Chat

Addressing customer issues quickly is not merely a strategy to distinguish your brand; it’s an imperative for survival in today’s fiercely competitive marketplace.

Customer frustration can lead to customer churn. That’s precisely why organizations employ various support methods to ensure clients receive timely and adequate assistance whenever they require it.

Nevertheless, selecting the most suitable support channel isn’t always straightforward. Support teams often grapple with the choice between live chat and chatbots.

The automation landscape has transformed how businesses engage with customers, elevating chatbots as a widely embraced support solution. As more companies embrace technology to enhance their customer service, the debate over the credibility of chatbots versus live chat support has gained prominence.

However, customizable chatbots continue to offer a broader scope for personalization, giving businesses the ability to create their own chatbots.

In this article, we will delve into the world of customer support, exploring the advantages and disadvantages of both chatbots and live chat and how they can influence customer trust. By the end, you’ll have a comprehensive understanding of which option may be the best fit for your business.

The Rise of Chatbots

Chatbots have become increasingly prevalent in customer support due to their ability to provide instant responses and cost-effective solutions. These automated systems use artificial intelligence (AI) and natural language processing (NLP) to engage with customers in real-time, making them a valuable resource for businesses looking to streamline their customer service operations.

Advantages of Chatbots

24/7 Availability

One of the most significant advantages of custom chatbots is their round-the-clock availability. They can respond to customer inquiries at any time, ensuring that customers receive support even outside regular business hours.


Consistency

Custom chatbots provide consistent responses to frequently asked questions, eliminating the risk of human error or inconsistency in service quality.


Cost Efficiency

Implementing chatbots can reduce operational costs by automating routine inquiries and allowing human agents to focus on more complex issues.


Scalability

Chatbots can handle multiple customer interactions simultaneously, making them highly scalable as your business grows.

Disadvantages of Chatbots

Limited Understanding

Chatbots may struggle to understand complex or nuanced inquiries, leading to frustration for customers seeking detailed information or support.

Lack of Empathy

Chatbots lack the emotional intelligence and empathy that human agents can provide, making them less suitable for handling sensitive or emotionally charged issues.

Initial Setup Costs

Developing and implementing chatbot technology can be costly, especially for small businesses.

The Role of Live Chat Support

Live chat support, on the other hand, involves real human agents who engage with customers in real-time through text-based conversations. While it may not offer the same level of automation as custom chatbots, live chat support excels in areas where human interaction and empathy are crucial.

Advantages of Live Chat

Human Touch

Live chat support provides a personal touch that chatbots cannot replicate. Human agents can empathize with customers, building a stronger emotional connection.

Complex Issues

For inquiries that require a nuanced understanding or involve complex problem-solving, human agents are better equipped to provide in-depth assistance.

Trust Building

Customers often trust human agents more readily, especially when dealing with sensitive matters or making important decisions.


Adaptability

Human agents can adapt to various customer personalities and communication styles, ensuring a positive experience for diverse customers.

Disadvantages of Live Chat

Limited Availability

Live chat support operates within specified business hours, which may not align with all customer needs, potentially leading to frustration.

Response Time

The speed of response in live chat support can vary depending on agent availability and workload, leading to potential delays in customer assistance.


Cost

Maintaining a live chat support team with trained agents can be expensive, especially for smaller businesses.

Building Customer Trust: The Credibility Factor

When it comes to building customer trust, credibility is paramount. Customers want to feel that they are dealing with a reliable and knowledgeable source. Both customizable chatbots and live chat support can contribute to credibility, but their effectiveness varies in different contexts.

Building Trust with Chatbots

Chatbots can build trust in various ways:


Consistency

Chatbots provide consistent responses, ensuring that customers receive accurate information every time they interact with them.

Quick Responses

Chatbots offer instant responses, which can convey a sense of efficiency and attentiveness.

Data Security

Chatbots can assure customers of their data security through automated privacy policies and compliance statements.

However, custom chatbots may face credibility challenges when dealing with complex issues or highly emotional situations. In such cases, the lack of human empathy and understanding can hinder trust-building efforts.

Building Trust with Live Chat Support

Live chat support, with its human touch, excels at building trust in several ways:


Empathy

Human agents can show empathy by actively listening to customers’ concerns and providing emotional support.

Tailored Solutions

Live chat agents can tailor solutions to individual customer needs, demonstrating a commitment to solving their problems.


Adaptability

Human agents can adapt to changing customer requirements, ensuring a personalized and satisfying experience.

However, live chat support’s limitations, such as availability and potential response times, can sometimes hinder trust-building efforts, especially when customers require immediate assistance.

Finding the Right Balance

The choice between custom chatbots and live chat support is not always binary. Many businesses find success by integrating both options strategically:

Initial Interaction

Use chatbots for initial inquiries, providing quick responses, and gathering essential information. This frees up human agents to handle more complex cases.

Escalation to Live Chat

Implement a seamless escalation process from custom chatbots to live chat support when customer inquiries require a higher level of expertise or personal interaction.

Continuous Improvement

Regularly analyze customer interactions and feedback to refine your custom chatbot’s responses and improve the overall support experience.
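The escalation flow described above can be sketched as a simple routing rule. This is a toy illustration that assumes your chatbot platform exposes some kind of intent-confidence score; the keyword list and threshold are placeholders to tune for your own support flow.

```python
def route_inquiry(message, bot_confidence, threshold=0.7):
    """Route a customer inquiry: let the chatbot answer when its
    confidence is high and the topic isn't sensitive; otherwise
    escalate to a live agent."""
    sensitive_terms = ("refund", "complaint", "cancel")
    is_sensitive = any(term in message.lower() for term in sensitive_terms)
    if is_sensitive or bot_confidence < threshold:
        return "live_agent"
    return "chatbot"


# Routine questions with a confident intent match stay with the bot;
# sensitive or low-confidence inquiries escalate to a human.
routine = route_inquiry("Where is my order?", bot_confidence=0.9)
sensitive = route_inquiry("I want a refund", bot_confidence=0.95)
```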


Conclusion

In the quest to build customer trust, both chatbots and live chat support have their roles to play. Customizable chatbots offer efficiency, consistency, and round-the-clock availability, while live chat support provides the human touch, empathy, and adaptability. The key is to strike the right balance, leveraging the strengths of each to create a credible and trustworthy customer support experience. By understanding the unique advantages and disadvantages of both options, businesses can make informed decisions to enhance customer trust and satisfaction in the digital era.



The Rise in Retail Media Networks



A shopping cart holding the Amazon logo to represent the rise in retail media network advertising.

As LL Cool J might say, “Don’t call it a comeback. It’s been here for years.”

Paid advertising is alive and growing faster in different forms than any other marketing method.

Magna, a media research firm, and GroupM, a media agency, wrapped the year with their ad industry predictions – expect big growth for digital advertising in 2024, especially with the pending US presidential political season.

But the bigger, more unexpected news comes from the rise in retail media networks – a relative newcomer in the industry.

Watch CMI’s chief strategy advisor Robert Rose explain how these trends could affect marketers or keep reading for his thoughts:

GroupM expects digital advertising revenue to finish 2023 up 5.8% at $889 billion – excluding political advertising. Magna believes ad revenue will tick up 5.5% this year and jump 7.2% in 2024. GroupM and Zenith say 2024 will see a more modest 4.8% growth.

Robert says that the feeling of an ad slump and other predictions of advertising’s demise in the modern economy don’t seem to be coming to pass, as paid advertising not only survived 2023 but will thrive in 2024.

What’s a retail media network?

On to the bigger news – the rise of retail media networks. Retail media networks, the smallest segment in these agencies’ and research firms’ evaluation, will be one of the fastest-growing and truly important digital advertising formats in 2024.

GroupM suggests the $119 billion expected to be spent on these networks this year should grow by a whopping 8.3% in the coming year. Magna estimates $124 billion in ad revenue from retail media networks this year.

“Think about this for a moment. Retail media is now almost a quarter of the total spent on search advertising outside of China,” Robert points out.

You’re not alone if you aren’t familiar with retail media networks. Though a familiar term in the B2C world, especially the consumer-packaged-goods industry, retail media networks are an advertising segment you should now pay attention to.

Retail media networks are advertising platforms within the retailer’s network. It’s search advertising on retailers’ online stores. So, for example, if you spend money to advertise against product keywords on Amazon, Walmart, or Instacart, you use a retail media network.

But these ad-buying networks also exist on other digital media properties, from mini-sites to videos to content marketing hubs. They also exist on location through interactive kiosks and in-store screens. New formats are rising every day.

Retail media networks make sense. Retailers take advantage of their knowledge of customers, where and why they shop, and present offers and content relevant to their interests. The retailer uses their content as a media company would, knowing their customers trust them to provide valuable information.

Think about these 2 things in 2024

That brings Robert to two things he wants you to consider for 2024 and beyond. The first is a question: Why should you consider retail media networks for your products or services?   

Advertising works because it connects to the idea of a brand. Retail media networks work deep into the buyer’s journey. They use the consumer’s presence in a store (online or brick-and-mortar) to cross-sell merchandise or become the chosen provider.

For example, Robert might advertise his Content Marketing Strategy book on Amazon’s retail network because he knows his customers seek business books. When they search for “content marketing,” his book would appear first.

However, retail media networks also work well because they create a brand halo effect. Robert might buy an ad for his book in The New York Times and The Wall Street Journal because he knows their readers view those media outlets as reputable sources of information. He gains some trust by connecting his book to their media properties.

Smart marketing teams will recognize the power of the halo effect and create brand-level experiences on retail media networks. They will do so not because they seek an immediate customer but because they can connect their brand content experience to a trusted media network like Amazon, Nordstrom, eBay, etc.

The second thing Robert wants you to think about relates to the B2B opportunity. More retail media network opportunities for B2B brands are coming.

You can already buy into content syndication networks such as Netline, Business2Community, and others. But given the astronomical growth, for example, of Amazon’s B2B marketplace ($35 billion in 2023), Robert expects a similar trend of retail media networks to emerge on these types of platforms.   

“If I were Adobe, Microsoft, Salesforce, HubSpot, or any brand with big content platforms, I’d look to monetize them by selling paid sponsorship of content (as advertising or sponsored content) on them,” Robert says.

As you think about creative ways to use your paid advertising spend, consider the retail media networks in 2024.



Cover image by Joseph Kalinowski/Content Marketing Institute



AI driving an exponential increase in marketing technology solutions




The martech landscape is expanding and AI is the prime driving force. That’s the topline news from the “Martech 2024” report released today. And, while that will get the headline, the report contains much more.

Since the release of the most recent Martech Landscape in May 2023, 2,042 new marketing technology tools have surfaced, bringing the total to 13,080 — an 18.5% increase. Of those, 1,498 (73%) were AI-based. 


“But where did it land?” said Frans Riemersma of Martech Tribe during a joint video conference call with Scott Brinker of ChiefMartec and HubSpot. “And the usual suspect, of course, is content. But the truth is you can build an empire with all the genAI that has been surfacing — and by an empire, I mean, of course, a business.”

Content tools accounted for 34% of all the new AI tools, far ahead of video, the second-place category, which had only 4.85%. U.S. companies were responsible for 61% of these tools — not surprising given that most of the generative AI dynamos, like OpenAI, are based here. Next up was the U.K. at 5.7%, but third place was a big surprise: Iceland — with a population of 373,000 — launched 4.6% of all AI martech tools. That’s significantly ahead of fourth place India (3.5%), whose population is 1.4 billion and which has a significant tech industry. 
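The report’s growth figures are internally consistent, as a quick back-of-the-envelope check shows (the prior total is the current total minus the new launches):

```python
new_tools = 2042   # tools launched since the May 2023 landscape
total_now = 13080  # current landscape total

prior_total = total_now - new_tools            # 11,038 tools previously
growth_pct = 100 * new_tools / prior_total     # ≈ 18.5% increase

ai_based = 1498
ai_share_pct = 100 * ai_based / new_tools      # ≈ 73% of new tools are AI-based
```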

Dig deeper: 3 ways email marketers should actually use AI

The global development of these tools shows the desire for solutions that natively understand the places where they are used.

“These regional products in their particular country…they’re fantastic,” said Brinker. “They’re loved, and part of it is because they understand the culture, they’ve got the right thing in the language, the support is in that language.”

Now that we’ve looked at the headline stuff, let’s take a deep dive into the fascinating body of the report.

The report: A deeper dive

Marketing technology “is a study in contradictions,” according to Brinker and Riemersma. 

In the new report they embrace these contradictions, telling readers that, while they support “discipline and fiscal responsibility” in martech management, failure to innovate might mean “missing out on opportunities for competitive advantage.” By all means, edit your stack meticulously to ensure it meets business value use cases — but sure, spend 5-10% of your time playing with “cool” new tools that don’t yet have a use case. That seems like a lot of time.

Similarly, while you mustn’t be “carried away” by new technology hype cycles, you mustn’t ignore them either. You need to make “deliberate choices” in the realm of technological change, but be agile about implementing them. Be excited by martech innovation, in other words, but be sensible about it.

The growing landscape

Consolidation of the martech space is not in sight, Brinker and Riemersma say. Despite many mergers and acquisitions, and a steadily increasing number of bankruptcies and dissolutions, the rapid launch of new start-ups powers continued growth.

It should be observed, of course, that this is almost entirely a cloud-based, subscription-based commercial space. To launch a martech start-up doesn’t require manufacturing, storage and distribution capabilities, or necessarily a workforce; it just requires uploading an app to the cloud. That is surely one reason new start-ups appear at such a startling rate. 

Dig deeper: AI ad spending has skyrocketed this year

As the authors admit, "(i)f we measure by revenue and/or install base, the graph of all martech companies is a 'long tail' distribution." What's more, if you focus on the 200 or so leading companies in the space, consolidation can certainly be seen.

Long-tail tools are certainly not under-utilized, however. Based on a survey of over 1,000 real-world stacks, the report finds long-tail tools constitute about half of the solutions portfolios — a proportion that has remained fairly consistent since 2017. The authors see long-tail adoption where users perceive feature gaps — or subpar feature performance — in their core solutions.

Composability and aggregation

The other two trends covered in detail in the report are composability and aggregation. In brief, a composable view of a martech stack means seeing it as a collection of features and functions rather than a collection of software products. A composable “architecture” is one where apps, workflows, customer experiences, etc., are developed using features of multiple products to serve a specific use case.

Indeed, some martech vendors are now describing their own offerings as composable, meaning that their proprietary features are designed to be used in tandem with third-party solutions that integrate with them. This is an evolution of the core-suite-plus-app-marketplace framework.

That framework is what Brinker and Riemersma refer to as “vertical aggregation.” “Horizontal aggregation,” they write, is “a newer model” where aggregation of software is seen not around certain business functions (marketing, sales, etc.) but around a layer of the tech stack. An obvious example is the data layer, fed from numerous sources and consumed by a range of applications. They correctly observe that this has been an important trend over the past year.

Build it yourself

Finally, and consistent with Brinker’s long-time advocacy for the citizen developer, the report detects a nascent trend towards teams creating their own software — a trend that will doubtless be accelerated by support from AI.

So far, the apps being created internally may be no more than "simple workflows and automations." But come the day that app development is democratized enough to be available to a wide range of users, the software will be a "reflection of the way they want their company to operate and the experiences they want to deliver to customers. This will be a powerful dimension for competitive advantage."

Constantine von Hoffman contributed to this report.
