MARKETING
SEO Recap: PageRank

The author’s views are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.
Have you ever wondered how Moz employees learn internally? Well, here’s your chance to get a sneak peek into never-before-seen internal webinar footage with Tom Capper! Learning is important at Moz, and the sharing of information amongst employees is crucial in making sure we stay true to our core values. Knowledge sharing allows us to stay transparent, work together more easily, find better ways of doing things, and create even better tools and experiences for our customers.
Tom started these sessions when everyone was working remotely in 2020. It allowed us to come together again in a special, collaborative way. So, today, we give to you all the gift of learning! In this exclusive webinar, Tom Capper takes us through the crucial topic of PageRank.
Video Transcription
This is actually a topic that I used to put poor, innocent, new recruits through, particularly if they came from a non-marketing background. Even though this is considered by a lot of people to be an advanced topic, I think it’s something that actually makes sense for people who want to learn about SEO to learn first, because it’s foundational. And if you think about a lot of other technical SEO and link building topics from this perspective, they make a lot more sense and are simpler, and you kind of figure out the answers yourself rather than needing to read 10,000-word blog posts and patents and this kind of thing.
Anyway, hold that thought, because it’s 1998. I am 6 years old, and this is a glorious state-of-the-art video game, and internet browsing that I do in my computer club at school looks a bit like this. I actually didn’t use Yahoo!. I used Excite, which in hindsight was a mistake, but in my defense I was 6.
The one thing you’ll notice about this as a starting point for a journey on the internet, compared to something like Google or whatever you use today, maybe even something that’s built into your browser these days, is that there are a lot of links on this page, and mostly they are links to pages that are themselves full of links. It’s kind of like a taxonomy directory system. And this is important, because if a lot of people browse the web using links, and links are primarily a navigational thing, then we can get some insights out of looking at links.
They’re a sort of proxy for popularity. If we assume that everyone starts their journey on the internet on Yahoo! in 1998, then the pages that are linked to from Yahoo! are going to get a lot of traffic. They are, by definition, popular, and the pages that those pages link to will also still get quite a lot and so on and so forth. And through this, we could build up some kind of picture of what websites are popular. And popularity is important because if you show popular websites to users in search results, then they will be more trustworthy and credible and likely to be good and this kind of thing.
This is a massive oversimplification, bear with me, but this is kind of why Google won. Google recognized this fact, and they came up with an innovation called PageRank, which made their search engine better than other people’s search engines, and which every other search engine subsequently went on to imitate.
However, is anything I said just now relevant 23 years later? We definitely do not primarily navigate the web with links anymore. We use these things called search engines, which Google might know something about. But we also use newsfeeds, which are kind of dynamic and uncrawlable, and all sorts of other non-static, HTML link-based patterns. Links are probably not the majority of how we navigate our way around the web, except maybe within websites. And Google has better data on popularity anyway. Google runs a mobile operating system. They run ISPs. They run a browser. They run YouTube. There are lots of ways for Google to figure out what is and isn’t popular without building some arcane link graph.
However, be that true or not, there is still a core methodology that underpins how Google works on a foundational level. In 1998, PageRank was really all of how Google worked. It was just PageRank plus relevance. These days, there’s a lot of nuance and layers on top, and PageRank itself probably isn’t even called that anymore and has probably been changed and refined and tweaked around the edges. And it might be that PageRank is not used as a proxy for popularity anymore, but maybe as a proxy for trust or something like that, with a slightly different role in the algorithm.
But the point is we still know purely through empirical evidence that changing how many and what pages link to a page has a big impact on organic performance. So we still know that something like this is happening. And the way that Google talks about how links work and their algorithms still reflects a broadly PageRank-based understanding as do developments in SEO directives and hreflang and rel and this kind of thing. It still all speaks to a PageRank-based ecosystem, if not a PageRank-only ecosystem.
Also, I’m calling it PageRank because that’s what Google calls it, but there are some other terms you should be aware of that SEOs use. Link equity, I think, is a good one, because it kind of explains what you’re talking about in a useful way. Link flow is not bad, but it alludes to a different metaphor that you’ve probably seen before, where you think of links as big pipes of liquid that pour in different amounts into different pages. It’s a different metaphor to the popularity one, and as a result it has some different implications if it’s overstretched, so use some caution. And then linking strength, I don’t really know what metaphor that’s trying to invoke. It doesn’t seem as bad as link juice, at least, so fine, I guess.
More importantly, how does it work? And I don’t know if anyone here hates maths. If you do, I’m sorry, but there’s going to be maths.
So the foundation of all this is: imagine that the whole internet is represented in this diagram, and that there’s only one web page, which is A, in the red box here. That means this is 1970-something, I guess. What is the probability that a random browser is on this page? We can probably say it’s one, or something like that. If you want to have some other take on that, it kind of doesn’t matter, because everything else is just going to be based on whatever number that is. From that, though, we can try to infer some other things.
So whatever probability you thought that was, and let’s say we thought that if there’s one page on the internet, everyone is on it, what’s the probability that a random browser is on the one page that A links to? So say that we’ve pictured the whole internet here: A is a page that links to another page, which links nowhere. And we started by saying that everyone was on page A. Well, what’s the probability, after a cycle, that everyone will be on that second page? We go with the assumption that there’s an 85% chance, and the 85% number comes from Google’s original 1998 white paper. There’s an 85% chance that they go on to this one page in their cycle, and a 15% chance that they do one of these non-browser-based activities. The reason we assume that on every cycle there’s a chance that people exit to do non-browser-based activities is that otherwise we get some kind of infinite cycle later on. We don’t need to worry about that here. But the point is that if you assume people never leave their computers and just browse through links endlessly, then you end up assuming eventually that every page has infinite traffic, which is not the case.
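To make that concrete, here is a minimal Python sketch of the random-surfer step just described, assuming the two-page internet from the slide. The 0.85 damping factor is the number from the original 1998 paper; everything else here, names and structure alike, is illustrative rather than anything Google published.

```python
# Minimal sketch of the random-surfer step on the two-page internet
# described above: page A links to page B, and B links nowhere.
# The 0.85 damping factor comes from the original 1998 paper; the
# rest is purely illustrative.

DAMPING = 0.85  # chance the surfer follows a link on a given cycle

prob_a = 1.0  # start with everyone on page A

# After one cycle: 85% of surfers follow A's single link to B,
# and 15% wander off to do non-browser things.
prob_b = DAMPING * prob_a

print(prob_b)  # 0.85
```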
That’s the starting point, where we have this really simple internet: a page with a link on it and a page without a link on it, and that’s it. Something to bear in mind with these systems is that, obviously, real web pages don’t have just one link on them, and web pages with no links on them, like the one on the right, are virtually unheard of. This gets really complex really fast. If we tried to make a diagram of just two pages on the Moz website, it would not fit on the screen. So we’re talking about really simplified versions here, but it doesn’t matter, because the principles are extensible.
So what if the page on the left actually linked to two pages, not one? What is the probability now that we’re on one of those two pages? We’re taking that 85% chance that they move on at all without exiting, because the house caught fire, they went for a bike ride or whatever, and we’re now dividing it by two. So we’re saying there’s a 42.5% chance that they’re on this page and a 42.5% chance that they’re on that page, and then nothing else happens, because there are no more links in the world. That’s fine.
What about this page? If this page now links to one more, how does that page’s strength relate to page A? So this one was 0.85 divided by 2, and this one is 0.85 times that number. Note that we are diluting as we go along, because we’ve applied that 15% deterioration at every step. This is useful and interesting to us because we can imagine a model in which page A, on the left, is our homepage and the page on the right is some page we want to rank, and we’re diluting with every step we have to jump to get there. And this is crawl depth, which is a metric exposed by Moz Pro and most other technical SEO tools. That’s why crawl depth is something people are interested in: part of it is discovery, which we won’t get into today, but part of it is also this dilution factor.
And then if this page actually linked to three, then again, each of those pages is only one-third as strong as when it linked to only one. So the value is being split up and diluted the further down we go.
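To see that arithmetic all in one place, here is a hedged sketch under the simplified model above, with equal splits and a 15% loss per hop. The `value_passed` helper is hypothetical, made up just to reproduce the numbers from these examples.

```python
# Illustrative sketch of the dilution described above: every hop
# multiplies by the 0.85 damping factor, and each page splits its
# value evenly across its outgoing links. Not any tool's real metric.

DAMPING = 0.85

def value_passed(start_value: float, out_degrees: list[int]) -> float:
    """Value reaching the page at the end of a chain of links.

    out_degrees[i] is the number of links on the i-th page in the chain.
    """
    value = start_value
    for links_on_page in out_degrees:
        value = value * DAMPING / links_on_page
    return value

print(value_passed(1.0, [2]))     # ~0.425: homepage with two links
print(value_passed(1.0, [2, 1]))  # ~0.36125: one hop deeper, 0.85 * 0.425
print(value_passed(1.0, [3]))     # ~0.2833: homepage with three links
```

The same function makes the crawl-depth point: every extra entry in `out_degrees` is one more hop from the homepage, and the number only ever goes down.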
So that all got very complicated very quickly on a very simple, fictional website. Don’t panic. The lessons we want to take away from this are quite simple, even though the math becomes very arcane very quickly.
So the first lesson is that each additional link depth dilutes value. We talked about the reasons for that, but obviously it has implications for site structure. It also has implications for some other common technical SEO issues that I’ll cover in a bit.
So linking to a page indirectly is less effective than linking to it directly, even in an idealized world where every page has only one link on it.
The other takeaway is that more links means each link is less valuable. With every additional link you add to your homepage, you’re reducing the effectiveness of the links that were already there. This is very important, because if you look at a lot of sites right now, you’ll find 600-link mega navs at the top of the page, the same at the bottom of the page, and all this kind of thing. And that can be an okay choice. I’m not saying that’s always wrong, but it is a choice, and it has dramatic implications.
Some of the biggest changes in SEO performance I’ve ever seen on websites came from cutting back the number of links on the homepage by a factor of 10. If you change a homepage so that it goes from linking to 600 pages to linking to the fewer than 100 that you actually want to rank, that will almost always have a massive impact, more so than external link building could ever dream of, because you’re not going to get that 10x difference through external link building, unless it’s a startup or something.
Now, some real-world scenarios. I want to talk about some of the things that SEO tools often flag, that we’re all familiar with discussing as SEO issues or optimizations, but where we often don’t think about why, and we definitely don’t think of them as harking back quite so deep into Google’s history.
So, a redirect is a link. That fictional idea of a page with exactly one link on it? That’s a redirect, because a redirect is just a page that links to exactly one other page. In this scenario, the page on the left could have linked directly to the page on the top right, but because it didn’t, we’ve got 0.85 squared here, which is 0.7225. The only thing you need to know about that number is that it’s smaller than 0.85. Because we didn’t link directly, and instead went through this page that redirects, which doesn’t feel like a link but is a link in this ecosystem, we’ve arbitrarily decided to dilute the page at the end of the chain. And this is obviously particularly important when we think about chained redirects, which is another thing often flagged by SEO tools.
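Here is the same arithmetic applied to redirect chains under the simple model above: each redirect behaves like a one-link page, so every extra hop costs another factor of 0.85. The helper name is made up for illustration.

```python
# Each redirect in a chain acts like a page with exactly one link,
# so every hop costs a factor of 0.85 in this simplified model.

DAMPING = 0.85

def after_hops(hops: int) -> float:
    """Value surviving `hops` link traversals (a redirect adds a hop)."""
    return DAMPING ** hops

print(after_hops(1))  # 0.85: a direct link
print(after_hops(2))  # ~0.7225: via one redirect, the 0.85 squared above
print(after_hops(3))  # ~0.614: via a chain of two redirects
```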
But when you look at an issue report in something like Moz Pro and it gives you a list of redirects as if they’re issues, that can be confusing, because a redirect is something we’re also told is a good thing. If we have a URL that’s no longer in use, it should redirect. But the reason the issue is being flagged is that we shouldn’t still be linking to the URL that redirects. We should be linking directly to the thing at the end of the chain. And this is why: it’s because of this arbitrary dilution we’re inserting into our own website, which is basically just a deadweight loss. In reality, pages tend to link back to each other, so this becomes a big, complex web of cycles. I think this is where the flow metaphor comes in, because people can imagine a flow of buckets that drip round into each other but leak a little bit at every step, so you get less and less water unless there’s some external source. If you imagine these loops, then inserting redirects is just deadweight loss. We’ve drilled a hole in the bottom of a bucket.
So, yeah, better is a direct link. Worse is a 302, although that’s a controversial subject, who knows. Google sometimes claim that they treat 302s as 301s these days. Let’s not get into that.
Canonicals are very similar from a PageRank perspective. The canonical tag is actually a much later addition to search engines, but a canonical is basically equivalent to a 301 redirect. So imagine I have a website that sells live badgers, for some reason, in different colors, and I have two URL variants for my badger e-commerce page filtered to brown: one at badgers?colour=brown and one without any parameters. I’ve decided that the one without parameters is the canonical version, literally and figuratively speaking. If the homepage links to the parameter version, which then has a canonical tag pointing at the correct version, then I’ve arbitrarily weakened the correct version versus what I could have done, which is link directly to it. Interestingly, if we do have that direct link, note that the parameter page now has no strength at all. It has no inbound links, and it probably wouldn’t get flagged as an error in a tool, because the tool wouldn’t find it.
You’ll notice I put a tilde before the number zero. We’ll come to that.
PageRank sculpting is another thing that I think is interesting because people still try to do it even though it’s not worked for a really long time. So this is an imaginary scenario that is not imaginary at all. It’s really common, Moz probably has this exact scenario, where your homepage links to some pages you care about and also some pages you don’t really care about, certainly from an SEO perspective, such as your privacy policy. Kind of sucks because, in this extreme example here, having a privacy policy has just randomly halved the strength of a page you care about. No one wants that.
So what people used to do was use a link-level nofollow. The idea was, and it worked at the time, and by “at the time” I mean like 2002 or something, though people still try this on new websites today, that the link-level nofollow effectively removed this link, so it was as if your homepage only linked to one page. Great, everyone is a winner.
A side note I mentioned before: no page actually has zero PageRank. A page with no inbound links in the PageRank model has a PageRank of one over the number of pages on the internet. That’s the seeding probability: before everything starts cycling round and settling into the stable equilibrium PageRank, the model assumes there’s an equal chance that you’re on any given page on the internet. One divided by the number of pages on the internet is a very small number, so we can think of it as zero.
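For the curious, here is a textbook power-iteration sketch of that model, seeded at one over the number of pages exactly as described. The three-page graph is invented for illustration, and the 15% teleport term follows the published 1998 formulation, not whatever Google runs today.

```python
# Textbook PageRank power iteration, seeded uniformly at 1/n.
# Illustrative only: tiny made-up graph, original 1998 formulation.

DAMPING = 0.85

def pagerank(links: dict[int, list[int]], n: int, iters: int = 50) -> list[float]:
    ranks = [1.0 / n] * n  # the uniform seeding probability
    for _ in range(iters):
        # every page keeps the 15% "left the computer" share...
        new = [(1.0 - DAMPING) / n] * n
        # ...plus an even split from each page that links to it
        for page, outlinks in links.items():
            for target in outlinks:
                new[target] += DAMPING * ranks[page] / len(outlinks)
        ranks = new
    return ranks

# Pages 0 and 1 link to each other; page 2 links out, but nothing links to it.
print([round(r, 3) for r in pagerank({0: [1], 1: [0], 2: [0]}, n=3)])
# ~[0.486, 0.464, 0.05]: page 2 keeps only its tiny teleport share,
# the "~0" from the slide.
```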
This link-level nofollow hack was killed off again a very, very long time ago, such that if you use a link-level nofollow, and by the way, this is also true if you use robots.txt to do this, the second link will still be counted in the division, so there’s still an equal split between the two links. The page you care about still gets that reduction from being one of two links, but the page at the bottom now gets no strength at all, because it was only linked through a nofollow. So if you do this now, it’s a worst-of-both-worlds scenario. And you might say, “Oh, I don’t actually care whether my privacy policy has zero strength.” But you do care, because your privacy policy probably links through the top nav to every other page on your website. So you’re still doing yourself a disservice.
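A minimal sketch of that worst-of-both-worlds behavior, under the same toy model: the nofollowed link still counts toward the split, but its target receives nothing. The function and page names are hypothetical.

```python
# Current nofollow behavior as described above: nofollowed links
# still dilute the split but pass nothing. Simplified model only.

DAMPING = 0.85

def outbound_shares(links: list[tuple[str, bool]], value: float = 1.0) -> dict[str, float]:
    """links is a list of (target, is_nofollow) pairs on one page."""
    per_link = DAMPING * value / len(links)  # nofollows still count here
    return {target: 0.0 if nofollow else per_link
            for target, nofollow in links}

print(outbound_shares([("money-page", False), ("privacy-policy", True)]))
# {'money-page': ~0.425, 'privacy-policy': 0.0}: the followed link
# still gets only half the value, and the nofollowed page gets nothing.
```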
Second side note: I said link-level nofollow, meaning nofollow in the HTML as an attribute of a link. There is also page-level nofollow, which I struggle to think of a single good use case for. Basically, a page-level nofollow means we’re going to treat every single link on the page as nofollow, so we’re just going to create a PageRank dead end. This is a strange thing to do. Sometimes people use robots.txt instead, which basically does the same thing. If I block a page with robots.txt, that’s the same in terms of the PageRank consequences, except there are other good reasons to do it: I might not want Google to ever see the page, or I might want to prevent a massive waste of Google’s crawl time so it spends more time crawling the rest of my site. There are reasons to use robots.txt. Page-level nofollow means we’re going to create that dead end, but we’re also going to waste Google’s time crawling it anyway.
Some of the extreme scenarios I just talked about, particularly the one with the privacy policy, changed a lot for the better for everyone in 2004 with something called the reasonable surfer model, which you occasionally still hear people talking about now, but mostly implicitly. It’s probably an under-discussed topic.
So these days, and by these days I mean for the last 17 years, if one of these links is a massive call to action and another is in the footer, like a privacy policy link often is, then Google will apply some sense and estimate the chance people click on each one. Remember, Google was trying to figure out probabilities here. So it might split this 0.9 and 0.1. The shares still have to add up to 1, but they’re split in a more reasonable fashion. They were doing that a long time ago, and they’ve probably gotten very, very good at it by now.
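And here is a small sketch of that reasonable-surfer idea under the same toy model: instead of an even split, each link’s share is weighted by an estimated click probability. The 0.9/0.1 weights echo the example above; how Google actually estimates such weights is not public.

```python
# Reasonable-surfer-style split: links are weighted by estimated click
# probability rather than split evenly. The weights here are invented.

DAMPING = 0.85

def weighted_shares(links: dict[str, float], value: float = 1.0) -> dict[str, float]:
    """links maps target -> estimated click probability (summing to 1)."""
    assert abs(sum(links.values()) - 1.0) < 1e-9
    return {target: DAMPING * value * weight for target, weight in links.items()}

print(weighted_shares({"big-call-to-action": 0.9, "footer-privacy-link": 0.1}))
# {'big-call-to-action': ~0.765, 'footer-privacy-link': ~0.085}
```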
Noindex is an interesting one, because traditionally you would think it has nothing to do with PageRank. A noindex tag just means this page at the bottom should never show up in search results, which is fine. There are some valid reasons to do that. Maybe you’re worried that it will show up for a query that something else on your site is trying to rank for, or maybe it contains sensitive information, or something like that. Okay, fine. However, when you put a noindex tag on something, Google eventually stops crawling it. Everyone sort of intuitively knew all the pieces of this puzzle, but Google only acknowledged this behavior a couple of years ago.
So Google eventually stops crawling it, and when Google stops crawling it, it stops passing PageRank. So noindex, follow, which we used to think was quite a good thing to do for something like an HTML sitemap page, clearly you don’t want an HTML sitemap page showing up in search results, it’s a poor reflection on your site and not a good UX, but it seemed like a good way to pass equity through to a bunch of deep pages. It turns out probably not. In the long run, it was equivalent to that worst-case scenario, page-level nofollow, that we talked about earlier. And again, this is probably why noindex is flagged as an issue in tools like Moz Pro, although often it’s not well explained or understood.
My pet theory on how links work now is that, at this stage, they’re no longer a popularity proxy, because there are better ways of measuring that, but a proxy for brand, a signal that a brand is frequently cited. Citation and link are often used synonymously in this industry, so that kind of makes sense. However, once you actually start ranking in the top 5 or 10, my experience is that links become less and less relevant the more competitive a position you’re in, because Google has increasingly better data to figure out whether people want to click on you or not. This is some data from 2009, contrasting ranking correlations in positions 6 to 10 versus positions 1 to 5. Basically, both brand and link factors become less relevant, or at least the easily measured versions do, which again supports the theory that the higher up you rank, the more bespoke and user-signal-based ranking might become.
This is some older data, where I looked at the extent to which you can use Domain Authority to predict rankings, which is the blue bar; the extent to which you can use branded search volume to predict rankings, which is the green bar; and the extent to which you can use a model containing both, which is not really any better than branded search volume alone. This is obviously simplified and flawed data, but it is some evidence toward the hypothesis that links are used as a brand proxy.
MARKETING
AI driving an exponential increase in marketing technology solutions

The martech landscape is expanding and AI is the prime driving force. That’s the topline news from the “Martech 2024” report released today. And, while that will get the headline, the report contains much more.
Since the release of the most recent Martech Landscape in May 2023, 2,042 new marketing technology tools have surfaced, bringing the total to 13,080 — an 18.5% increase. Of those, 1,498 (73%) were AI-based.

“But where did it land?” said Frans Riemersma of Martech Tribe during a joint video conference call with Scott Brinker of ChiefMartec and HubSpot. “And the usual suspect, of course, is content. But the truth is you can build an empire with all the genAI that has been surfacing — and by an empire, I mean, of course, a business.”
Content tools accounted for 34% of all the new AI tools, far ahead of video, the second-place category, which had only 4.85%. U.S. companies were responsible for 61% of these tools — not surprising given that most of the generative AI dynamos, like OpenAI, are based here. Next up was the U.K. at 5.7%, but third place was a big surprise: Iceland — with a population of 373,000 — launched 4.6% of all AI martech tools. That’s significantly ahead of fourth place India (3.5%), whose population is 1.4 billion and which has a significant tech industry.
Dig deeper: 3 ways email marketers should actually use AI
The global development of these tools shows the desire for solutions that natively understand the place they are being used.
“These regional products in their particular country…they’re fantastic,” said Brinker. “They’re loved, and part of it is because they understand the culture, they’ve got the right thing in the language, the support is in that language.”
Now that we’ve looked at the headline stuff, let’s take a deep dive into the fascinating body of the report.
The report: A deeper dive
Marketing technology “is a study in contradictions,” according to Brinker and Riemersma.
In the new report they embrace these contradictions, telling readers that, while they support “discipline and fiscal responsibility” in martech management, failure to innovate might mean “missing out on opportunities for competitive advantage.” By all means, edit your stack meticulously to ensure it meets business value use cases — but sure, spend 5-10% of your time playing with “cool” new tools that don’t yet have a use case. That seems like a lot of time.
Similarly, while you mustn’t be “carried away” by new technology hype cycles, you mustn’t ignore them either. You need to make “deliberate choices” in the realm of technological change, but be agile about implementing them. Be excited by martech innovation, in other words, but be sensible about it.
The growing landscape
Consolidation of the martech space is not in sight, Brinker and Riemersma say. Despite many mergers and acquisitions, and a steadily increasing number of bankruptcies and dissolutions, the exponentially increasing launch of new start-ups powers continuing growth.
It should be observed, of course, that this is almost entirely a cloud-based, subscription-based commercial space. To launch a martech start-up doesn’t require manufacturing, storage and distribution capabilities, or necessarily a workforce; it just requires uploading an app to the cloud. That is surely one reason new start-ups appear at such a startling rate.
Dig deeper: AI ad spending has skyrocketed this year
As the authors admit, “(i)f we measure by revenue and/or install base, the graph of all martech companies is a ‘long tail’ distribution.” What’s more, focus on the 200 or so leading companies in the space and consolidation can certainly be seen.
Long-tail tools are certainly not under-utilized, however. Based on a survey of over 1,000 real-world stacks, the report finds long-tail tools constitute about half of the solutions portfolios — a proportion that has remained fairly consistent since 2017. The authors see long-tail adoption where users perceive feature gaps — or subpar feature performance — in their core solutions.
Composability and aggregation
The other two trends covered in detail in the report are composability and aggregation. In brief, a composable view of a martech stack means seeing it as a collection of features and functions rather than a collection of software products. A composable “architecture” is one where apps, workflows, customer experiences, etc., are developed using features of multiple products to serve a specific use case.
Indeed, some martech vendors are now describing their own offerings as composable, meaning that their proprietary features are designed to be used in tandem with third-party solutions that integrate with them. This is an evolution of the core-suite-plus-app-marketplace framework.
That framework is what Brinker and Riemersma refer to as “vertical aggregation.” “Horizontal aggregation,” they write, is “a newer model” where aggregation of software is seen not around certain business functions (marketing, sales, etc.) but around a layer of the tech stack. An obvious example is the data layer, fed from numerous sources and consumed by a range of applications. They correctly observe that this has been an important trend over the past year.
Build it yourself
Finally, and consistent with Brinker’s long-time advocacy for the citizen developer, the report detects a nascent trend towards teams creating their own software — a trend that will doubtless be accelerated by support from AI.
So far, the apps that are being created internally may be no more than “simple workflows and automations.” But come the day that app development is so democratized that it will be available to a wide range of users, the software will be a “reflection of the way they want their company to operate and the experiences they want to deliver to customers. This will be a powerful dimension for competitive advantage.”
Constantine von Hoffman contributed to this report.
MARKETING
Mastering The Laws of Marketing in Madness


Navigating through the world of business can be chaotic. At the time of this publication in November 2023, global economic growth is expected to remain weak for an undefined amount of time.
However, certain rules of marketing remain steadfast to guide businesses towards success in any environment. These universal laws are the anchors that keep a business steady, helping it thrive amidst uncertainty and change.
In this guide, we’ll explore three laws that have proven to be the cornerstones of successful marketing. These are practical, tried-and-tested approaches that have empowered businesses to overcome challenges and flourish, regardless of external conditions. By mastering these principles, businesses can turn adversities into opportunities, ensuring growth and resilience in any market landscape. Let’s uncover these essential laws that pave the way to success in the unpredictable world of business marketing. Oh yeah, and don’t forget to integrate these insights into your career. Follow the implementation steps!
Law 1: Success in Marketing is a Marathon, Not a Sprint
Navigating the tumultuous seas of digital marketing necessitates a steadfast ship, fortified by a strategic long-term vision. It’s a marathon, not a sprint.
Take Apple, for instance. The late ’90s saw them on the brink of bankruptcy. Instead of grasping at quick, temporary fixes, Apple anchored themselves in a long-term vision. A vision that didn’t just stop at survival, but aimed for revolutionary contributions, resulting in groundbreaking products like the iPod, iPhone, and iPad.
In a landscape where immediate gains often allure businesses, it’s essential to remember that these are transient. A focus merely on the immediate returns leaves businesses scurrying on a hamster wheel, chasing after fleeting successes, but never really moving forward.


A long-term vision, however, acts as the north star, guiding businesses through immediate challenges while ensuring sustainable success and consistent growth over time.
Consider This Analogy:
Building a business is like growing a tree. Initially, it requires nurturing, patience, and consistent care. But with time, the tree grows, becoming strong and robust, offering shade and fruits—transforming the landscape. The same goes for business. A vision, perseverance, and a long-term strategy are the nutrients that allow it to flourish, creating a sustainable presence in the market.
Implementation Steps:
- Begin by planning a content calendar focused on delivering consistent value over the next six months.
- Ensure regular reviews and necessary adjustments to your long-term goals, keeping pace with evolving market trends and demands.
- And don’t forget the foundation—invest in robust systems and ongoing training, laying down strong roots for sustainable success in the ever-changing digital marketing landscape.
Law 2: Survey, Listen, and Serve
Effective marketing hinges on understanding and responding to the customer’s needs and preferences. A robust, customer-centric approach helps in shaping products and services that resonate with the audience, enhancing overall satisfaction and loyalty.
Take Netflix, for instance. Netflix’s evolution from a DVD rental company to a streaming giant is a compelling illustration of a customer-centric approach.
Their transition wasn’t just a technological upgrade; it was a strategic shift informed by attentively listening to customer preferences and viewing habits. Netflix succeeded while competitors such as Blockbuster had their blinders on.
Here are some keystone insights when considering how to Survey, Listen, and Serve…
Customer Satisfaction & Loyalty:
Surveying customers is essential for gauging their satisfaction. When customers feel heard and valued, it fosters loyalty, turning one-time buyers into repeat customers. Through customer surveys, businesses can receive direct feedback, helping to identify areas of improvement, enhancing overall customer satisfaction.
Engagement:
Engaging customers through surveys not only garners essential feedback but also makes customers feel valued and involved. It cultivates a relationship where customers feel that their opinions are appreciated and considered, enhancing their connection and engagement with the brand.
Product & Service Enhancement:
Surveys can unveil insightful customer feedback regarding products and services. This information is crucial for making necessary adjustments and innovations, ensuring that offerings remain aligned with customer needs and expectations.
Data Collection:
Surveys are instrumental in collecting demographic information. Understanding the demographic composition of a customer base is crucial for tailoring marketing strategies, ensuring they resonate well with the target audience.
Operational Efficiency:
Customer feedback can also shed light on a company’s operational aspects, such as customer service and website usability. Such insights are invaluable for making necessary enhancements, improving the overall customer experience.
Benchmarking:
Consistent surveying allows for effective benchmarking, enabling businesses to track performance over time, assess the impact of implemented changes, and make data-driven strategic decisions.
Implementation Steps:
- Regularly incorporate customer feedback mechanisms like surveys and direct interactions to remain attuned to customer needs and preferences.
- Continuously refine and adjust offerings based on customer feedback, ensuring products and services evolve in alignment with customer expectations.
In conclusion, adopting a customer-centric approach, symbolized by surveying, listening, and serving, is indispensable for nurturing customer relationships, driving loyalty, and ensuring sustained business success.
Law 3: Build Trust in Every Interaction
In a world cluttered with countless competitors vying for your prospects’ attention, standing out is about more than just having a great product or service. It’s about connecting authentically and building relationships rooted in trust and understanding. It’s this foundational trust that transforms casual customers into loyal advocates, ensuring that your business isn’t just seen but truly resonates and remains memorable.


For instance, let’s talk about Oprah! Through vulnerability and honest connection, Oprah Winfrey didn’t just build an audience; she cultivated a community. By sharing, listening, and interacting genuinely, she created a media landscape where trust and respect flourished. Oprah was known for moving her audience, and even her guests, to tears live on air. She had a natural ability to build instant trust.
Here are some keystone insights when considering how to develop and maintain trust…
The Unseen Fast-Track
Trust is an unseen accelerator. It simplifies decisions, clears doubts, and fast-forwards the customer journey, turning curiosity into conviction and interest into investment.
The Emotional Guardrail
Trust is like a safety net or a warm embrace, making customers feel valued, understood, and cared for. It nurtures a positive environment, encouraging customers to return, not out of necessity, but a genuine affinity towards the brand.
Implementation Steps:
- Real Stories: Share testimonials and experiences, both shiny and shaded, to build credibility and show authenticity.
- Open Conversation: Encourage and welcome customer feedback and discussions, facilitating a two-way conversation that fosters understanding and improvement.
- Community Engagement: Actively participate in community and industry events, and align your brand with genuine causes and values, promoting real connections and trust.
Navigating through this law involves cultivating a space where authenticity leads, trust blossoms, and genuine relationships flourish, engraving a memorable brand story in the hearts and minds of the customers.
Guarantee Your Success With These Foundational Laws
Navigating through the world of business is a demanding odyssey that calls for more than just adaptability and innovation—it requires a solid foundation built on timeless principles. In our exploration, we have just unraveled three indispensable laws that stand as pillars supporting the edifice of sustained marketing success, enabling businesses to sail confidently through the ever-shifting seas of the marketplace.
Law 1: “Success in Marketing is a Marathon, Not a Sprint,” advocates for the cultivation of a long-term vision. It is about nurturing a resilient mindset focused on enduring success rather than transient achievements. Like a marathon runner who paces themselves for the long haul, businesses must strategize, persevere, and adapt, ensuring sustained growth and innovation. The embodiment of this law is seen in enterprises like Apple, whose evolutionary journey is a testament to the power of persistent vision and continual reinvention.
Law 2: “Survey, Listen, and Serve,” delineates the roadmap to a business model deeply intertwined with customer insights and responsiveness. This law emphasizes the essence of customer-centricity, urging businesses to align their strategies and offerings with the preferences and expectations of their audiences. It’s a call to attentively listen, actively engage, and meticulously tailor offerings to resonate with customer needs, forging paths to enhanced satisfaction and loyalty.
Law 3: “Build Trust in Every Interaction,” underscores the significance of building genuine, trust-laden relationships with customers. It champions the cultivation of a brand personality that resonates with authenticity, fostering connections marked by trust and mutual respect. This law navigates businesses towards establishing themselves as reliable entities that customers can resonate with, rely on, and return to, enriching the customer journey with consistency and sincerity.
These pivotal laws form the cornerstone upon which businesses can build strategies that withstand the tests of market volatility, competition, and evolution. They stand as unwavering beacons guiding enterprises towards avenues marked by not just profitability, but also a legacy of value, integrity, and impactful contributions to the marketplace. Armed with these foundational laws, businesses are empowered to navigate the multifaceted realms of the business landscape with confidence, clarity, and a strategic vision poised for lasting success and remarkable achievements.
Oh yeah! And do you know Newton’s Law? The law of inertia, also known as Newton’s first law of motion, states that an object at rest will stay at rest, and an object in motion will stay in motion… The choice is yours. Take action and integrate these laws. Get in motion!
MARKETING
Intro to Amazon Non-endemic Advertising: Benefits & Examples

Amazon has rewritten the rules of advertising with its move into non-endemic retail media advertising. Advertising on Amazon has traditionally focused on brands and products directly sold on the platform. However, a new trend is emerging – the rise of non-endemic advertising on this booming marketplace. In this article, we’ll dive into the concept of non-endemic ads, their significance, and the benefits they offer to advertisers. This strategic shift is opening the floodgates for advertisers in previously overlooked industries.
While endemic brands are those with direct competitors on the platform, non-endemic advertisers bring a diverse range of services to Amazon’s vast audience. The move toward non-endemic advertising signifies Amazon’s intention to leverage its extensive data and audience segments to benefit a broader spectrum of advertisers.
Endemic vs. Non-Endemic Advertising
Let’s start by breaking down the major differences between endemic advertising and non-endemic advertising…
Endemic Advertising
Endemic advertising revolves around promoting products available on the Amazon platform. With this type of promotion, advertisers use retail media data to promote products that are sold at the retailer.
Non-Endemic Advertising
In contrast, non-endemic advertising ventures beyond the confines of products sold on Amazon. It encompasses industries such as insurance, finance, and services like lawn care. If a brand is offering a product or service that doesn’t fit under one of the categories that Amazon sells, it’s considered non-endemic. Advertisers selling products and services outside of Amazon and linking directly to their own site are utilizing Amazon’s DSP and their data/audience segments to target new and relevant customers.
7 Benefits of Running Non-Endemic Ad Campaigns
Running non-endemic ad campaigns on Amazon provides a wide variety of benefits like:
Access to Amazon’s Proprietary Data: Harnessing Amazon’s robust first-party data provides advertisers with valuable insights into consumer behavior and purchasing patterns. This data-driven approach enables more targeted and effective campaigns.
Increased Brand Awareness and Revenue Streams: Non-endemic advertising allows brands to extend their reach beyond their typical audience. By leveraging Amazon’s platform and data, advertisers can build brand awareness among users who may not have been exposed to their products or services otherwise. For non-endemic brands that meet specific criteria, there’s an opportunity to serve ads directly on the Amazon platform. This can lead to exposure to the millions of users shopping on Amazon daily, potentially opening up new revenue streams for these brands.
No Minimum Spend for Non-DSP Campaigns: Non-endemic advertisers can kickstart their advertising journey on Amazon without the burden of a minimum spend requirement, ensuring accessibility for a diverse range of brands.
Amazon DSP Capabilities: Leveraging the Amazon DSP (Demand-Side Platform) enhances campaign capabilities. It enables programmatic media buys, advanced audience targeting, and access to a variety of ad formats.
Connect with Primed-to-Purchase Customers: Amazon’s extensive customer base offers a unique opportunity for non-endemic advertisers to connect with customers actively seeking relevant products or services.
Enhanced Targeting and Audience Segmentation: Utilizing Amazon’s vast dataset, advertisers can create highly specific audience segments. This enhanced targeting helps advertisers reach relevant customers, resulting in increased website traffic, lead generation, and improved conversion rates.
Brand Defense: By utilizing these data segments and inventory, some brands are able to bid for placements where their competitors would otherwise be. This also gives brands a chance to be present on the same page as competitor brands, helping them conquest competitors’ customers.
How to Start Running Non-Endemic Ads on Amazon
Ready to start running non-endemic ads on Amazon? Start with these essential steps:
Familiarize Yourself with Amazon Ads and DSP: Understand the capabilities of Amazon Ads and DSP, exploring their benefits and limitations to make informed decisions.
Look Into Amazon Performance Plus: Amazon Performance Plus lets you model your audiences based on user behavior from the Amazon Ad Tag; the process then finds lookalike Amazon shoppers with a higher propensity to convert.
“Amazon Performance Plus has the ability to be Amazon’s top performing ad product. With the machine learning behind the audience cohorts we are seeing incremental audiences converting on D2C websites and beating CPA goals by as much as 50%.”
– Robert Avellino, VP of Retail Media Partnerships at Tinuiti
Understand Targeting Capabilities: Gain insights into the various targeting options available for Amazon ads, including behavioral, contextual, and demographic targeting.
Command Amazon’s Data: Utilize granular data to test and learn from campaign outcomes, optimizing strategies based on real-time insights for maximum effectiveness.
Work with an Agency: For those new to non-endemic advertising on Amazon, it’s essential to define clear goals and identify target audiences. Working with an agency can provide valuable guidance in navigating the nuances of non-endemic advertising. Understanding both the audience to be reached and the core audience for the brand sets the stage for a successful non-endemic advertising campaign.
Conclusion
Amazon’s venture into non-endemic advertising reshapes the advertising landscape, providing new opportunities for brands beyond the traditional ecommerce sphere. The blend of non-endemic campaigns with Amazon’s extensive audience and data creates a cohesive option for advertisers seeking to diversify strategies and explore new revenue streams. As this trend evolves, staying informed about the latest features and possibilities within Amazon’s non-endemic advertising ecosystem is crucial for brands looking to stay ahead in the dynamic world of digital advertising.
We’ll continue to keep you updated on all things Amazon, but if you’re looking to learn more about advertising on the platform, check out our Amazon Services page or contact us today for more information.