MARKETING
SEO in Real Life: Harnessing Visual Search for Optimization Opportunities
The author’s views are entirely his or her own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.
The most exciting thing about visual search is that it’s becoming a highly accessible way for users to interpret the real world, in real time, as they see it. Rather than being a tool for passive observation, the camera phone is now a primary resource for knowledge and understanding in daily life.
Users are searching with their own, unique photos to discover content. This includes interactions with products, brand experiences, stores, and employees, and it means that SEO can and should be taken into consideration for a number of real-world situations, including sponsorships, merchandise, and in-person brand experiences.
Though SEOs have little control over which photos people take, we can optimize our brand presentation to ensure we are easily discoverable by visual search tools. By prioritizing the presence of high impact visual search elements and coordinating online SEO with offline branding, businesses of all sizes can see results.
What is visual search?
Sometimes referred to as search-what-you-see, in the context of SEO, visual search is the act of querying a search engine with a photo rather than with text. To surface results, search engines and digital platforms use AI and visual recognition technology to identify elements in the image and supply the user with relevant information.
Though Google’s visual search tools are getting a lot of attention at the moment, Google isn’t the only tech team working on visual search. Pinterest has been at the forefront of this space for many years, and several other platforms now offer visual search tools of their own.
In the last year, Google has spoken extensively about their visual search capabilities, hinging a number of their search improvements on Google Lens and adding more and more functionality all the time. As a result, year-on-year usage of Google Lens has increased threefold, with an estimated 8 billion Google Lens searches taking place each month.
Though there are many lessons to be learned from the wide range of visual search tools, which each have their own data sets, for the purpose of this article we will be looking at visual search on Google Lens and Search.
Are visual search and image search SEO the same?
No, visual search optimization is not exactly the same as image search optimization. Image search optimization forms part of the visual search optimization process, but they’re not interchangeable.
Image search SEO
With image search SEO, you should prioritize helping images surface when users enter text-based queries. To do this, your images should follow image SEO best practices like the ones below; a markup sketch at the end of this subsection shows a few of them in practice:
- Modern file formats
- Alt text
- Alt tags
- Relevant file names
- Schema markup
All of this helps Google to return an image search result for a text based query, but one of the main challenges with this approach is that it requires the user to know which term to enter.
For instance, with the query dinosaur with horns, an image search will return a few different dinosaur topic filters and lots of different images. To find the best result, I would need to filter and refine the query significantly.
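To make those best practices concrete, here is a minimal sketch of image structured data expressed as JSON-LD and generated with Python. The file name, URL, and caption are hypothetical placeholders rather than values from a real page.

```python
import json

# Hypothetical ImageObject structured data for a product photo.
# A modern file format (WebP), a descriptive file name, and a caption
# consistent with the on-page alt text all support the practices above.
image_schema = {
    "@context": "https://schema.org",
    "@type": "ImageObject",
    "contentUrl": "https://example.com/images/horned-dinosaur-toy.webp",
    "name": "Horned dinosaur toy",
    "caption": "A quadrupedal toy dinosaur with horns on its face",
}

# Embed the output in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(image_schema, indent=2))
```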
Visual search SEO
With visual search, the image is the query, meaning that I can take a photo of a toy dinosaur with horns, search with Google Lens, then Google refines the query based on what it can see from the image.
When you compare the two search results, the SERP for the visual search is a better match for the initial image query because there are visual cues within the image. So I am only seeing results for a dinosaur that is quadrupedal and has horns only on the face, not the frill.
From a user perspective, this is great because I didn’t have to type anything and I got a helpful result. And from Google’s perspective, this is also more efficient because they can assess the photo and decide which element to filter for first in order to get to the best SERP.
The standard image optimizations form part of what Google considers in order to surface relevant results, but if you stop there, you don’t get the full picture.
Which content elements are best interpreted in visual search?
Visual search tools identify objects, text, and images, but certain elements are easier to identify than others. When users carry out a visual search, Google taps into multiple data sources to satisfy the query.
The Knowledge Graph, Vision AI, Google Maps, and other sources combine to surface search results, but in particular, Google’s tools have a few priority elements. When these elements are present in a photo, Google can sort, identify, and/or visually match similar content to return results:
- Landmarks are identified visually but are also connected to their physical location on Google Maps, meaning that local businesses or business owners should use imagery to demonstrate their location.
- Logos are interpreted in their entirety, rather than as single letters. So even without any text, Google can understand that the swoosh means Nike. This data comes from the logos in knowledge panels, website structured data, Google Business Profile, Google Merchant, and other sources, so they should all align (see the structured data sketch after the table below).
- Knowledge Graph entities are used to tag and categorize images and have a significant impact on which SERP is displayed for a visual search. Google recognizes around 5 billion Knowledge Graph entities, so it is worth considering which ones are most relevant to your brand and ensuring that they are visually represented on your site.
- Text is extracted from images via optical character recognition (OCR), which has some limitations — not all languages are recognized, nor are backwards letters. So if your users regularly search photos of printed menus or other printed text, you should consider the readability of the fonts (or the handwriting on specials boards) you use.
- Faces are interpreted for sentiment, but the quantity of faces is also taken into account, meaning that businesses that serve large groups of people — like event venues or cultural institutions — would do well to include images that demonstrate this.
| Visual Search Element | Corresponding Online Activity | Priority Verticals |
| --- | --- | --- |
| Landmarks | Website images, Google Maps, Google Business Profile | Tourism, restaurants, cultural institutions, local businesses |
| Logo | Website images, website structured data, Google Merchant, Google Business Profile, Wikipedia, knowledge panel | All |
| Knowledge Graph entities | Website images, image structured data, Google Business Profile | Ecommerce, events, cultural institutions |
| Text | Website copy, Google Business Profile | All |
| Faces | Website images, Google Business Profile | Events, tourism, cultural institutions |
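As a companion to the logo row in the table above, here is a minimal sketch of Organization structured data declaring a canonical logo, again as Python-generated JSON-LD; the brand name and URLs are hypothetical placeholders.

```python
import json

# Hypothetical Organization markup. The idea, per the section above, is
# that the logo URL should serve the same logo file used in the Google
# Business Profile, Google Merchant, and knowledge panel, so that every
# source Google reads agrees on the same mark.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://example.com",
    "logo": "https://example.com/assets/logo.png",
}

print(json.dumps(org_schema, indent=2))
```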
How to optimize real world spaces for visual search
Just as standard SEO should be focused on meeting and anticipating customer needs, visual search SEO requires awareness of how customers interact with products and services in real-world spaces. This means SEOs should apply the same attention to UGC that one would use for keyword research. To that end, I would argue we should also think about consciously applying optimizations to the potential content of these images.
Optimize sponsorship with unobstructed placements
This might seem like a no-brainer, but in busy sponsorship spaces it can sometimes be a challenge. As an example, let’s take this photo from a visit to the Staples Center a few years ago.
Like any sports arena, this is filled to the brim with sponsorship endorsements on the court, the basket, and around the venue.
But when I run a visual search assessment for logos, the only one that can clearly be identified is the Kia logo on the jumbotron.
This isn’t because their logo is so distinct or unique; there is, after all, another Kia logo under the basketball hoop. Rather, it is because the jumbotron placement is clean in terms of composition, with lots of negative space around the logo and fewer identifiable entities in the immediate vicinity.
Within the wider arena, many of the other sponsorship placements are being read as text, including Kia’s logo below the hoop. This has some value for these brands, but since text recognition doesn’t always complete the word, the results can be inconsistent.
So what does any of this have to do with SEO?
Well, Google Image Search now includes results that use visual recognition, independent of text cues. For the query kia staples center, two of the top five image results do not have the word kia in the copy, alt text, or alt tags of the web pages they are sourced from. So visual search is impacting rankings here, and with Google Images accounting for roughly 20% of online searches, this can have a significant impact on search visibility.
What steps should you take to SEO your sponsorships?
Whether it’s the major leagues or the local bowling league, if you are sponsoring something that is likely to be photographed extensively, you should take the following steps to get the most benefit from visual search:
- Ensure that your real-life sponsorship placement is in an unobscured location
- Use the same logo in real life that is in your schema, GBP, and knowledge panel
- Get a placement with good lighting and high-contrast brand colors
- Don’t rely on “light up” logos or flags that have inconsistent visibility on camera phones
You should also ensure that you’re aligning your real-life presence with your digital activity. Include images of the sponsorship display on your website so that you can surface for relevant queries. If you dedicate a blog post to the sponsorship activity that includes relevant images, image search optimizations, and copy, you increase your chances of outranking other content and bringing those clicks to your site.
Optimizing merch & uniforms for search
When creating merchandise and uniforms, visual discoverability for search should be a priority: users can search photos of promotional merch and of team members in a number of ways, and for an indefinite period of time.
Add text and/or logos
For instance, from my own camera roll, I have a few photos that can be categorized via Google Photos’ machine-learning-powered image search with the query nasa. Two of these photos include the word “NASA” and the others include the logo.
Oddly enough, though, the photo of my Women of NASA LEGO set does not surface for this query. It shows for lego but not for nasa. Looking closely at the item itself, I can see that neither the NASA logo nor the text have been included in the design of the set.
Adding relevant text and/or logos to this set would have optimized this merchandise for both brands.
Stick to relevant brand colors
And since Google’s visual search AI is also able to discern brand colors, you should also prioritize merchandise that is in keeping with your brand colors. T-shirts and merch that deviate from your core color scheme will be less likely to make Visual Matches when users search via Google Lens.
In the example above, event merchandise that was created outside of the core brand colors of red, black, and white was much less recognizable than the stationery in typical brand colors.
Focus on in-person brand experiences
Creating experiences with customers in store and at events can be a great way to build brand relationships. It’s possible to leverage these activities for search if you take an SEO-centric approach.
Reduce competition
Let’s consider this image from a promotional experience in Las Vegas for Lyft. As a user, I enjoyed this immensely, so much so that I took a photo.
Though the Viva Lyft Vegas event was created by the rideshare company, in terms of visual search, Pabst are genuinely taking the blue ribbon, as they are the main entity identified in this query. But why?
First, Pabst has claimed their knowledge panel while Lyft has not, meaning that Lyft is less recognizable as a visual entity because it is less defined as an entity.
Second, though it does not have a Google Maps entry, the Las Vegas PBR sign has had landmark-esque treatment since it was installed, with features in The Neon Museum and a UNLV Neon Survey. All of this to say that, in this context, Lyft is being upstaged.
So to create a more SEO-friendly promotional space, they could have laid the groundwork by claiming their knowledge panel and removing visual search competitors from the viewable space to make sure all eyes were on them.
Encourage optimized user-generated content
Sticking to Las Vegas, here is a typical touristy photo of me with friends outside the Excalibur Hotel:
And when I say that it’s typical, that’s not conjecture. A quick visual search reveals many other social media posts and websites with similar images.
This is what I refer to as that picture. You know the kind of high-occurrence UGC photo: under the castle at the entrance to Disneyland, or even the pink wall at Paul Smith’s on Melrose Ave. These are the photos that everyone takes.
Can you SEO these photos for visual search? Yes, I believe you can in two ways:
- Encourage people to take photos in certain places that you know, or have designed, to include relevant entities, text, logos, and/or landmarks in the viewline. You can do this by declaring an area a scenic viewpoint or creating a photo-friendly, dare I say “Instagrammable”, area in your store or venue.
- Ensure frequently photographed mobile brand representations (e.g., mascots and/or vehicles) are easily recognizable via visual search. Where applicable, you should also claim their knowledge panels.
Once you’ve taken these steps, create dedicated content on your website with images that can serve as a “visual match” to this high frequency UGC. Include relevant copy and image search optimizations to demonstrate authority and make the most of this visibility.
How does this change SEO?
The notion of bringing visual search considerations to real world spaces may seem initially daunting, but this is also an opportunity for businesses of all sizes to consolidate brand identities in an effective way. Those working in SEO should coordinate efforts with PR, branding, and sponsorship teams to capture visual search traffic for brand wins.
MARKETING
YouTube Ad Specs, Sizes, and Examples [2024 Update]
Introduction
With billions of users each month, YouTube is the world’s second-largest search engine and the top website for video content. This makes it a great place for advertising. To succeed, advertisers need to follow the correct YouTube ad specifications. These rules help your ad reach more viewers, increasing the chance of gaining new customers and boosting brand awareness.
Types of YouTube Ads
Video Ads
- Description: These play before, during, or after a YouTube video on computers or mobile devices.
- Types:
- In-stream ads: Can be skippable or non-skippable.
- Bumper ads: Non-skippable, short ads that play before, during, or after a video.
Display Ads
- Description: These appear in different spots on YouTube and usually use text or static images.
- Note: YouTube does not support display image ads directly on its app, but these can be targeted to YouTube.com through Google Display Network (GDN).
Companion Banners
- Description: Appears to the right of the YouTube player on desktop.
- Requirement: Must be purchased alongside In-stream ads, Bumper ads, or In-feed ads.
In-feed Ads
- Description: Resemble videos with images, headlines, and text. They link to a public or unlisted YouTube video.
Outstream Ads
- Description: Mobile-only video ads that play outside of YouTube, on websites and apps within the Google video partner network.
Masthead Ads
- Description: Premium, high-visibility banner ads displayed at the top of the YouTube homepage for both desktop and mobile users.
YouTube Ad Specs by Type
Skippable In-stream Video Ads
- Placement: Before, during, or after a YouTube video.
- Resolution:
- Horizontal: 1920 x 1080px
- Vertical: 1080 x 1920px
- Square: 1080 x 1080px
- Aspect Ratio:
- Horizontal: 16:9
- Vertical: 9:16
- Square: 1:1
- Length:
- Awareness: 15-20 seconds
- Consideration: 2-3 minutes
- Action: 15-20 seconds
Non-skippable In-stream Video Ads
- Description: Must be watched completely before the main video.
- Length: 15 seconds (or 20 seconds in certain markets).
- Resolution:
- Horizontal: 1920 x 1080px
- Vertical: 1080 x 1920px
- Square: 1080 x 1080px
- Aspect Ratio:
- Horizontal: 16:9
- Vertical: 9:16
- Square: 1:1
Bumper Ads
- Length: Maximum 6 seconds.
- File Format: MP4, QuickTime, AVI, ASF, Windows Media, or MPEG.
- Resolution:
- 16:9: 640 x 360px
- 4:3: 480 x 360px
In-feed Ads
- Description: Show alongside YouTube content, like search results or the Home feed.
- Resolution:
- Horizontal: 1920 x 1080px
- Vertical: 1080 x 1920px
- Square: 1080 x 1080px
- Aspect Ratio:
- Horizontal: 16:9
- Square: 1:1
- Length:
- Awareness: 15-20 seconds
- Consideration: 2-3 minutes
- Headline/Description:
- Headline: Up to 2 lines, 40 characters per line
- Description: Up to 2 lines, 35 characters per line
Display Ads
- Description: Static images or animated media that appear on YouTube next to video suggestions, in search results, or on the homepage.
- Image Size: 300 x 60px.
- File Type: GIF, JPG, PNG.
- File Size: Max 150KB.
- Max Animation Length: 30 seconds.
Outstream Ads
- Description: Mobile-only video ads that appear on websites and apps within the Google video partner network, not on YouTube itself.
- Logo Specs:
- Square: 1:1 (200 x 200px).
- File Type: JPG, GIF, PNG.
- Max Size: 200KB.
Masthead Ads
- Description: High-visibility ads at the top of the YouTube homepage.
- Resolution: 1920 x 1080 or higher.
- File Type: JPG or PNG (without transparency).
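Before uploading, you can sanity-check a video file against the resolutions listed above by reading its stream metadata. Here is a minimal sketch in Python that shells out to ffprobe (it assumes FFmpeg is installed); the accepted sizes reflect the in-stream resolutions from this article, so adjust them for other formats.

```python
import json
import subprocess
import sys

# Resolutions from the in-stream specs above: horizontal, vertical, square.
ACCEPTED_SIZES = {(1920, 1080), (1080, 1920), (1080, 1080)}

def probe_size(path: str) -> tuple[int, int]:
    """Read the width/height of the first video stream using ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    stream = json.loads(result.stdout)["streams"][0]
    return stream["width"], stream["height"]

if __name__ == "__main__":
    width, height = probe_size(sys.argv[1])
    if (width, height) in ACCEPTED_SIZES:
        print(f"{width}x{height}: matches a listed in-stream resolution")
    else:
        print(f"{width}x{height}: not one of the listed resolutions")
```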
Conclusion
YouTube offers a variety of ad formats to reach audiences effectively in 2024. Whether you want to build brand awareness, drive conversions, or target specific demographics, YouTube provides a dynamic platform for your advertising needs. Always follow Google’s advertising policies and the technical ad specs to ensure your ads perform their best. Ready to start using YouTube ads? Contact us today to get started!
MARKETING
Why We Are Always ‘Clicking to Buy’, According to Psychologists
Amazon pillows.
MARKETING
A deeper dive into data, personalization and Copilots
Salesforce launched a collection of new, generative AI-related products at Connections in Chicago this week. They included new Einstein Copilots for marketers and merchants and Einstein Personalization.
To better understand not only the potential impact of the new products but also the evolving Salesforce architecture, we sat down with Bobby Jania, CMO of Marketing Cloud.
Dig deeper: Salesforce piles on the Einstein Copilots
Salesforce’s evolving architecture
It’s hard to deny that Salesforce likes coming up with new names for platforms and products (what happened to Customer 360?) and this can sometimes make the observer wonder if something is brand new, or old but with a brand new name. In particular, what exactly is Einstein 1 and how is it related to Salesforce Data Cloud?
“Data Cloud is built on the Einstein 1 platform,” Jania explained. “The Einstein 1 platform is our entire Salesforce platform, and that includes products like Sales Cloud, Service Cloud — it includes the original idea of Salesforce not just being in the cloud, but being multi-tenant.”
Data Cloud — not an acquisition, of course — was built natively on that platform. It was the first product built on Hyperforce, Salesforce’s new cloud infrastructure architecture. “Since Data Cloud was on what we now call the Einstein 1 platform from Day One, it has always natively connected to, and been able to read anything in Sales Cloud, Service Cloud [and so on]. On top of that, we can now bring in, not only structured but unstructured data.”
That’s a significant progression from the position, several years ago, when Salesforce had stitched together a platform around various acquisitions (ExactTarget, for example) that didn’t necessarily talk to each other.
“At times, what we would do is have a kind of behind-the-scenes flow where data from one product could be moved into another product,” said Jania, “but in many of those cases the data would then be in both, whereas now the data is in Data Cloud. Tableau will run natively off Data Cloud; Commerce Cloud, Service Cloud, Marketing Cloud — they’re all going to the same operational customer profile.” They’re not copying the data from Data Cloud, Jania confirmed.
Another thing to know is that it’s possible for Salesforce customers to import their own datasets into Data Cloud. “We wanted to create a federated data model,” said Jania. “If you’re using Snowflake, for example, we more or less virtually sit on your data lake. The value we add is that we will look at all your data and help you form these operational customer profiles.”
Let’s learn more about Einstein Copilot
“Copilot means that I have an assistant with me in the tool where I need to be working that contextually knows what I am trying to do and helps me at every step of the process,” Jania said.
For marketers, this might begin with a campaign brief developed with Copilot’s assistance, the identification of an audience based on the brief, and then the development of email or other content. “What’s really cool is the idea of Einstein Studio where our customers will create actions [for Copilot] that we hadn’t even thought about.”
Here’s a key insight (back to nomenclature). We reported on Copilot for marketers, Copilot for merchants, and Copilot for shoppers. It turns out, however, that there is just one Copilot, Einstein Copilot, and these are use cases. “There’s just one Copilot, we just add these for a little clarity; we’re going to talk about marketing use cases, about shoppers’ use cases. These are actions for the marketing use cases we built out of the box; you can build your own.”
It’s surely going to take a little time for marketers to learn to work easily with Copilot. “There’s always time for adoption,” Jania agreed. “What is directly connected with this is, this is my ninth Connections and this one has the most hands-on training that I’ve seen since 2014 — and a lot of that is getting people using Data Cloud, using these tools rather than just being given a demo.”
What’s new about Einstein Personalization
Salesforce Einstein has been around since 2016 and many of the use cases seem to have involved personalization in various forms. What’s new?
“Einstein Personalization is a real-time decision engine and it’s going to choose next-best-action, next-best-offer. What is new is that it’s a service now that runs natively on top of Data Cloud.” A lot of real-time decision engines need their own set of data that might actually be a subset of data. “Einstein Personalization is going to look holistically at a customer and recommend a next-best-action that could be natively surfaced in Service Cloud, Sales Cloud or Marketing Cloud.”
Finally, trust
One feature of the presentations at Connections was the reassurance that, although public LLMs like ChatGPT could be selected for application to customer data, none of that data would be retained by the LLMs. Is this just a matter of written agreements? No, not just that, said Jania.
“In the Einstein Trust Layer, all of the data, when it connects to an LLM, runs through our gateway. If there was a prompt that had personally identifiable information — a credit card number, an email address — at a minimum, all that is stripped out. The LLMs do not store the output; we store the output for auditing back in Salesforce. Any output that comes back through our gateway is logged in our system; it runs through a toxicity model; and only at the end do we put PII data back into the answer. There are real pieces beyond a handshake that this data is safe.”
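To make the Trust Layer description more tangible, here is a toy, regex-based PII masker in Python. To be clear, this is an illustrative sketch, not Salesforce’s implementation; the patterns and placeholder tokens are assumptions for demonstration only.

```python
import re

# Illustrative only: a toy version of the gateway-side PII masking the
# Trust Layer description refers to. Real systems use far more robust
# detection (e.g., named-entity recognition and reversible tokens).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(prompt: str) -> tuple[str, dict[str, list[str]]]:
    """Replace detected PII with placeholder tokens before a prompt is
    sent to an LLM, keeping the originals so they can be restored later."""
    found: dict[str, list[str]] = {}
    for label, pattern in PATTERNS.items():
        found[label] = pattern.findall(prompt)
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt, found

masked, originals = mask_pii("Refund jane@example.com, card 4111 1111 1111 1111")
print(masked)  # -> Refund [EMAIL], card [CARD]
```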