SEO
10 DuckDuckGo Facts For Digital Marketers & SEO Pros
How often do you use DuckDuckGo?
If you answered “never,” you might want to read this article.
Over the years, DuckDuckGo has redesigned itself and evolved to better meet searchers’ needs and protect their privacy.
In addition to its excellent search capabilities, DDG (DuckDuckGo) has many helpful features that can improve your search strategy while cutting the time it takes to complete research.
I’ve researched DDG and its value for digital marketers and SEO professionals.
Here, you’ll learn everything you need to know about DDG, why marketers should consider using it, and some interesting facts about the search engine.
What Is DuckDuckGo?
DuckDuckGo is a search engine that doesn’t track users, meaning it doesn’t store any information about what websites you visit.
This means that when you use DDG, no one knows who you are, where you live, what you like to search, or which sites you’ve visited – creating a private search history.
Gabriel Weinberg founded the company in 2008 and has been developing the idea ever since.
He had used Google for searching, but after seeing how much data Google collects, he decided to create a new, private search engine.
Marketers and SEO pros can use DDG in Safari, Chrome, and Firefox via a browser extension.
In addition, the search engine is now available as an app for iOS devices, Android phones, and Windows 8 tablets.
Now, let’s get into why you should consider using DDG.
Why Use DuckDuckGo?
With its unique search algorithms, DDG has become one of the most popular alternative search engines.
And while many people are familiar with the name, few understand what makes it stand out from the crowd.
DDG has been designed from the ground up to be fast, private, and secure.
The search engine uses only what it needs to deliver results, which means no tracking cookies or other unnecessary data collection.
This makes DDG one of the best privacy-focused search engines available today.
It lets you search anonymously while blocking trackers on websites you use.
DDG uses HTTPS encryption for all searches and doesn’t store IP addresses.
While DDG is comparable to Google as a search engine, the main difference is that DDG doesn’t track you the way Google does.
So, no matter how often you search, you won’t see ads based on your previous searches. Instead, you’ll see sponsored links relevant to the current topic.
This isn’t just good news for privacy advocates – it’s great news for anyone looking for quality information online.
If you want to find something specific without being tracked, DDG is a great option.
Also, if you’re trying to figure out how to incorporate SEO for DDG, you can check out this Search Engine Journal resource.
DDG allows you to block certain types of cookies, which means you can control whether third parties can track your browsing history across the Internet.
Understanding how DDG handles tracking can provide significant advantages, especially when planning marketing campaigns.
For example, ads on DDG are matched to keywords in the current search rather than a user profile, so marketers can reach potential customers at the exact moment they’re looking for related products.
So now, let’s dive into the helpful features DDG offers.
Helpful DuckDuckGo Search Features
DDG has numerous features such as image searches, location-based searches, and voice searches.
As I mentioned, the main difference between Google and DDG is that DDG does not track users through cookies or other methods.
It also does not sell any data to third parties.
DDG aims to protect the people using its service.
The company has built several features enabling it to identify potential threats without invading user privacy.
It also provides instructions for evaluating your browser add-ons so you can safely remove any unofficial and potentially harmful ones.
In addition, DDG has integrated with Apple Maps, allowing users to search for locations privately.
Some features include Safe Search, Instant Answers, and private searches. These help DDG protect its users from malicious websites, scams, and malware.
With Instant Answers, DDG uses over 100 different sources to provide answers to your queries without making you click on different websites for results.
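Instant Answers are also accessible programmatically. As a rough sketch (this assumes DuckDuckGo’s publicly documented Instant Answer API and its JSON format, which this article itself doesn’t cover), a query looks like this:

https://api.duckduckgo.com/?q=search+engine+optimization&format=json

The JSON response returns the Instant Answer content, such as a topic abstract and related topics, without loading a full results page, which can be handy for quick research scripts.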
The !Bang Syntax
One very cool feature on DDG is the !Bang Syntax function, which lets you directly search on a site from DDG without having to go to that site first.
For example, suppose you want to search for butter chicken recipes on Pinterest.
You can use DDG’s Pinterest shortcut by entering [!p butter chicken recipes] into the search bar, and it will take you straight to Pinterest’s own search results for that query.
DDG has many other shortcuts to websites such as Amazon, Twitter, LinkedIn, and Wikipedia.
To see all the shortcuts, type an exclamation mark into the search bar, and they will pop up.
Don’t worry – it is easy to use and very helpful for quick searches.
It cuts out the time spent navigating to a website before you can run your search.
If you use DDG’s browser extension or have it as your browser’s default search engine, the !bang commands also work in the address bar.
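Under the hood, a bang is just part of the query text, so the same shortcuts also work as plain search URLs. As a small illustration (assuming DDG’s standard q= query parameter, and using the Pinterest bang from above plus the well-known !w shortcut for Wikipedia):

https://duckduckgo.com/?q=!p+butter+chicken+recipes
https://duckduckgo.com/?q=!w+duckduckgo

DDG reads the bang and redirects you straight to that site’s own search results.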
DDG has many other interesting features you should check out, such as category pages, keyboard shortcuts, and Autosuggest.
If you’re not sold on the search engine yet, check out these ten DDG facts that might help to change your mind.
10 DuckDuckGo Facts
1. DDG Turned 14 In 2022
Google has become synonymous with Internet searches. Even though DDG is a relatively new search engine, the company has existed for over a decade and is still growing at an incredible rate.
2. DDG Hits 100 Million Searches Per Day
It is now one of the top 10 search engines worldwide.
In addition, the search engine hit the milestone of 100 million daily searches in 2021.
3. Over 100 Billion Searches Have Been Performed On DDG
In 2019, it broke one billion searches in a month, and by 2020, it had passed 50 billion total searches.
Due to its efficient and streamlined search capabilities, over 100 billion searches have been performed on DDG.
4. DDG Has An 11.43% Bounce Rate
While Google still ranks as the number one search engine in the U.S., DDG has worked its way up to the second leading search engine.
And DDG has a bounce rate of 11.43% compared to Google’s 28.46%.
5. DDG Employees Have Grown To 180
From its humble beginnings with founder and CEO Gabriel Weinberg running DDG by himself until 2011, the company now employs 180 people.
Additionally, the business is now profitable.
It’s a great example of how you can start small and grow into something bigger.
6. Average Of 6 Million Monthly Downloads On DDG
With more people looking to protect their data, there are an average of six million monthly downloads of DDG for both mobile and desktop use.
Since 2020, DDG has also been offered as a default search engine option on Android devices throughout the EU.
7. Average Of 3 Billion Monthly Searches Performed On DDG
More people are benefiting from DDG, which now averages three billion monthly searches.
This is because more people rely on the site for everyday searches.
8. DDG Holds 2.42% Of The Search Market In The US
In 2019, the DDG market share began to grow, starting at 1.25%, and has nearly doubled today.
DDG holds 2.42% of the search engine market shares in the US.
9. The Cost Per Click On DDG Can Be 10x Cheaper Than Google
DDG runs pay-per-click advertising like Google and Bing.
But some marketers have found DDG’s cost per click significantly cheaper than the cost per click of ads on Google, thus lowering their average cost per conversion.
With the right strategy, DDG can be a valuable opportunity for marketers and brands to increase conversions at a lower cost.
10. DDG Has An Average Rating Of 4.5 Stars
One of the best ways to determine if a platform is legitimate and worth your time is to look at its reviews.
DDG has an average rating of 4.5 stars, meaning people like using it.
With quick load times and ease for mobile users, it’s an effective search engine.
Key Takeaways
DDG is fast becoming one of the world’s most trusted and popular search engines due to its excellent privacy policy.
It is one of the top search engines for digital marketers and SEO pros because it offers a unique combination of features that help users stay safe online.
As you can see, DDG provides a wide range of tools and features that can help you optimize your digital marketing strategy, create more opportunities for organic traffic, and increase your online presence.
And despite all these benefits, DDG is an entirely free service, requiring minimal investment on your part.
So it’s time to try DDG and take your online search strategy to the next level.
Featured Image: sdecoret/Shutterstock
SEO
8% Of Automattic Employees Choose To Resign
WordPress co-founder and Automattic CEO Matt Mullenweg announced today that he offered Automattic employees the chance to resign with severance pay, and a total of 8.4 percent took him up on it. Mullenweg offered $30,000 or six months of salary, whichever is higher, with a total of 159 people accepting.
Reactions Of Automattic Employees
Given the recent controversies created by Mullenweg, one might be tempted to view the walkout as a vote of no confidence in Mullenweg. But that would be a mistake: some of the employees announcing their resignations either praised Mullenweg or simply announced their departure, while many others tweeted how happy they are to stay at Automattic.
One former employee tweeted that he was sad about recent developments but also praised Mullenweg and Automattic as an employer.
He shared:
“Today was my last day at Automattic. I spent the last 2 years building large scale ML and generative AI infra and products, and a lot of time on robotics at night and on weekends.
I’m going to spend the next month taking a break, getting married, and visiting family in Australia.
I have some really fun ideas of things to build that I’ve been storing up for a while. Now I get to build them. Get in touch if you’d like to build AI products together.”
Another former employee, Naoko Takano, a 14-year employee, an organizer of WordCamp conferences in Asia, a full-time WordPress contributor, and Open Source Project Manager at Automattic, announced on X (formerly Twitter) that today was her last day at Automattic.
She tweeted:
“Today was my last day at Automattic.
I’m actively exploring new career opportunities. If you know of any positions that align with my skills and experience!”
Naoko’s role at WordPress was working with the global WordPress community to improve contributor experiences through the Five for the Future and Mentorship programs. Five for the Future is an important WordPress program that encourages organizations to donate 5% of their resources back into WordPress. It is also one of the issues Mullenweg raised against WP Engine, asserting that the company didn’t donate enough back into the community.
Mullenweg himself had bittersweet feelings about seeing those employees go, writing in a blog post:
“It was an emotional roller coaster of a week. The day you hire someone you aren’t expecting them to resign or be fired, you’re hoping for a long and mutually beneficial relationship. Every resignation stings a bit.
However now, I feel much lighter. I’m grateful and thankful for all the people who took the offer, and even more excited to work with those who turned down $126M to stay. As the kids say, LFG!”
Read the entire announcement on Mullenweg’s blog.
Featured Image by Shutterstock/sdx15
SEO
YouTube Extends Shorts To 3 Minutes, Adds New Features
YouTube expands Shorts to 3 minutes, adds templates, AI tools, and the option to show fewer Shorts on the homepage.
- YouTube Shorts will allow 3-minute videos.
- New features include templates, enhanced remixing, and AI-generated video backgrounds.
- YouTube is adding a Shorts trends page and comment previews.
SEO
How To Stop Filter Results From Eating Crawl Budget
Today’s Ask An SEO question comes from Michal in Bratislava, who asks:
“I have a client who has a website with filters based on a map locations. When the visitor makes a move on the map, a new URL with filters is created. They are not in the sitemap. However, there are over 700,000 URLs in the Search Console (not indexed) and eating crawl budget.
What would be the best way to get rid of these URLs? My idea is keep the base location ‘index, follow’ and newly created URLs of surrounded area with filters switch to ‘noindex, no follow’. Also mark surrounded areas with canonicals to the base location + disavow the unwanted links.”
Great question, Michal, and good news! The answer is an easy one to implement.
First, let’s look at what you’re trying to do and apply it to other situations, like ecommerce and publishers, so more people can benefit. Then, we’ll go into your strategies above and end with the solution.
What Crawl Budget Is And How Parameters Are Created That Waste It
If you’re not sure what Michal is referring to with crawl budget, this is a term some SEO pros use to explain that Google and other search engines will only crawl so many pages on your website before they stop.
If your crawl budget is used on low-value, thin, or non-indexable pages, your good pages and new pages may not be found in a crawl.
If they’re not found, they may not get indexed or refreshed. If they’re not indexed, they cannot bring you SEO traffic.
This is why optimizing a crawl budget for efficiency is important.
Michal shared an example of how URLs that are “thin” from an SEO point of view are created as customers use filters.
The experience for the user is value-adding, but from an SEO standpoint, a location-based page would be better. This applies to ecommerce and publishers, too.
Ecommerce stores will have searches for colors like red or green and products like t-shirts and potato chips.
These create URLs with parameters just like a filter search for locations. They could also be created by using filters for size, gender, color, price, variation, compatibility, etc. in the shopping process.
The filtered results help the end user but compete directly with the collection page, and the collection would be the “non-thin” version.
Publishers have the same. Someone might be on SEJ looking for SEO or PPC in the search box and get a filtered result. The filtered result will have articles, but the category of the publication is likely the best result for a search engine.
These filtered results can be indexed because they get shared on social media or someone adds them as a comment on a blog or forum, creating a crawlable backlink. It might also happen when an employee in customer service responds to a question on the company blog, or in any number of other ways.
The goal now is to make sure search engines don’t spend time crawling the “thin” versions so you can get the most from your crawl budget.
The Difference Between Indexing And Crawling
There’s one more thing to learn before we go into the proposed ideas and solutions – the difference between indexing and crawling.
- Crawling is the discovery of new pages within a website.
- Indexing is adding the pages that are worthy of showing to a person using the search engine to the database of pages.
Pages can get crawled but not indexed. Indexed pages have likely been crawled and will likely get crawled again to look for updates and server responses.
But not all indexed pages will bring in traffic or hit the first page because they may not be the best possible answer for queries being searched.
Now, let’s go into making efficient use of crawl budgets for these types of solutions.
Using Meta Robots Or X Robots
The first solution Michal pointed out was an “index,follow” directive. This tells a search engine to index the page and follow the links on it. This is a good idea, but only if the filtered result is the ideal experience.
From what I can see, this would not be the case, so I would recommend making it “noindex,follow.”
Noindex would say, “This is not an official page, but hey, keep crawling my site, you’ll find good pages in here.”
And if you have your main menu and navigational internal links done correctly, the spider will hopefully keep crawling them.
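As a quick illustration (these are the standard directives, not markup from Michal’s site), the “noindex,follow” signal can go in the page’s head or, for non-HTML files, in an HTTP response header:

In the HTML head:
<meta name="robots" content="noindex, follow">

Or as an HTTP response header:
X-Robots-Tag: noindex, follow

Either one tells search engines to skip indexing the filtered page while still following its links.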
Canonicals To Solve Wasted Crawl Budget
Canonical links are used to help search engines know what the official page to index is.
If a product exists in three categories on three separate URLs, only one should be “the official” version, so the two duplicates should have a canonical pointing to the official version. The official one should have a canonical link that points to itself. This applies to the filtered locations.
If the location search results in multiple city or neighborhood pages, the result would likely be a duplicate of the official one you have in your sitemap.
If the content on the filtered page stays the same as the original category, have the filtered results point a canonical back to the main filtering page instead of being self-referencing.
If the content pulls in your localized page with the same locations, point the canonical to that page instead.
In most cases, the filtered version inherits the page you searched or filtered from, so that is where the canonical should point to.
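For example (using a made-up URL structure purely for illustration), a filtered page such as /locations/?filter=downtown would carry a canonical pointing at the unfiltered page it was filtered from:

<link rel="canonical" href="https://www.example.com/locations/">

The unfiltered /locations/ page itself keeps a canonical pointing to its own URL.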
If you use both noindex and a self-referencing canonical, that is overkill and sends a conflicting signal.
The same applies to when someone searches for a product by name on your website. The search result may compete with the actual product or service page.
With this solution, you’re telling the spider not to index this page because it isn’t worth indexing, but it is also the official version. It doesn’t make sense to do this.
Instead, use a canonical link, as I mentioned above, or noindex the result and point the canonical to the official version.
Disavow To Increase Crawl Efficiency
Disavowing doesn’t have anything to do with crawl efficiency unless the search engine spiders are finding your “thin” pages through spammy backlinks.
The disavow tool from Google is a way to say, “Hey, these backlinks are spammy, and we don’t want them to hurt us. Please don’t count them towards our site’s authority.”
In most cases, it doesn’t matter, as Google is good at detecting spammy links and ignoring them.
You do not want to add your own site and your own URLs to the disavow tool. You’re telling Google your own site is spammy and not worth anything.
Plus, submitting backlinks to disavow won’t prevent a spider from seeing what you want and do not want to be crawled, as it is only for saying a link from another site is spammy.
Disavowing won’t help with crawl efficiency or saving crawl budget.
How To Make Crawl Budgets More Efficient
The answer is robots.txt. This is how you tell specific search engines and spiders what to crawl.
You can include the folders you want them to crawl by marking them as “allow,” and you can say “disallow” on filtered results by disallowing the “?” or “&” symbol or whichever you use.
If some of those parameters should be crawled, add the main word like “?filter=location” or a specific parameter.
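Here’s a sketch of what that could look like (the paths and parameter names are placeholders; your own folders and filters will differ, and crawlers can interpret conflicting rules differently, so test before deploying):

User-agent: *
# Let the main location pages be crawled
Allow: /locations/
# Block filtered URLs that contain a query string
Disallow: /*?
# Still allow the one parameter you do want crawled
Allow: /*?filter=location

For Google, the most specific (longest) matching rule wins, so the final Allow line keeps “?filter=location” URLs crawlable even though other parameterized URLs are blocked.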
Robots.txt is how you define crawl paths and work on crawl efficiency. Once you’ve optimized that, look at your internal links: the links from one page on your site to another.
These help spiders find your most important pages while learning what each is about.
Internal links include:
- Breadcrumbs.
- Menu navigation.
- Links within content to other pages.
- Sub-category menus.
- Footer links.
You can also use a sitemap if you have a large site and the spiders are not finding the pages you want them to prioritize.
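A bare-bones sitemap entry (standard sitemaps.org XML, with a placeholder URL) lists only the canonical pages you want crawled, never the filtered variations:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/locations/</loc>
  </url>
</urlset>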
I hope this helps answer your question. It is one I get a lot – you’re not the only one stuck in that situation.
Featured Image: Paulo Bobita/Search Engine Journal