How Hunter Built 96 Links in 3 Months (Case Study)

Every business wants to be featured in top industry listicles. This gets you more backlinks and visibility from websites that rank well for keywords like “best X tools” or “top products for X.”
The problem is that listicle outreach is quite complicated. You need to identify the right listicles, as well as the right prospects to pitch your business to. Your cold email should provide value and be persuasive. Also, the success of your outreach depends a lot on the negotiation tactics you use.
When I worked for Hunter, listicle outreach was the link building tactic that brought us the best results. For example, in our last campaign, we achieved these results in less than three months with only one person involved in the process:
- 96 new links from 54 domains
- 33 new mentions in product listicles
- 17 upgraded positions in other listicles
In this guide, I’ll explain how to use this technique to get dozens of links and mentions for your website in no time. I’ll also share some tips on improving your chances of success.
The first step in the listicle outreach process is to define your targets.
There are three key types of listicles that work well for this tactic. You can pick the one that works best for you or target multiple categories at the same time (there are no strict rules):
- Best options – Lists of products, services, or businesses united by a single topic. It can be a list of best SEO tools, top software for recruiters, etc.
- Alternative options – Lists of comparisons focused on a specific industry, topic, or problem. Aura’s listicle on top LifeLock competitors is a great example.
- Guides that include lists – People often include lists in guides. For example, Ahrefs’ guide to finding email addresses includes tips on email lookup. One of the tips is to use email lookup services and the author, Nick Churick, shares a list of the best tools he tested.
Once you’ve chosen a category (or many), brainstorm keywords and topics you can use to find listicles where your product or service will fit. Then combine them with so-called modifiers (the words commonly used in listicles).
I noticed these modifiers are present in the titles of most listicles: best, top, review, list, tools, free.

Here are a few examples of queries that could work:
- Best options listicles: best email finder, best seo tools, top restaurants in new york, etc.
- Alternative options listicles: best hunter alternatives, best ahrefs alternatives, etc.
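If your keyword list is long, you can script these combinations instead of typing them out. Here's a minimal Python sketch; the seed topics and modifiers below are just examples, so swap in your own:

```python
from itertools import product

# Example seed topics plus a few of the listicle modifiers mentioned above
topics = ["email finder", "seo tools", "hunter alternatives"]
modifiers = ["best", "top", "free"]

# Combine every modifier with every topic to get search queries
queries = [f"{modifier} {topic}" for modifier, topic in product(modifiers, topics)]

for query in queries:
    print(query)  # e.g., "best email finder", "top seo tools", ...
```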
Next, search for these terms in Google and use Ahrefs’ SEO Toolbar to export the results. Repeat the process for the other keywords from your list and then merge all CSVs into one.

If you like, you can speed this process up using Ahrefs’ Keywords Explorer. Just enter your keywords, hit “Export,” and choose the option to “Include SERPs.”

At this stage, you may have hundreds or even thousands of pages in your main CSV, so you need to clean the list to focus on pitching the most relevant and valuable listicles.
You can start with basic cleaning: remove duplicates in bulk and delete all the websites with a low DR. In my campaigns, I always remove websites with a DR of 30 or lower.
As the export files from Ahrefs include this data, it’s easy enough to do in Excel or Google Sheets.

After that, manually check what’s left: remove pages that are not listicles and also remove listicles that don’t fit your requirements. Optionally, you can delete listicles with low traffic (again, the exports from Ahrefs include this data).
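If you'd rather script the merging and cleaning than do it in Excel or Google Sheets, here's a minimal pandas sketch. The file pattern and column names (URL, DR, Traffic) are assumptions; match them to the headers in your own Ahrefs exports:

```python
import glob

import pandas as pd

# Merge all exported CSVs in the current folder into one DataFrame
frames = [pd.read_csv(path) for path in glob.glob("serp-export-*.csv")]
merged = pd.concat(frames, ignore_index=True)

# Basic cleaning: drop duplicate URLs, remove low-DR websites, and
# (optionally) drop low-traffic pages. Column names are assumptions.
cleaned = (
    merged
    .drop_duplicates(subset="URL")
    .query("DR > 30")
    .query("Traffic >= 500")
)

cleaned.to_csv("prospects.csv", index=False)
```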
Once you clean the list, it’s time to do segmentation. I suggest adding the following tags to your CSV to segment prospects:
- Listicles where your product or service isn't mentioned.
- Listicles where your business is mentioned, but without a link.
- Listicles where you've got a mention, but it sits below your competitors.
Here’s what my file looks like after the segmentation:

Editor’s Note
If you want to speed up and automate this process, consider segmenting during the prospecting stage with Google search operators.
For example, adding “product name” to your search will find listicles that mention your product already. And adding -“product name” will find those that don’t mention your product.
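As a quick illustration, here's how you might build both query variants programmatically, using Hunter as a hypothetical product name:

```python
product = "Hunter"
keyword = "best email finder"

# Listicles that already mention the product (mention or link upgrades)
already_mentioned = f'{keyword} "{product}"'

# Listicles that don't mention the product yet (new placements)
not_mentioned = f'{keyword} -"{product}"'

print(already_mentioned)  # best email finder "Hunter"
print(not_mentioned)      # best email finder -"Hunter"
```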


Now it’s time to find the decision-makers behind the URLs you collected. It’s essential to dedicate enough time to this part of the process since the success of the campaign depends on the people you contact. The formula “contact the right people, with the right offer, at the right time” still works. And “people” play a significant role here.
After sending hundreds of listicle outreach emails, I noticed I got the best response rates from blog editors and content managers. Often, these were the writers who created a specific listicle. So make these positions your key target.
To start, find the full names of your prospects on LinkedIn and add them to your document.
For example, if I were pitching Ahrefs, I'd probably reach out to its head of content, Joshua Hardwick.

After this, if you use Google Sheets, you can use Hunter for Google Sheets to automate the email lookup process. All you have to do is feed it the columns with the first and last names, domains, and company names of your prospects, and it’ll do the rest.

If you don't use Google Sheets, you can use Bulk Email Finder for email lookup. You'll need to upload a CSV into the app, and then you'll be able to export the results.
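If you'd rather script the lookup yourself, Hunter also offers a public Email Finder API. Here's a minimal Python sketch; the endpoint and parameter names follow Hunter's public API documentation, but check the current docs before relying on this:

```python
import os

import requests

API_KEY = os.environ["HUNTER_API_KEY"]  # your Hunter API key

def find_email(first_name: str, last_name: str, domain: str) -> dict:
    """Look up a prospect's email address via Hunter's Email Finder."""
    response = requests.get(
        "https://api.hunter.io/v2/email-finder",
        params={
            "first_name": first_name,
            "last_name": last_name,
            "domain": domain,
            "api_key": API_KEY,
        },
        timeout=10,
    )
    response.raise_for_status()
    data = response.json()["data"]
    return {"email": data["email"], "confidence": data["score"]}

# Usage: loop this over the prospect rows in your CSV
print(find_email("Joshua", "Hardwick", "ahrefs.com"))
```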
Recommendation
Let me guess: You want a mention in the listicle, more visibility from your prospect’s audience and, on top of that, a backlink.
Remember, asking for a lot without giving anything in return decreases your chances of getting anything.
Before writing something like “Hey! I have a great tool to add to your listicle. Can you give me a mention with a backlink?” think about what’s in it for them.
You should bring value. No value = no pitch. These are the rules of the listicle outreach game.
Let me show you two pitch examples and what I mean by providing value.
Example #1

This is one of the templates that helped me generate dozens of mentions. What’s the secret?
Instead of focusing on how fantastic my tool was and describing all the features in detail, I focused on the value. In simple words—what the blog editor would get after featuring my product.
I included three critical benefits in my pitch:
- If the editor accepted my offer, they would automatically become part of our affiliate program and get recurring revenue from this mention.
- I offered bloggers a generous pack of Hunter’s free requests that they could use for their cold outreach campaigns.
- I offered to help provide information for the listicle. (It’s a fact that many bloggers are busy and often have established content update schedules. If you don’t want to wait a year for the next update, offer to help.)
Example #2

I sent this template to a blog editor whose site had published an in-depth guide on email lookup. They mentioned our tool without a backlink, and the information in the article was also a bit outdated.
In this pitch, my key message was focused on helping to update the guide, as the organic traffic had dropped lately. I confirmed my point with a screenshot from Ahrefs.
The value of the offer was to help them get better rankings.
I recommend investing a lot of time in deeply personalizing your emails instead of doing generic outreach. The more effort you invest in the email copy, the better results you can achieve.
For this campaign, we used two different approaches:
- Fully manual and personalized – Sometimes, it took up to 30 minutes to write an email. Yes, it's time-consuming, but I found the response rate to be incredibly high (more than 50%).
- Automated with a personal touch – We automated some of the email batches with Hunter Campaigns, a free tool for cold outreach automation. When segmenting prospects, you’ll notice many similarities between listicles. You can personalize copy at scale with custom attributes and icebreakers for such campaigns.
Also, don’t forget to send follow-ups. They help to increase the response rate.
Keep your follow-ups short and straight to the point. Don’t send more than two follow-ups to one prospect. This will prevent you from looking desperate and annoying.
You can schedule reminders to send follow-ups in X days directly in your Gmail or set up automated follow-ups with a tool for outreach automation.
In a perfect world, you send your cold pitch and immediately receive, “Sounds great, I just updated my article!”
In the real world, things work differently. When cold emailing for link building, guest posts, or listicle placements, remember that you'll often be asked to provide equal or even greater value in return.
Some website editors may even ask for payment:

I don’t recommend paying for listicle placements for a few reasons:
- If the editor accepts submissions only for payment, they don't care about the quality of the products listed. All they care about is quick money.
- If your product is already listed and you want to add a backlink, it may be risky. Google explicitly warns that it considers buying links a link scheme. If you’re going to stick to Google’s guidelines 100%, you shouldn’t buy links.
- Websites often ask for payment repeatedly. They feature your product or service in a listicle for a limited time, and if you want to extend the placement, you need to keep paying.
Negotiations are one of the most complex parts of listicle outreach. You invested tons of time into research, prospecting, and sequence preparation. But one wrong word or a lack of flexibility in negotiations can cost you a prospect.
Here are a few tips that helped me close the most difficult negotiations (for example HubSpot, which has pretty high editorial standards):
- Be flexible – Don’t say no right away; always try to find a middle ground.
- Provide fast responses – You have a better chance of getting featured in listicles if you respond to your prospects right away, or at least on the same day. Don't wait too long.
- Do something for them – Help spread the word about the updated article, share advice in your area of expertise, provide a backlink from your upcoming guest post, etc.
- Always follow up – People are busy, and they can forget about you—it’s OK. If they showed interest in your offer, don’t hesitate to remind them about yourself.
Final thoughts
Listicle outreach isn't hard.
Given the number of tips in this article, it may seem like it is. But once you get the hang of things, everything falls into place.
I’m not saying you can do it with no effort. You still have to do a lot of work to collect high-quality prospects, find decision-makers, prepare a valuable outreach sequence, etc. But it’s not the impossible task that many think it is.
If your campaign goes well, you can make it evergreen by monitoring for new listicles with Ahrefs Alerts. Here’s an example alert that monitors for new listicles with “best email finder” in their titles on DR 30+ websites with 1,000+ monthly search visits:

Got questions? Ping me on LinkedIn.
Optimize Your SEO Strategy For Maximum ROI With These 5 Tips

Wondering what improvements you can make to boost organic search results and increase ROI?
If you want to be successful in SEO, even after large Google algorithm updates, be sure to:
- Keep the SEO fundamentals at the forefront of your strategy.
- Prioritize your SEO efforts for the most rewarding outcomes.
- Focus on uncovering and prioritizing commercial opportunities if you’re in ecommerce.
- Dive into seasonal trends and how to plan for them.
- Get tip 5 and all of the step-by-step how-tos by joining our upcoming webinar.
We’ll share five actionable ways you can discover the most impactful opportunities for your business and achieve maximum ROI.
You’ll learn how to:
- Identify seasonal trends and plan for them.
- Report on and optimize your online share of voice.
- Maximize SERP feature opportunities, most notably Popular Products.
Join Jon Earnshaw, Chief Product Evangelist and Co-Founder of Pi Datametrics, and Sophie Moule, Head of Product and Marketing at Pi Datametrics, as they walk you through ways to drastically improve the ROI of your SEO strategy.
In this live session, we’ll uncover innovative ways you can step up your search strategy and outperform your competitors.
Ready to start maximizing your results and growing your business?
Sign up now and get the actionable insights you need for SEO success.
Can't attend the live webinar? We've got you covered. Register anyway, and you'll get access to a recording after the event.
TikTok’s US Future Uncertain: CEO Faces Congress

During a five-hour congressional hearing, TikTok CEO Shou Zi Chew faced intense scrutiny from U.S. lawmakers about the social media platform’s connections to its Chinese parent company, ByteDance.
Legislators from both sides demanded clear answers on whether TikTok spies on Americans for China.
The U.S. government has been pushing for the divestiture of TikTok and has even threatened to ban the app in the United States.
Chew found himself in a difficult position, attempting to portray TikTok as an independent company not influenced by China.
However, lawmakers remained skeptical, citing China’s opposition to the sale of TikTok as evidence of the country’s influence over the company.
The hearing was marked by a rare display of bipartisan unity, with the tone harsher than in previous congressional hearings featuring American social media executives.
The Future of TikTok In The US
With the U.S. and China at odds over TikTok’s sale, the app faces two possible outcomes in the United States.
Either TikTok gets banned, or it revisits negotiations for a technical fix to data security concerns.
Lindsay Gorman, head of technology and geopolitics at the German Marshall Fund, said, “The future of TikTok in the U.S. is definitely dimmer and more uncertain today than it was yesterday.”
TikTok has proposed measures to protect U.S. user data, but no security agreement has been reached.
Addressing Concerns About Societal Impact
Lawmakers at the hearing raised concerns about TikTok’s impact on young Americans, accusing the platform of invading privacy and harming mental health.
According to the Pew Research Center, the app is used by 67% of U.S. teenagers.
Critics argue that the app is too addictive and its algorithm can expose teens to dangerous or lethal situations.
Chew pointed to new screen time limits and content guidelines to address these concerns, but lawmakers remained unconvinced.
In Summary
The House Energy and Commerce Committee’s hearing on TikTok addressed concerns common to all social media platforms, like spreading harmful content and collecting massive user data.
Most committee members were critical of TikTok, but many avoided the typical grandstanding seen in high-profile hearings.
The hearing aimed to make a case for regulating social media and protecting children rather than focusing on the national security threat posed by the app’s connection to China.
If anything emerges from this hearing, it could be related to those regulations.
The hearing also gave Congress an opportunity to try to convince Americans that TikTok is a national security threat that warrants a ban.
This concern arises from the potential for the Chinese government to access the data of TikTok’s 150 million U.S. users or manipulate its recommendation algorithms to spread propaganda or disinformation.
However, limited public evidence supports these claims, making banning the app seem extreme and potentially unnecessary.
As events progress, staying informed is crucial as the outcome could impact the digital marketing landscape.
Featured Image: Rokas Tenys/Shutterstock
Full replay of congressional hearing available on YouTube.
Google Bard: Everything You Need To Know

Google has just released Bard, its answer to ChatGPT, and users are getting to know it to see how it compares to OpenAI’s artificial intelligence-powered chatbot.
The name ‘Bard’ is purely marketing-driven, as there are no algorithms named Bard, but we do know that the chatbot is powered by LaMDA.
Here is everything we know about Bard so far and some interesting research that may offer an idea of the kind of algorithms that may power Bard.
What Is Google Bard?
Bard is an experimental Google chatbot that is powered by the LaMDA large language model.
It’s a generative AI that accepts prompts and performs text-based tasks like providing answers and summaries and creating various forms of content.
Bard also assists in exploring topics by summarizing information found on the internet and providing links for exploring websites with more information.
Why Did Google Release Bard?
Google released Bard after the wildly successful launch of OpenAI’s ChatGPT, which created the perception that Google was falling behind technologically.
ChatGPT was perceived as a revolutionary technology with the potential to disrupt the search industry and shift the balance of power away from Google search and the lucrative search advertising business.
On December 21, 2022, three weeks after the launch of ChatGPT, the New York Times reported that Google had declared a “code red” to quickly define its response to the threat posed to its business model.
Forty-seven days after the code red strategy adjustment, Google announced the launch of Bard on February 6, 2023.
What Was The Issue With Google Bard?
The announcement of Bard was a stunning failure because the demo that was meant to showcase Google’s chatbot AI contained a factual error.
The inaccuracy of Google’s AI turned what was meant to be a triumphant return to form into a humbling pie in the face.
Google’s shares subsequently lost a hundred billion dollars in market value in a single day, reflecting a loss of confidence in Google’s ability to navigate the looming era of AI.
How Does Google Bard Work?
Bard is powered by a “lightweight” version of LaMDA.
LaMDA is a large language model that is trained on datasets consisting of public dialogue and web data.
There are two important factors related to the training described in the associated research paper, which you can download as a PDF here: LaMDA: Language Models for Dialog Applications (read the abstract here).
- A. Safety: The model achieves a level of safety by tuning it with data that was annotated by crowd workers.
- B. Groundedness: LaMDA grounds itself factually with external knowledge sources (through information retrieval, which is search).
The LaMDA research paper states:
“…factual grounding, involves enabling the model to consult external knowledge sources, such as an information retrieval system, a language translator, and a calculator.
We quantify factuality using a groundedness metric, and we find that our approach enables the model to generate responses grounded in known sources, rather than responses that merely sound plausible.”
Google used three metrics to evaluate the LaMDA outputs:
- Sensibleness: A measurement of whether an answer makes sense or not.
- Specificity: Measures if the answer is the opposite of generic/vague or contextually specific.
- Interestingness: This metric measures if LaMDA’s answers are insightful or inspire curiosity.
All three metrics were judged by crowdsourced raters, and that data was fed back into the machine to keep improving it.
The LaMDA research paper concludes by stating that crowdsourced reviews and the system’s ability to fact-check with a search engine were useful techniques.
Google’s researchers wrote:
“We find that crowd-annotated data is an effective tool for driving significant additional gains.
We also find that calling external APIs (such as an information retrieval system) offers a path towards significantly improving groundedness, which we define as the extent to which a generated response contains claims that can be referenced and checked against a known source.”
How Is Google Planning To Use Bard In Search?
The future of Bard is currently envisioned as a feature in search.
Google’s announcement in February was insufficiently specific on how Bard would be implemented.
The key details were buried in a single paragraph close to the end of the blog announcement of Bard, where it was described as an AI feature in search.
That lack of clarity fueled the perception that Bard would be integrated into search, which was never the case.
Google’s February 2023 announcement of Bard states that Google will at some point integrate AI features into search:
“Soon, you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web: whether that’s seeking out additional perspectives, like blogs from people who play both piano and guitar, or going deeper on a related topic, like steps to get started as a beginner.
These new AI features will begin rolling out on Google Search soon.”
It’s clear that Bard is not search. Rather, it is intended to be a feature in search and not a replacement for search.
What Is A Search Feature?
A feature is something like Google’s Knowledge Panel, which provides knowledge information about notable people, places, and things.
Google’s “How Search Works” webpage about features explains:
“Google’s search features ensure that you get the right information at the right time in the format that’s most useful to your query.
Sometimes it’s a webpage, and sometimes it’s real-world information like a map or inventory at a local store.”
In an internal meeting at Google (reported by CNBC), employees questioned the use of Bard in search.
One employee pointed out that large language models like ChatGPT and Bard are not fact-based sources of information.
The Google employee asked:
“Why do we think the big first application should be search, which at its heart is about finding true information?”
Jack Krawczyk, the product lead for Google Bard, answered:
“I just want to be very clear: Bard is not search.”
At the same internal event, Google’s Vice President of Engineering for Search, Elizabeth Reid, reiterated that Bard is not search.
She said:
“Bard is really separate from search…”
What we can confidently conclude is that Bard is not a new iteration of Google search. It is a feature.
Bard Is An Interactive Method For Exploring Topics
Google’s announcement of Bard was fairly explicit that Bard is not search. This means that, while search surfaces links to answers, Bard helps users investigate knowledge.
The announcement explains:
“When people think of Google, they often think of turning to us for quick factual answers, like ‘how many keys does a piano have?’
But increasingly, people are turning to Google for deeper insights and understanding – like, ‘is the piano or guitar easier to learn, and how much practice does each need?’
Learning about a topic like this can take a lot of effort to figure out what you really need to know, and people often want to explore a diverse range of opinions or perspectives.”
It may be helpful to think of Bard as an interactive method for accessing knowledge about topics.
Bard Samples Web Information
The problem with large language models is that they mimic answers, which can lead to factual errors.
The researchers who created LaMDA state that approaches like increasing the size of the model can help it gain more factual information.
But they noted that this approach fails in areas where facts change constantly over time, which researchers refer to as the "temporal generalization problem."
Freshness, in the sense of timely information, cannot be trained into a static language model.
The solution that LaMDA pursued was to query information retrieval systems. An information retrieval system is a search engine, so LaMDA checks search results.
This feature from LaMDA appears to be a feature of Bard.
The Google Bard announcement explains:
“Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of our large language models.
It draws on information from the web to provide fresh, high-quality responses.”
LaMDA and (possibly by extension) Bard achieve this with what is called the toolset (TS).
The toolset is explained in the LaMDA research paper:
“We create a toolset (TS) that includes an information retrieval system, a calculator, and a translator.
TS takes a single string as input and outputs a list of one or more strings. Each tool in TS expects a string and returns a list of strings.
For example, the calculator takes “135+7721”, and outputs a list containing [“7856”]. Similarly, the translator can take “hello in French” and output [‘Bonjour’].
Finally, the information retrieval system can take ‘How old is Rafael Nadal?’, and output [‘Rafael Nadal / Age / 35’].
The information retrieval system is also capable of returning snippets of content from the open web, with their corresponding URLs.
The TS tries an input string on all of its tools, and produces a final output list of strings by concatenating the output lists from every tool in the following order: calculator, translator, and information retrieval system.
A tool will return an empty list of results if it can’t parse the input (e.g., the calculator cannot parse ‘How old is Rafael Nadal?’), and therefore does not contribute to the final output list.”
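To make the quoted behavior concrete, here's a toy Python sketch of that dispatch logic. The tools are stubbed out with the paper's own examples; this illustrates the described mechanism, not Google's actual implementation:

```python
def calculator(text: str) -> list[str]:
    """Stub: evaluate arithmetic input; return [] if it can't parse."""
    try:
        return [str(eval(text, {"__builtins__": {}}, {}))]
    except Exception:
        return []  # unparseable input contributes nothing

def translator(text: str) -> list[str]:
    """Stub for a translation tool."""
    return ["Bonjour"] if text == "hello in French" else []

def information_retrieval(text: str) -> list[str]:
    """Stub for a search system returning facts/snippets with URLs."""
    if text == "How old is Rafael Nadal?":
        return ["Rafael Nadal / Age / 35"]
    return []

def toolset(text: str) -> list[str]:
    """Try the input on every tool and concatenate the outputs in the
    paper's fixed order: calculator, translator, information retrieval."""
    results: list[str] = []
    for tool in (calculator, translator, information_retrieval):
        results.extend(tool(text))
    return results

print(toolset("135+7721"))                  # ['7856']
print(toolset("How old is Rafael Nadal?"))  # ['Rafael Nadal / Age / 35']
```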
Here’s a Bard response with a snippet from the open web:

Conversational Question-Answering Systems
There are no research papers that mention the name “Bard.”
However, there is quite a bit of recent research related to AI, including by scientists associated with LaMDA, that may have an impact on Bard.
The following doesn’t claim that Google is using these algorithms. We can’t say for certain that any of these technologies are used in Bard.
The value in knowing about these research papers is in knowing what is possible.
The following are algorithms relevant to AI-based question-answering systems.
One of the authors of LaMDA worked on a project that’s about creating training data for a conversational information retrieval system.
You can download the 2022 research paper as a PDF here: Dialog Inpainting: Turning Documents into Dialogs (and read the abstract here).
The problem with training a system like Bard is that question-and-answer datasets (like datasets made up of questions and answers found on Reddit) are limited to how people on Reddit behave.
They don't capture how people outside that environment behave, the kinds of questions they would ask, or what the correct answers to those questions would be.
The researchers explored creating a system that read webpages, then used a "dialog inpainter" to predict which questions would be answered by any given passage within what the machine was reading.
A passage in a trustworthy Wikipedia webpage that says, “The sky is blue,” could be turned into the question, “What color is the sky?”
The researchers created their own dataset of questions and answers using Wikipedia and other webpages. They called the datasets WikiDialog and WebDialog.
- WikiDialog is a set of questions and answers derived from Wikipedia data.
- WebDialog is a dataset derived from webpage dialog on the internet.
These new datasets are 1,000 times larger than existing datasets. That matters because it gives conversational language models an opportunity to learn more.
The researchers reported that this new dataset helped to improve conversational question-answering systems by over 40%.
The research paper describes the success of this approach:
“Importantly, we find that our inpainted datasets are powerful sources of training data for ConvQA systems…
When used to pre-train standard retriever and reranker architectures, they advance state-of-the-art across three different ConvQA retrieval benchmarks (QRECC, OR-QUAC, TREC-CAST), delivering up to 40% relative gains on standard evaluation metrics…
Remarkably, we find that just pre-training on WikiDialog enables strong zero-shot retrieval performance—up to 95% of a finetuned retriever's performance—without using any in-domain ConvQA data."
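As a toy illustration of the inpainting idea (not the paper's actual model), picture a function that predicts the question a given passage answers, then pairs the two into training data. The `generate_question` helper below is purely hypothetical:

```python
def generate_question(passage: str) -> str:
    """Hypothetical stand-in for a trained question-generation model."""
    canned = {"The sky is blue.": "What color is the sky?"}
    return canned.get(passage, f"What does this passage explain? {passage!r}")

# Turn document passages into (question, answer) training pairs
passages = ["The sky is blue."]
dialog_dataset = [(generate_question(p), p) for p in passages]

print(dialog_dataset)
# [('What color is the sky?', 'The sky is blue.')]
```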
Is it possible that Google Bard was trained using the WikiDialog and WebDialog datasets?
It’s difficult to imagine a scenario where Google would pass on training a conversational AI on a dataset that is over 1,000 times larger.
But we don’t know for certain because Google doesn’t often comment on its underlying technologies in detail, except on rare occasions like for Bard or LaMDA.
Large Language Models That Link To Sources
Google recently published an interesting research paper about a way to make large language models cite the sources for their information. The initial version of the paper was published in December 2022, and the second version was updated in February 2023.
This technology is referred to as experimental as of December 2022.
You can download the PDF of the paper here: Attributed Question Answering: Evaluation and Modeling for Attributed Large Language Models (read the Google abstract here).
The research paper states the intent of the technology:
“Large language models (LLMs) have shown impressive results while requiring little or no direct supervision.
Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios.
We believe the ability of an LLM to attribute the text that it generates is likely to be crucial in this setting.
We formulate and study Attributed QA as a key first step in the development of attributed LLMs.
We propose a reproducible evaluation framework for the task and benchmark a broad set of architectures.
We take human annotations as a gold standard and show that a correlated automatic metric is suitable for development.
Our experimental work gives concrete answers to two key questions (How to measure attribution?, and How well do current state-of-the-art methods perform on attribution?), and give some hints as to how to address a third (How to build LLMs with attribution?).”
This kind of large language model research can produce a system that answers with supporting documentation that, theoretically, assures the response is based on something verifiable.
The research paper explains:
“To explore these questions, we propose Attributed Question Answering (QA). In our formulation, the input to the model/system is a question, and the output is an (answer, attribution) pair where answer is an answer string, and attribution is a pointer into a fixed corpus, e.g., of paragraphs.
The returned attribution should give supporting evidence for the answer.”
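In code terms, the task's contract is simple: a question goes in, and an (answer, attribution) pair comes out, with the attribution pointing into a fixed corpus. Here's a minimal sketch of that interface; the corpus and lookup logic are stand-ins, not the paper's system:

```python
from dataclasses import dataclass

@dataclass
class AttributedAnswer:
    answer: str       # the answer string
    attribution: str  # pointer into a fixed corpus, e.g., a paragraph ID

# Tiny stand-in corpus of paragraphs keyed by ID
CORPUS = {"para-001": "Rafael Nadal was born on 3 June 1986."}

def attributed_qa(question: str) -> AttributedAnswer:
    """Hypothetical system: answer a question and cite the supporting
    paragraph. A real system would retrieve and read the corpus."""
    return AttributedAnswer(answer="3 June 1986", attribution="para-001")

result = attributed_qa("When was Rafael Nadal born?")
print(result.answer, "->", CORPUS[result.attribution])
```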
This technology is specifically for question-answering tasks.
The goal is to create better answers – something that Google would understandably want for Bard.
- Attribution allows users and developers to assess the “trustworthiness and nuance” of the answers.
- Attribution allows developers to quickly review the quality of the answers since the sources are provided.
One interesting note is a new technology called AutoAIS that strongly correlates with human raters.
In other words, this technology can automate the work of human raters and scale the process of rating the answers given by a large language model (like Bard).
The researchers share:
"We consider human rating to be the gold standard for system evaluation, but find that AutoAIS correlates well with human judgment at the system level, offering promise as a development metric where human rating is infeasible, or even as a noisy training signal."
This technology is experimental; it’s probably not in use. But it does show one of the directions that Google is exploring for producing trustworthy answers.
Research Paper On Editing Responses For Factuality
Lastly, there’s a remarkable technology developed at Cornell University (also dating from the end of 2022) that explores a different way to source attribution for what a large language model outputs and can even edit an answer to correct itself.
Cornell University (like Stanford University) licenses technology related to search and other areas, earning millions of dollars per year.
It’s good to keep up with university research because it shows what is possible and what is cutting-edge.
You can download a PDF of the paper here: RARR: Researching and Revising What Language Models Say, Using Language Models (and read the abstract here).
The abstract explains the technology:
“Language models (LMs) now excel at many tasks such as few-shot learning, question answering, reasoning, and dialog.
However, they sometimes generate unsupported or misleading content.
A user cannot easily determine whether their outputs are trustworthy or not, because most LMs do not have any built-in mechanism for attribution to external evidence.
To enable attribution while still preserving all the powerful advantages of recent generation models, we propose RARR (Retrofit Attribution using Research and Revision), a system that 1) automatically finds attribution for the output of any text generation model and 2) post-edits the output to fix unsupported content while preserving the original output as much as possible.
…we find that RARR significantly improves attribution while otherwise preserving the original input to a much greater degree than previously explored edit models.
Furthermore, the implementation of RARR requires only a handful of training examples, a large language model, and standard web search.”
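Conceptually, RARR wraps any text generator in a research-then-revise loop. Here's a high-level Python sketch of that flow; every helper is a hypothetical stub standing in for a component described in the paper:

```python
def web_search(text: str) -> list[str]:
    """Stub: a real system would issue web queries about each claim."""
    return ["Evidence snippet: Bard is powered by LaMDA."]

def find_unsupported_claims(text: str, evidence: list[str]) -> list[str]:
    """Stub: flag claims the gathered evidence doesn't support."""
    return [claim for claim in text.split(". ") if "GPT-4" in claim]

def revise(text: str, claims: list[str]) -> str:
    """Stub: minimally edit only the unsupported claims."""
    for claim in claims:
        text = text.replace(claim, "Bard is powered by LaMDA")
    return text

def rarr(draft: str) -> tuple[str, list[str]]:
    """Research, then revise: return the edited text plus its evidence."""
    evidence = web_search(draft)
    unsupported = find_unsupported_claims(draft, evidence)
    return revise(draft, unsupported), evidence

revised, sources = rarr("Bard is powered by GPT-4. It draws on web information.")
print(revised)  # the unsupported claim is edited; the rest is preserved
```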
How Do I Get Access To Google Bard?
Google is currently accepting new users to test Bard, which is labeled as experimental. Google is rolling out access to Bard here.

Google is on the record saying that Bard is not search, which should reassure those who feel anxiety about the dawn of AI.
We are at a turning point that is unlike any we’ve seen in, perhaps, a decade.
Understanding Bard is helpful to anyone who publishes on the web or practices SEO because it’s helpful to know the limits of what is possible and the future of what can be achieved.
Featured Image: Whyredphotographor/Shutterstock