
GOOGLE

FLAN: Google Research Develops Better Machine Learning


Google recently published research on a technique for training a single model to solve natural language processing problems in a way that applies across multiple tasks. Rather than training a model to solve one kind of problem, this approach teaches it how to solve a wide range of problems, making it more efficient and advancing the state of the art.

Google Doesn’t Use All Research In Their Algorithms

Google’s official statement on research papers is that publishing an algorithm doesn’t mean it’s in use in Google Search.

Nothing in the research paper says it should be used in search. But what makes this research of interest is that it advances the state of the art and improves on current technology.

The Value Of Being Aware of Technology

People who don’t know how search engines work can end up understanding them in terms of pure speculation.

That’s how the search industry ended up with false ideas such as “LSI Keywords” and nonsensical strategies such as trying to beat the competition by creating content that is ten times better (or simply bigger) than the competitor’s content, with zero consideration of what users actually need.

The value of knowing about these algorithms and techniques is being aware of the general contours of what goes on inside search engines, so that one does not make the error of underestimating what they are capable of.

The Problem That FLAN Solves

The main problem this technique solves is enabling a machine to use its vast store of knowledge to solve real-world tasks.

The approach teaches the machine how to generalize problem solving in order to be able to solve unseen problems.

It does this by feeding the machine instructions for solving specific problems and then generalizing those instructions so it can solve other problems.

The researchers state:

“The model is fine-tuned on disparate sets of instructions and generalizes to unseen instructions. As more types of tasks are added to the fine-tuning data, model performance improves.

…We show that by training a model on these instructions it not only becomes good at solving the kinds of instructions it has seen during training but becomes good at following instructions in general.”
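
The mechanism described above, casting one labeled example into several natural-language instruction templates, can be sketched in a few lines. The templates and field names below are illustrative assumptions for this sketch, not FLAN’s actual templates:

```python
# A single labeled example is rephrased under several natural-language
# instructions, so the model learns to follow instructions in general
# rather than memorize one fixed task format.

example = {
    "premise": "The cat sat on the mat.",
    "hypothesis": "An animal is on the mat.",
    "label": "entailment",
}

# Hypothetical instruction templates for a natural language inference task.
templates = [
    "Premise: {premise}\nHypothesis: {hypothesis}\nDoes the premise entail the hypothesis?",
    'Read this: {premise}\nCan we conclude that "{hypothesis}"?',
    '{premise}\nBased on the sentence above, is it true that "{hypothesis}"?',
]

def to_instruction_pairs(example, templates):
    """Turn one labeled example into several (instruction, target) training pairs."""
    return [(t.format(**example), example["label"]) for t in templates]

pairs = to_instruction_pairs(example, templates)
print(len(pairs))  # 3
```

Each pair becomes one fine-tuning example, so a handful of datasets expands into many differently worded instructions.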

The research paper cites a currently popular technique called zero-shot or few-shot prompting, in which a language model is prompted to solve a specific language problem, and describes that technique’s shortcoming.

Referencing the zero-shot/few-shot prompting technique:

“This technique formulates a task based on text that a language model might have seen during training, where then the language model generates the answer by completing the text.

For instance, to classify the sentiment of a movie review, a language model might be given the sentence, “The movie review ‘best RomCom since Pretty Woman’ is _” and be asked to complete the sentence with either the word “positive” or “negative”.”
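
The quoted completion-style classification can be sketched as follows. The scoring function here is a toy stand-in of my own, not part of the paper; a real system would compare the language model’s likelihood for each candidate completion:

```python
# Zero-shot prompting as text completion: frame the task as a sentence the
# model might plausibly finish, then pick the likelier label continuation.

def classify_review(review, score_completion):
    prompt = f'The movie review "{review}" is '
    labels = ["positive", "negative"]
    # choose the label the scorer rates as the likelier continuation
    return max(labels, key=lambda label: score_completion(prompt + label))

def toy_scorer(text):
    # toy stand-in for an LM likelihood: favors "positive" on praise words
    praise = any(word in text for word in ("best", "great", "wonderful"))
    return 1.0 if praise == text.endswith("positive") else 0.0

print(classify_review("best RomCom since Pretty Woman", toy_scorer))  # positive
```

The fragility the researchers point to lives in the prompt string itself: if the wording doesn’t resemble text seen in training, the completion scores become unreliable.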

The researchers note that the zero-shot approach performs well, but that its performance depends on tasks resembling data the model has already seen during training.

The researchers write:

“…it requires careful prompt engineering to design tasks to look like data that the model has seen during training…”

That kind of shortcoming is what FLAN solves. Because the training instructions are generalized, the model is able to solve more problems, including tasks it has not previously been trained on.

Could This Technique Be Used By Google?

Google rarely discusses specific research papers or whether what’s described is in use. Google’s official stance is that it publishes many research papers and that they don’t necessarily end up in its search ranking algorithms.

Google is generally opaque about what’s in its algorithms, and rightly so.

Even when it announces new technologies, Google tends to give them names that do not correspond to published research papers. For example, names like Neural Matching and RankBrain don’t correspond to specific research papers.

It’s important to review the success of the research, because some research falls short of its goals and doesn’t perform as well as current state-of-the-art techniques and algorithms.

Those research papers that fall short can more or less be ignored, though they’re still good to know about.

The research papers that are of most value to the search marketing community are those that are successful and perform significantly better than the current state of the art.

And that is the case with FLAN.

FLAN performs better than other techniques and for that reason FLAN is something to be aware of.

The researchers noted:

“We evaluated FLAN on 25 tasks and found that it improves over zero-shot prompting on all but four of them. We found that our results are better than zero-shot GPT-3 on 20 of 25 tasks, and better than even few-shot GPT-3 on some tasks.”

Natural Language Inference

Natural Language Inference is a task in which the machine must determine whether, given a premise, a statement is true, false, or undetermined/neutral (neither true nor false).

Natural Language Inference Performance of FLAN

Reading Comprehension

This is a task of answering a question based on content in a document.

Reading Comprehension Performance of FLAN

Closed-book QA

This is the ability to answer factual questions without consulting reference documents, which tests the model’s ability to recall known facts. Examples include questions like “What color is the sky?” or “Who was the first president of the United States?”

Closed Book QA Performance of FLAN

Is Google Using FLAN?

As previously stated, Google does not generally confirm whether they’re using a specific algorithm or technique.

However, given that this particular technique moves the state of the art forward, it’s not unreasonable to speculate that some form of it could be integrated into Google’s algorithms, improving their ability to answer search queries.

This research was published on October 28, 2021.

Could some of this have been incorporated into the recent Core Algorithm Update?

Core algorithm updates are generally focused on understanding queries and web pages better and providing better answers.

One can only speculate as Google rarely shares specifics, especially with regard to core algorithm updates.

Citation

Introducing FLAN: More generalizable Language Models with Instruction Fine-Tuning


Searchenginejournal.com


AI

Exploring the Evolution of Language Translation: A Comparative Analysis of AI Chatbots and Google Translate


According to an article on PCMag, while Google Translate makes translating sentences into over 100 languages easy, regular users acknowledge that there’s still room for improvement.

In theory, large language models (LLMs) such as ChatGPT are expected to bring about a new era in language translation. These models consume vast amounts of text-based training data and real-time feedback from users worldwide, enabling them to quickly learn to generate coherent, human-like sentences in a wide range of languages.

However, despite the anticipation that ChatGPT would revolutionize translation, such expectations often prove inflated in practice. To put these claims to the test, PCMag conducted a blind test, asking fluent speakers of eight non-English languages to evaluate the translation results from various AI services.

The test compared ChatGPT (both the free and paid versions) to Google Translate, as well as to other competing chatbots such as Microsoft Copilot and Google Gemini. The evaluation involved comparing the translation quality for two test paragraphs across different languages, including Polish, French, Korean, Spanish, Arabic, Tagalog, and Amharic.

In the first test conducted in June 2023, participants consistently favored AI chatbots over Google Translate. ChatGPT, Google Bard (now Gemini), and Microsoft Bing outperformed Google Translate, with ChatGPT receiving the highest praise. ChatGPT demonstrated superior performance in converting colloquialisms, while Google Translate often provided literal translations that lacked cultural nuance.

For instance, ChatGPT accurately translated colloquial expressions like “blow off steam,” whereas Google Translate produced more literal translations that failed to resonate across cultures. Participants appreciated ChatGPT’s ability to maintain consistent levels of formality and its consideration of gender options in translations.

The success of AI chatbots like ChatGPT can be attributed to reinforcement learning with human feedback (RLHF), which allows these models to learn from human preferences and produce culturally appropriate translations, particularly for non-native speakers. However, it’s essential to note that while AI chatbots outperformed Google Translate, they still had limitations and occasional inaccuracies.

In a subsequent test, PCMag evaluated different versions of ChatGPT, including the free and paid versions, as well as language-specific AI agents from OpenAI’s GPTStore. The paid version of ChatGPT, known as ChatGPT Plus, consistently delivered the best translations across various languages. However, Google Translate also showed improvement, performing surprisingly well compared to previous tests.

Overall, while ChatGPT Plus emerged as the preferred choice for translation, Google Translate demonstrated notable improvement, challenging the notion that AI chatbots are always superior to traditional translation tools.


Source: https://www.pcmag.com/articles/google-translate-vs-chatgpt-which-is-the-best-language-translator


GOOGLE

Google Implements Stricter Guidelines for Mass Email Senders to Gmail Users


Beginning in April, senders of unwanted mass emails to Gmail users will encounter a surge in message rejections unless they comply with Google’s new email sender guidelines, the company cautions.

Fresh Guidelines for Dispatching Mass Emails to Gmail Inboxes

An article featured on Forbes highlighted that new rules are being introduced to shield Gmail users from the deluge of unsolicited mass emails. Initially, reports surfaced of certain marketers receiving error notifications for messages sent to Gmail accounts. However, a Google representative clarified that these specific errors, denoted 550-5.7.56, weren’t new but stemmed from existing authentication requirements.

Moreover, Google has verified that commencing from April, they will initiate “the rejection of a portion of non-compliant email traffic, progressively escalating the rejection rate over time.” Google elaborates that, for instance, if 75% of the traffic adheres to the new email sender authentication criteria, then a portion of the remaining non-conforming 25% will face rejection. The exact proportion remains undisclosed. Google does assert that the implementation of the new regulations will be executed in a “step-by-step fashion.”

This cautious and methodical strategy seems to have already kicked off, with transient errors affecting a “fraction of their non-compliant email traffic” coming into play this month. Additionally, Google stipulates that bulk senders will be granted until June 1 to integrate “one-click unsubscribe” in all commercial or promotional correspondence.
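
The “one-click unsubscribe” requirement is typically met with the standard List-Unsubscribe headers (RFC 8058). A minimal sketch using Python’s standard library, with placeholder addresses and URL, assuming those standard headers are what satisfies the requirement:

```python
# Build a bulk message carrying RFC 8058 one-click unsubscribe headers.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "newsletter@example.com"
msg["To"] = "user@gmail.com"
msg["Subject"] = "Monthly update"
# an HTTPS one-click endpoint plus a mailto fallback
msg["List-Unsubscribe"] = (
    "<https://example.com/unsubscribe?id=123>, <mailto:unsubscribe@example.com>"
)
# signals that a bare POST to the URL above unsubscribes the recipient
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("Example newsletter body.")

print(msg["List-Unsubscribe-Post"])  # List-Unsubscribe=One-Click
```

The HTTPS endpoint must complete the unsubscribe on a single POST, with no login or confirmation page, which is what distinguishes “one-click” from an ordinary unsubscribe link.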

Exclusively Personal Gmail Accounts Subject to Rejection

These changes affect only bulk emails sent to personal Gmail accounts. Entities sending out mass emails, specifically those transmitting at least 5,000 messages daily to Gmail accounts, will be required to authenticate outgoing emails and “refrain from dispatching unsolicited emails.” The 5,000-message threshold is counted across emails sent from the same principal domain, regardless of the use of subdomains. Once the threshold is met, the domain is permanently categorized as a bulk sender.

These guidelines do not extend to communications directed at Google Workspace accounts, although all senders, including those utilizing Google Workspace, are required to adhere to the updated criteria.

Augmented Security and Enhanced Oversight for Gmail Users

A Google spokesperson emphasized that these requirements are being rolled out to “fortify sender-side security and augment user control over inbox contents even further.” For recipients, this translates to greater trust in the authenticity of the email sender, mitigating the risk of falling prey to phishing, a tactic frequently exploited by malicious actors capitalizing on authentication vulnerabilities. “If anything,” the spokesperson concludes, “meeting these stipulations should facilitate senders in reaching their intended recipients more efficiently, with reduced risks of spoofing and hijacking by malicious actors.”


GOOGLE

Google’s Next-Gen AI Chatbot, Gemini, Faces Delays: What to Expect When It Finally Launches


In an unexpected turn of events, Google has chosen to postpone the much-anticipated debut of its revolutionary generative AI model, Gemini. Initially poised to make waves this week, the unveiling has now been rescheduled for early next year, specifically in January.

Gemini is set to redefine the landscape of conversational AI, representing Google’s most potent endeavor in this domain to date. Positioned as a multimodal AI chatbot, Gemini boasts the capability to process diverse data types. This includes a unique proficiency in comprehending and generating text, images, and various content formats, even going so far as to create an entire website based on a combination of sketches and written descriptions.

Originally, Google had planned an elaborate series of launch events spanning California, New York, and Washington. Regrettably, these events have been canceled due to concerns about Gemini’s responsiveness to non-English prompts. According to anonymous sources cited by The Information, Google’s Chief Executive, Sundar Pichai, personally decided to postpone the launch, acknowledging the importance of global support as a key feature of Gemini’s capabilities.

Gemini is expected to surpass the renowned ChatGPT, powered by OpenAI’s GPT-4 model, and preliminary private tests have shown promising results. Fueled by significantly enhanced computing power, Gemini has outperformed GPT-4, particularly in FLOPS (Floating Point Operations Per Second), owing to its access to a multitude of high-end AI accelerators through the Google Cloud platform.

SemiAnalysis, a research firm that publishes on Substack, expressed in an August blog post that Gemini appears poised to “blow OpenAI’s model out of the water.” The extensive compute power at Google’s disposal has evidently contributed to Gemini’s superior performance.

Google’s Vice President and Manager of Bard and Google Assistant, Sissie Hsiao, offered insights into Gemini’s capabilities, citing examples like generating novel images in response to specific requests, such as illustrating the steps to ice a three-layer cake.

While Google’s current generative AI offering, Bard, has showcased noteworthy accomplishments, it has struggled to achieve the same level of consumer awareness as ChatGPT. Gemini, with its unparalleled capabilities, is expected to be a game-changer, demonstrating impressive multimodal functionalities never seen before.

During the initial announcement at Google’s I/O developer conference in May, the company emphasized Gemini’s multimodal prowess and its developer-friendly nature. An application programming interface (API) is under development, allowing developers to seamlessly integrate Gemini into third-party applications.

As the world awaits the delayed unveiling of Gemini, the stakes are high, with Google aiming to revolutionize the AI landscape and solidify its position as a leader in generative artificial intelligence. The postponed launch only adds to the anticipation surrounding Gemini’s eventual debut in the coming year.

