

Research Papers May Show What Google MUM Is


Google Multitask Unified Model (MUM) is a new technology for answering complex questions that don’t have direct answers. Google has published research papers that may offer clues about what the MUM AI is and how it works.

Google Algorithms Described in Research Papers and Patents

Google generally does not confirm whether or not algorithms described in research papers or patents are in use.

Google has not confirmed what the Multitask Unified Model (MUM) technology is.

Multitask Unified Model Research Papers

Sometimes, as was the case with Neural Matching, there are no research papers or patents that explicitly use the name of the technology. It’s as if Google invented a descriptive brand name for the algorithms.

This is somewhat the case with Multitask Unified Model (MUM). There are no patents or research papers with the MUM brand name exactly. But…

There are research papers that discuss similar problems that MUM solves using Multitask and Unified Model solutions.

Background on the Problem MUM Solves

Long Form Question Answering addresses complex search queries that cannot be answered with a link or a snippet. The answer requires paragraphs of information covering multiple subtopics.

Google’s MUM announcement described the complexity of certain questions with an example of a searcher wanting to know how to prepare for hiking Mount Fuji in the fall.

This is Google’s example of a complex search query:

“Today, Google could help you with this, but it would take many thoughtfully considered searches — you’d have to search for the elevation of each mountain, the average temperature in the fall, difficulty of the hiking trails, the right gear to use, and more.”

Here’s an example of a Long Form Question:

“What are the differences between bodies of water like lakes, rivers, and oceans?”

The above question requires multiple paragraphs to discuss the qualities of lakes, rivers, and oceans, plus a comparison of each body of water with the others.

Here’s an example of the complexity of the answer:

  • A lake is generally referred to as still water because it does not flow.
  • A river flows.
  • Both a lake and a river are generally freshwater.
  • But a river and a lake can sometimes be brackish (salty).
  • An ocean can be miles deep.

Answering a Long Form question requires a complex answer composed of multiple steps, like the example Google shared about how to prepare to hike Mount Fuji in the fall.

Google’s MUM announcement did not mention Long Form Question Answering but the problem MUM solves appears to be exactly that.
(Citation: Google Research Paper Reveals a Shortcoming in Search).

Change in How Questions are Answered

In May 2021, a Google researcher named Donald Metzler published a paper arguing that search engines need to take a new direction in how they answer questions in order to handle complex queries.

The paper stated that the current method of information retrieval, which consists of indexing web pages and ranking them, is inadequate for answering complex search queries.

The paper is entitled Rethinking Search: Making Experts out of Dilettantes (PDF).

A dilettante is someone who has a superficial knowledge of something, like an amateur rather than an expert.

The paper positions the state of search engines today like this:

“Today’s state-of-the-art systems often rely on a combination of term-based… and semantic …retrieval to generate an initial set of candidates.

This set of candidates is then typically passed into one or more stages of re-ranking models, which are quite likely to be neural network-based learning-to-rank models.

As mentioned previously, the index-retrieve-then-rank paradigm has withstood the test of time and it is no surprise that advanced machine learning and NLP-based approaches are an integral part of the indexing, retrieval, and ranking components of modern day systems.”
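To make the index-retrieve-then-rank paradigm concrete, here is a minimal Python sketch. The toy corpus and the term-overlap scoring are assumptions for illustration only; as the quote describes, a production system would use far more sophisticated term-based and semantic retrieval plus learned neural re-rankers.

```python
# A toy sketch of the index-retrieve-then-rank paradigm.
from collections import defaultdict

docs = {
    1: "hiking mount fuji in the fall",
    2: "average fall temperature on mount fuji",
    3: "best lakes for kayaking",
}

# 1. Index: build an inverted index mapping terms to documents.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

# 2. Retrieve: generate an initial candidate set from the query terms.
def retrieve(query):
    candidates = set()
    for term in query.split():
        candidates |= index.get(term, set())
    return candidates

# 3. Rank: order candidates by a score (here, naive term overlap;
# real systems pass candidates to learning-to-rank models).
def rank(query, candidates):
    terms = set(query.split())
    return sorted(candidates,
                  key=lambda d: len(terms & set(docs[d].split())),
                  reverse=True)

query = "hiking mount fuji fall"
print(rank(query, retrieve(query)))  # -> [1, 2]
```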

Model-based Information Retrieval

The new system that the Making Experts out of Dilettantes research paper describes is one that does away with the index-retrieve-rank part of the algorithm.

This section of the research paper makes reference to IR, which stands for Information Retrieval, the task that search engines perform.

Here is how the paper describes this new direction for search engines:

“The approach, referred to as model-based information retrieval, is meant to replace the long-lived “retrieve-then-rank” paradigm by collapsing the indexing, retrieval, and ranking components of traditional IR systems into a single unified model.”

The paper next goes into detail about how the “unified model” works.

Let’s stop right here to note that the name of Google’s new algorithm is Multitask Unified Model.

I will skip the description of the unified model for now and just note this:

“The important distinction between the systems of today and the envisioned system is the fact that a unified model replaces the indexing, retrieval, and ranking components. In essence, it is referred to as model-based because there is nothing but a model.”

Illustration from the research paper showing what a unified model is
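As a rough contrast with the pipeline sketched earlier, a model-based system is just one model: query in, answer out, with no separate index, retrieval, or ranking stages. The sketch below uses an off-the-shelf seq2seq model (t5-small, purely as a stand-in; the paper’s envisioned unified model is not public) to show the shape of the idea.

```python
# Hedged sketch of "nothing but a model": the corpus knowledge lives in the
# model's parameters, so the query maps straight to an answer.
from transformers import pipeline

unified_model = pipeline("text2text-generation", model="t5-small")

result = unified_model(
    "question: what is the difference between a lake and a river?"
)
print(result[0]["generated_text"])
```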

In another place the Dilettantes research paper states:

“To accomplish this, a so-called model-based information retrieval framework is proposed that breaks away from the traditional index retrieve-then-rank paradigm by encoding the knowledge contained in a corpus in a unified model that replaces the indexing, retrieval, and ranking components of traditional systems.”

Is it a coincidence that Google’s technology for answering complex questions is called Multitask Unified Model and that the system discussed in this May 2021 paper makes the case for a “unified model” for answering complex questions?

What is the MUM Research Paper?

The “Rethinking Search: Making Experts out of Dilettantes” research paper lists Donald Metzler as an author. It announces the need for an algorithm that accomplishes the task of answering complex questions and suggests a unified model for accomplishing that.

It gives an overview of the process but it is somewhat short on details and experiments.

There is another research paper published in December 2020 that describes an algorithm that does have experiments and details and one of the authors is… Donald Metzler.

The name of the December 2020 research paper is Multitask Mixture of Sequential Experts for User Activity Streams.

Let’s stop right here, back up, and reiterate the name of Google’s new algorithm: Multitask Unified Model.

The May 2021 Rethinking Search: Making Experts out of Dilettantes paper outlined the need for a Unified Model. The earlier research paper from December 2020 (by the same author) is called, Multitask Mixture of Sequential Experts for User Activity Streams (PDF).

Are these coincidences? Maybe not. The parallels between MUM and this other research paper are uncanny.

MoSE: Multitask Mixture of Sequential Experts for User Activity Streams

TL;DR:
MoSE is a machine intelligence technology that learns from multiple data sources (search and browsing logs) in order to predict complex multi-step search patterns. It is highly efficient, which makes it scalable and powerful.

Those features of MoSE match certain qualities of the MUM algorithm, specifically that MUM can answer complex search queries and is 1,000 times more powerful than technologies like BERT.

What MoSE Does

TL;DR:
MoSE learns from the sequential order of user click and browsing data. This information allows it to model the process behind complex search queries and produce satisfactory answers.

The December 2020 MoSE research paper from Google describes modeling user behavior in sequential order, as opposed to modeling on the search query and the context.

Modeling the user behavior in sequential order is like studying how a user searched for this, then this, then that, in order to understand how to answer a complex query.

The paper describes it like this:

“In this work, we study the challenging problem of how to model sequential user behavior in the neural multi-task learning settings.

Our major contribution is a novel framework, Mixture of Sequential Experts (MoSE). It explicitly models sequential user behavior using Long Short-Term Memory (LSTM) in the state-of-art Multi-gate Mixture-of-Expert multi-task modeling framework.”

That last part about “Multi-gate Mixture-of-Expert multi-task modeling framework” is a mouthful.

It’s a reference to a type of algorithm that optimizes for multiple tasks/goals and that’s pretty much all that needs to be known about it for now. (Citation: Modeling Task Relationships in Multi-task Learning with Multi-gate Mixture-of-Experts)
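To ground that mouthful, here is a minimal PyTorch sketch of a MoSE-style architecture: LSTM experts inside a Multi-gate Mixture-of-Experts, with one softmax gate and one prediction tower per task. All of the dimensions, the number of experts, and the two-task setup are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class MoSESketch(nn.Module):
    def __init__(self, input_dim=32, hidden_dim=64, n_experts=4, n_tasks=2):
        super().__init__()
        # Each expert is an LSTM that reads the user activity stream.
        self.experts = nn.ModuleList(
            [nn.LSTM(input_dim, hidden_dim, batch_first=True)
             for _ in range(n_experts)])
        # One gate per task mixes the experts (the "multi-gate" part).
        self.gates = nn.ModuleList(
            [nn.Linear(input_dim, n_experts) for _ in range(n_tasks)])
        # One small tower per task produces that task's prediction.
        self.towers = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in range(n_tasks)])

    def forward(self, x):
        # x: (batch, seq_len, input_dim), a sequence of user activity events.
        expert_outs = []
        for lstm in self.experts:
            _, (h, _) = lstm(x)        # final hidden state of each expert
            expert_outs.append(h[-1])  # (batch, hidden_dim)
        experts = torch.stack(expert_outs, dim=1)

        summary = x.mean(dim=1)  # gate on a summary of the input
        outputs = []
        for gate, tower in zip(self.gates, self.towers):
            weights = torch.softmax(gate(summary), dim=-1)
            mixed = (weights.unsqueeze(-1) * experts).sum(dim=1)
            outputs.append(tower(mixed))
        return outputs  # one prediction per task

model = MoSESketch()
batch = torch.randn(8, 10, 32)  # 8 users, 10 sequential events each
task_a, task_b = model(batch)
```

The multi-gate design is what lets each task weight the shared experts differently, which is how a single model can serve several goals at once.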

The MoSE research paper discusses other similar multi-task algorithms that are optimized for multiple goals, such as simultaneously predicting which video a user might want to watch on YouTube, which videos will drive more engagement, and which videos will generate more user satisfaction. That’s three tasks/goals.

The paper comments:

“Multi-task learning is effective especially when tasks are closely correlated.”

MoSE was Trained on Search

The MoSE algorithm focuses on learning from what it calls heterogeneous data, meaning diverse forms of data.

Of interest to us, in the context of MUM, is that MoSE is discussed in terms of search and the interactions of searchers in their quest for answers, i.e., what steps a searcher took to find an answer.

“…in this work, we focus on modeling user activity streams from heterogeneous data sources (e.g., search logs and browsing logs) and the interactions among them.”

The researchers experimented and tested the MoSE algorithm on search tasks within G Suite and Gmail.
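A trivial sketch of what “heterogeneous data sources” means in practice: two differently shaped event logs merged into one time-ordered activity stream before any modeling happens. The search and browse events below are invented for illustration.

```python
# Hypothetical event logs; each tuple is (timestamp, event description).
search_log = [(1, "search: fuji weather in fall"), (4, "search: fuji hiking gear")]
browse_log = [(2, "visit: weather.example.com"), (3, "visit: trails.example.com")]

# Merge the heterogeneous sources into a single time-ordered stream.
activity_stream = sorted(search_log + browse_log)
for timestamp, event in activity_stream:
    print(timestamp, event)
```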

MoSE and Search Behavior Prediction

Another feature that makes MoSE an interesting candidate for being relevant to MUM is that it can predict a series of sequential searches and behaviors.

Complex search queries, as noted by the Google MUM announcement, can take up to eight searches.

But if an algorithm can predict these searches and incorporate them into its answers, it is better able to answer those complex questions.

The MUM announcement states:

“But with a new technology called Multitask Unified Model, or MUM, we’re getting closer to helping you with these types of complex needs. So in the future, you’ll need fewer searches to get things done.”

And here is what the MoSE research paper states:

“For example, user behavior streams, such as user search logs in search systems, are naturally a temporal sequence. Modeling user sequential behaviors as explicit sequential representations can empower the multi-task model to incorporate temporal dependencies, thus predicting future user behavior more accurately.”
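In code terms, the quote describes next-action prediction over a temporal sequence. Below is a hedged sketch of one training step, with a toy vocabulary of user actions; the paper does not describe its training setup at this level of detail.

```python
import torch
import torch.nn as nn

n_actions = 5                           # toy vocabulary of user actions
emb = nn.Embedding(n_actions, 16)       # represent each action as a vector
lstm = nn.LSTM(16, 32, batch_first=True)
head = nn.Linear(32, n_actions)         # predict the next action

seq = torch.tensor([[0, 2, 1, 3]])      # one user's ordered actions
target = torch.tensor([[2, 1, 3, 4]])   # the same sequence shifted by one step

out, _ = lstm(emb(seq))
logits = head(out)                      # (1, 4, n_actions)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, n_actions), target.reshape(-1))
loss.backward()                         # one gradient step toward better predictions
```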

MoSE is Highly Efficient with Resource Costs

The efficiency of MoSE is important.

The fewer computing resources an algorithm needs to complete a task, the more powerful it can be at that task, because it has more room to scale.

MUM is said to be 1,000 times more powerful than BERT.

The MoSE research paper mentions balancing search quality with “resource costs,” resource costs being a reference to computing resources.

The ideal is to produce high quality results with minimal computing resource costs, which allows the system to scale up to a bigger task like web search.

The original Penguin algorithm could only be run on the map of the entire web (called a link graph) a couple of times a year. Presumably that was because it was resource intensive and could not be run on a daily basis.

In 2016 Penguin became more powerful because it could now run in real time. This is an example of why it’s important to produce high quality results with minimal resource costs.

The lower MoSE’s resource costs, the more powerful and scalable it can be.

This is what the researchers said about the resource costs of MoSE:

“In experiments, we show the effectiveness of the MoSE architecture over seven alternative architectures on both synthetic and noisy real-world user data in G Suite.

We also demonstrate the effectiveness and flexibility of the MoSE architecture in a real-world decision making engine in GMail that involves millions of users, balancing between search quality and resource costs.”

Then toward the end of the paper it reports these remarkable results:

“We emphasize two benefits of MoSE. First, performance wise, MoSE significantly outperforms the heavily tuned shared bottom model. At the requirement of 80% resource savings, MoSE is able to preserve approximately 8% more document search clicks, which is very significant in the product.

Also, MoSE is robust across different resource saving level due to the its modeling power, even though we assigned equal weights to the tasks during training.”

And of its sheer power and flexibility to adapt to change, the paper boasts:

“This gives MoSE more flexibility when the business requirement keeps changing in practice since a more robust model like MoSE may alleviate the need to re-train the model, comparing with models that are more sensitive to the importance weights during training.”

MUM, MoSE and Transformer

Google announced that MUM was built using the Transformer architecture.

Google’s announcement noted:

“MUM has the potential to transform how Google helps you with complex tasks. Like BERT, MUM is built on a Transformer architecture, but it’s 1,000 times more powerful.”

The results reported in the MoSE research paper from December 2020 were remarkable.

But the version of MoSE tested in 2020 was not built using the Transformer architecture.

The researchers mentioned Transformers as a future direction for MoSE:

“Experimenting with more advanced techniques such as Transformer is considered as future work.

… MoSE, consisting of general building blocks, can be easily extended, such as using other sequential modeling units besides LSTM, including GRUs, attentions, and Transformers…”

According to the research paper, then, MoSE could easily be supercharged by using other architectures, like Transformers. This means that MoSE could be a part of what Google announced as MUM.
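As a rough illustration of how small that swap would be, here is an expert built from a small TransformerEncoder instead of an LSTM, in PyTorch. The layer sizes are assumptions; the point is that both kinds of expert consume the same (batch, sequence, features) activity stream.

```python
import torch
import torch.nn as nn

# One expert as a Transformer instead of an LSTM; same input shape either way.
encoder_layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
transformer_expert = nn.TransformerEncoder(encoder_layer, num_layers=2)

x = torch.randn(8, 10, 32)             # 8 users, 10 sequential events, 32 features
sequence_out = transformer_expert(x)   # (8, 10, 32)
pooled = sequence_out.mean(dim=1)      # one vector per user, as a gate expects
```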

Why Success of MoSE is Notable

Google publishes many algorithm patents and research papers. Many of them are pushing the edges of the state of the art while also noting flaws and errors that require further research.

That’s not the case with MoSE. It’s quite the opposite: the researchers note the accomplishments of MoSE and the opportunity that remains to make it even better.

What makes the MoSE research even more notable is the level of success it claims and the door it leaves open for doing better still.

It is noteworthy and important when a research paper claims successes rather than a mix of successes and failures.

This is especially true when the researchers claim to achieve those successes without significant resource costs.

Is MoSE the Google MUM AI Technology?

MUM is described as an artificial intelligence technology. MoSE is categorized as Machine Intelligence on Google’s AI blog.

What’s the difference between AI and machine intelligence? Not a whole lot; they’re essentially the same category (note: machine intelligence, not machine learning). Google’s AI publications database classifies research papers on artificial intelligence under the Machine Intelligence category. There is no Artificial Intelligence category.

We cannot say with certainty that MoSE is part of the technology underlying Google’s MUM.

  • It’s possible that MUM is actually a number of technologies working together and that MoSE is a part of that.
  • It could be that MoSE is a major part of Google MUM.
  • Or it could be that MoSE has nothing to do with MUM whatsoever.

Nevertheless, it’s intriguing that MoSE is a successful approach to predicting user search behavior and that it can easily be scaled using Transformers.

Whether or not this is a part of Google’s MUM technology, the algorithms described within these papers show what the state of the art in information retrieval is.

Citations

MoSE – Multitask Mixture of Sequential Experts for User Activity Streams (PDF)

Rethinking Search: Making Experts out of Dilettantes (PDF)

Official Google Announcement of MUM
MUM: A new AI Milestone for Understanding Information


What can ChatGPT do?


ChatGPT Explained

ChatGPT is a large language model developed by OpenAI that is trained on a massive amount of text data. It is capable of generating human-like text and has been used in a variety of applications, such as chatbots, language translation, and text summarization.

One of the key features of ChatGPT is its ability to generate text that is similar to human writing. This is achieved through the use of a transformer architecture, which allows the model to understand the context and relationships between words in a sentence. The transformer architecture is a type of neural network that is designed to process sequential data, such as natural language.

Another important aspect of ChatGPT is its ability to generate text that is contextually relevant. This means that the model is able to understand the context of a conversation and generate responses that are appropriate to the conversation. This is accomplished by the use of a technique called “masked language modeling,” which allows the model to predict the next word in a sentence based on the context of the previous words.

One of the most popular applications of ChatGPT is in the creation of chatbots. Chatbots are computer programs that simulate human conversation and can be used in customer service, sales, and other applications. ChatGPT is particularly well-suited for this task because of its ability to generate human-like text and understand context.

Another application of ChatGPT is language translation. By training the model on a large amount of text data in multiple languages, it can be used to translate text from one language to another. The model is able to understand the meaning of the text and generate a translation that is grammatically correct and semantically equivalent.

In addition to chatbots and language translation, ChatGPT can also be used for text summarization. This is the process of taking a large amount of text and condensing it into a shorter, more concise version. ChatGPT is able to understand the main ideas of the text and generate a summary that captures the most important information.

Despite its many capabilities and applications, ChatGPT is not without its limitations. One of the main challenges with using language models like ChatGPT is the risk of generating text that is biased or offensive. This can occur when the model is trained on text data that contains biases or stereotypes. To address this, OpenAI has implemented a number of techniques to reduce bias in the training data and in the model itself.

In conclusion, ChatGPT is a powerful language model that is capable of generating human-like text and understanding context. It has a wide range of applications, including chatbots, language translation, and text summarization. While there are limitations to its use, ongoing research and development is aimed at improving the model’s performance and reducing the risk of bias.

** The above article was written 100% by ChatGPT, as an example of the advanced text that can be produced by an automated AI.


Google December Product Reviews Update Affects More Than English Language Sites?


Google announced that the Product Reviews update was rolling out to English language pages. No mention was made of if or when it would roll out to other languages. Google’s John Mueller answered a question about whether it is rolling out to other languages.

Google December 2021 Product Reviews Update

On December 1, 2021, Google announced on Twitter that a Product Reviews update would be rolling out with a focus on English language web pages.

The focus of the update was for improving the quality of reviews shown in Google search, specifically targeting review sites.

A Googler tweeted a description of the kinds of sites that would be targeted for demotion in the search rankings:

“Mainly relevant to sites that post articles reviewing products.

Think of sites like “best TVs under $200”.com.

Goal is to improve the quality and usefulness of reviews we show users.”


Google also published a blog post with more guidance on the product review update that introduced two new best practices that Google’s algorithm would be looking for.

The first best practice was a requirement of evidence that a product was actually handled and reviewed.

The second best practice was to provide links to more than one place that a user could purchase the product.

The Twitter announcement stated that the update was rolling out to English language websites. The blog post did not mention which languages it was rolling out to, nor did it specify that the Product Reviews update was limited to the English language.

Google’s Mueller Thinking About Product Reviews Update

Screenshot of Google's John Mueller trying to recall if the December Product Reviews update affects more than the English language

Product Review Update Targets More Languages?

The person asking the question was rightly under the impression that the product review update only affected English language search results.


But he asserted that he was seeing search volatility in the German language that appeared to be related to Google’s December 2021 Product Reviews update.

This is his question:

“I was seeing some movements in German search as well.

So I was wondering if there could also be an effect on websites in other languages by this product reviews update… because we had lots of movement and volatility in the last weeks.

…My question is, is it possible that the product reviews update affects other sites as well?”

John Mueller answered:

“I don’t know… like other languages?

My assumption was this was global and and across all languages.

But I don’t know what we announced in the blog post specifically.

But usually we try to push the engineering team to make a decision on that so that we can document it properly in the blog post.

I don’t know if that happened with the product reviews update. I don’t recall the complete blog post.

But it’s… from my point of view it seems like something that we could be doing in multiple languages and wouldn’t be tied to English.

And even if it were English initially, it feels like something that is relevant across the board, and we should try to find ways to roll that out to other languages over time as well.

So I’m not particularly surprised that you see changes in Germany.

But I also don’t know what we actually announced with regards to the locations and languages that are involved.”

Does Product Reviews Update Affect More Languages?

While the tweeted announcement specified that the Product Reviews update was limited to the English language, the official blog post did not mention any such limitation.

Google’s John Mueller offered his opinion that the product reviews update is something that Google could do in multiple languages.

One must wonder if the tweet was meant to communicate that the update was rolling out first in English and subsequently to other languages.

It’s unclear if the product reviews update was rolled out globally to more languages. Hopefully Google will clarify this soon.

Citations

Google Blog Post About Product Reviews Update

Product reviews update and your site

Google’s New Product Reviews Guidelines

Write high quality product reviews

John Mueller Discusses If Product Reviews Update Is Global

Watch Mueller answer the question at the 14:00 Minute Mark



Survey says: Amazon, Google more trusted with your personal data than Apple is



MacRumors reveals that more people are comfortable with their personal data in the hands of Amazon and Google than in Apple’s. Companies that the public really doesn’t trust with their personal data include Facebook, TikTok, and Instagram.

The survey asked over 1,000 internet users in the U.S. how much they trusted certain companies such as Facebook, TikTok, Instagram, WhatsApp, YouTube, Google, Microsoft, Apple, and Amazon to handle their user data and browsing activity responsibly.

Amazon and Google are considered by survey respondents to be more trustworthy than Apple

Those surveyed were asked whether they trusted these firms with their personal data “a great deal,” “a good amount,” “not much,” or “not at all.” Respondents could also answer that they had no opinion about a particular company. 18% of those polled said that they trust Apple “a great deal,” which topped the 14% received by Google and Amazon.

However, 39% said that they trust Amazon “a good amount,” with Google picking up 34% in that same category. Only 26% of those answering said that they trust Apple “a good amount.” The first two responses, “a great deal” and “a good amount,” are considered positive replies for a company. “Not much” and “not at all” are considered negative responses.

By adding up the scores in the positive categories, Apple tallied 44% (18% of respondents trust Apple with their personal data “a great deal” and 26% trust it “a good amount”). But that placed the tech giant third, after Amazon’s 53% and Google’s 48%. Microsoft finished fourth with 43%, YouTube (which is owned by Google) was fifth with 35%, and Facebook was sixth at 20%.

Rounding out the remainder of the nine firms in the survey, Instagram placed seventh with a positive score of 19%, WhatsApp was eighth with a score of 15%, and TikTok was last at 12%.

Looking at the scoring for the two negative responses (“not much” or “not at all”), Facebook had a combined negative score of 72%, making it the least trusted company in the survey. TikTok was next at 63%, with Instagram following at 60%. WhatsApp and YouTube were both in the middle of the pack at 53%, followed by Google at 47% and Microsoft at 42%. Apple and Amazon had the lowest combined negative scores, at 40% each.

74% of those surveyed called targeted online ads invasive

The survey also found that a whopping 82% of respondents found targeted online ads annoying and 74% called them invasive. Just 27% found such ads helpful. This response doesn’t exactly track with the 62% of iOS users who have used Apple’s App Tracking Transparency feature to opt out of being tracked while browsing websites and using apps. Tracking allows third-party firms to send users targeted online ads, something they cannot do to users who have opted out.

The 38% of iOS users who decided not to opt out of tracking might have done so because they find it convenient to receive targeted ads about products they looked up online. But is ATT actually doing anything?

Marketing strategy consultant Eric Seufert said last summer, “Anyone opting out of tracking right now is basically having the same level of data collected as they were before. Apple hasn’t actually deterred the behavior that they have called out as being so reprehensible, so they are kind of complicit in it happening.”

The Financial Times says that iPhone users are being lumped together by certain behaviors instead of unique ID numbers in order to send targeted ads. Facebook chief operating officer Sheryl Sandberg says that the company is working to rebuild its ad infrastructure “using more aggregate or anonymized data.”

Aggregated data is a collection of individual data that is used to create high-level data. Anonymized data is data that removes any information that can be used to identify the people in a group.

When consumers were asked how often they think their phones or other tech devices listen in on them in ways they didn’t agree to, 72% answered “very often” or “somewhat often.” 28% responded “rarely” or “never.”
