Google LIMoE – A Step Towards Goal Of A Single AI


Google announced a new technology called LIMoE that it says represents a step toward reaching Google’s goal of an AI architecture called Pathways.

Pathways is an AI architecture in which a single model learns to do multiple tasks that are currently accomplished by employing multiple separate algorithms.

LIMoE is an acronym that stands for Learning Multiple Modalities with One Sparse Mixture-of-Experts Model. It’s a model that processes vision and text together.

While there are other architectures that do similar things, the breakthrough is in the way the new model accomplishes these tasks, using a neural network technique called a sparse model.

The sparse approach is described in a 2017 research paper that introduced the Mixture-of-Experts (MoE) layer, titled Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer.


In 2021, Google announced a MoE model called GLaM (Efficient Scaling of Language Models with Mixture-of-Experts) that was trained on text only.

The difference with LIMoE is that it works on text and images simultaneously.

A sparse model differs from the “dense” models in that, instead of devoting every part of the model to accomplishing a task, it assigns the task to various “experts” that each specialize in part of the task.

This lowers the computational cost, making the model more efficient.

So, similar to how a brain sees a dog and knows it’s a dog, that it’s a pug, and that the pug has a silver fawn coat, this model can also view an image and accomplish the task in a similar way, by assigning computational tasks to different experts that specialize in recognizing a dog, its breed, its color, and so on.

The LIMoE model routes the problems to the “experts” specializing in a particular task, achieving similar or better results than current approaches to solving problems.
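
To make the routing idea concrete, here is a minimal, illustrative sketch of top-1 sparse Mixture-of-Experts routing written with NumPy. It is not Google’s LIMoE code; the dimensions, the random router, and the toy “experts” are invented for illustration. It only shows the general mechanism: a small router scores each token, the highest-scoring expert handles that token, and the remaining experts stay idle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 input tokens, 8-dimensional features, 3 experts, top-1 routing.
num_tokens, dim, num_experts = 4, 8, 3

tokens = rng.normal(size=(num_tokens, dim))      # token embeddings (image patches or text tokens)
router_w = rng.normal(size=(dim, num_experts))   # router weights: one score per expert for each token
experts = [rng.normal(size=(dim, dim)) for _ in range(num_experts)]  # each "expert" is its own small layer

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

gate_probs = softmax(tokens @ router_w)          # (num_tokens, num_experts) routing probabilities
chosen = gate_probs.argmax(axis=-1)              # top-1 routing: each token is sent to a single expert

outputs = np.zeros_like(tokens)
for i, expert_id in enumerate(chosen):
    # Only the selected expert does any computation for this token;
    # the other experts stay idle, which is what makes the model "sparse".
    outputs[i] = gate_probs[i, expert_id] * (tokens[i] @ experts[expert_id])

print(chosen)  # index of the expert chosen for each token
```

In a full model like LIMoE, the same mechanism sits inside Transformer layers, and image patches and text tokens flow through one shared pool of experts, which is how image-heavy, text-heavy, and mixed experts can emerge.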


An interesting feature of the model is how some of the experts specialize mostly in processing images, others specialize mostly in processing text and some experts specialize in doing both.

Google’s description of how LIMoE works shows how there’s an expert on eyes, another for wheels, an expert for striped textures, solid textures, words, door handles, food & fruits, sea & sky, and an expert for plant images.

The announcement about the new algorithm describes these experts:

“There are also some clear qualitative patterns among the image experts — e.g., in most LIMoE models, there is an expert that processes all image patches that contain text. …one expert processes fauna and greenery, and another processes human hands.”

Experts that specialize in different parts of the problem provide the ability to scale and to accurately accomplish many different tasks at a lower computational cost.

The research paper summarizes their findings:

  • “We propose LIMoE, the first large-scale multimodal mixture of experts models.
  • We demonstrate in detail how prior approaches to regularising mixture of experts models fall short for multimodal learning, and propose a new entropy-based regularisation scheme to stabilise training.
  • We show that LIMoE generalises across architecture scales, with relative improvements in zero-shot ImageNet accuracy ranging from 7% to 13% over equivalent dense models.
  • Scaled further, LIMoE-H/14 achieves 84.1% zeroshot ImageNet accuracy, comparable to SOTA contrastive models with per-modality backbones and pre-training.”
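
The “entropy-based regularisation scheme” mentioned in the second bullet can be pictured as auxiliary losses computed on the router’s probabilities. The sketch below is a loose interpretation for illustration only, not the paper’s exact formulation or coefficients: it penalizes per-token routing entropy (so each token routes decisively) while rewarding the entropy of the batch-averaged routing distribution (so no single expert absorbs every token).

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-9):
    return -(p * np.log(p + eps)).sum(axis=axis)

# Routing probabilities for a tiny batch: 3 tokens, 3 experts (rows sum to 1).
gate_probs = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.4, 0.3, 0.3]])

# Per-token ("local") entropy: low values mean each token routes confidently
# to few experts, so penalizing it encourages decisive routing.
local_entropy = entropy(gate_probs).mean()

# Entropy of the batch-averaged ("global") routing distribution: high values
# mean experts are used evenly, so rewarding it discourages the collapse
# where every token picks the same expert.
global_entropy = entropy(gate_probs.mean(axis=0))

# Illustrative auxiliary loss added to the main training objective.
# The 0.01 weights are placeholders, not values from the LIMoE paper.
aux_loss = 0.01 * local_entropy - 0.01 * global_entropy
print(local_entropy, global_entropy, aux_loss)
```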

Matches State of the Art

Many research papers are published every month, but only a few are highlighted by Google.

Typically, Google spotlights research because it accomplishes something new in addition to attaining state-of-the-art results.


LIMoE accomplishes this feat, attaining results comparable to today’s best algorithms while being more efficient.

The researchers highlight this advantage:

“On zero-shot image classification, LIMoE outperforms both comparable dense multimodal models and two-tower approaches.

The largest LIMoE achieves 84.1% zero-shot ImageNet accuracy, comparable to more expensive state-of-the-art models.

Sparsity enables LIMoE to scale up gracefully and learn to handle very different inputs, addressing the tension between being a jack-of-all-trades generalist and a master-of-one specialist.”

The successful outcomes of LIMoE led the researchers to observe that LIMoE could be a way forward for achieving a multimodal generalist model.

The researchers observed:


“We believe the ability to build a generalist model with specialist components, which can decide how different modalities or tasks should interact, will be key to creating truly multimodal multitask models which excel at everything they do.

LIMoE is a promising first step in that direction.”

Potential Shortcomings, Biases & Other Ethical Problems

There are shortcomings to this architecture that are not discussed in Google’s announcement but are mentioned in the research paper itself.

The research paper notes that, similar to other large-scale models, LIMoE may also introduce biases into the results.

The researchers state that they have not yet “explicitly” addressed the problems inherent in large scale models.

They write:

“The potential harms of large scale models…, contrastive models… and web-scale multimodal data… also carry over here, as LIMoE does not explicitly address them.”

The above statement references (in a footnote) a 2021 research paper titled On the Opportunities and Risks of Foundation Models.


That research paper from 2021 warns how emergent AI technologies can cause negative societal impact such as:

“…inequity, misuse, economic and environmental impact, legal and ethical considerations.”

According to the cited paper, ethical problems can also arise from the tendency toward homogenization of tasks, which can introduce a single point of failure that is then reproduced in the tasks that follow downstream.

The cautionary research paper states:

“The significance of foundation models can be summarized with two words: emergence and homogenization.

Emergence means that the behavior of a system is implicitly induced rather than explicitly constructed; it is both the source of scientific excitement and anxiety about unanticipated consequences.

Homogenization indicates the consolidation of methodologies for building machine learning systems across a wide range of applications; it provides strong leverage towards many tasks but also creates single points of failure.”

One area of caution is in vision related AI.


The 2021 paper states that the ubiquity of cameras means any advance in vision-related AI carries a concomitant risk that the technology will be applied in unanticipated ways, which can have a “disruptive impact,” including with regard to privacy and surveillance.

Another cautionary warning about advances in vision-related AI concerns problems with accuracy and bias.

They note:

“There is a well-documented history of learned bias in computer vision models, resulting in lower accuracies and correlated errors for underrepresented groups, with consequently inappropriate and premature deployment to some real-world settings.”

The rest of the paper documents how AI technologies can learn existing biases and perpetuate inequities.

“Foundation models have the potential to yield inequitable outcomes: the treatment of people that is unjust, especially due to unequal distribution along lines that compound historical discrimination…. Like any AI system, foundation models can compound existing inequities by producing unfair outcomes, entrenching systems of power, and disproportionately distributing negative consequences of technology to those already marginalized…”

The LIMoE researchers noted that this particular model may be able to work around some of the biases against underrepresented groups because of the nature of how the experts specialize in certain things.

These kinds of negative outcomes are not theories; they are realities that have already harmed lives in real-world applications, such as the unfair racial biases introduced by employment recruitment algorithms.


The authors of the LIMoE paper acknowledge those potential shortcomings in a short paragraph that serves as a cautionary caveat.

But they also note that there may be a potential to address some of the biases with this new approach.

They wrote:

“…the ability to scale models with experts that can specialize deeply may result in better performance on underrepresented groups.”

Lastly, a key attribute of this new technology worth noting is that no explicit use is stated for it.

It’s simply a technology that can process images and text in an efficient manner.

How it can be applied, if it ever is applied in this form or a future form, is never addressed.


And that’s an important factor raised by the cautionary paper (On the Opportunities and Risks of Foundation Models), which calls attention to the fact that researchers create AI capabilities without consideration for how they can be used or the impact they may have on issues like privacy and security.

“Foundation models are intermediary assets with no specified purpose before they are adapted; understanding their harms requires reasoning about both their properties and the role they play in building task-specific models.”

All of those caveats are left out of Google’s announcement article but are referenced in the PDF version of the research paper itself.

Pathways AI Architecture & LIMoE

Text, images, and audio data are referred to as modalities, that is, different kinds of data or, so to speak, different task specializations. Modalities can also include spoken language and symbols.

So when you see the phrase “multimodal” or “modalities” in scientific articles and research papers, what they’re generally talking about is different kinds of data.

Google’s ultimate goal for AI is what it calls the Pathways Next-Generation AI Architecture.

Pathways represents a move away from machine learning models that do one thing really well (thus requiring thousands of them) to a single model that does everything really well.


Pathways (and LIMoE) is a multimodal approach to solving problems.

It’s described like this:

“People rely on multiple senses to perceive the world. That’s very different from how contemporary AI systems digest information.

Most of today’s models process just one modality of information at a time. They can take in text, or images or speech — but typically not all three at once.

Pathways could enable multimodal models that encompass vision, auditory, and language understanding simultaneously.”

What makes LIMoE important is that it is a multimodal architecture that the researchers call an “…important step towards the Pathways vision…”

The researchers describe LIMoE as a “step” because there is more work to be done, which includes exploring how this approach can work with modalities beyond images and text.


This research paper and the accompanying summary article show what direction Google’s AI research is going and how it is getting there.


Citations

Read Google’s Summary Article About LIMoE

LIMoE: Learning Multiple Modalities with One Sparse Mixture-of-Experts Model

Download and Read the LIMoE Research Paper

Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts (PDF)

Image by Shutterstock/SvetaZi





Google Further Postpones Third-Party Cookie Deprecation In Chrome


Google has again delayed its plan to phase out third-party cookies in the Chrome web browser. The latest postponement comes after ongoing challenges in reconciling feedback from industry stakeholders and regulators.

The announcement was made in the joint quarterly report on the Privacy Sandbox initiative from Google and the UK’s Competition and Markets Authority (CMA), scheduled for release on April 26.

Chrome’s Third-Party Cookie Phaseout Pushed To 2025

Google states it “will not complete third-party cookie deprecation during the second half of Q4” this year as planned.

Instead, the tech giant aims to begin deprecating third-party cookies in Chrome “starting early next year,” assuming an agreement can be reached with the CMA and the UK’s Information Commissioner’s Office (ICO).

The statement reads:


“We recognize that there are ongoing challenges related to reconciling divergent feedback from the industry, regulators and developers, and will continue to engage closely with the entire ecosystem. It’s also critical that the CMA has sufficient time to review all evidence, including results from industry tests, which the CMA has asked market participants to provide by the end of June.”

Continued Engagement With Regulators

Google reiterated its commitment to “engaging closely with the CMA and ICO” throughout the process and hopes to conclude discussions this year.

This marks the third delay to Google’s plan to deprecate third-party cookies; it initially aimed for a Q3 2023 phaseout before pushing that back to late 2024.

The postponements reflect the challenges in transitioning away from cross-site user tracking while balancing privacy and advertiser interests.

Transition Period & Impact

In January, Chrome began restricting third-party cookie access for 1% of users globally. This percentage was expected to gradually increase until 100% of users were covered by Q3 2024.

However, the latest delay gives websites and services more time to migrate away from third-party cookie dependencies through Google’s limited “deprecation trials” program.

The trials offer temporary cookie access extensions until December 27, 2024, for non-advertising use cases that can demonstrate direct user impact and functional breakage.


While easing the transition, the trials have strict eligibility rules. Advertising-related services are ineligible, and origins matching known ad-related domains are rejected.

Google states the program aims to address functional issues rather than relieve general data collection inconveniences.

Publisher & Advertiser Implications

The repeated delays highlight the potential disruption for digital publishers and advertisers relying on third-party cookie tracking.

Industry groups have raised concerns that restricting cross-site tracking could push websites toward more opaque privacy-invasive practices.

However, privacy advocates view the phaseout as crucial in preventing covert user profiling across the web.

With the latest postponement, all parties have more time to prepare for the eventual loss of third-party cookies and adopt Google’s proposed Privacy Sandbox APIs as replacements.


Featured Image: Novikov Aleksey/Shutterstock


How To Write ChatGPT Prompts To Get The Best Results


ChatGPT is a game changer in the field of SEO. This powerful language model can generate human-like content, making it an invaluable tool for SEO professionals.

However, the prompts you provide largely determine the quality of the output.

To unlock the full potential of ChatGPT and create content that resonates with your audience and search engines, writing effective prompts is crucial.

In this comprehensive guide, we’ll explore the art of writing prompts for ChatGPT, covering everything from basic techniques to advanced strategies for layering prompts and generating high-quality, SEO-friendly content.

Writing Prompts For ChatGPT

What Is A ChatGPT Prompt?

A ChatGPT prompt is an instruction or discussion topic a user provides for the ChatGPT AI model to respond to.


The prompt can be a question, statement, or any other stimulus to spark creativity, reflection, or engagement.

Users can use the prompt to generate ideas, share their thoughts, or start a conversation.

ChatGPT prompts are designed to be open-ended and can be customized based on the user’s preferences and interests.

How To Write Prompts For ChatGPT

Start by giving ChatGPT a writing prompt, such as, “Write a short story about a person who discovers they have a superpower.”

ChatGPT will then generate a response based on your prompt. Depending on the prompt’s complexity and the level of detail you requested, the answer may be a few sentences or several paragraphs long.

Use the ChatGPT-generated response as a starting point for your writing. You can take the ideas and concepts presented in the answer and expand upon them, adding your own unique spin to the story.


If you want to generate additional ideas, try asking ChatGPT follow-up questions related to your original prompt.

For example, you could ask, “What challenges might the person face in exploring their newfound superpower?” Or, “How might the person’s relationships with others be affected by their superpower?”

Remember that ChatGPT’s answers are generated by artificial intelligence and may not always be perfect or exactly what you want.

However, they can still be a great source of inspiration and help you start writing.
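
If you prefer to script this prompt-then-follow-up workflow instead of using the chat interface, the same layering pattern can be automated. The sketch below is a minimal example assuming the official OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY environment variable; the model name is only an example, and the prompts are the ones used above.

```python
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Keep the whole conversation in one message list so each follow-up
# question builds on the model's earlier answer.
messages = [
    {"role": "user",
     "content": "Write a short story about a person who discovers they have a superpower."}
]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
story = first.choices[0].message.content
print(story)

# Layer a follow-up prompt on top of the first response.
messages.append({"role": "assistant", "content": story})
messages.append({"role": "user",
                 "content": "What challenges might the person face in exploring their newfound superpower?"})

follow_up = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(follow_up.choices[0].message.content)
```

The key point is the growing messages list: by appending the assistant’s previous answer before asking the next question, each new prompt is layered on the context that came before it, just as it would be in the ChatGPT interface.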

Must-Have GPTs Assistant

I recommend installing the WebBrowser Assistant created by the OpenAI Team. This tool allows you to add relevant Bing results to your ChatGPT prompts.

This assistant adds the first web results to your ChatGPT prompts for more accurate and up-to-date conversations.


It is very easy to install in only two clicks. (Click on Start Chat.)

Screenshot from ChatGPT, April 2024

For example, if I ask, “Who is Vincent Terrasi?”, ChatGPT has no answer.

With the WebBrowser Assistant, the assistant creates a new prompt containing the first Bing results, and now ChatGPT knows who Vincent Terrasi is.

Screenshot from ChatGPT, March 2023

You can test other GPT assistants available in the GPTs search engine if you want to use Google results.

Master Reverse Prompt Engineering

ChatGPT can be an excellent tool for reverse engineering prompts because it generates natural and engaging responses to any given input.

By analyzing the prompts generated by ChatGPT, it is possible to gain insight into the model’s underlying thought processes and decision-making strategies.

One key benefit of using ChatGPT to reverse engineer prompts is that the model is highly transparent in its decision-making.


This means that the reasoning and logic behind each response can be traced, making it easier to understand how the model arrives at its conclusions.

Once you’ve done this a few times for different types of content, you’ll gain insight into crafting more effective prompts.

Prepare Your ChatGPT For Generating Prompts

First, activate reverse prompt engineering.

  • Type the following prompt: “Enable Reverse Prompt Engineering? By Reverse Prompt Engineering I mean creating a prompt from a given text.”
Enabling reverse prompt engineering (Screenshot from ChatGPT, March 2023)

ChatGPT is now ready to generate your prompt. You can test the product description in a new chatbot session and evaluate the generated prompt.

  • Type: “Create a very technical reverse prompt engineering template for a product description about iPhone 11.”
Reverse prompt engineering via WebChatGPT (Screenshot from ChatGPT, March 2023)

The result is amazing. You can test with a full text that you want to reproduce. Here is an example of a prompt for selling a Kindle on Amazon.

  • Type: “Reverse Prompt engineer the following {product}, capture the writing style and the length of the text :
    product =”
Reverse prompt engineering: Amazon product (Screenshot from ChatGPT, March 2023)

I tested it on an SEJ blog post. Enjoy the analysis – it is excellent.

  • Type: “Reverse Prompt engineer the following {text}, capture the tone and writing style of the {text} to include in the prompt :
    text = all text coming from https://www.searchenginejournal.com/google-bard-training-data/478941/”
Reverse prompt engineering an SEJ blog post (Screenshot from ChatGPT, March 2023)

But be careful not to use ChatGPT to generate your texts. It is just a personal assistant.

Go Deeper

Prompts and examples for SEO:

  • Keyword research and content ideas prompt: “Provide a list of 20 long-tail keyword ideas related to ‘local SEO strategies’ along with brief content topic descriptions for each keyword.”
  • Optimizing content for featured snippets prompt: “Write a 40-50 word paragraph optimized for the query ‘what is the featured snippet in Google search’ that could potentially earn the featured snippet.”
  • Creating meta descriptions prompt: “Draft a compelling meta description for the following blog post title: ‘10 Technical SEO Factors You Can’t Ignore in 2024’.”

Important Considerations:

  • Always Fact-Check: While ChatGPT can be a helpful tool, it’s crucial to remember that it may generate inaccurate or fabricated information. Always verify any facts, statistics, or quotes generated by ChatGPT before incorporating them into your content.
  • Maintain Control and Creativity: Use ChatGPT as a tool to assist your writing, not replace it. Don’t rely on it to do your thinking or create content from scratch. Your unique perspective and creativity are essential for producing high-quality, engaging content.
  • Iteration is Key: Refine and revise the outputs generated by ChatGPT to ensure they align with your voice, style, and intended message.

Additional Prompts for Rewording and SEO:

  • Rewrite this sentence to be more concise and impactful.
  • Suggest alternative phrasing for this section to improve clarity.
  • Identify opportunities to incorporate relevant internal and external links.
  • Analyze the keyword density and suggest improvements for better SEO.

Remember, while ChatGPT can be a valuable tool, it’s essential to use it responsibly and maintain control over your content creation process.

Experiment And Refine Your Prompting Techniques

Writing effective prompts for ChatGPT is an essential skill for any SEO professional who wants to harness the power of AI-generated content.


Hopefully, the insights and examples shared in this article can inspire you and help guide you toward crafting stronger prompts that yield high-quality content.

Remember to experiment with layering prompts, iterating on the output, and continually refining your prompting techniques.

This will help you stay ahead of the curve in the ever-changing world of SEO.



Featured Image: Tapati Rinchumrus/Shutterstock


Measuring Content Impact Across The Customer Journey


Understanding the impact of your content at every touchpoint of the customer journey is essential – but that’s easier said than done. From attracting potential leads to nurturing them into loyal customers, there are many touchpoints to look into.

So how do you identify and take advantage of these opportunities for growth?

Watch this on-demand webinar and learn a comprehensive approach for measuring the value of your content initiatives, so you can optimize resource allocation for maximum impact.

You’ll learn:

  • Fresh methods for measuring your content’s impact.
  • Fascinating insights using first-touch attribution, and how it differs from the usual last-touch perspective.
  • Ways to persuade decision-makers to invest in more content by showcasing its value convincingly.

With Bill Franklin and Oliver Tani of DAC Group, we unravel the nuances of attribution modeling, emphasizing the significance of layering first-touch and last-touch attribution within your measurement strategy. 

Check out these insights to help you craft compelling content tailored to each stage, using an approach rooted in first-hand experience to ensure your content resonates.


Whether you’re a seasoned marketer or new to content measurement, this webinar promises valuable insights and actionable tactics to elevate your SEO game and optimize your content initiatives for success. 

View the slides below or check out the full webinar for all the details.
