
Leaked Google Memo Admits Defeat By Open Source AI


A leaked Google memo offers a point-by-point summary of why Google is losing to open source AI and suggests a path back to dominance by owning the platform.

The memo opens by acknowledging that their competitor was never OpenAI and was always going to be open source.

Cannot Compete Against Open Source

Further, they admit that they are not positioned in any way to compete against open source, acknowledging that they have already lost the struggle for AI dominance.

They wrote:

“We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.

I’m talking, of course, about open source.

Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today.”

The bulk of the memo is spent describing how Google is outplayed by open source.

And even though Google has a slight advantage over open source, the author of the memo acknowledges that it is slipping away and will never return.

The self-analysis of the metaphoric cards they’ve dealt themselves is considerably downbeat:

“While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly.

Open-source models are faster, more customizable, more private, and pound-for-pound more capable.

They are doing things with $100 and 13B params that we struggle with at $10M and 540B.

And they are doing so in weeks, not months.”

Large Language Model Size Is Not an Advantage

Perhaps the most chilling realization expressed in the memo is that Google’s size is no longer an advantage.

The outlandishly large size of their models is now seen as a disadvantage, not the insurmountable advantage they thought it to be.

The leaked memo lists a series of events that signal Google’s (and OpenAI’s) control of AI may rapidly be over.

It recounts that barely a month ago, in March 2023, the open source community obtained LLaMA, a leaked large language model developed by Meta.

Within days and weeks, the global open source community developed all the building blocks necessary to create Bard and ChatGPT clones.

Sophisticated steps such as instruction tuning and reinforcement learning from human feedback (RLHF) were quickly replicated by the global open source community, on the cheap no less.

  • Instruction tuning
    A process of fine-tuning a language model to make it do something specific that it wasn’t initially trained to do.
  • Reinforcement learning from human feedback (RLHF)
    A technique where humans rate a language model’s output so that it learns which outputs are satisfactory to humans.

RLHF is the technique used by OpenAI to create InstructGPT, a model underlying ChatGPT that allows the GPT-3.5 and GPT-4 models to take instructions and complete tasks.

RLHF is the fire that open source has taken from Google and OpenAI.
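To make the RLHF idea concrete, here is a minimal, illustrative sketch of the reward-modeling step in Python. This is not OpenAI’s code: the “embeddings” are synthetic stand-ins, and a real pipeline would score actual model outputs.

    # Toy sketch of RLHF's reward-modeling step (illustrative only, not
    # OpenAI's implementation). Humans compare pairs of model outputs;
    # a small reward model learns to score the preferred output higher.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    dim = 16  # stand-in embedding size for model outputs

    # Synthetic stand-ins for embeddings of rated output pairs, where
    # `chosen` was preferred by a human rater over `rejected`.
    chosen = torch.randn(100, dim) + 0.5
    rejected = torch.randn(100, dim)

    reward_model = nn.Linear(dim, 1)  # assigns one scalar score per output
    opt = torch.optim.Adam(reward_model.parameters(), lr=0.01)

    for _ in range(200):
        # Pairwise preference loss: push score(chosen) above score(rejected).
        loss = -nn.functional.logsigmoid(
            reward_model(chosen) - reward_model(rejected)
        ).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The trained reward model is then used to steer the language model
    # toward outputs humans prefer (e.g., via PPO or best-of-n sampling).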

Scale of Open Source Scares Google

What scares Google in particular is the fact that the Open Source movement is able to scale their projects in a way that closed source cannot.

The question and answer dataset used to create the open source ChatGPT clone, Dolly 2.0, was entirely created by thousands of employee volunteers.

Google and OpenAI relied partially on questions and answers scraped from sites like Reddit.

The open source Q&A dataset created by Databricks is claimed to be of higher quality because the people who contributed to it were professionals, and the answers they provided were longer and more substantial than what is found in a typical question and answer dataset scraped from a public forum.

The leaked memo observed:

“At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public.

It had no instruction or conversation tuning, and no RLHF.

Nonetheless, the community immediately understood the significance of what they had been given.

A tremendous outpouring of innovation followed, with just days between major developments…

Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other.

Most importantly, they have solved the scaling problem to the extent that anyone can tinker.

Many of the new ideas are from ordinary people.

The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.”

In other words, what took months and years for Google and OpenAI to train and build only took a matter of days for the open source community.

That has to be a truly frightening scenario to Google.

It’s one of the reasons why I’ve been writing so much about the open source AI movement as it truly looks like where the future of generative AI will be in a relatively short period of time.

Open Source Has Historically Surpassed Closed Source

The memo cites the recent experience with OpenAI’s DALL-E, the deep learning model used to create images, versus the open source Stable Diffusion as a harbinger of what is currently befalling generative AI like Bard and ChatGPT.

DALL-E was released by OpenAI in January 2021. Stable Diffusion, the open source alternative, was released a year and a half later in August 2022, and in a few short weeks it overtook DALL-E in popularity.

A Google Trends timeline shows how interest in the open source Stable Diffusion model vastly surpassed that of DALL-E within a matter of three weeks of its release.

And though DALL-E had been out for a year and a half, interest in Stable Diffusion kept soaring while OpenAI’s DALL-E remained stagnant.

The existential threat of similar events overtaking Bard (and OpenAI) is giving Google nightmares.

The Creation Process of Open Source Models Is Superior

Another factor that’s alarming engineers at Google is that the process for creating and improving open source models is fast, inexpensive and lends itself perfectly to a global collaborative approach common to open source projects.

The memo observes that new techniques such as LoRA (Low-Rank Adaptation of Large Language Models) allow for the fine-tuning of language models in a matter of days at exceedingly low cost, with the final LLM comparable to the far more expensive LLMs created by Google and OpenAI.
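As a rough sketch of what this looks like in practice, here is a minimal LoRA setup using Hugging Face’s peft library. The toolchain and the small base model below are assumptions for illustration; the memo doesn’t prescribe any specific stack.

    # Minimal LoRA fine-tuning setup (a sketch assuming the Hugging Face
    # transformers + peft stack; the memo does not name specific tooling).
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "facebook/opt-350m"  # small stand-in base model for illustration
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA freezes the pretrained weights and trains only small low-rank
    # update matrices injected into the attention projections.
    lora = LoraConfig(
        r=8,                                  # rank of the update matrices
        lora_alpha=16,                        # scaling applied to the updates
        target_modules=["q_proj", "v_proj"],  # which layers get adapters
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)

    # Typically well under 1% of parameters end up trainable, which is
    # what makes cheap, fast fine-tuning runs possible.
    model.print_trainable_parameters()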

Another benefit is that open source engineers can build on top of previous work and iterate, instead of having to start from scratch.

Building large language models with billions of parameters in the way that OpenAI and Google have been doing is not necessary today.

This may be the point that Sam Altman was hinting at when he recently said that the era of massive large language models is over.

The author of the Google memo contrasted the cheap and fast LoRA approach to creating LLMs against the current big AI approach.

The memo author reflects on Google’s shortcoming:

“By contrast, training giant models from scratch not only throws away the pretraining, but also any iterative improvements that have been made on top. In the open source world, it doesn’t take long before these improvements dominate, making a full retrain extremely costly.

We should be thoughtful about whether each new application or idea really needs a whole new model.

…Indeed, in terms of engineer-hours, the pace of improvement from these models vastly outstrips what we can do with our largest variants, and the best are already largely indistinguishable from ChatGPT.”

The author concludes with the realization that what they thought was their advantage, their giant models and concomitant prohibitive cost, was actually a disadvantage.

The global-collaborative nature of Open Source is more efficient and orders of magnitude faster at innovation.

How can a closed-source system compete against the overwhelming multitude of engineers around the world?

The author concludes that they cannot compete and that direct competition is, in their words, a “losing proposition.”

That’s the crisis, the storm, that’s developing outside of Google.

If You Can’t Beat Open Source, Join Them

The only consolation the memo author finds in open source is that, because the open source innovations are free, Google can take advantage of them too.

Lastly, the author concludes that the only approach open to Google is to own the platform in the same way they dominate the open source Chrome and Android platforms.

They point to how Meta is benefiting from releasing their LLaMA large language model for research and how they now have thousands of people doing their work for free.

Perhaps the big takeaway from the memo, then, is that Google may in the near future try to replicate its open source dominance by releasing its projects on an open source basis and thereby own the platform.

The memo concludes that going open source is the most viable option:

“Google should establish itself a leader in the open source community, taking the lead by cooperating with, rather than ignoring, the broader conversation.

This probably means taking some uncomfortable steps, like publishing the model weights for small ULM variants. This necessarily means relinquishing some control over our models.

But this compromise is inevitable.

We cannot hope to both drive innovation and control it.”

Open Source Walks Away With the AI Fire

Last week I made an allusion to the Greek myth of Prometheus stealing fire from the gods on Mount Olympus, pitting the open source Prometheus against the “Olympian gods” of Google and OpenAI:

I tweeted:

“While Google, Microsoft and Open AI squabble amongst each other and have their backs turned, is Open Source walking off with their fire?”

The leak of Google’s memo confirms that observation, but it also points to a possible strategy change at Google: join the open source movement and thereby co-opt it and dominate it in the same way they did with Chrome and Android.

Read the leaked Google memo here:

Google “We Have No Moat, And Neither Does OpenAI”




Google On Hyphens In Domain Names


What Google says about using hyphens in domain names

Google’s John Mueller answered a question on Reddit about why people don’t use hyphens in domain names and whether there is something concerning about them that people are missing.

Domain Names With Hyphens For SEO

I’ve been working online for 25 years, and I remember when using hyphens in domains was something that affiliates did for SEO, back when Google was still influenced by keywords in the domain, in the URL, and basically anywhere on the webpage. It wasn’t something that everyone did; it was mainly popular with some affiliate marketers.

Another reason for choosing domain names with keywords in them was that site visitors tended to convert at a higher rate because the keywords essentially prequalified the visitor. I know from experience how useful two-keyword domains (and one-word domain names) are for conversions, as long as they don’t have hyphens in them.

A consideration that caused hyphenated domain names to fall out of favor is that they have an untrustworthy appearance, and trustworthiness is an important factor for conversions.

Lastly, hyphenated domain names look tacky. Why go with tacky when a brandable domain is easier for building trust and conversions?

Domain Name Question Asked On Reddit

This is the question asked on Reddit:

“Why don’t people use a lot of domains with hyphens? Is there something concerning about it? I understand when you tell it out loud people make miss hyphen in search.”

And this is Mueller’s response:

“It used to be that domain names with a lot of hyphens were considered (by users? or by SEOs assuming users would? it’s been a while) to be less serious – since they could imply that you weren’t able to get the domain name with fewer hyphens. Nowadays there are a lot of top-level-domains so it’s less of a thing.

My main recommendation is to pick something for the long run (assuming that’s what you’re aiming for), and not to be overly keyword focused (because life is too short to box yourself into a corner – make good things, course-correct over time, don’t let a domain-name limit what you do online). The web is full of awkward, keyword-focused short-lived low-effort takes made for SEO — make something truly awesome that people will ask for by name. If that takes a hyphen in the name – go for it.”

Pick A Domain Name That Can Grow

Mueller is right about picking a domain name that won’t lock your site into one topic. When a site grows in popularity, the natural growth path is to expand the range of topics the site covers. But that’s hard to do when the domain is locked into one rigid keyword phrase. That’s one of the downsides of picking a “Best + keyword + reviews” domain, too. Those domains can’t grow bigger, and they look tacky, too.

That’s why I’ve always recommended brandable domains that are memorable and encourage trust in some way.

Read the post on Reddit:

Are domains with hyphens bad?

Read Mueller’s response here.

Featured Image by Shutterstock/Benny Marty


Reddit Post Ranks On Google In 5 Minutes


Google apparently ranks Reddit posts within minutes

Google’s Danny Sullivan disputed the assertions made in a Reddit discussion that Google is showing a preference for Reddit in the search results. But a Redditor’s example proves that it’s possible for a Reddit post to rank in the top ten of the search results within minutes and then actually improve to position #2 a week later.

Discussion About Google Showing Preference To Reddit

A Redditor (gronetwork) complained that Google is sending so many visitors to Reddit that the server is struggling with the load and shared an example that proved that it can only take minutes for a Reddit post to rank in the top ten.

That post was part of a 79-post Reddit thread in which many in the r/SEO subreddit complained about Google allegedly giving too much preference to Reddit over legitimate sites.

The person who did the test (gronetwork) wrote:

“…The website is already cracking (server down, double posts, comments not showing) because there are too many visitors.

…It only takes few minutes (you can test it) for a post on Reddit to appear in the top ten results of Google with keywords related to the post’s title… (while I have to wait months for an article on my site to be referenced). Do the math, the whole world is going to spam here. The loop is completed.”

Reddit Post Ranked Within Minutes

Another Redditor asked if they had tested whether it takes “a few minutes” to rank in the top ten, and gronetwork answered that they had tested it with a post titled “Google SGE Review.”

gronetwork posted:

“Yes, I have created for example a post named “Google SGE Review” previously. After less than 5 minutes it was ranked 8th for Google SGE Review (no quotes). Just after Washingtonpost.com, 6 authoritative SEO websites and Google.com’s overview page for SGE (Search Generative Experience). It is ranked third for SGE Review.”

It’s true: not only does that specific post (Google SGE Review) rank in the top ten, it started out in position 8 and actually improved its ranking, and it is currently listed beneath the number one result for the search query “SGE Review.”

Screenshot Of Reddit Post That Ranked Within Minutes

Anecdotes Versus Anecdotes

Okay, the above is just one anecdote. But it’s a heck of an anecdote because it proves that it’s possible for a Reddit post to rank within minutes and stick near the top of the search results ahead of other, possibly more authoritative, websites.

hankschrader79 shared that Reddit posts outrank Toyota Tacoma forums for a phrase related to mods for that truck.

Google’s Danny Sullivan responded to that post and the entire discussion, disputing the claim and pointing out that Reddit is not always prioritized over other forums.

Danny wrote:

“Reddit is not always prioritized over other forums. [super vhs to mac adapter] I did this week, it goes Apple Support Community, MacRumors Forum and further down, there’s Reddit. I also did [kumo cloud not working setup 5ghz] recently (it’s a nightmare) and it was the Netgear community, the SmartThings Community, GreenBuildingAdvisor before Reddit. Related to that was [disable 5g airport] which has Apple Support Community above Reddit. [how to open an 8 track tape] — really, it was the YouTube videos that helped me most, but it’s the Tapeheads community that comes before Reddit.

In your example for [toyota tacoma], I don’t even get Reddit in the top results. I get Toyota, Car & Driver, Wikipedia, Toyota again, three YouTube videos from different creators (not Toyota), Edmunds, a Top Stories unit. No Reddit, which doesn’t really support the notion of always wanting to drive traffic just to Reddit.

If I guess at the more specific query you might have done, maybe [overland mods for toyota tacoma], I get a YouTube video first, then Reddit, then Tacoma World at third — not near the bottom. So yes, Reddit is higher for that query — but it’s not first. It’s also not always first. And sometimes, it’s not even showing at all.”

hankschrader79 conceded that they were generalizing when they wrote that Google always prioritizes Reddit. But they also insisted that this doesn’t diminish what they say is a fact: Google’s “prioritization” of forum content has benefited Reddit more than actual forums.

Why Is The Reddit Post Ranked So High?

It’s possible that Google “tested” that Reddit post in position 8 within minutes and that user interaction signals indicated to Google’s algorithms that users prefer to see that Reddit post. If that’s the case, then it’s not a matter of Google showing preference to Reddit posts but rather of users showing the preference, with the algorithm responding to those preferences.

Nevertheless, an argument can be made that user preferences for Reddit can be a manifestation of familiarity bias. Familiarity bias is when people show a preference for things that are familiar to them. If a person is familiar with a brand because of all the advertising they were exposed to, they may show a bias for that brand’s products over unfamiliar brands.

Users who are familiar with Reddit may choose it because they don’t know the other sites in the search results, or because they believe Google ranks spammy, over-optimized websites and feel safer reading Reddit.

Google may be picking up on those user interaction signals that indicate a preference and satisfaction with the Reddit results but those results may simply be biases and not an indication that Reddit is trustworthy and authoritative.

Is Reddit Benefiting From A Self-Reinforcing Feedback Loop?

It may very well be that Google’s decision to prioritize user-generated content has started a self-reinforcing pattern: users are drawn into Reddit through the search results, and because the answers seem plausible, those users start to prefer Reddit results. When they’re exposed to more Reddit posts, their familiarity bias kicks in and they show a preference for Reddit. In that case, the users and Google’s algorithm are creating a self-reinforcing feedback loop.

Is it possible that Google’s decision to show more user-generated content has kicked off a cycle in which more users are exposed to Reddit, which feeds back into Google’s algorithm, which in turn increases Reddit’s visibility, regardless of its lack of expertise and authoritativeness?

Featured Image by Shutterstock/Kues



WordPress Releases A Performance Plugin For “Near-Instant Load Times”


WordPress speculative loading plugin

WordPress released an official plugin that adds support for a cutting-edge technology called speculative loading, which can help boost site performance and improve the user experience for site visitors.

Speculative Loading

Rendering means constructing the entire webpage so that it can be displayed: when your browser downloads the HTML, images, and other resources and puts them together into a webpage, that’s rendering. Prerendering is putting that webpage together (rendering it) in the background before the user navigates to it.

What this plugin does is to enable the browser to prerender the entire webpage that a user might navigate to next. The plugin does that by anticipating which webpage the user might navigate to based on where they are hovering.

Chrome prefers to prerender only when there is at least an 80% probability of a user navigating to another webpage. The official Chrome support page for prerendering explains:

“Pages should only be prerendered when there is a high probability the page will be loaded by the user. This is why the Chrome address bar prerendering options only happen when there is such a high probability (greater than 80% of the time).”

There is also a caveat on that same developer page: prerendering may not happen depending on user settings, memory usage, and other scenarios (more details below about how analytics handles prerendering).

The Speculation Rules API solves a problem that previous solutions could not: in the past, they simply prefetched resources like JavaScript and CSS but did not actually prerender the entire webpage.

The official WordPress announcement explains it like this:

“The Speculation Rules API is a new web API that solves the above problems. It allows defining rules to dynamically prefetch and/or prerender URLs of certain structure based on user interaction, in JSON syntax—or in other words, speculatively preload those URLs before the navigation.

This API can be used, for example, to prerender any links on a page whenever the user hovers over them. Also, with the Speculation Rules API, “prerender” actually means to prerender the entire page, including running JavaScript. This can lead to near-instant load times once the user clicks on the link as the page would have most likely already been loaded in its entirety. However that is only one of the possible configurations.”
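For a sense of what that looks like in practice, here is a hand-written speculation rules script of the kind the API accepts. It is illustrative only; the exact rules and URL exclusions the plugin generates are its own.

    <!-- Illustrative speculation rules; the plugin generates its own
         rules and exclusions, so treat this as a sketch. -->
    <script type="speculationrules">
    {
      "prerender": [
        {
          "where": {
            "and": [
              { "href_matches": "/*" },
              { "not": { "href_matches": "/wp-admin/*" } }
            ]
          },
          "eagerness": "moderate"
        }
      ]
    }
    </script>

With “moderate” eagerness, Chrome typically starts the prerender when the user hovers over a matching link.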

The new WordPress plugin adds support for the Speculation Rules API. The Mozilla developer pages, a great resource for technical understanding of web standards, describe it like this:

“The Speculation Rules API is designed to improve performance for future navigations. It targets document URLs rather than specific resource files, and so makes sense for multi-page applications (MPAs) rather than single-page applications (SPAs).

The Speculation Rules API provides an alternative to the widely-available <link rel="prefetch"> feature and is designed to supersede the Chrome-only deprecated <link rel="prerender"> feature. It provides many improvements over these technologies, along with a more expressive, configurable syntax for specifying which documents should be prefetched or prerendered.”

See also: Are Websites Getting Faster? New Data Reveals Mixed Results

Performance Lab Plugin

The new plugin was developed by the official WordPress performance team, which occasionally rolls out new plugins for users to test ahead of possible inclusion in the actual WordPress core. So it’s a good opportunity to be among the first to try out new performance technologies.

The new WordPress plugin is by default set to prerender “WordPress frontend URLs” which are pages, posts, and archive pages. How it works can be fine-tuned under the settings:

Settings > Reading > Speculative Loading

Browser Compatibility

The Speculation Rules API is supported as of Chrome 108; however, the specific rules used by the new plugin require Chrome 121 or higher. Chrome 121 was released in early 2024.

Browsers that do not support the API will simply ignore the rules, with no effect on the user experience.

Check out the new Speculative Loading WordPress plugin developed by the official core WordPress performance team.

How Analytics Handles Prerendering

A WordPress developer commented with a question asking how analytics would handle prerendering, and someone else answered that it’s up to the analytics provider to detect a prerender and not count it as a page load or site visit.

Fortunately, both Google Analytics and Google Publisher Tag (GPT) are able to handle prerenders. The Chrome developers support page has a note about how analytics handles prerendering:

“Google Analytics handles prerender by delaying until activation by default as of September 2023, and Google Publisher Tag (GPT) made a similar change to delay triggering advertisements until activation as of November 2023.”
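The mechanism behind that delay is available to any script: a prerendered page exposes a document.prerendering flag and fires a prerenderingchange event once the user actually navigates to it. A minimal sketch in TypeScript:

    // Minimal sketch: defer work (such as counting a page view) until a
    // prerendered page is actually shown to the user.
    function whenActivated(callback: () => void): void {
      // document.prerendering is true while the page is being prerendered;
      // the cast is needed because older TypeScript DOM typings omit it.
      if ((document as any).prerendering) {
        document.addEventListener("prerenderingchange", () => callback(), {
          once: true,
        });
      } else {
        callback(); // page loaded normally, run immediately
      }
    }

    whenActivated(() => {
      console.log("Page activated; safe to count this as a real visit.");
    });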

Possible Conflict With Ad Blocker Extensions

There are a couple of things to be aware of about this plugin, aside from the fact that it’s an experimental feature that requires Chrome 121 or higher.

A comment by a WordPress plugin developer notes that this feature may not work for browsers running the uBlock Origin ad-blocking extension.

Download the plugin:
Speculative Loading Plugin by the WordPress Performance Team

Read the announcement at WordPress
Speculative Loading in WordPress

See also: WordPress, Wix & Squarespace Show Best CWV Rate Of Improvement

