
NEWS

Google December 2020 Core Update Insights


Five search marketers contributed opinions on Google’s December 2020 Core Update. The observations offer interesting feedback on what may have happened.

In my opinion, Google updates have increasingly been less about ranking factors and more about improving how queries and web pages are understood.

Some have offered the opinion that Google is randomizing search results in order to fool those who try to reverse engineer Google’s algorithm.

I don’t share that opinion.

Certain algorithm features are hard to detect in the search results. It’s not easy to point at a search result and say it is ranking because of the BERT algorithm or Neural Matching.

But it is easy to point to backlinks, E-A-T, or user experience to explain why a site is or isn’t ranking, even when the actual reason may have more to do with something like BERT.

So the search engine results pages (SERPs) may appear confusing and random to those scrutinizing them for traditional, old-school ranking factors to explain why pages are ranking or why they lost rankings in an update.

Of course Google updates may appear inscrutable. The reasons why web pages rank have changed dramatically over the past few years because of technologies like natural language processing.

What if Google Updates and Nobody Sees What Changed?

It’s happened in the past that Google has changed something and the SEO community didn’t notice.

For example, when Google added BERT, many couldn’t detect what had changed.

Now, what if Google added something like the SMITH algorithm? How would the SEO community detect that?

SMITH is described in a Google research paper published in April 2020 and revised in October 2020. SMITH is designed to understand long documents, a task on which it outperforms BERT.

Here is what it says:

“In recent years, self-attention based models like Transformers and BERT have achieved state-of-the-art performance in the task of text matching.

These models, however, are still limited to short text like a few sentences or one paragraph due to the quadratic computational complexity of self-attention with respect to input text length.

In this paper, we address the issue by proposing the Siamese Multi-depth Transformer-based Hierarchical (SMITH) Encoder for long-form document matching.

Our experimental results on several benchmark datasets for long-form document matching show that our proposed SMITH model outperforms the previous state-of-the-art models including hierarchical attention, multi-depth attention-based hierarchical recurrent neural network, and BERT.

Comparing to BERT based baselines, our model is able to increase maximum input text length from 512 to 2048.”
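To make the paper’s “hierarchical” idea concrete, here is a minimal sketch of a two-level encoder in PyTorch: self-attention runs within small token blocks, then across one summary vector per block, which is how this family of models sidesteps the quadratic cost of attending over 2,048 tokens at once. Every name and size below is an assumption for illustration, not Google’s implementation.

```python
# Illustrative sketch of two-level "hierarchical" document matching,
# the general idea behind SMITH-style encoders. Not Google's code.
import torch
import torch.nn as nn

class HierarchicalDocEncoder(nn.Module):
    def __init__(self, vocab_size=30522, dim=256, block_len=32, n_heads=4):
        super().__init__()
        self.block_len = block_len
        self.embed = nn.Embedding(vocab_size, dim)
        # Level 1: attention over the tokens inside one small block (cheap).
        self.block_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, n_heads, batch_first=True), num_layers=2)
        # Level 2: attention over one summary vector per block (also cheap).
        self.doc_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, n_heads, batch_first=True), num_layers=2)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        b, n = token_ids.shape
        blocks = token_ids.view(b, n // self.block_len, self.block_len)
        x = self.embed(blocks)                     # (b, n_blocks, block_len, dim)
        _, nb, bl, d = x.shape
        x = self.block_encoder(x.view(b * nb, bl, d))
        block_vecs = x.mean(dim=1).view(b, nb, d)  # one vector per block
        return self.doc_encoder(block_vecs).mean(dim=1)  # one vector per document

enc = HierarchicalDocEncoder()
doc_a = torch.randint(0, 30522, (1, 2048))  # a 2,048-token document
doc_b = torch.randint(0, 30522, (1, 2048))
score = torch.cosine_similarity(enc(doc_a), enc(doc_b))
print(float(score))  # long-form document matching score
```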

I’m not saying that Google has introduced the SMITH algorithm or that it’s related to the Passages algorithm.

What I am pointing out is that the December 2020 Core Update has that quality of seemingly non-observable change.

If Google added a new AI based feature or updated an existing feature like BERT, would the search marketing community be able to detect it? Probably not.

And it is that quality of non-observable changes that may indicate that what has changed might have something to do with how Google understands web queries and web pages.

If that is the case, then instead of spinning wheels on the usual, easily observed ranking factors (links from scraper sites, site speed, etc.), it may be useful to step back and consider that something more profound than the usual ranking factors has changed.

Insights into Google December 2020 Core Update

I thank those who took the time to contribute their opinions; they provided excellent information that may help put Google’s December core algorithm update into perspective.

Dave Davies (@oohloo)
Beanstalk Internet Marketing

Dave puts this update in the context of what Google has said was coming soon to the algorithm and how that might play a role in the fluctuations.

Dave offered:

“The December 2020 Core Update was a unique one to watch roll out. Many sites we work with started with losses and ended with wins, and vice-versa.

So clearly it had something to do with a signal or signals that cascade. That is, where the change caused one result, but once that new calculation worked its way through the system, it produced another. Like PageRank recalculating, though this one likely had nothing to do with PageRank.

Alternatively, Google may have made adjustments on the fly, or made other changes during the rollout, but I find that less likely.

If we think about the timing, and how it ties to the rolling out of passage indexing and that it’s a Core Update, I suspect it ties to content interpretation systems and not links or signals along those lines.

We also know that Core Web Vitals are entering the algorithm in May of 2021 so there may be elements to support that in the update, but those would not be producing the impact we’ve all been seeing presently given that Web Vitals should technically be inert as a signal at this stage so at the very least, there would be more to the update than that.

As far as general community reaction, this one has been difficult to gauge past “it was big.” As one can expect in any zero-sum scenario, when one person is complaining about a loss, another is smiling all the way up the SERPs.

I suspect that before the end of January it’ll become clear exactly what they were rolling out and why. I believe it has to do with future features and capabilities, but I’ve been around long enough to know I could be wrong, and I need to watch closely.”
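As an aside, Dave’s recalculation analogy is easy to demonstrate with a toy power-iteration PageRank: change a single link and the scores of every page in the graph shift over successive passes. This sketch is purely illustrative, not Google’s production algorithm.

```python
# Toy power-iteration PageRank illustrating a "cascading signal":
# one changed link shifts every page's score as the recalculation
# works its way through the graph. Not Google's production code.
import numpy as np

def pagerank(adj, damping=0.85, iters=50):
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    transition = adj / np.where(out_deg == 0, 1.0, out_deg)  # row-normalize
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        # Each pass propagates the previous scores one more hop.
        rank = (1 - damping) / n + damping * transition.T @ rank
    return rank

# Four pages; adj[i][j] = 1 means page i links to page j.
links = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [1, 0, 0, 0]], dtype=float)
before = pagerank(links)
links[3] = [0, 0, 1, 0]   # page 3 swaps its one link from page 0 to page 2
after = pagerank(links)
print(before.round(3))
print(after.round(3))     # every page's score moves, not just page 2's
```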

Steven Kang (@SEOSignalsLab)

Steven Kang, founder of the popular SEO Signals Lab Facebook group, notes that nothing appears to stand out in terms of commonalities or symptoms shared by the winners and losers.

“This one seems to be tricky. I’m finding gains and losses. I would need to wait more for this one.”

Daniel K Cheung (@danielkcheung)
Team Lead, Prosperity Media

Daniel believes it’s helpful to step back and view Google updates as a whole forest rather than focusing on the single tree of the latest update, and to put these updates into the context of what we know is going on in Search.

One example is the apparent drop in reports of manual actions in Google Search Console. Does that mean Google has gotten better at ranking sites where they belong, without having to resort to punitive manual actions?

This is how Daniel views the latest core algorithm update from Google:

“I think we as Search/Discoverability people need to stop thinking about Core Updates as individual events and instead look at Core Updates as a continuum of ongoing tests and ‘improvements’ to what we see in the SERPs.

So when I refer to the December core update, I want to stress that it is just one event of many.

For example, some affiliate marketers and analysts have found sites that were previously ‘hit’ by the May 2020 update to have recovered in the December rollout. However, this has not been consistent.

And again, here is the problem, we can’t talk about sites that have won or lost because it’s all about individual URLs.

So looking at pure visibility across an entire website doesn’t really give us any clues.

There are murmurs of 301 redirects, PBNs, low-quality backlinks and poor content being reasons why some sites have been pushed from page 1 to page 6-10 of the SERPs (practically invisible).

But these practices have always been susceptible to the daily fluctuations of the algorithm.

What’s been really interesting throughout 2020 is that there have been very few reports of manual penalties within GSC.

This has been eerily replaced with impression and click graphs jumping off a cliff without the site being de-indexed.

In my humble opinion, core updates are becoming less about targeting a specific selection of practices, but rather, an incremental opportunity for the algorithm to mature.

Now, I’m not saying that Google gets it right 100% of the time – the algorithm clearly doesn’t and I don’t think it ever will (due to humanity’s curiosity).”

Christoph Cemper (@cemper)
CEO LinkResearchTools

Christoph Cemper views the latest update as having an impact across a wide range of factors.

Here is what he shared:

“High level, Google is adjusting things that have a global impact in core updates.

That is:

a) Weight ratios for different types of links, and their signals

I think the NoFollow 2.0 rollout from Sept 2019 is not completed, but tweaked. I.e. how much power for which NoFollow in which context.

b) Answer boxes, a lot more. Google increases their own real estate

c) Mass devaluation of PBN link networks and quite obvious footprints of “outreach link building.”

Just because someone sent an outreach email doesn’t make a paid link more natural, even if it was paid with “content” or “exchange of services.”

Michael Martinez (@seo_theory)
Founder of SEOTheory

Michael Martinez offered these insights:

“Based on what I’ve seen in online discussions, people are confused and frustrated. They don’t really know what happened and few seem to have any theories as to why things changed.

In a general sense, it feels to me like Google rewrote a number of its quality policy enforcement algorithms.

Nothing specific in mind but other people’s sites I’ve looked at struck me as being okay, not great. Some of the sites in our portfolio went up, others went down.

Again, it just struck me as being about enforcement or algorithmic interpretation of signals mapped to their guidelines.

Not about punishing anything, but maybe about trying some different approaches to resolving queries.”

What Happened in Google December 2020 Core Update?

The perspectives on what happened in Google’s core algorithm update vary. What most observers seem to agree on is that no obvious factors or changes stand out.

And that’s an interesting observation, because it could mean that something related to AI or natural language processing was refined or introduced. But that’s just speculation until Google explicitly rules it in or out.

Source: Searchenginejournal.com


NEWS

OpenAI Introduces Fine-Tuning for GPT-4, Enabling Customized AI Models


OpenAI has today announced the release of fine-tuning capabilities for its flagship GPT-4 large language model, marking a significant milestone in the AI landscape. This new functionality empowers developers to create tailored versions of GPT-4 to suit specialized use cases, enhancing the model’s utility across various industries.

Fine-tuning has long been a desired feature for developers who require more control over AI behavior, and with this update, OpenAI delivers on that demand. The ability to fine-tune GPT-4 allows businesses and developers to refine the model’s responses to better align with specific requirements, whether for customer service, content generation, technical support, or other unique applications.

Why Fine-Tuning Matters

GPT-4 is a very flexible model that can handle many different tasks. However, some businesses and developers need more specialized AI that matches their specific language, style, and needs. Fine-tuning helps with this by letting them adjust GPT-4 using custom data. For example, companies can train a fine-tuned model to keep a consistent brand tone or focus on industry-specific language.
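As a purely hypothetical illustration, fine-tuning data for OpenAI’s chat models is supplied as JSONL conversations; a brand-tone dataset might start like this (the company name, file name, and wording are invented):

```python
# Hypothetical fine-tuning examples in OpenAI's chat-format JSONL, aimed
# at teaching a consistent brand tone. Company name, file name, and
# wording are invented for illustration.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are Acme Co.'s support assistant. Be warm and concise."},
        {"role": "user", "content": "Where is my order?"},
        {"role": "assistant", "content": "Happy to help! Could you share your order number so I can check?"},
    ]},
    # ...in practice you would supply many such conversations.
]

with open("brand_tone.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```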

Fine-tuning also offers improvements in areas like response accuracy and context comprehension. For use cases where nuanced understanding or specialized knowledge is crucial, this can be a game-changer. Models can be taught to better grasp intricate details, improving their effectiveness in sectors such as legal analysis, medical advice, or technical writing.

Key Features of GPT-4 Fine-Tuning

The fine-tuning process leverages OpenAI’s established tools, but now it is optimized for GPT-4’s advanced architecture. Notable features include:

  • Enhanced Customization: Developers can precisely influence the model’s behavior and knowledge base.
  • Consistency in Output: Fine-tuned models can be made to maintain consistent formatting, tone, or responses, essential for professional applications.
  • Higher Efficiency: Compared to training models from scratch, fine-tuning GPT-4 allows organizations to deploy sophisticated AI with reduced time and computational cost.

Additionally, OpenAI has emphasized ease of use with this feature. The fine-tuning workflow is designed to be accessible even to teams with limited AI experience, reducing barriers to customization. For more advanced users, OpenAI provides granular control options to achieve highly specialized outputs.
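For the sake of illustration, here is a minimal sketch of that workflow using OpenAI’s Python SDK (v1.x). The model snapshot name is an assumption; check OpenAI’s documentation for the identifiers your account can actually fine-tune.

```python
# Minimal sketch of the fine-tuning workflow with the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the prepared JSONL training file.
training_file = client.files.create(
    file=open("brand_tone.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job against a fine-tunable GPT-4-class model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumption: a fine-tunable snapshot
)
print(job.id, job.status)

# 3. Once the job succeeds, call the custom model like any other:
# completion = client.chat.completions.create(
#     model=job.fine_tuned_model,  # e.g. "ft:gpt-4o-...:org::id"
#     messages=[{"role": "user", "content": "Where is my order?"}],
# )
```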

Implications for the Future

The launch of fine-tuning capabilities for GPT-4 signals a broader shift toward more user-centric AI development. As businesses increasingly adopt AI, the demand for models that can cater to specific business needs, without compromising on performance, will continue to grow. OpenAI’s move positions GPT-4 as a flexible and adaptable tool that can be refined to deliver optimal value in any given scenario.

By offering fine-tuning, OpenAI not only enhances GPT-4’s appeal but also reinforces the model’s role as a leading AI solution across diverse sectors. From startups seeking to automate niche tasks to large enterprises looking to scale intelligent systems, GPT-4’s fine-tuning capability provides a powerful resource for driving innovation.

OpenAI announced that fine-tuning GPT-4o will cost $25 per million tokens used during training. Once a fine-tuned model is deployed, usage will cost $3.75 per million input tokens and $15 per million output tokens. To help developers get started, OpenAI is offering 1 million free training tokens per day for GPT-4o and 2 million free training tokens per day for GPT-4o mini until September 23. This makes it easier for developers to try out the fine-tuning service.
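At those rates, a quick back-of-envelope estimate shows how the costs add up; the token volumes below are hypothetical, and only the per-million prices come from the announcement:

```python
# Back-of-envelope costs at the quoted GPT-4o fine-tuning rates.
# Token volumes are hypothetical; only the per-1M rates come from
# the announcement above.
TRAIN_PER_M, INPUT_PER_M, OUTPUT_PER_M = 25.00, 3.75, 15.00  # USD per 1M tokens

train_tokens = 4_000_000                     # e.g. a 1M-token dataset x 4 epochs
monthly_input, monthly_output = 10_000_000, 2_000_000

one_time = train_tokens / 1e6 * TRAIN_PER_M
monthly = monthly_input / 1e6 * INPUT_PER_M + monthly_output / 1e6 * OUTPUT_PER_M
print(f"training: ${one_time:.2f}")          # training: $100.00
print(f"monthly inference: ${monthly:.2f}")  # monthly inference: $67.50
```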

As AI continues to evolve, OpenAI’s focus on customization and adaptability with GPT-4 represents a critical step in making advanced AI accessible, scalable, and more aligned with real-world applications. This new capability is expected to accelerate the adoption of AI across industries, creating a new wave of AI-driven solutions tailored to specific challenges and opportunities.


GOOGLE

This Week in Search News: Simple and Easy-to-Read Update


Here’s what happened in the world of Google and search engines this week:

1. Google’s June 2024 Spam Update

Google finished rolling out its June 2024 spam update over a period of seven days. This update aims to reduce spammy content in search results.

2. Changes to Google Search Interface

Google has removed the continuous scroll feature for search results. Instead, it’s back to the old system of pages.

3. New Features and Tests

  • Link Cards: Google is testing link cards at the top of AI-generated overviews.
  • Health Overviews: There are more AI-generated health overviews showing up in search results.
  • Local Panels: Google is testing AI overviews in local information panels.

4. Search Rankings and Quality

  • Improving Rankings: Google said it can improve its search ranking system, but only through changes that work at scale.
  • Measuring Quality: Google’s Elizabeth Tucker shared how they measure search quality.

5. Advice for Content Creators

  • Brand Names in Reviews: Google advises review writers to mention brand names rather than avoid them.
  • Fixing 404 Pages: Google explained when it’s important to fix 404 error pages.

6. New Search Features in Google Chrome

Google Chrome for mobile devices has added several new search features to enhance user experience.

7. New Tests and Features in Google Search

  • Credit Card Widget: Google is testing a new widget for credit card information in search results.
  • Sliding Search Results: When making a new search query, the results might slide to the right.

8. Bing’s New Feature

Bing is now using AI to write “People Also Ask” questions in search results.

9. Local Search Ranking Factors

Menu items and popular times might be factors that influence local search rankings on Google.

10. Google Ads Updates

  • Query Matching and Brand Controls: Google Ads updated its query matching and brand controls, and advertisers are happy with these changes.
  • Lead Credits: Google will automate lead credits for Local Service Ads. Google says this is a good change, but some advertisers are worried.
  • tROAS Insights Box: Google Ads is testing a new insights box for tROAS (Target Return on Ad Spend) in Performance Max and Standard Shopping campaigns.
  • WordPress Tag Code: There is a new conversion code for Google Ads on WordPress sites.

These updates highlight how Google and other search engines are continuously evolving to improve user experience and provide better advertising tools.

Keep an eye on what we are doing
Be the first to get latest updates and exclusive content straight to your email inbox.
We promise not to spam you. You can unsubscribe at any time.
Invalid email address
Continue Reading

FACEBOOK

Facebook Faces Yet Another Outage: Platform Encounters Technical Issues Again


Updated: It seems that today’s issues with Facebook haven’t affected as many users as the last time. A smaller group of people appears to be impacted this time around, which is a relief compared to the larger incident before. Nevertheless, it’s still frustrating for those affected, and hopefully, the issues will be resolved soon by the Facebook team.

Facebook had another problem today (March 20, 2024). According to Downdetector, a website that shows when other websites are not working, many people had trouble using Facebook.

This isn’t the first time Facebook has had issues. Just a little while ago, there was another problem that stopped people from using the site. Today, when people tried to use Facebook, it didn’t work as it should. People couldn’t see their friends’ posts, and sometimes the website wouldn’t even load.

Downdetector, which watches out for problems on websites, showed that lots of people were having trouble with Facebook. People from all over the world said they couldn’t use the site, and they were not happy about it.

When websites like Facebook have problems, it affects a lot of people. It’s not just about not being able to see posts or chat with friends. It can also impact businesses that use Facebook to reach customers.

Since Facebook owns Messenger and Instagram, the problems with Facebook also meant that people had trouble using these apps. It made the situation even more frustrating for many users, who rely on these apps to stay connected with others.

During this recent problem, one thing is obvious: the internet is always changing, and even big websites like Facebook can have problems. While people wait for Facebook to fix the issue, it shows us how easily things online can go wrong. It’s a good reminder that we should have backup plans for staying connected online, just in case something like this happens again.

