The new app is called watchGPT and, as I tipped off already, it gives you access to ChatGPT from your Apple Watch. Now the $10,000 question (or, more accurately, the $3.99 question, since that is the one-time cost of the app) is whether having ChatGPT on your wrist is remotely necessary, so let’s dive into what exactly the app can do.
NEWS
Google on Penalizing Misinformation

A member of the SEO community expressed the opinion that misinformation in medical search results is as harmful and bad for users as spam content. And if that’s true, why doesn’t Google penalize misinformation sites with the same vigor it penalizes sites for spam? Google’s Danny Sullivan offered an explanation.
Should Misleading Information Be Treated Like Spam?
Joe Hall (@joehall), a member of the search marketing community, framed the question of misinformation in the search results within the context of a bad user experience and compared it to spam.
One of the reasons why Google cracks down on spam is because it’s a poor user experience, so it’s not unreasonable to link misinformation with spam.
Joe Hall isn’t alone in linking misleading information with spam. Google does too.
Google Defines Misleading Content as Spam
Google’s own Webmaster Guidelines defines misleading information as spam because it harms the user experience.
“A rich result may be considered spam if it harms the user experience by highlighting falsified or misleading information. For example, a rich result promoting a travel package as an Event or displaying fabricated Reviews would be considered spam.”
If a user searches for “this” and is taken to a page of content about “that,” according to Google’s own guidelines, that’s considered spam.
Is Misleading Different From Misinformation?
Some may quibble that there’s a difference between the words misleading and misinformation.
But this is how the dictionary defines those words:
Merriam-Webster Definition of misleading:
“…to lead in a wrong direction or into a mistaken action or belief often by deliberate deceit… to lead astray : give a wrong impression…”
Merriam-Webster Definition of misinformation:
“…incorrect or misleading information”
Regardless of whether you believe there’s a gulf of difference between misleading and misinformation, the end result is the same: an unfulfilled user and a bad user experience.
Google’s Algorithm Designed to Fulfill Information Needs
Google’s documentation on its ranking updates states that the purpose of the changes is to fulfill users’ information needs. The reason Google wants to send users to sites that fulfill their information needs is that doing so is a “great user experience.”
Here’s what Google said about their algorithms:
“The goal of many of our ranking changes is to help searchers find sites that provide a great user experience and fulfill their information needs.”
If a site that provides quality information delivers a great user experience, then it’s not unreasonable to say that a site that provides misleading information delivers a poor one.
The word “egregious” means shockingly bad, an appropriate word for a site that provides misleading information for sensitive medical search queries.
So, if it’s true that misleading information provides a poor user experience, then why isn’t Google tackling these sites the same way it takes down spam sites?
If misleading information is as bad as or worse than spam, why doesn’t Google hand out its most severe penalties (like manual actions) to sites that are egregious offenders?
Joe Hall tweeted:
“If you are found to spread misinformation about COVID19 vaccines… Then you shouldn’t be in Google’s index at all. It’s time that G puts it’s money where its mouth is in regards to content quality.”
Joe Hall next tweeted about the seeming futility of the algorithm and concepts like E-A-T for dealing with misinformation, and about the difference between how Google treats spam and misinformation:
“Forget Core Quality Updates, YMYL, and EAT, just kick them out of the index. Sick of seeing Google put the hammer down for things like buying links… But consistently turns a blind eye to content that causes real harm to people.”
Google Responds to Issue of Misinformation in SERPs
Google’s Danny Sullivan insisted that Google is not turning a blind eye to misinformation. He affirmed Google’s commitment to showing useful information in the SERPs.
We don’t turn a blind eye. Just because something is indexed is entirely different from whether it ranks. We invest a huge amount of resources to ensure we’re returning useful, authoritative information in ranking. See also: https://t.co/SRUFrTcg56 and https://t.co/cTveD8XNxp
— Danny Sullivan (@dannysullivan) December 10, 2020
The end result is the same. Our systems look to reward quality. If you are posting misinformation, you’re not rewarded, because you don’t rank well. If you try to artificially boost your relevance, you’re not rewarded, because you get a manual action and don’t rank well.
— Danny Sullivan (@dannysullivan) December 10, 2020
Hall replied:
“Bottom line, protecting your user’s life/health should take a higher precedence than punishing link buyers.”
Sullivan responded:
“It already does. You are choosing to deliberately focus on the fact that we take manual action on *some* things in *addition to* automated protections to make it seem like our existing ranking systems are somehow not trying to show the best and most useful info we can.”
Google’s Danny Sullivan then followed up with:
“It seems like you equate manual action in the case of some spam attempts as somehow like we’re not working across all pages all the time to fight both spam and misinformation. We are.”
Joe Hall returned to ask why misleading sites aren’t penalized in a similar manner to spam:
I understand that. The point I’m trying to make is why isn’t there a manual penalty for spreading disinformation that can kill people? Why is it that manual penalties are only reserved for links? Algorithms don’t carry the same message that manual penalties do.
— Joe Hall 🦡 (@joehall) December 11, 2020
Danny explained in two tweets the challenge of manually reviewing millions of misleading sites and of ranking breaking news:
“There are millions of pages with misinformation out there. We can’t manually review all existing pages, somehow judge them & also review every new page that’s created for topics that are entirely new. The best way to deal with that is how we do, a focus on quality ranking…
Remember the whole 15% of queries are new thing. That’s a big deal. Some new story breaks, uncertain info flows, misinfo flows along with authority info that flows. Our systems have to deal with this within seconds. Seconds. Over thousands+ pages that quickly emerge…”
Next Danny asserted that automated systems do far more heavy lifting against spam than manual actions.
“Yes, we will take manual actions in addition to the automated stuff, but that’s a tiny amount and also something where a manual approach can work, because it’s pretty clear to us what’s spam or not.”
Google and Misinformation
It’s uncertain whether Google’s algorithms are doing a good job surfacing high quality information in the search results.
But the question of whether Google should elevate how it treats misinformation is a valid one. Particularly for YMYL queries on medical topics, blocking misinformation in the search results seems to be as important as blocking spam.
NEWS
We asked ChatGPT what Google (GOOG) stock price will be in 2030

Investors who have invested in Alphabet Inc. (NASDAQ: GOOG) stock have reaped significant benefits from the company’s robust financial performance over the last five years. Google’s dominance in the online advertising market has been a key driver of the company’s consistent revenue growth and impressive profit margins.
In addition, Google has expanded its operations into related fields such as cloud computing and artificial intelligence. These areas show great promise as future growth drivers, making them increasingly attractive to investors. Notably, Alphabet’s stock price has been rising due to investor interest in the company’s recent initiatives in the fast-developing field of artificial intelligence (AI), adding generative AI features to Gmail and Google Docs.
However, when it comes to predicting the future stock price of a corporation like Google, there are many factors to consider. With this in mind, Finbold turned to the artificial intelligence tool ChatGPT to suggest a likely pricing range for GOOG stock by 2030. Although the tool was unable to give a definitive price range, it did note the following:
“Over the long term, Google has a track record of strong financial performance and has shown an ability to adapt to changing market conditions. As such, it’s reasonable to expect that Google’s stock price may continue to appreciate over time.”
GOOG stock price prediction
When attempting to estimate a future price range, it is essential to consider a variety of sources in addition to the AI chat tool, including forecasting models built on deep learning and the views of stock market experts.
To compare with ChatGPT’s projection, Finbold also collected forecasts from CoinPriceForecast, a finance prediction tool that uses self-learning machine technology, for Google’s stock price at the end of 2030.
According to the most recent long-term estimate, which Finbold obtained on March 20, the price of Google stock will rise beyond $200 in 2030 and touch $247 by the end of that year, which would indicate a gain of roughly 141% from today’s price.
For the nearer term, the majority of Wall Street analysts have assigned Google a ‘strong buy’ recommendation. Notably, 36 of the 48 analysts recommend a ‘strong buy,’ while seven advocate a ‘buy.’ The remaining five analysts have given a ‘hold’ rating.

The average price projection for Alphabet stock over the last three months has been $125.32; this target represents a 22.31% upside from its current price. It’s interesting to note that the maximum price forecast for the next year is $160, representing a gain of 56.16% from the stock’s current price of $102.46.
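As a quick sanity check on those figures, the implied upside is simply the difference between the target and the current price, divided by the current price. Here is a minimal Python sketch using the prices quoted above (the variable names are just for illustration):

```python
def percent_upside(current: float, target: float) -> float:
    """Percentage gain implied by moving from `current` to `target`."""
    return (target - current) / current * 100

current_price = 102.46  # GOOG price quoted in the article

targets = {
    "average 12-month target": 125.32,
    "maximum 12-month forecast": 160.00,
    "CoinPriceForecast 2030 estimate": 247.00,
}

for label, target in targets.items():
    print(f"{label}: {percent_upside(current_price, target):.2f}% upside")

# Prints roughly 22.31%, 56.16% and 141.07%, matching the figures above.
```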
While the outlook for Google stock may be positive, it’s important to keep in mind that potential challenges and risks could impact its performance, including competition from ChatGPT itself.
Disclaimer: The content on this site should not be considered investment advice. Investing is speculative. When investing, your capital is at risk.
NEWS
This Apple Watch app brings ChatGPT to your wrist — here’s why you want it

ChatGPT feels like it is everywhere at the moment; the AI-powered tool is rapidly starting to feel like internet-connected home devices, where you’re left wondering whether your flower pot really needed Bluetooth. However, after hearing about a new Apple Watch app that brings ChatGPT to your favorite wrist computer, I’m actually convinced this one is worth checking out.
NEWS
Discord goes all in with AI: chatbots, automods, whiteboards and more

AI is the future, at least over on Discord.
The messaging application originally made for gamers has become Gen Z’s online hangout of choice, and now it’s rolling out a number of features powered by artificial intelligence.
In an announcement on Thursday, Discord shared what’s coming to the platform soon: an AI chatbot, an automated AI moderator, a conversation summarizer, an avatar remixer, and a whiteboard. Some of these features begin rolling out today, March 9. Others will launch in the coming weeks and months.
While AI has jumped into the mainstream thanks to the popularity of OpenAI’s ChatGPT chatbot, Discord has had an active AI community for quite a while now. According to the company, third-party AI apps on the platform already have more than 30 million monthly users, and nearly 3 million Discord servers have some AI element integrated into the community.
In fact, the biggest community on Discord is Midjourney, a text-to-image AI project that allows users to generate art right from within the server. Discord says Midjourney’s server has more than 13 million members.
So, with AI being such an integral part of Discord already, it seemed like only a matter of time before Discord itself started bringing AI directly into the platform.
AutoMod AI
Credit: Discord
The first feature coming to some Discord servers as soon as today is AutoMod AI. Discord already has an AutoMod feature, which automatically moderates channels for admins based on the server’s rules. Discord has now integrated OpenAI-powered AI into AutoMod, allowing it to scan the server and contact moderators when it suspects rules are being broken. According to Discord, AutoMod AI can also consider the context of a conversation so that, for example, users don’t get penalized for posts that are misconstrued.
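Discord hasn’t published how AutoMod AI is wired up, but the general shape of an AI-assisted moderation pass is easy to sketch. Below is a minimal illustration, assuming the openai Python client and its moderation endpoint, with the moderator alert reduced to a placeholder; this is not Discord’s actual implementation.

```python
# Illustrative sketch only: flag messages for human review with an AI check.
# This is NOT Discord's AutoMod AI implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def should_alert_moderators(message_text: str) -> bool:
    """Return True if the message looks like it may break the rules."""
    result = client.moderations.create(input=message_text)
    return result.results[0].flagged

if should_alert_moderators("example message that might violate server rules"):
    # A real bot would post to a private moderators channel instead of printing.
    print("Possible rule violation: escalating to human moderators")
```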
Clyde is a bot that Discord users may already be familiar with, and starting next week, Clyde is getting an AI upgrade. Currently, the Clyde bot provides information, such as server error messages, and responds to timeout or ban requests from users and mods. However, that’s pretty much all Clyde can do. Until now.

Clyde
Credit: Discord
Clyde will now be able to answer all sorts of questions from users, much like OpenAI’s ChatGPT chatbot. Users simply have to type “@Clyde” followed by their prompt. Clyde will be able to pull up information and also help find specific emojis or GIFs based on a user’s description.
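For a rough sense of the mechanics, here is a minimal sketch of a mention-triggered chatbot in the spirit of “@Clyde” followed by a prompt, built with the discord.py and openai libraries. The bot token variable and the model choice are assumptions for illustration; this is not Discord’s actual Clyde.

```python
# Illustrative sketch of a mention-triggered chatbot ("@BotName <prompt>").
# This is NOT Discord's implementation of Clyde.
import os

import discord
from openai import OpenAI

openai_client = OpenAI()  # assumes OPENAI_API_KEY is set
intents = discord.Intents.default()
intents.message_content = True
bot = discord.Client(intents=intents)

@bot.event
async def on_message(message: discord.Message):
    # Ignore the bot's own messages and anything that doesn't mention it.
    if message.author == bot.user or bot.user not in message.mentions:
        return
    prompt = message.clean_content.replace(f"@{bot.user.name}", "").strip()
    reply = openai_client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model for the example
        messages=[{"role": "user", "content": prompt}],
    )
    await message.channel.send(reply.choices[0].message.content)

bot.run(os.environ["DISCORD_BOT_TOKEN"])  # hypothetical environment variable
```

In a production bot the blocking API call would be moved off the event loop, but the shape of the interaction is the same: detect the mention, strip it out, and pass the remainder as the prompt.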
Another AI feature coming to Discord next week is Conversation Summaries. Again, the name is fairly descriptive of what it does. With users all over the world, many Discord channels are always moving, regardless of the time of day. Conversation Summaries will allow users to catch up on what they missed on a Discord server. The AI-powered feature will “bundle” chats into topics so users can easily read up on what they find most interesting.

Conversation Summaries
Credit: Discord
Starting today, developers can begin playing with Avatar Remix, an open-source Discord app that integrates AI art into the messaging app. Avatar Remix allows users to take a fellow user’s avatar and change it up “using the power of generative image models.” What does that mean? In the demo that Discord showed Mashable, a user was able to add a party hat or a mustache to a friend’s avatar simply by mentioning their username and describing the changes they’d like to make.

Avatar Remix
Credit: Discord
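Discord hasn’t said which generative image models power Avatar Remix, but the describe-an-edit workflow maps naturally onto an image-editing API. Here is a purely illustrative sketch using the openai Python client’s image edit endpoint; the file name, prompt, and model are placeholder assumptions, not Discord’s implementation.

```python
# Purely illustrative: "remix" an avatar by describing the change in text.
# This is NOT Discord's Avatar Remix implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The edit endpoint expects a square PNG; transparent regions (or a separate
# mask= argument) tell the model which part of the image it may change.
with open("avatar.png", "rb") as avatar:  # placeholder file name
    result = client.images.edit(
        model="dall-e-2",  # assumed model for the example
        image=avatar,
        prompt="add a red party hat to this avatar",
        n=1,
        size="256x256",
    )

print(result.data[0].url)  # URL of the remixed avatar
```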
The company is also launching an “AI incubator,” offering support for developers creating AI-powered apps on Discord.
Finally, Discord revealed a long-requested feature that’s coming soon: a whiteboard. But, of course, this won’t be just any collaborative whiteboard. It’s going to be AI-powered, allowing users to collaborate on generating AI art and more.