
NEWS

Big Tech companies cannot be trusted to self-regulate: We need Congress to act


It’s been two months since Donald Trump was kicked off of social media following the violent insurrection on Capitol Hill in January. While the constant barrage of hate-fueled commentary and disinformation from the former president has come to a halt, we must stay vigilant.

Now is the time to think about how to prevent Trump, his allies and other bad actors from fomenting extremism in the future. It’s time to figure out how we as a society address the misinformation, conspiracy theories and lies that threaten our democracy by destroying our information infrastructure.

As vice president at Color Of Change, my team and I have had countless meetings with leaders of multi-billion-dollar tech companies like Facebook, Twitter and Google, where we had to consistently flag hateful, racist content and disinformation on their platforms. We’ve also raised demands supported by millions of our members to adequately address these systemic issues — calls that are too often met with a lack of urgency and sense of responsibility to keep users and Black communities safe.

The violent insurrection by white nationalists and far-right extremists in our nation’s capital was absolutely fueled and enabled by tech companies who had years to address hate speech and disinformation that proliferated on their social media platforms. Many social media companies relinquished their platforms to far-right extremists, white supremacists and domestic terrorists long ago, and it will take more than an attempted coup to hold them fully accountable for their complicity in the erosion of our democracy — and to ensure it can’t happen again.

To restore our systems of knowledge-sharing and eliminate white nationalist organizing online, Big Tech must move beyond its typical reactive and shallow approach to addressing the harm it causes to our communities and our democracy. But it’s clearer than ever that the federal government must step in to ensure tech giants act.

After six years leading corporate accountability campaigns and engaging with Big Tech leaders, I can definitively say it’s evident that social media companies do have the power, resources and tools to enforce policies that protect our democracy and our communities. However, leaders at these tech giants have demonstrated time and time again that they will choose not to implement and enforce adequate measures to stem the dangerous misinformation, targeted hate and white nationalist organizing on their platforms if it means sacrificing maximum profit and growth.

And they use their massive PR teams to create an illusion that they’re sufficiently addressing these issues. For example, social media companies like Facebook continue to follow a reactive formula of announcing disparate policy changes in response to whatever public relations disaster they’re fending off at the moment. Before the insurrection, the company’s leaders failed to heed the warnings of advocates like Color Of Change about the dangers of white supremacists, far-right conspiracists and racist militias using their platforms to organize, recruit and incite violence. They did not ban Trump, implement stronger content moderation policies or change algorithms to stop the spread of misinformation-superspreader Facebook groups — as we had been recommending for years.

These threats were apparent long before the attack on Capitol Hill. They were obvious as Color Of Change and our allies propelled the #StopHateForProfit campaign last summer, when over 1,000 advertisers pulled millions in ad revenues from the platform. They were obvious when Facebook finally agreed to conduct a civil rights audit in 2018 after pressure from our organization and our members. They were obvious even before the deadly white nationalist demonstration in Charlottesville in 2017.

Only after significant damage had already been done did social media companies take action and concede to some of our most pressing demands, including the calls to ban Trump’s accounts, implement disclaimers on voter fraud claims, and aggressively remove COVID misinformation as well as posts inciting violence at the polls amid the 2020 election. But even now, these companies continue to shirk full responsibility by, for example, using self-created entities like the Facebook Oversight Board — an illegitimate substitute for adequate policy enforcement — as PR cover while the fate of recent decisions, such as the suspension of Trump’s account, hangs in the balance.

Facebook, Twitter, YouTube and many other Big Tech companies kick into action when their profits, self-interests and reputation are threatened, but always after the damage has been done, because their business models are built solely around maximizing engagement. The more polarized content is, the more engagement it gets; the more comments it elicits or times it’s shared, the more of our attention they command and can sell to advertisers. Big Tech leaders have demonstrated they have neither the willpower nor the ability to proactively and successfully self-regulate, and that’s why Congress must immediately intervene.

Congress should enact and enforce federal regulations to rein in the outsized power of Big Tech behemoths, and our lawmakers must create policies that translate to real-life changes in our everyday lives — policies that protect Black and other marginalized communities both online and offline.

We need stronger antitrust enforcement to break up Big Tech monopolies that evade corporate accountability and harm Black businesses and workers; comprehensive privacy and algorithmic discrimination legislation to ensure that profits from our data aren’t used to fuel our exploitation; expanded broadband access to close the digital divide for Black and low-income communities; restored net neutrality so that internet service providers can’t charge differently based on content or equipment; and rules on disinformation and content moderation that make clear Section 230 does not exempt platforms from complying with civil rights laws.

We’ve already seen some progress following pressure from activists and advocacy groups including Color Of Change. Last year alone, Big Tech companies like Zoom hired chief diversity officers; Google took action to block the Proud Boys website and online store; and major social media platforms like TikTok adopted stronger policies banning hateful content.

But we’re not going to applaud billion-dollar tech companies for doing what they should and could have already done to address the years of misinformation, hate and violence fueled by social media platforms. We’re not going to wait for the next PR stunt or blanket statement to come out or until Facebook decides whether or not to reinstate Trump’s accounts — and we’re not going to stand idly by until more lives are lost.

The federal government and regulators need to hold Big Tech accountable to its commitments by immediately enacting policy change. Our nation’s leaders have a responsibility to protect our democracy and our communities from the harms Big Tech is enabling — to regulate social media platforms and change the dangerous incentives in the digital economy. Without federal intervention, tech companies are on pace to repeat history.

TechCrunch

NEWS

What can ChatGPT do?


ChatGPT Explained

ChatGPT is a large language model developed by OpenAI that is trained on a massive amount of text data. It is capable of generating human-like text and has been used in a variety of applications, such as chatbots, language translation, and text summarization.

One of the key features of ChatGPT is its ability to generate text that is similar to human writing. This is achieved through the use of a transformer architecture, which allows the model to understand the context and relationships between words in a sentence. The transformer architecture is a type of neural network that is designed to process sequential data, such as natural language.

Another important aspect of ChatGPT is its ability to generate text that is contextually relevant. This means that the model is able to understand the context of a conversation and generate responses that are appropriate to the conversation. This is accomplished by the use of a technique called “masked language modeling,” which allows the model to predict the next word in a sentence based on the context of the previous words.
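As a rough illustration of how that next-word prediction works in practice, here is a minimal sketch using the open GPT-2 model from Hugging Face’s transformers library as a stand-in, since ChatGPT’s own weights are not publicly available; the prompt and generation settings are illustrative assumptions.

```python
# A minimal sketch of next-word prediction, using GPT-2 as a stand-in
# for ChatGPT (assumption: the Hugging Face `transformers` package is
# installed; ChatGPT's own weights are not publicly available).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt one token at a time, with each new token
# conditioned on the words that came before it.
result = generator("The weather in Paris today is", max_new_tokens=10)
print(result[0]["generated_text"])
```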

One of the most popular applications of ChatGPT is in the creation of chatbots. Chatbots are computer programs that simulate human conversation and can be used in customer service, sales, and other applications. ChatGPT is particularly well-suited for this task because of its ability to generate human-like text and understand context.
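For a sense of what building a simple chatbot on top of ChatGPT can look like, the sketch below assumes OpenAI’s Python client (the pre-1.0 ChatCompletion interface) and an API key stored in an environment variable; the model name, system prompt, and user message are illustrative, not a recommended configuration.

```python
# Minimal customer-service chatbot sketch (assumption: the `openai`
# Python package, pre-1.0 interface, is installed and OPENAI_API_KEY
# is set in the environment).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a polite customer-service assistant."},
        {"role": "user", "content": "Can I return an item I bought two weeks ago?"},
    ],
)

# The assistant's reply is in the first choice's message content.
print(response["choices"][0]["message"]["content"])
```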

Another application of ChatGPT is language translation. By training the model on a large amount of text data in multiple languages, it can be used to translate text from one language to another. The model is able to understand the meaning of the text and generate a translation that is grammatically correct and semantically equivalent.
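As a hedged sketch of what machine translation looks like in code, the example below uses an open T5 model as a stand-in for ChatGPT; the model choice and sample sentence are illustrative assumptions.

```python
# English-to-German translation sketch with an open model standing in
# for ChatGPT (assumption: `transformers` and the T5 tokenizer's
# sentencepiece dependency are installed).
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")
result = translator("The weather is nice today.")
print(result[0]["translation_text"])
```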

In addition to chatbots and language translation, ChatGPT can also be used for text summarization. This is the process of taking a large amount of text and condensing it into a shorter, more concise version. ChatGPT is able to understand the main ideas of the text and generate a summary that captures the most important information.
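Likewise, the sketch below shows summarization with an open BART checkpoint standing in for ChatGPT; the model, length limits, and sample text are illustrative assumptions.

```python
# Summarization sketch with an open model standing in for ChatGPT
# (assumption: the `transformers` package is installed).
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

long_text = (
    "ChatGPT is a large language model developed by OpenAI that is trained "
    "on a massive amount of text data. It can generate human-like text and "
    "has been used in applications such as chatbots, language translation, "
    "and text summarization."
)

# min_length and max_length are token counts for the generated summary.
summary = summarizer(long_text, max_length=40, min_length=10)
print(summary[0]["summary_text"])
```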

Despite its many capabilities and applications, ChatGPT is not without its limitations. One of the main challenges with using language models like ChatGPT is the risk of generating text that is biased or offensive. This can occur when the model is trained on text data that contains biases or stereotypes. To address this, OpenAI has implemented a number of techniques to reduce bias in the training data and in the model itself.

In conclusion, ChatGPT is a powerful language model that is capable of generating human-like text and understanding context. It has a wide range of applications, including chatbots, language translation, and text summarization. While there are limitations to its use, ongoing research and development is aimed at improving the model’s performance and reducing the risk of bias.

** The article above was written entirely by ChatGPT. It is included as an example of the kind of advanced text an automated AI can produce.


NEWS

Google December Product Reviews Update Affects More Than English Language Sites? via @sejournal, @martinibuster


Google’s Product Reviews update was announced as rolling out to English-language pages, with no mention of whether or when it would roll out to other languages. John Mueller answered a question about whether it is rolling out to other languages.

Google December 2021 Product Reviews Update

On December 1, 2021, Google announced on Twitter that a Product Reviews update was rolling out, focused on English-language web pages.

The update focused on improving the quality of reviews shown in Google Search, specifically targeting review sites.

A Googler tweeted a description of the kinds of sites that would be targeted for demotion in the search rankings:

“Mainly relevant to sites that post articles reviewing products.

Think of sites like “best TVs under $200”.com.

Goal is to improve the quality and usefulness of reviews we show users.”


Google also published a blog post with more guidance on the product review update that introduced two new best practices that Google’s algorithm would be looking for.

The first best practice was to provide evidence that the product was actually handled and reviewed.

The second was to provide links to more than one place where a user could purchase the product.

The Twitter announcement stated that the update was rolling out to English-language websites. The blog post neither mentioned which languages it covered nor specified that the Product Reviews update was limited to English.

Google’s Mueller Thinking About Product Reviews Update

Screenshot of Google's John Mueller trying to recall whether the December Product Reviews Update affects more than the English language

Product Review Update Targets More Languages?

The person asking the question was rightly under the impression that the product review update only affected English language search results.


But he asserted that he was seeing search volatility in the German language that appeared to be related to Google’s December 2021 Product Reviews update.

This is his question:

“I was seeing some movements in German search as well.

So I was wondering if there could also be an effect on websites in other languages by this product reviews update… because we had lots of movement and volatility in the last weeks.

…My question is, is it possible that the product reviews update affects other sites as well?”

John Mueller answered:

“I don’t know… like other languages?

“My assumption was this was global and across all languages.

But I don’t know what we announced in the blog post specifically.

But usually we try to push the engineering team to make a decision on that so that we can document it properly in the blog post.

I don’t know if that happened with the product reviews update. I don’t recall the complete blog post.

But it’s… from my point of view it seems like something that we could be doing in multiple languages and wouldn’t be tied to English.

And even if it were English initially, it feels like something that is relevant across the board, and we should try to find ways to roll that out to other languages over time as well.

So I’m not particularly surprised that you see changes in Germany.

But I also don’t know what we actually announced with regards to the locations and languages that are involved.”

Does Product Reviews Update Affect More Languages?

While the tweeted announcement specified that the Product Reviews update was limited to the English language, the official blog post did not mention any such limitation.

Google’s John Mueller offered his opinion that the product reviews update is something that Google could do in multiple languages.

One must wonder if the tweet was meant to communicate that the update was rolling out first in English and subsequently to other languages.

It’s unclear if the product reviews update was rolled out globally to more languages. Hopefully Google will clarify this soon.

Citations

Google Blog Post About Product Reviews Update

Product reviews update and your site

Google’s New Product Reviews Guidelines

Write high quality product reviews

John Mueller Discusses If Product Reviews Update Is Global

Watch Mueller answer the question at the 14:00 Minute Mark


Searchenginejournal.com


NEWS

Survey says: Amazon, Google more trusted with your personal data than Apple is



A survey reported by MacRumors reveals that more people are comfortable with their personal data in the hands of Amazon and Google than in Apple’s. Companies the public really doesn’t trust with their personal data include Facebook, TikTok, and Instagram.

The survey asked over 1,000 internet users in the U.S. how much they trusted certain companies such as Facebook, TikTok, Instagram, WhatsApp, YouTube, Google, Microsoft, Apple, and Amazon to handle their user data and browsing activity responsibly.

Amazon and Google are considered by survey respondents to be more trustworthy than Apple

Those surveyed were asked whether they trusted these firms with their personal data “a great deal,” “a good amount,” “not much,” or “not at all.” Respondents could also answer that they had no opinion about a particular company. 18% of those polled said that they trust Apple “a great deal,” which topped the 14% received by Google and Amazon.

However, 39% said that they trust Amazon “a good amount,” with Google picking up 34% of the votes in that same category. Only 26% of respondents said that they trust Apple “a good amount.” The first two responses, “a great deal” and “a good amount,” are considered positive replies for a company. “Not much” and “not at all” are considered negative responses.

By adding up the scores in the positive categories, Apple tallied 44% (18% of respondents said they trusted Apple with their personal data “a great deal” while 26% said they trusted Apple “a good amount”). But that placed the tech giant third, behind Amazon’s 53% and Google’s 48%. After Apple, Microsoft finished fourth with 43%, YouTube (which is owned by Google) was fifth with 35%, and Facebook was sixth at 20%.
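To make the arithmetic behind those combined scores concrete, the short sketch below sums the two positive responses for the three leading companies using the percentages quoted above; organizing the figures in a dictionary is purely for illustration.

```python
# Combine the two positive responses ("a great deal" + "a good amount")
# into a single trust score, using the percentages quoted in the survey.
positive_responses = {
    "Apple":  {"a great deal": 18, "a good amount": 26},
    "Google": {"a great deal": 14, "a good amount": 34},
    "Amazon": {"a great deal": 14, "a good amount": 39},
}

for company, answers in positive_responses.items():
    combined = sum(answers.values())
    print(f"{company}: {combined}% positive")
# Apple: 44% positive, Google: 48% positive, Amazon: 53% positive
```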

Rounding out the remainder of the nine firms in the survey, Instagram placed seventh with a positive score of 19%, WhatsApp was eighth with a score of 15%, and TikTok was last at 12%.

Looking at the scoring for the two negative responses (“not much” or “not at all”), Facebook had a combined negative score of 72%, making it the least trusted company in the survey. TikTok was next at 63%, with Instagram following at 60%. WhatsApp and YouTube were both in the middle of the pack at 53%, followed by Google at 47% and Microsoft at 42%. Apple and Amazon had the lowest combined negative scores, at 40% each.

74% of those surveyed called targeted online ads invasive

The survey also found that a whopping 82% of respondents found targeted online ads annoying, and 74% called them invasive. Just 27% found such ads helpful. This response doesn’t exactly track with the 62% of iOS users who have used Apple’s App Tracking Transparency feature to opt out of being tracked while browsing websites and using apps. Tracking allows third-party firms to send users targeted ads online, which is something they cannot do to users who have opted out.

The 38% of iOS users who decided not to opt out of being tracked might have done so because they find it convenient to receive targeted ads about a certain product that they looked up online. But is ATT actually doing anything?

Marketing strategy consultant Eric Seufert said last summer, “Anyone opting out of tracking right now is basically having the same level of data collected as they were before. Apple hasn’t actually deterred the behavior that they have called out as being so reprehensible, so they are kind of complicit in it happening.”

The Financial Times says that iPhone users are being lumped together by certain behaviors instead of unique ID numbers in order to send targeted ads. Facebook chief operating officer Sheryl Sandberg says that the company is working to rebuild its ad infrastructure “using more aggregate or anonymized data.”

Aggregated data is individual data combined into high-level statistics. Anonymized data is data from which any information that could be used to identify the people in a group has been removed.

When consumers were asked how often they think their phones or other tech devices are listening in on them in ways they didn’t agree to, 72% answered “very often” or “somewhat often.” 28% responded “rarely” or “never.”

