GOOGLE

Tech giants still not doing enough to fight fakes, says European Commission

It’s a year since the European Commission got a bunch of adtech giants together to spill ink on a voluntary Code of Practice to do something — albeit, nothing very quantifiable — as a first step to stop the spread of disinformation online.

Its latest report card on this voluntary effort sums to: the platforms could do better.

The Commission said the same in January, and will doubtless say it again, unless or until regulators grasp the nettle of online business models that profit by maximizing engagement. As the saying goes, lies fly while the truth comes stumbling after. So attempts to shrink disinformation without fixing the economic incentives to spread BS in the first place are mostly dealing in cosmetic tweaks and optics.

Signatories to the Commission’s EU Code of Practice on Disinformation are: Facebook, Google, Twitter, Mozilla, Microsoft and several trade associations representing online platforms, the advertising industry, and advertisers — including the Internet Advertising Bureau (IAB) and World Federation of Advertisers (WFA).

In a press release assessing today’s annual reports, compiled by signatories, the Commission expresses disappointment that no other Internet platforms or advertising companies have signed up since Microsoft joined as a late addition to the Code this year.

“We commend the commitment of the online platforms to become more transparent about their policies and to establish closer cooperation with researchers, fact-checkers and Member States. However, progress varies a lot between signatories and the reports provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny,” commissioners Věra Jourová, Julian King, and Mariya Gabriel write in a joint statement. [emphasis ours]

“While the 2019 European Parliament elections in May were clearly not free from disinformation, the actions and the monthly reporting ahead of the elections contributed to limiting the space for interference and improving the integrity of services, to disrupting economic incentives for disinformation, and to ensuring greater transparency of political and issue-based advertising. Still, large-scale automated propaganda and disinformation persist and there is more work to be done under all areas of the Code. We cannot accept this as a new normal,” they add.

The risk, of course, is that the Commission’s limp-wristed code rapidly cements a milky jelly of self-regulation in the fuzzy zone of disinformation as the new normal, as we warned when the Code launched last year.

The Commission continues to leave the door open (a crack) to doing something platforms can’t (mostly) ignore — i.e. actual regulation — saying its assessment of the effectiveness of the Code remains ongoing.

But that’s just a dangled stick. At this transitional point between outgoing and incoming Commissions, it seems content to stay in a ‘must do better’ holding pattern. (Or: “It’s what the Commission says when it has other priorities,” as one source inside the institution put it.)

A comprehensive assessment of how the Code is working is slated as coming in early 2020 — i.e. after the new Commission has taken up its mandate. So, yes, that’s the sound of the can being kicked a few more months on.

Summing up its main findings from signatories’ self-marked ‘progress’ reports, the outgoing Commission says they have reported improved transparency compared with a year ago, including closer discussion of their respective policies against disinformation.

But it flags poor progress on implementing commitments to empower consumers and the research community.

“The provision of data and search tools is still episodic and arbitrary and does not respond to the needs of researchers for independent scrutiny,” it warns. 

Ironically, this is an issue on which one of the signatories, Mozilla, has been an active critic of others — including Facebook, whose political ad API it reviewed damningly this year, finding it not fit for purpose and “designed in ways that hinders the important work of researchers, who inform the public and policymakers about the nature and consequences of misinformation”. So, er, ouch.

The Commission is also critical of what it says are “significant” variations in the scope of actions undertaken by platforms to implement “commitments” under the Code, noting that differences persist across Member States in the implementation of platform policies, in cooperation with stakeholders, and in sensitivity to electoral contexts, as well as in the EU-specific metrics provided.

But given the Code only ever asked for fairly vague action in some pretty broad areas, without prescribing exactly what platforms were committing themselves to doing, nor setting benchmarks for action to be measured against, inconsistency and variety are really what you’d expect. That, and the can being kicked down the road.

The Code did extract one quasi-firm commitment from signatories — on the issue of bot detection and identification — by getting platforms to promise to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”.

A year later it’s hard to see clear signs of progress on that goal, although platforms might argue that most of their sweat on this front is going toward catching and killing malicious bot accounts before they have a chance to spread any fakes.

Twitter’s annual report, for instance, talks about what it’s doing to fight “spam and malicious automation strategically and at scale” on its platform — saying its focus is “increasingly on proactively identifying problematic accounts and behaviour rather than waiting until we receive a report”; after which it says it aims to “challenge… accounts engaging in spammy or manipulative behavior before users are ​exposed to ​misleading, inauthentic, or distracting content”.

So, in other words, if Twitter does this perfectly — and catches every malicious bot before it has a chance to tweet — it might plausibly argue that bot labels are redundant. Though it’s clearly not in a position to claim it’s won the spam/malicious bot war yet. Ergo, its users remain at risk of consuming inauthentic tweets that aren’t clearly labeled as such (or even as ‘potentially suspect’ by Twitter). Presumably because these are the accounts that continue slipping under its bot-detection radar.

There’s also nothing in Twitter’s report about it labelling even (non-malicious) bot accounts as bots — for the purpose of preventing accidental confusion (after all satire misinterpreted as truth can also result in disinformation). And this despite the company suggesting a year ago that it was toying with adding contextual labels to bot accounts, at least where it could detect them.

In the event, it has resisted adding any more badges to accounts, while an internal reform of its verification policy for verified account badges was put on pause last year.

Facebook’s report also only makes a passing mention of bots, under a section sub-headed “spam” — where it writes circularly: “Content actioned for spam has increased considerably, since we found and took action on more content that goes against our standards.”

It includes some data-points to back up this claim of more spam squashed — citing a May 2019 Community Standards Enforcement report — where it states that in Q4 2018 and Q1 2019 it acted on 1.8 billion pieces of spam in each of the quarters vs 737 million in Q4 2017; 836 million in Q1 2018; 957 million in Q2 2018; and 1.2 billion in Q3 2018.

Though it’s lagging on publishing more up-to-date spam data now, noting in the report submitted to the EC that: “Updated spam metrics are expected to be available in November 2019 for Q2 and Q3 2019” — i.e. conveniently late for inclusion in this report.

Facebook’s report notes ongoing efforts to put contextual labels on certain types of suspect/partisan content, such as labelling photos and videos which have been independently fact-checked as misleading; labelling state-controlled media; and labelling political ads.

Labelling bots is not discussed in the report — presumably because Facebook prefers to focus attention on self-defined spam-removal metrics vs muddying the water with discussion of how much suspect activity it continues to host on its platform, either through incompetence, lack of resources or because it’s politically expedient for its business to do so.

Labelling all these bots would mean Facebook signposting inconsistencies in how it applies its own policies, in a way that might foreground its own political bias. And there’s no self-regulatory mechanism under the sun that will make Facebook fess up to such double-standards.

For now, the Code’s requirement for signatories to publish an annual report on what they’re doing to tackle disinformation looks to be the biggest win so far. Albeit, it’s very loosely bound self-reporting, and some of these ‘reports’ don’t even run to a full page of A4 text — so set your expectations accordingly.

The Commission has published all the reports on its website, along with its own summary and assessment of them.

“Overall, the reporting would benefit from more detailed and qualitative insights in some areas and from further big-picture context, such as trends,” it writes. “In addition, the metrics provided so far are mainly output indicators rather than impact indicators.”

Of the Code generally — as a “self-regulatory standard” — the Commission argues it has “provided an opportunity for greater transparency into the platforms’ policies on disinformation as well as a framework for structured dialogue to monitor, improve and effectively implement those policies”, adding: “This represents progress over the situation prevailing before the Code’s entry into force, while further serious steps by individual signatories and the community as a whole are still necessary.”


AI

Exploring the Evolution of Language Translation: A Comparative Analysis of AI Chatbots and Google Translate


According to an article on PCMag, while Google Translate makes translating sentences into over 100 languages easy, regular users acknowledge that there’s still room for improvement.

In theory, large language models (LLMs) such as ChatGPT are expected to bring about a new era in language translation. These models consume vast amounts of text-based training data and real-time feedback from users worldwide, enabling them to quickly learn to generate coherent, human-like sentences in a wide range of languages.

However, despite the anticipation that ChatGPT would revolutionize translation, experience suggests such expectations often outrun reality. To put the claims to the test, PCMag conducted a blind test, asking fluent speakers of eight non-English languages to evaluate the translation results from various AI services.

The test compared ChatGPT (both the free and paid versions) to Google Translate, as well as to other competing chatbots such as Microsoft Copilot and Google Gemini. The evaluation involved comparing the translation quality for two test paragraphs across different languages, including Polish, French, Korean, Spanish, Arabic, Tagalog, and Amharic.

In the first test conducted in June 2023, participants consistently favored AI chatbots over Google Translate. ChatGPT, Google Bard (now Gemini), and Microsoft Bing outperformed Google Translate, with ChatGPT receiving the highest praise. ChatGPT demonstrated superior performance in converting colloquialisms, while Google Translate often provided literal translations that lacked cultural nuance.

For instance, ChatGPT accurately translated colloquial expressions like “blow off steam,” whereas Google Translate produced more literal translations that failed to resonate across cultures. Participants appreciated ChatGPT’s ability to maintain consistent levels of formality and its consideration of gender options in translations.
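
To make the comparison concrete, here is a minimal sketch of the kind of idiom-aware translation prompt such a test might use. It assumes the official `openai` Python client (v1.x) and an `OPENAI_API_KEY` environment variable; the prompt wording and model name are illustrative, not PCMag’s actual methodology.

```python
# A minimal sketch of LLM-based translation, not PCMag's test harness.
# Assumes the official `openai` Python client (v1.x) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def translate(text: str, target_language: str) -> str:
    """Ask a chat model for a translation that preserves tone and idiom."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative; any chat model works here
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a professional translator. Preserve tone, "
                    "formality, and colloquial meaning rather than "
                    "translating word for word."
                ),
            },
            {"role": "user", "content": f"Translate into {target_language}: {text}"},
        ],
    )
    return response.choices[0].message.content


# The idiom PCMag's panel flagged: a literal rendering misses the meaning.
print(translate("I need to blow off some steam.", "Polish"))
```

The system prompt is doing most of the work here: without an explicit instruction to preserve idiom, a model is more likely to fall back on the literal rendering the panelists penalized Google Translate for.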

The success of AI chatbots like ChatGPT can be attributed to reinforcement learning with human feedback (RLHF), which allows these models to learn from human preferences and produce culturally appropriate translations, particularly for non-native speakers. However, it’s essential to note that while AI chatbots outperformed Google Translate, they still had limitations and occasional inaccuracies.

In a subsequent test, PCMag evaluated different versions of ChatGPT, including the free and paid versions, as well as language-specific AI agents from OpenAI’s GPTStore. The paid version of ChatGPT, known as ChatGPT Plus, consistently delivered the best translations across various languages. However, Google Translate also showed improvement, performing surprisingly well compared to previous tests.

Overall, while ChatGPT Plus emerged as the preferred choice for translation, Google Translate demonstrated notable improvement, challenging the notion that AI chatbots are always superior to traditional translation tools.


Source: https://www.pcmag.com/articles/google-translate-vs-chatgpt-which-is-the-best-language-translator


GOOGLE

Google Implements Stricter Guidelines for Mass Email Senders to Gmail Users


Beginning in April, senders bombarding Gmail users with unwanted mass emails will encounter a surge in message rejections unless they comply with Google’s newly minted email sender requirements, the company cautions.

Fresh Guidelines for Sending Mass Emails to Gmail Inboxes

As highlighted in a piece on Forbes, new rules are being introduced to shield Gmail users from the deluge of unsolicited mass emails. Initially, reports surfaced of certain marketers receiving error notifications on messages sent to Gmail accounts. A Google representative clarified, however, that these errors, denoted 550-5.7.56, weren’t new but stemmed from existing authentication requirements.

Moreover, Google has confirmed that, starting in April, it will begin “the rejection of a portion of non-compliant email traffic, progressively escalating the rejection rate over time.” Google elaborates that, for instance, if 75% of a sender’s traffic adheres to the new email sender authentication criteria, then a portion of the remaining non-conforming 25% will face rejection. The exact proportion remains undisclosed, though Google says the new rules will be enforced in a “step-by-step fashion.”

This cautious, methodical rollout appears to have already kicked off, with transient errors affecting a “fraction of their non-compliant email traffic” this month. Additionally, Google stipulates that bulk senders have until June 1 to implement “one-click unsubscribe” in all commercial or promotional correspondence.
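
“One-click unsubscribe” refers to the header pair standardized in RFC 8058. As a minimal sketch of what a compliant message looks like, the snippet below builds one with Python’s standard `email` library; the addresses and URL are placeholders, and a real sender must also host an endpoint that honors an HTTPS POST to the unsubscribe URI without further user interaction.

```python
# A minimal sketch of the RFC 8058 "one-click unsubscribe" headers that
# Google's bulk-sender rules require by June 1. Addresses and the URL
# are placeholders, not values Google specifies.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "newsletter@example.com"
msg["To"] = "user@gmail.com"
msg["Subject"] = "This week's update"
# Both mailto: and https: targets are allowed; one-click requires https.
msg["List-Unsubscribe"] = (
    "<mailto:unsubscribe@example.com>, <https://example.com/unsubscribe?u=123>"
)
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("Hello! ...")

print(msg)  # inspect the generated headers before handing off to an MTA
```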

Only Personal Gmail Accounts Subject to Rejection

These changes affect only bulk emails sent to personal Gmail accounts. Bulk senders, specifically those sending at least 5,000 messages a day to Gmail accounts, will be required to authenticate outgoing email and “refrain from dispatching unsolicited emails.” The 5,000-message threshold is counted across emails sent from the same primary domain, regardless of any subdomains used; once the threshold is met, the domain is categorized as a permanent bulk sender.
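
The sender authentication Google is checking for is generally implemented via SPF, DKIM, and DMARC records published in DNS. As a hedged illustration, the sketch below uses the third-party `dnspython` package to look up those records for a placeholder domain; DKIM selectors are sender-specific, so “default” here is only a guess.

```python
# A minimal sketch of inspecting the DNS records behind bulk-sender
# authentication (SPF, DKIM, DMARC). "example.com" and the DKIM
# selector "default" are placeholders. Requires `pip install dnspython`.
import dns.resolver


def txt_records(name: str) -> list[str]:
    """Return all TXT records published at `name`, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]


domain = "example.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
dkim = txt_records(f"default._domainkey.{domain}")  # selector is a guess

print("SPF:  ", spf or "missing")
print("DMARC:", dmarc or "missing")
print("DKIM: ", dkim or "missing (or published under a different selector)")
```

A sender failing any of these checks is exactly the “non-compliant email traffic” Google says it will progressively reject.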

These rules do not extend to messages sent to Google Workspace accounts, although all senders, including those using Google Workspace, are required to meet the updated criteria.

Stronger Security and Greater Control for Gmail Users

A Google spokesperson emphasized that these requirements are being rolled out to “fortify sender-side security and augment user control over inbox contents even further.” For recipients, this means greater confidence in the authenticity of the email sender, reducing the risk of falling prey to phishing attempts, a tactic frequently exploited by malicious actors via authentication loopholes. “If anything,” the spokesperson concludes, “meeting these stipulations should facilitate senders in reaching their intended recipients more efficiently, with reduced risks of spoofing and hijacking by malicious actors.”


GOOGLE

Google’s Next-Gen AI Chatbot, Gemini, Faces Delays: What to Expect When It Finally Launches


In an unexpected turn of events, Google has chosen to postpone the much-anticipated debut of its revolutionary generative AI model, Gemini. Initially poised to make waves this week, the unveiling has now been rescheduled for early next year, specifically in January.

Gemini is set to redefine the landscape of conversational AI, representing Google’s most potent endeavor in this domain to date. Positioned as a multimodal AI chatbot, Gemini boasts the capability to process diverse data types. This includes a unique proficiency in comprehending and generating text, images, and various content formats, even going so far as to create an entire website based on a combination of sketches and written descriptions.

Originally, Google had planned an elaborate series of launch events spanning California, New York, and Washington. Regrettably, these events have been canceled due to concerns about Gemini’s responsiveness to non-English prompts. According to anonymous sources cited by The Information, Google’s Chief Executive, Sundar Pichai, personally decided to postpone the launch, acknowledging the importance of global support as a key feature of Gemini’s capabilities.

Gemini is expected to surpass the renowned ChatGPT, powered by OpenAI’s GPT-4 model, and preliminary private tests have reportedly shown promising results. Gemini was also trained with significantly more computing power, measured in FLOPS (floating-point operations per second), than GPT-4, owing to Google’s access to a multitude of high-end AI accelerators through the Google Cloud platform.

SemiAnalysis, a research firm that publishes on Substack, said in an August blog post that Gemini appears poised to “blow OpenAI’s model out of the water.” The extensive compute power at Google’s disposal has evidently contributed to Gemini’s superior performance.

Google’s Vice President and Manager of Bard and Google Assistant, Sissie Hsiao, offered insights into Gemini’s capabilities, citing examples like generating novel images in response to specific requests, such as illustrating the steps to ice a three-layer cake.

While Google’s current generative AI offering, Bard, has showcased noteworthy accomplishments, it has struggled to achieve the same level of consumer awareness as ChatGPT. Gemini, with its unparalleled capabilities, is expected to be a game-changer, demonstrating impressive multimodal functionalities never seen before.

During the initial announcement at Google’s I/O developer conference in May, the company emphasized Gemini’s multimodal prowess and its developer-friendly nature. An application programming interface (API) is under development, allowing developers to seamlessly integrate Gemini into third-party applications.
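
For a sense of what such third-party integration looks like in practice, here is a minimal sketch using the `google-generativeai` Python package Google later shipped; at the time of this article the API was not yet public, so the package, model name, and prompt are illustrative rather than anything announced at I/O.

```python
# A minimal sketch of calling Gemini from a third-party application via
# the `google-generativeai` package (pip install google-generativeai).
# The API key, model name, and prompt are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content(
    "Describe the steps to ice a three-layer cake."
)
print(response.text)
```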

As the world awaits the delayed unveiling of Gemini, the stakes are high, with Google aiming to revolutionize the AI landscape and solidify its position as a leader in generative artificial intelligence. The postponed launch only adds to the anticipation surrounding Gemini’s eventual debut in the coming year.

