NEWS

Europe’s top court sets new line on policing illegal speech online


Europe’s top court has set a new line for the policing of illegal speech online. The ruling has implications for how speech is regulated on online platforms — and is likely to feed into wider planned reform of regional rules governing platforms’ liabilities.

Per the decision from the Court of Justice of the European Union (CJEU), platforms such as Facebook can be instructed to hunt for and remove illegal speech worldwide — including speech that’s “equivalent” to content already judged illegal.

Any such takedowns must, however, remain within the framework of “relevant international law”.

So in practice that does not mean a court order issued in one EU country will be universally applied in all jurisdictions, as there’s no international agreement on what constitutes unlawful speech, or even, more narrowly, defamatory speech.

Existing EU rules on the free flow of information on ecommerce platforms — aka the eCommerce Directive — which state that Member States cannot force a “general content monitoring obligation” on intermediaries, do not preclude courts from ordering platforms to remove or block illegal speech, the court has decided.

That decision worries free speech advocates who are concerned it could open the door to general monitoring obligations being placed on tech platforms in the region, with the risk of a chilling effect on freedom of expression.

Facebook has also expressed concern. Responding to the ruling in a statement, a spokesperson told us:

“This judgement raises critical questions around freedom of expression and the role that internet companies should play in monitoring, interpreting and removing speech that might be illegal in any particular country. At Facebook, we already have Community Standards which outline what people can and cannot share on our platform, and we have a process in place to restrict content if and when it violates local laws. This ruling goes much further. It undermines the long-standing principle that one country does not have the right to impose its laws on speech on another country. It also opens the door to obligations being imposed on internet companies to proactively monitor content and then interpret if it is “equivalent” to content that has been found to be illegal. In order to get this right national courts will have to set out very clear definitions on what “identical” and “equivalent” means in practice. We hope the courts take a proportionate and measured approach, to avoid having a chilling effect on freedom of expression.”

The legal questions were referred to the CJEU by a court in Austria, and stem from a defamation action brought by Austrian Green Party politician, Eva Glawischnig, who in 2016 filed suit against Facebook after the company refused to take down posts she claimed were defamatory against her.

In 2017 an Austrian court ruled Facebook should take the defamatory posts down and do so worldwide. However, Glawischnig also wanted it to remove similar posts, not just identical reposts of the illegal speech, arguing they were equally defamatory.

The current situation, in which platforms require notice of illegal content before carrying out a takedown, is problematic from one perspective, given the scale and speed of content distribution on digital platforms — which can make it impossible to keep up with reporting re-postings.

Facebook’s platform also has closed groups where content can be shared out of sight of non-members, and where an individual could therefore have no ability to see unlawful content that’s targeted at them — making it essentially impossible for them to report it.

While the case concerns the scope of the application of defamation law on Facebook’s platform, the ruling clearly has broader implications for regulating a range of “unlawful” content online.

Specifically the CJEU has ruled that an information society service “host provider” can be ordered to:

  • … remove information which it stores, the content of which is identical to the content of information which was previously declared to be unlawful, or to block access to that information, irrespective of who requested the storage of that information;
  • … remove information which it stores, the content of which is equivalent to the content of information which was previously declared to be unlawful, or to block access to that information, provided that the monitoring of and search for the information concerned by such an injunction are limited to information conveying a message the content of which remains essentially unchanged compared with the content which gave rise to the finding of illegality and containing the elements specified in the injunction, and provided that the differences in the wording of that equivalent content, compared with the wording characterising the information which was previously declared to be illegal, are not such as to require the host provider to carry out an independent assessment of that content;
  • … remove information covered by the injunction or to block access to that information worldwide within the framework of the relevant international law.

The court has sought to balance the requirement under EU law of no general monitoring obligation on platforms with the ability of national courts to regulate information flow online in specific instances of illegal speech.

In the judgement the CJEU also invokes the idea of Member States being able to “apply duties of care, which can reasonably be expected from them and which are specified by national law, in order to detect and prevent certain types of illegal activities” — saying the eCommerce Directive does not stand in the way of states imposing such a requirement.

Some European countries are showing appetite for tighter regulation of online platforms. In the UK, for instance, the government laid out proposals for regulating a broad range of online harms earlier this year. Two years ago, Germany introduced a law to regulate hate speech takedowns on online platforms.

Over the past several years the European Commission has also kept up pressure on platforms to speed up takedowns of illegal content — signing tech companies up to a voluntary code of practice, back in 2016, and continuing to warn it could introduce legislation if targets are not met.

Today’s ruling is thus being interpreted in some quarters as opening the door to a wider reform of EU platform liability law by the incoming Commission — which could allow for imposing more general monitoring or content-filtering obligations, aligned with Member States’ security or safety priorities.

“We can trace worrying content blocking tendencies in Europe,” says Sebastian Felix Schwemer, a researcher in algorithmic content regulation and intermediary liability at the University of Copenhagen. “The legislator has earlier this year introduced proactive content filtering by platforms in the Copyright DSM Directive (“uploadfilters”) and similarly suggested in a Proposal for a Regulation on Terrorist Content as well as in a non-binding Recommendation from March last year.”

Critics of a controversial copyright reform — which was agreed by European legislators earlier this year — have warned consistently that it will result in tech platforms pre-filtering user generated content uploads. Although the full impact remains to be seen, as Member States have two years from April 2019 to pass legislation meeting the Directive’s requirements.

In 2018 the Commission also introduced a proposal for a regulation on preventing the dissemination of terrorist content online — which explicitly included a requirement for platforms to use filters to identify and block re-uploads of illegal terrorist content. Though the filter element was challenged in the EU parliament.

“There is little case law on the question of general monitoring (prohibited according to Article 15 of the E-Commerce Directive), but the question is highly topical,” says Schwemer. “Both towards the trend towards proactive content filtering by platforms and the legislator’s push for these measures (Article 17 in the Copyright DSM Directive, Terrorist Content Proposal, the Commission’s non-binding Recommendation from last year).”

Schwemer agrees the CJEU ruling will have “a broad impact” on the behavior of online platforms — going beyond Facebook and the application of defamation law.

“The incoming Commission is likely to open up the E-Commerce Directive (there is a leaked concept note by DG Connect from before the summer),” he suggests. “Something that has previously been perceived as opening Pandora’s Box. The decision will also play into the coming lawmaking process.”

The ruling also naturally raises the question of what constitutes “equivalent” unlawful content, and of who will judge that, and how.

The CJEU goes into some detail on “specific elements” it says are needed for non-identical illegal speech to be judged equivalently unlawful, and also on the limits of the burden that should be placed on platforms so they are not under a general obligation to monitor content — ultimately implying that technology filters, not human assessments, should be used to identify equivalent speech.

From the judgement:

… it is important that the equivalent information referred to in paragraph 41 above contains specific elements which are properly identified in the injunction, such as the name of the person concerned by the infringement determined previously, the circumstances in which that infringement was determined and equivalent content to that which was declared to be illegal. Differences in the wording of that equivalent content, compared with the content which was declared to be illegal, must not, in any event, be such as to require the host provider concerned to carry out an independent assessment of that content.

In those circumstances, an obligation such as the one described in paragraphs 41 and 45 above, on the one hand — in so far as it also extends to information with equivalent content — appears to be sufficiently effective for ensuring that the person targeted by the defamatory statements is protected. On the other hand, that protection is not provided by means of an excessive obligation being imposed on the host provider, in so far as the monitoring of and search for information which it requires are limited to information containing the elements specified in the injunction, and its defamatory content of an equivalent nature does not require the host provider to carry out an independent assessment, since the latter has recourse to automated search tools and technologies.

“The Court’s thoughts on the filtering of ‘equivalent’ information are interesting,” Schwemer continues. “It boils down to that platforms can be ordered to track down illegal content, but only under specific circumstances.

“In its rather short judgement, the Court comes to the conclusion… that it is no general monitoring obligation on hosting providers to remove or block equivalent content. That is provided that the search of information is limited to essentially unchanged content and that the hosting provider does not have to carry out an independent assessment but can rely on automated technologies to detect that content.”

While he says the court’s intentions — to “limit defamation” — are “good” he points out that “relying on filtering technologies is far from unproblematic”.

Filters can indeed be an extremely blunt tool. Even basic text filters can be triggered by innocent words that happen to contain a prohibited spelling. And applying filters to block defamatory speech could inadvertently block lawful reactions that quote the unlawful speech, as the sketch below illustrates.
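
A minimal sketch of the problem in Python — the blocked terms and sample posts below are invented purely for illustration:

```python
# Naive substring filtering: the bluntest form of automated takedown.
# The blocklist and posts are hypothetical examples, not real content.
BLOCKED_TERMS = ["hell", "defamatory claim about x"]

def is_blocked(post: str) -> bool:
    """Flag a post if it contains any blocked term as a substring."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

print(is_blocked("hello, world"))
# True: "hell" matches inside an innocent word (a false positive)

print(is_blocked('It is false to say there is a "defamatory claim about X" here'))
# True: a lawful rebuttal quoting the unlawful speech is swept up with it
```

Production moderation systems layer tokenization, context models and human review on top of matching like this, but the underlying trade-off between over-blocking and under-blocking does not go away.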

The ruling also means platforms and/or their technology tools are being compelled to define the limits of free expression under threat of liability. Which pushes them towards setting a more conservative line on what’s acceptable expression on their platforms — in order to shrink their legal risk.

Although definitions of what is unlawful speech, and what is equivalently unlawful, will ultimately rest with courts.

It’s worth pointing out that platforms are already defining speech limits — just driven by their own economic incentives.

For ad supported platforms, these incentives typically demand maximizing engagement and time spent on the platform — which tends to encourage users to spread provocative/outrageous content.

That can sum to clickbait and junk news. Equally it can mean the most hateful stuff under the sun.

Without a new online business model paradigm that radically shifts the economic incentives around content creation on platforms the tension between freedom of expression and illegal hate speech will remain. As will the general content monitoring obligation such platforms place on society.


NEWS

OpenAI Introduces Fine-Tuning for GPT-4, Enabling Customized AI Models


OpenAI has today announced the release of fine-tuning capabilities for its flagship GPT-4 large language model, marking a significant milestone in the AI landscape. This new functionality empowers developers to create tailored versions of GPT-4 to suit specialized use cases, enhancing the model’s utility across various industries.

Fine-tuning has long been a desired feature for developers who require more control over AI behavior, and with this update, OpenAI delivers on that demand. The ability to fine-tune GPT-4 allows businesses and developers to refine the model’s responses to better align with specific requirements, whether for customer service, content generation, technical support, or other unique applications.

Why Fine-Tuning Matters

GPT-4 is a very flexible model that can handle many different tasks. However, some businesses and developers need more specialized AI that matches their specific language, style, and needs. Fine-tuning helps with this by letting them adjust GPT-4 using custom data. For example, companies can train a fine-tuned model to keep a consistent brand tone or focus on industry-specific language.
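
To make the idea concrete, here is a sketch of what such custom data can look like, using the JSONL chat format OpenAI documents for fine-tuning; the brand and the example replies are hypothetical:

```python
import json

# Hypothetical training examples teaching a consistent brand tone.
# Each line of the JSONL file holds one example conversation.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Acme Support: friendly, concise, on-brand."},
            {"role": "user", "content": "My order hasn't arrived yet."},
            {"role": "assistant", "content": "Sorry about the wait! Let's track that order down together."},
        ]
    },
    # ...more conversations in the same shape...
]

with open("brand_tone.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```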

Fine-tuning also offers improvements in areas like response accuracy and context comprehension. For use cases where nuanced understanding or specialized knowledge is crucial, this can be a game-changer. Models can be taught to better grasp intricate details, improving their effectiveness in sectors such as legal analysis, medical advice, or technical writing.

Key Features of GPT-4 Fine-Tuning

The fine-tuning process leverages OpenAI’s established tools, but now it is optimized for GPT-4’s advanced architecture. Notable features include:

  • Enhanced Customization: Developers can precisely influence the model’s behavior and knowledge base.
  • Consistency in Output: Fine-tuned models can be made to maintain consistent formatting, tone, or responses, essential for professional applications.
  • Higher Efficiency: Compared to training models from scratch, fine-tuning GPT-4 allows organizations to deploy sophisticated AI with reduced time and computational cost.

Additionally, OpenAI has emphasized ease of use with this feature. The fine-tuning workflow is designed to be accessible even to teams with limited AI experience, reducing barriers to customization. For more advanced users, OpenAI provides granular control options to achieve highly specialized outputs.
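
As a rough sketch of that workflow using OpenAI’s Python SDK, assuming a training file like the one above (the model snapshot name is also an assumption; check which fine-tunable GPT-4 snapshots your account offers):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the JSONL training data prepared earlier
training_file = client.files.create(
    file=open("brand_tone.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job against an assumed model snapshot
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)  # poll the job until it reports "succeeded"
```

Once the job succeeds, the fine-tuned model gets its own model ID, which is then used like any other model name in chat completion requests.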

Implications for the Future

The launch of fine-tuning capabilities for GPT-4 signals a broader shift toward more user-centric AI development. As businesses increasingly adopt AI, the demand for models that can cater to specific business needs, without compromising on performance, will continue to grow. OpenAI’s move positions GPT-4 as a flexible and adaptable tool that can be refined to deliver optimal value in any given scenario.

By offering fine-tuning, OpenAI not only enhances GPT-4’s appeal but also reinforces the model’s role as a leading AI solution across diverse sectors. From startups seeking to automate niche tasks to large enterprises looking to scale intelligent systems, GPT-4’s fine-tuning capability provides a powerful resource for driving innovation.

OpenAI announced that fine-tuning GPT-4o will cost $25 for every million tokens used during training. After the model is set up, it will cost $3.75 per million input tokens and $15 per million output tokens. To help developers get started, OpenAI is offering 1 million free training tokens per day for GPT-4o and 2 million free tokens per day for GPT-4o mini until September 23. This makes it easier for developers to try out the fine-tuning service.
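
To put those numbers in perspective, here is a quick back-of-the-envelope calculation; the token volumes are invented for illustration:

```python
# Prices quoted above, converted to dollars per token
TRAIN_PRICE = 25.00 / 1_000_000    # training tokens
INPUT_PRICE = 3.75 / 1_000_000     # input tokens after deployment
OUTPUT_PRICE = 15.00 / 1_000_000   # output tokens after deployment

# Hypothetical job: 2M training tokens, then 10M input / 2M output per month
training_cost = 2_000_000 * TRAIN_PRICE                             # $50.00, one-time
monthly_cost = 10_000_000 * INPUT_PRICE + 2_000_000 * OUTPUT_PRICE  # $37.50 + $30.00

print(f"training: ${training_cost:.2f}")  # training: $50.00
print(f"monthly:  ${monthly_cost:.2f}")   # monthly:  $67.50
```

At those volumes, the free daily training tokens OpenAI is offering until September 23 would offset much of the one-time training cost.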

As AI continues to evolve, OpenAI’s focus on customization and adaptability with GPT-4 represents a critical step in making advanced AI accessible, scalable, and more aligned with real-world applications. This new capability is expected to accelerate the adoption of AI across industries, creating a new wave of AI-driven solutions tailored to specific challenges and opportunities.


GOOGLE

This Week in Search News: Simple and Easy-to-Read Update


Here’s what happened in the world of Google and search engines this week:

1. Google’s June 2024 Spam Update

Google finished rolling out its June 2024 spam update over a period of seven days. This update aims to reduce spammy content in search results.

2. Changes to Google Search Interface

Google has removed the continuous scroll feature for search results. Instead, it’s back to the old system of pages.

3. New Features and Tests

  • Link Cards: Google is testing link cards at the top of AI-generated overviews.
  • Health Overviews: There are more AI-generated health overviews showing up in search results.
  • Local Panels: Google is testing AI overviews in local information panels.

4. Search Rankings and Quality

  • Improving Rankings: Google said it can improve its search ranking system, but only at a large scale, not for individual sites.
  • Measuring Quality: Google’s Elizabeth Tucker shared how they measure search quality.

5. Advice for Content Creators

  • Brand Names in Reviews: Google advises against avoiding brand names in review content.
  • Fixing 404 Pages: Google explained when it’s important to fix 404 error pages.

6. New Search Features in Google Chrome

Google Chrome for mobile devices has added several new search features to enhance user experience.

7. New Tests and Features in Google Search

  • Credit Card Widget: Google is testing a new widget for credit card information in search results.
  • Sliding Search Results: When making a new search query, the results might slide to the right.

8. Bing’s New Feature

Bing is now using AI to write “People Also Ask” questions in search results.

9. Local Search Ranking Factors

Menu items and popular times might be factors that influence local search rankings on Google.

10. Google Ads Updates

  • Query Matching and Brand Controls: Google Ads updated its query matching and brand controls, and advertisers are happy with these changes.
  • Lead Credits: Google will automate lead credits for Local Service Ads. Google says this is a good change, but some advertisers are worried.
  • tROAS Insights Box: Google Ads is testing a new insights box for tROAS (Target Return on Ad Spend) in Performance Max and Standard Shopping campaigns.
  • WordPress Tag Code: There is a new conversion code for Google Ads on WordPress sites.

These updates highlight how Google and other search engines are continuously evolving to improve user experience and provide better advertising tools.


FACEBOOK

Facebook Faces Yet Another Outage: Platform Encounters Technical Issues Again


Updated: It seems that today’s issues with Facebook haven’t affected as many users as the last time. A smaller group of people appears to be impacted this time around, which is a relief compared to the larger incident before. Nevertheless, it’s still frustrating for those affected, and hopefully the issues will be resolved soon by the Facebook team.

Facebook had another problem today (March 20, 2024). According to Downdetector, a website that tracks outages at other sites, many people had trouble using Facebook.

This isn’t the first time Facebook has had issues. Just a little while ago, there was another problem that stopped people from using the site. Today, when people tried to use Facebook, it didn’t work as it should: people couldn’t see their friends’ posts, and sometimes the website wouldn’t even load.

Downdetector showed that lots of people were having trouble with Facebook. People from all over the world said they couldn’t use the site, and they were not happy about it.

When websites like Facebook have problems, it affects a lot of people. It’s not just about not being able to see posts or chat with friends. It can also impact businesses that use Facebook to reach customers.

Since Facebook owns Messenger and Instagram, the problems with Facebook also meant that people had trouble using these apps. It made the situation even more frustrating for many users, who rely on these apps to stay connected with others.

One thing is obvious from this recent problem: the internet is always changing, and even big websites like Facebook can have problems. While people wait for Facebook to fix the issue, it shows us how easily things online can go wrong. It’s a good reminder that we should have backup plans for staying connected online, just in case something like this happens again.

