NEWS
UK Online Harms Bill, coming next year, will propose fines of up to 10% of annual turnover for breaching duty of care rules
The UK is moving ahead with a populist but controversial plan to regulate a wide range of illegal and/or harmful content almost anywhere online where such content might pose a risk to children. The government has set out its final response to the consultation it kicked off back in April 2019 — committing to introduce an Online Safety Bill next year.
“Tech platforms will need to do far more to protect children from being exposed to harmful content or activity such as grooming, bullying and pornography. This will help make sure future generations enjoy the full benefits of the internet with better protections in place to reduce the risk of harm,” it said today.
In an earlier, partial response to the consultation on its Online Harms white paper, ministers confirmed the UK’s media regulator, Ofcom, as their pick for enforcing the forthcoming rules.
Under the plans announced today, the government said Ofcom will be able to levy fines of up to 10% of a company’s annual global turnover (or £18M, whichever is higher) on those that are deemed to have failed in their duty of care to protect impressionable eyeballs from being exposed to illegal material — such as child sexual abuse, terrorist material or suicide-promoting content.
Ofcom will also have the power to block non-compliant services from being accessed in the UK — although it’s not clear how exactly that will be achieved (or whether the legislation will seek to prevent VPNs being used by Brits to access blocked Internet services).
The regulator’s running costs will be paid by companies that fall under the scope of the law, above a threshold based on global annual revenue, per the government, although it’s not yet clear where that pay-bar will kick in (nor how much tech giants and others will have to stump up for the cost of the oversight).
The online safety ‘duty of care’ rules are intended to cover not just social media giants like Facebook but a very wide range of Internet services — from dating apps and search engines to online marketplaces, video sharing platforms and instant messaging tools, as well as consumer cloud storage and even video games that allow relevant user interaction.
P2P services, online forums and pornography websites will also fall under the scope of the laws, as will quasi-private messaging services, according to a government press release.
That raises troubling questions about whether the legal requirements could put pressure on companies not to use end-to-end encryption (i.e. if they face being penalized for not being able to monitor robustly encrypted content for illegal material).
“The new regulations will apply to any company in the world hosting user-generated content online accessible by people in the UK or enabling them to privately or publicly interact with others online,” the government writes in a press release.
The rules will include different categories of responsibility for content and activity — with a top tier (category 1) only applying to companies with “the largest online presences and high-risk features” which the government said is likely to include Facebook, TikTok, Instagram and Twitter.
“These companies will need to assess the risk of legal content or activity on their services with ‘a reasonably foreseeable risk of causing significant physical or psychological harm to adults’. They will then need to make clear what type of ‘legal but harmful’ content is acceptable on their platforms in their terms and conditions and enforce this transparently and consistently,” it said.
Category 1 companies will also have a legal requirement to publish transparency reports about the steps they are taking to tackle online harms, per the government’s PR.
Meanwhile, all companies that fall under the scope of the law will be required to have mechanisms so people can easily report harmful content or activity, as well as appeal the takedown of content, it added.
The government believes that less than three per cent of UK businesses will fall within the scope of the legislation — adding that “the vast majority” will be Category 2 services.
Protections for free speech are also slated as being baked in — with the government saying the laws will not affect articles and comments sections on news websites, for example.
The legislation will contain provisions to impose criminal sanctions on senior managers (introduced by parliament via secondary legislation). On this the government added that it will not hesitate to use the power if companies fail to take the new rules seriously (such as by not responding “fully, accurately and in a timely manner” to information requests from Ofcom).
Commenting on the plans in a statement, digital secretary Oliver Dowden said: “I’m unashamedly pro tech but that can’t mean a tech free for all. Today Britain is setting the global standard for safety online with the most comprehensive approach yet to online regulation. We are entering a new age of accountability for tech to protect children and vulnerable users, to restore trust in this industry, and to enshrine in law safeguards for free speech.
“This proportionate new framework will ensure we don’t put unnecessary burdens on small businesses but give large digital businesses robust rules of the road to follow so we can seize the brilliance of modern technology to improve our lives.”
In another supporting statement, home secretary Priti Patel added: “Tech companies must put public safety first or face the consequences.”
Also commenting, Ofcom CEO, Dame Melanie Dawes, welcomed its new broader oversight remit, adding in a statement that: “Being online brings huge benefits, but four in five people have concerns about it. That shows the need for sensible, balanced rules that protect users from serious harm, but also recognise the great things about online, including free expression. We’re gearing up for the task by acquiring new technology and data skills, and we’ll work with Parliament as it finalises the plans.”
The government has said it will publish Interim Codes of Practice today to provide guidance for companies on tackling terrorist activity and online child sexual exploitation prior to the introduction of legislation — which is unlikely to make it into law before late 2021 at the earliest to allow adequate time for parliamentary debate and scrutiny.
And while a noisy political push to ‘protect kids’ online can expect to enjoy plenty of tabloid-level support, the wide-ranging application of the duty of care rules the government is envisaging — with large swathes of the UK’s tech sector set to be impacted — means ministers can expect to attract plenty of homegrown criticism too, from business groups, entrepreneurs and investors and legal and policy experts, including over specific concerns about knock-on impacts on privacy and security.
Its plan to push ahead with an Online Safety Bill that will impact scores of smaller digital businesses, instead of zeroing in on the handful of platform giants that are responsible for generating high volumes of harms, has already attracted criticism from the tech sector.
Coadec, a digital policy group that advocates for startups and the UK tech sector, branded the plan “a confusing minefield” for entrepreneurs — arguing it will do the opposite of fostering digital competition, counteracting other measures recently announced by the government in response to concerns about market concentration in the digital advertising sphere.
“Last week the Government announced a new unit within the CMA [Competition and Markets Authority] to promote greater competition within digital markets. Days later they have announced regulatory measures that risk having the opposite effect,” said Dom Hallas, Coadec’s executive director in a statement. “86% of UK investors say that regulation aiming to tackle big tech could lead to poor outcomes that damage tech startups and limit competition — these plans risk being a confusing minefield that will have a disproportionate impact on competitors and benefit big companies with the resources to comply.”
“British startups want a safer internet. But it’s not clear how these proposals, which still cover a huge range of services that are nowhere near social media from ecommerce to the sharing economy, are better targeted than the last time government published proposals nearly a year and a half ago,” he added. “Until the Government starts to work collaboratively instead of consistently threatening startup founders with jail time it’s not clear how we’re going to deliver proposals that work.”
One gap in the government’s proposal is financial harms — with issues such as fraud and the sale of unsafe goods explicitly excluded from the framework (as it says it wants the regulations to be “clear and manageable” for businesses and to avoid the risk of duplicating existing rules).
Some “lower-risk” services may also be exempt from the duty of care requirement, per the government, to avoid the law being overly burdensome.
Email services will also not be in scope, it confirmed.
And while it says some types of advertising will be in scope (such as influencer ads posted on social media), ads placed on an in-scope service via a direct contract between an advertiser and an advertising service (such as Facebook or Google Ads) will be exempt because “this is covered by existing regulation” — which looks set to let the adtech duopoly off the harmful ads hook without a clear reason.
After all, existing UK regulations do not seem to have done much to stem the tide of crypto scam ads running on Facebook (or served via Google’s ad tools) in recent years — which led to a campaign by a consumer advice personality to get Facebook and other companies to clean up their act, for example.
Consumer group Which? has criticized the lack of government attention to financial scams in the Online Safety Bill. In a response statement, Rocio Concha, its director of policy and advocacy, said: “It’s positive that the government is recognising the responsibility of online platforms to protect users, but it would be a big missed opportunity if online scams were not dealt with through the upcoming bill. Our research has shown the financial and emotional toll of scams and that social media firms such as Facebook and search engines like Google need to do much more to protect users.
“We look forward to the detail and hope to see a clear plan to give online platforms greater responsibility for fraudulent content on their sites, including having in place better controls to prevent fake adverts from appearing, so that all users can be confident that they will truly be safe online.”
European Union lawmakers are due to unveil their own pan-EU policy package to regulate illegal and harmful content later today — but the Digital Services Act will tackle the sale of illegal goods online as well as propose to harmonize rules for reporting troublesome content on online services.
NEWS
OpenAI Introduces Fine-Tuning for GPT-4, Enabling Customized AI Models
OpenAI has today announced the release of fine-tuning capabilities for its flagship GPT-4 large language model, marking a significant milestone in the AI landscape. This new functionality empowers developers to create tailored versions of GPT-4 to suit specialized use cases, enhancing the model’s utility across various industries.
Fine-tuning has long been a desired feature for developers who require more control over AI behavior, and with this update, OpenAI delivers on that demand. The ability to fine-tune GPT-4 allows businesses and developers to refine the model’s responses to better align with specific requirements, whether for customer service, content generation, technical support, or other unique applications.
Why Fine-Tuning Matters
GPT-4 is a very flexible model that can handle many different tasks. However, some businesses and developers need more specialized AI that matches their specific language, style, and needs. Fine-tuning helps with this by letting them adjust GPT-4 using custom data. For example, companies can train a fine-tuned model to keep a consistent brand tone or focus on industry-specific language.
Fine-tuning also offers improvements in areas like response accuracy and context comprehension. For use cases where nuanced understanding or specialized knowledge is crucial, this can be a game-changer. Models can be taught to better grasp intricate details, improving their effectiveness in sectors such as legal analysis, medical advice, or technical writing.
Key Features of GPT-4 Fine-Tuning
The fine-tuning process leverages OpenAI’s established tools, but now it is optimized for GPT-4’s advanced architecture. Notable features include:
- Enhanced Customization: Developers can precisely influence the model’s behavior and knowledge base.
- Consistency in Output: Fine-tuned models can be made to maintain consistent formatting, tone, or responses, essential for professional applications.
- Higher Efficiency: Compared to training models from scratch, fine-tuning GPT-4 allows organizations to deploy sophisticated AI with reduced time and computational cost.
Additionally, OpenAI has emphasized ease of use with this feature. The fine-tuning workflow is designed to be accessible even to teams with limited AI experience, reducing barriers to customization. For more advanced users, OpenAI provides granular control options to achieve highly specialized outputs.
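The announcement doesn’t walk through the developer workflow in code, but as rough orientation, here is a minimal sketch of what a fine-tuning run could look like with the OpenAI Python SDK; the training-file name, example data and base-model identifier are assumptions for illustration rather than details from the announcement:

```python
# Minimal sketch of a fine-tuning run with the OpenAI Python SDK (v1.x).
# The file name and base-model identifier below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload a JSONL file of chat-formatted training examples, each line shaped
#    like {"messages": [{"role": "user", ...}, {"role": "assistant", ...}]}.
training_file = client.files.create(
    file=open("brand_tone_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start the fine-tuning job against a fine-tunable GPT-4 family base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # assumed model name; check OpenAI's docs for eligible models
)
print("job:", job.id, "status:", job.status)

# 3. Once the job completes, the returned model can be used for inference:
# client.chat.completions.create(
#     model=job.fine_tuned_model,
#     messages=[{"role": "user", "content": "Draft an on-brand support reply."}],
# )
```

Because the job runs asynchronously, developers would typically poll client.fine_tuning.jobs.retrieve(job.id) or watch the dashboard until the fine_tuned_model field is populated before switching inference traffic over.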
Implications for the Future
The launch of fine-tuning capabilities for GPT-4 signals a broader shift toward more user-centric AI development. As businesses increasingly adopt AI, the demand for models that can cater to specific business needs, without compromising on performance, will continue to grow. OpenAI’s move positions GPT-4 as a flexible and adaptable tool that can be refined to deliver optimal value in any given scenario.
By offering fine-tuning, OpenAI not only enhances GPT-4’s appeal but also reinforces the model’s role as a leading AI solution across diverse sectors. From startups seeking to automate niche tasks to large enterprises looking to scale intelligent systems, GPT-4’s fine-tuning capability provides a powerful resource for driving innovation.
OpenAI announced that fine-tuning GPT-4o will cost $25 for every million tokens used during training. After the model is set up, it will cost $3.75 per million input tokens and $15 per million output tokens. To help developers get started, OpenAI is offering 1 million free training tokens per day for GPT-4o and 2 million free tokens per day for GPT-4o mini until September 23. This makes it easier for developers to try out the fine-tuning service.
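Taken at face value, those prices make rough budgeting straightforward. The short sketch below, using made-up token counts purely for illustration, shows how a team could estimate the one-off training cost and the ongoing inference cost:

```python
# Back-of-the-envelope cost estimate using the reported GPT-4o fine-tuning prices.
# The token counts are made-up example figures, not OpenAI numbers.
TRAIN_PER_M = 25.00    # $ per 1M training tokens
INPUT_PER_M = 3.75     # $ per 1M input tokens once the model is deployed
OUTPUT_PER_M = 15.00   # $ per 1M output tokens once the model is deployed

training_tokens = 2_000_000   # e.g. a 2M-token training set
monthly_input = 10_000_000    # e.g. 10M prompt tokens per month
monthly_output = 3_000_000    # e.g. 3M completion tokens per month

training_cost = training_tokens / 1_000_000 * TRAIN_PER_M             # $50.00 one-off
monthly_inference = (monthly_input / 1_000_000 * INPUT_PER_M           # $37.50
                     + monthly_output / 1_000_000 * OUTPUT_PER_M)      # + $45.00 = $82.50

print(f"training: ${training_cost:.2f}, monthly inference: ${monthly_inference:.2f}")
```

On these assumed volumes, training would come to about $50 up front and inference to roughly $82.50 a month, before accounting for the temporary free daily training-token allowance.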
As AI continues to evolve, OpenAI’s focus on customization and adaptability with GPT-4 represents a critical step in making advanced AI accessible, scalable, and more aligned with real-world applications. This new capability is expected to accelerate the adoption of AI across industries, creating a new wave of AI-driven solutions tailored to specific challenges and opportunities.
This Week in Search News: Simple and Easy-to-Read Update
Here’s what happened in the world of Google and search engines this week:
1. Google’s June 2024 Spam Update
Google finished rolling out its June 2024 spam update over a period of seven days. This update aims to reduce spammy content in search results.
2. Changes to Google Search Interface
Google has removed the continuous scroll feature for search results. Instead, it’s back to the old system of pages.
3. New Features and Tests
- Link Cards: Google is testing link cards at the top of AI-generated overviews.
- Health Overviews: There are more AI-generated health overviews showing up in search results.
- Local Panels: Google is testing AI overviews in local information panels.
4. Search Rankings and Quality
- Improving Rankings: Google said it can improve its search ranking system but will only do so on a large scale.
- Measuring Quality: Google’s Elizabeth Tucker shared how they measure search quality.
5. Advice for Content Creators
- Brand Names in Reviews: Google advises that review content should mention brand names rather than avoid them.
- Fixing 404 Pages: Google explained when it’s important to fix 404 error pages.
6. New Search Features in Google Chrome
Google Chrome for mobile devices has added several new search features to enhance user experience.
7. New Tests and Features in Google Search
- Credit Card Widget: Google is testing a new widget for credit card information in search results.
- Sliding Search Results: When making a new search query, the results might slide to the right.
8. Bing’s New Feature
Bing is now using AI to write “People Also Ask” questions in search results.
9. Local Search Ranking Factors
Menu items and popular times might be factors that influence local search rankings on Google.
10. Google Ads Updates
- Query Matching and Brand Controls: Google Ads updated its query matching and brand controls, and advertisers are happy with these changes.
- Lead Credits: Google will automate lead credits for Local Service Ads. Google says this is a good change, but some advertisers are worried.
- tROAS Insights Box: Google Ads is testing a new insights box for tROAS (Target Return on Ad Spend) in Performance Max and Standard Shopping campaigns.
- WordPress Tag Code: There is a new conversion code for Google Ads on WordPress sites.
These updates highlight how Google and other search engines are continuously evolving to improve user experience and provide better advertising tools.
Facebook Faces Yet Another Outage: Platform Encounters Technical Issues Again
Updated: It seems that today’s issues with Facebook haven’t affected as many users as the last time. A smaller group of people appears to be impacted this time around, which is a relief compared to the larger incident before. Nevertheless, it’s still frustrating for those affected, and hopefully the issues will be resolved soon by the Facebook team.
Facebook had another problem today (March 20, 2024). According to Downdetector, a website that shows when other websites are not working, many people had trouble using Facebook.
This isn’t the first time Facebook has had issues. Just a little while ago, there was another problem that stopped people from using the site. Today, when people tried to use Facebook, it didn’t work as it should. People couldn’t see their friends’ posts, and sometimes the website wouldn’t even load.
Downdetector, which watches out for problems on websites, showed that lots of people were having trouble with Facebook. People from all over the world said they couldn’t use the site, and they were not happy about it.
When websites like Facebook have problems, it affects a lot of people. It’s not just about not being able to see posts or chat with friends. It can also impact businesses that use Facebook to reach customers.
Since Facebook owns Messenger and Instagram, the problems with Facebook also meant that people had trouble using these apps. It made the situation even more frustrating for many users, who rely on these apps to stay connected with others.
One thing is obvious from this recent problem: the internet is always changing, and even big websites like Facebook can have problems. While people wait for Facebook to fix the issue, it shows how easily things online can go wrong. It’s a good reminder that we should have backup plans for staying connected online, just in case something like this happens again.