TECHNOLOGY

We Need to Develop IoT Standards and Protocols to Protect Smart Homes


Effective IoT standards and protocols are crucial for smart home safety today.

Devices are increasingly being targeted in cyberattacks, which have resulted in injury, harassment and invasions of user privacy. Security standards and protocols for IoT devices could help address the threats facing smart home product users. 

Rising IoT Security Threats

Smart home devices have increased in popularity significantly over the last few years. They’re becoming more affordable and accessible, and more people are buying homes amid a national housing shortage in the U.K. Unfortunately, as smart home devices become more widely used, hackers are taking notice. 

Over recent years, there have been more headlines about smart home devices being hacked, remotely accessed, or used to harass and terrorize people in their homes. For instance, Ring came under fire in 2020 when dozens of people reported that its smart doorbells and security cameras had been remotely controlled. 

Victims of the hacks included babies, children and older adults. Hackers took control of the users’ smart home security cameras and used them to spy on the residents and even talk to them. Sadly, these cases are becoming more common. 

Other hacks on smart home devices, mainly cameras, have resulted in dangerous and disruptive police raid pranks. In these cases, the hacker uses a smart home security camera to livestream a raid on an innocent homeowner after calling in a fake alert to law enforcement. 

The Current State of IoT Standards and Protocols

Something must be done to ensure smart home devices are safe from cyberattacks. IoT items tend to get overlooked in cybersecurity efforts because they are not smartphones or computers. They might seem harmless enough, but they are still connected to the internet, which means they are at risk of being hacked. 

Smart home cybersecurity is still a relatively new field, though. IoT standards and protocols are crucial to securing these devices. However, creating any standardization is a challenge when different device brands are not compatible with one another. Some of the biggest companies are coming together to change that. 

Google, Amazon, Apple and other major smart home device manufacturers are backing a standardization effort known as Matter, which will establish basic compatibility standards across all new smart home devices. Matter is the first major step toward smart home IoT standards and protocols. By committing to these standards, smart home device manufacturers will ensure their IoT devices are compatible with all other compliant items. 

This is great news for consumers, but it could also help strengthen IoT security. Matter has developed a set of privacy principles that all member manufacturers must comply with. These guidelines include minimizing data collection and storage, layering authentication methods, and providing secure firmware updates. The basic standards established by Matter will also make it easier for developers to create effective cybersecurity applications for smart homes. 

Strategies for Protecting IoT Devices

Matter standardization will go a long way toward improving compatibility between smart home devices and strengthening basic security. However, homeowners may wonder what they can do to protect their IoT devices now. Luckily, users can easily implement strategies to improve their smart home security. 

Secure the Home Network

Homeowners should start by securing their home Wi-Fi network. This is especially important for anyone working remotely. Home network security must be a top priority for remote employees since they use their network to send and receive important and often sensitive information. Plus, these workers spend more time in their houses, which may lead them to use their smart home devices more often. 

Similarly, always change the default password on smart home devices. Many IoT items ship with weak default passwords. These are not secure and are often shared among numerous units, making them highly vulnerable to hacking. Hopefully, IoT standards and protocols for security will change this trend in the future. Right now, homeowners must ensure all their smart home devices have strong, unique passwords and use multifactor authentication when possible. 
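To make the default-credential problem concrete, here is a minimal audit sketch. Everything in it is invented for illustration: the device models, usernames and default-password table do not come from any real vendor, and a real audit would check against a maintained database of known factory defaults.

```python
# Known factory credentials per (model, username) -- illustrative examples only.
KNOWN_DEFAULTS = {
    ("acme-cam", "admin"): "admin",
    ("acme-cam", "root"): "12345",
    ("homespeak", "user"): "password",
}

def find_default_credentials(devices):
    """Return the names of devices still using a known default password."""
    flagged = []
    for device in devices:
        key = (device["model"], device["username"])
        if KNOWN_DEFAULTS.get(key) == device["password"]:
            flagged.append(device["name"])
    return flagged

devices = [
    {"name": "front-door-cam", "model": "acme-cam",
     "username": "admin", "password": "admin"},
    {"name": "kitchen-speaker", "model": "homespeak",
     "username": "user", "password": "x9#Lq!24rT"},
]
print(find_default_credentials(devices))  # → ['front-door-cam']
```

Any device the check flags should get a strong, unique replacement password immediately.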

Segment the Wi-Fi

Homeowners can secure their Wi-Fi networks by giving their network a long, complex password and using a high-quality router. It may also be a good idea to segment home Wi-Fi networks, which involves creating separate branches on the same system with isolated devices on each. People can use their router’s guest network for IoT devices. This way, if an IoT device is hacked, the attacker won’t have access to the PCs, phones or other items on the main network. 

Know the Signs of a Hack

Hopefully, strong passwords and a secure home network will keep hackers out of homeowners’ smart home devices. Still, it is a good idea to be aware of the signs of a hacked device just in case. This is particularly important with smart cameras and doorbells, which are among the highest-risk targets for device hacks. 

Signs of a compromised smart camera include an active recording light, higher than usual data traffic, unexplained sounds and random unit movement. Suspicious login activity on smart home apps is another dead giveaway. Sometimes hackers will even speak through hacked smart cameras. Homeowners should unplug their devices as soon as they notice any potentially suspicious activity. 
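One of those signs, higher-than-usual data traffic, can be checked programmatically. The sketch below is a simplified illustration with invented numbers: it flags any reading that sits several standard deviations above a device’s historical baseline.

```python
import statistics

def is_traffic_anomalous(baseline_mb, current_mb, threshold_sigma=3.0):
    """Flag a traffic reading far above the device's historical baseline."""
    mean = statistics.mean(baseline_mb)
    stdev = statistics.pstdev(baseline_mb)
    return current_mb > mean + threshold_sigma * stdev

# Hourly upload volumes (MB) for a camera during a normal week -- invented data.
baseline = [12, 14, 11, 13, 12, 15, 13, 12, 14, 13]
print(is_traffic_anomalous(baseline, 13))  # typical reading → False
print(is_traffic_anomalous(baseline, 90))  # sudden spike → True
```

A camera that suddenly uploads many times its normal volume may be streaming to someone it shouldn’t be, which is exactly the moment to unplug it.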

Be Careful About Camera Placement

Lastly, homeowners must be mindful of where they put IoT cameras and speakers around their smart homes. These devices can be useful for things like keeping an eye on children or watching for burglary attempts. However, hackers tend to choose vulnerable targets like children and older adults to harass. For instance, in 2019, a hacker taunted a little girl through the smart camera in her bedroom, telling her he was Santa Claus.  

It may be a good idea to avoid putting smart cameras and speakers in bedrooms and bathrooms. Until more reliably secure items are developed, it is better to be safe than sorry. 

Improving IoT Standards and Protocols

Homeowners can use the strategies above to start protecting their smart home devices from hackers. However, in the long run, what smart homes really need are comprehensive IoT standards and protocols. Matter is a good starting point, but it will take large-scale action to ensure smart devices are using the best security tools and applications possible. 

For instance, IoT security standards could require all devices to ship with unique, secure default passwords. Users should still set their own strong ones, but it is clearly not something many people are doing. They simply aren’t familiar with cybersecurity best practices, so manufacturers need to step up to help protect them. 

Smart camera hacks seem to be one of the largest issues with device security, so these items should be a top priority in IoT standards. 

For instance, maybe smart cameras could come with a feature that records video without any capability to stream it. It might also help to require multifactor authentication to view, stream, or otherwise access a smart camera or speaker. Similarly, smart device manufacturers could mandate users change their passwords regularly, which could help prevent stolen or compromised login credentials from being used to hack smart home devices. 
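Multifactor authentication is already a well-standardized building block. As an illustrative sketch only, and not any camera vendor’s actual implementation, a minimal RFC 6238 time-based one-time password (TOTP) generator of the kind such a requirement could build on fits in a few lines of standard-library Python:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32 below), T = 59 s.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, for_time=59))  # → 287082
```

The camera app would display a fresh six-digit code every 30 seconds, and the streaming endpoint would refuse access without a matching code.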

Ultimately, it might take intervention by a federal organization or other authority to establish strict, universal IoT standards and protocols for smart device security. Some international organizations are already stepping up. 

For example, the EU established a set of standards for consumer IoT devices in 2020 that applies to all member nations. Similarly, the International Organization for Standardization (ISO) published IoT security and privacy guidelines in 2022, which can be accessed and used worldwide. In the U.S., the National Institute of Standards and Technology has also established a program to provide cybersecurity guidance for IoT devices. 

Implementing Smart Home Tech Safely

Hackers increasingly target smart home technology as it becomes more popular. Compatibility between brands is a nice quality-of-life feature, but security needs to be the top priority for Matter and other smart home standards organizations. From collaborations among manufacturers to large-scale international standards, IoT devices need baseline security protocols to ensure people can use technology safely, today and into the future. 



NLP & Computer Vision in Cybersecurity


Natural language processing (NLP) and computer vision are two branches of artificial intelligence (AI) that are disrupting cybersecurity.

NLP is the ability of computers to understand and process human language, including speech and text. In cybersecurity, NLP can be used for fraud detection by analyzing large amounts of text data, such as emails and chat logs, to identify patterns of malicious activity. NLP can also be used for threat intelligence by analyzing data from various sources, such as news articles and social media, to identify potential security threats.
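As a simplified illustration of the fraud-detection idea, the sketch below scores messages against a hand-written list of suspicious phrases. Production NLP systems use trained language models rather than fixed patterns; the phrases and emails here are invented.

```python
import re

# Phrases often seen in phishing or fraud messages -- illustrative list only.
SUSPICIOUS_PATTERNS = [
    r"verify your account",
    r"urgent(ly)? action",
    r"wire transfer",
    r"click (here|the link)",
    r"password expire",
]

def fraud_score(text):
    """Count suspicious-phrase hits in a message (higher = more suspicious)."""
    lowered = text.lower()
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if re.search(pattern, lowered))

emails = [
    "Reminder: team lunch moved to Friday at noon.",
    "URGENT: verify your account now or your password expires. Click here!",
]
for email in emails:
    print(fraud_score(email))  # → 0, then 3
```

A real pipeline would feed scores like these, alongside sender metadata and model predictions, into a triage queue rather than blocking mail outright.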

Computer vision, on the other hand, refers to the ability of computers to interpret and understand images and videos. In cybersecurity, computer vision can help prevent credential theft by detecting when passwords or other sensitive information appear in images and videos. It can also be used for facial recognition, which verifies the identity of individuals who access sensitive information or systems.

Cybersecurity is a critical issue in our increasingly connected world, and AI is playing an increasingly important role in helping to keep sensitive information and systems secure. NLP and computer vision, in particular, are having a major impact on the field.

[Image: NLP in cybersecurity. Source: Masernet]

NLP and computer vision have the potential to revolutionize the way organizations approach cybersecurity by allowing them to analyze large amounts of data, identify patterns of malicious activity, and respond to security threats more quickly and effectively. However, it’s important to be aware that AI itself presents new security risks, such as the potential for AI systems to be hacked or misused. As a result, organizations must adopt a comprehensive and well-informed approach to cybersecurity that takes into account the full range of risks and benefits associated with AI technologies. Here are four ways NLP and computer vision are useful in cybersecurity.

1. Detecting Fraud

NLP can be used to analyze large amounts of text data, such as emails and chat logs, to identify patterns of fraud and other types of malicious activity. This can help organizations to detect and prevent fraud before it causes significant harm.

2. Analyzing Threats

NLP can also be used to analyze large amounts of text data from a variety of sources, such as news articles and social media, to identify potential security threats. This type of “big data” analysis can help organizations to respond to security threats more quickly and effectively.

3. Preventing Password Cracking

Attackers can use computer vision to harvest passwords by analyzing images and videos that contain credentials or other sensitive information. Defenders can apply the same techniques to detect and flag exposed credentials before they are abused, making it more difficult for attackers to obtain passwords through visual means.

4. Improving Facial Recognition

Computer vision can also be used for facial recognition, which can help organizations to improve their security by verifying the identity of individuals who access sensitive information or systems.

Conclusion

[Image: Visual AI workflow for phishing detection. Source: Visua]

AI technologies like NLP and computer vision are playing an increasingly important role in helping to keep sensitive information and systems secure. These technologies have the potential to revolutionize the way that organizations approach cybersecurity by allowing them to analyze large amounts of data, identify patterns of malicious activity, and respond to security threats more quickly and effectively. However, it’s also important to recognize that AI itself presents new security risks, such as the potential for AI systems to be hacked or misused. As a result, organizations must take a holistic and well-informed approach to cybersecurity that takes into account the full range of risks and benefits associated with these powerful new technologies.


What’s Wrong with the Algorithms?


Social media algorithms have become a source of concern due to the spread of misinformation, echo chambers, and political polarization.

The main purpose of social media algorithms is to personalize and optimize user experience on platforms such as Facebook, Twitter, and YouTube.

Most social media algorithms sort, filter, and prioritize content based on a user’s individual preferences and behaviors.
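A toy sketch makes that mechanism, and the criticism of it, concrete: the ranker below scores posts purely as topic affinity times past engagement rate, so accuracy never enters the ordering. The posts, rates, and user profile are all invented for illustration.

```python
def rank_feed(posts, user_interests):
    """Order posts by predicted engagement: topic affinity x engagement rate.
    Note that accuracy is not part of the score -- which is exactly the
    behavior critics point to."""
    def score(post):
        affinity = user_interests.get(post["topic"], 0.1)
        return affinity * post["engagement_rate"]
    return sorted(posts, key=score, reverse=True)

posts = [
    {"title": "Fact-checked policy analysis", "topic": "politics", "engagement_rate": 0.02},
    {"title": "Outrageous unverified rumor", "topic": "politics", "engagement_rate": 0.15},
    {"title": "Cat video", "topic": "pets", "engagement_rate": 0.30},
]
user = {"politics": 0.9, "pets": 0.2}
for post in rank_feed(posts, user):
    print(post["title"])
# → Outrageous unverified rumor, Cat video, Fact-checked policy analysis
```

Because the rumor draws more engagement than the analysis on the same topic, it wins the top slot; nothing in the objective rewards being true.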

Facebook’s news feed algorithm has been criticized for spreading misinformation, creating echo chambers, and reinforcing political polarization. In 2016, the algorithm was found to have played a role in the spread of false information related to the U.S. Presidential election, including the promotion of fake news stories and propaganda. Facebook has since made changes to its algorithm to reduce the spread of misinformation, but concerns about bias and polarization persist.

Twitter’s trending topics algorithm has also been criticized for perpetuating bias and spreading misinformation. In 2016, it was revealed that the algorithm was prioritizing trending topics based on popularity, rather than accuracy or relevance. This led to the promotion of false and misleading information, including conspiracy theories and propaganda. Twitter has since made changes to its algorithm to reduce the spread of misinformation and improve the quality of public discourse.

YouTube’s recommendation algorithm has been criticized for spreading conspiracy theories and promoting extremist content. In 2019, it was revealed that the algorithm was recommending conspiracy theory videos related to the moon landing, 9/11, and other historical events. Additionally, the algorithm was found to be promoting extremist content, including white nationalist propaganda and hate speech. YouTube has since made changes to its algorithm to reduce the spread of misinformation and extremist content, but concerns about bias and polarization persist.

In this article, we’ll examine the problems with social media algorithms, including the impact they’re having on society, as well as some possible solutions.

1. Spread of Misinformation

[Image: Spread of information. Source: Scientific American]

One of the biggest problems with social media algorithms is their tendency to spread misinformation. This can occur when algorithms prioritize sensational or controversial content, regardless of its accuracy, in order to keep users engaged and on the platform longer. This can lead to the spread of false or misleading information, which can have serious consequences for public health, national security, and democracy.

2. Echo Chambers and Political Polarization

[Image: Political polarization. Source: Pew Research Center]

Another issue with social media algorithms is that they can create echo chambers and reinforce political polarization. This happens when algorithms only show users content that aligns with their existing beliefs and values, and filter out information that challenges those beliefs. As a result, users can become trapped in a self-reinforcing bubble of misinformation and propaganda, leading to a further division of society and a decline in the quality of public discourse.

3. Bias in Algorithm Design and Data Collection

[Image: Bias in algorithm design. Source: Springer Link]

There are also concerns about bias in the design and implementation of social media algorithms. The data used to train these algorithms is often collected from users in a biased manner, which can perpetuate existing inequalities and reinforce existing power structures. Additionally, the designers and developers of these algorithms may hold their own biases, which can be reflected in the algorithms they create. This can result in discriminatory outcomes and perpetuate social injustices.

4. Democracy in Retreat

[Image: Erosion of democracy. Source: Freedom House]

Social media algorithms are vulnerable to manipulation and can spread false or misleading information, which can be used to manipulate public opinion and undermine democratic institutions. The dominance of a few large social media companies has led to a concentration of power in the hands of a small number of organizations, which can undermine the diversity and competitiveness of the marketplace of ideas, a key principle of democratic societies.

How Can Social Media Algorithms Be Improved?

[Image: Boosting social media posts. Source: Tech Xplore]

Governments and regulatory bodies have a role to play in holding technology companies accountable for the algorithms they create and their impact on society. This could involve enforcing laws and regulations to prevent the spread of misinformation and extremist content, and holding companies responsible for their algorithms’ biases.

There are several possible solutions that can be implemented to improve social media algorithms and reduce their impact on democracy. Some of these solutions include:

  • Increased transparency and accountability: Social media companies should be more transparent about their algorithms and data practices, and they should be held accountable for the impact of their algorithms on society. This can include regular audits and public reporting on algorithmic biases and their impact on society.

  • Regulation and standards: Governments can play a role in ensuring that social media algorithms are designed and operated in a way that is consistent with democratic values and principles. This can include setting standards for algorithmic transparency, accountability, and fairness, and enforcing penalties for violations of these standards.

  • Diversification of ownership: Encouraging a more diverse and competitive landscape of social media companies can reduce the concentration of power in the hands of a few dominant players and promote innovation and diversity in the marketplace of ideas.

  • User education and awareness: Social media users can be educated and empowered to make informed decisions about their usage of social media, including recognizing and avoiding disinformation and biased content.

  • Encouragement of responsible content creation: Social media companies can work to encourage the creation of high-quality and responsible content by prioritizing accurate information and rewarding creators who produce this content.

  • Collaboration between industry, government, and civil society: Addressing the challenges posed by social media algorithms will require collaboration between social media companies, governments, and civil society organizations. This collaboration can involve the sharing of data and best practices, the development of common standards and regulations, and the implementation of public education and awareness programs.

Conclusion

Social media companies have the power to censor and suppress speech, which can undermine the right to free expression and the democratic principle of an open and inclusive public discourse. It is crucial for technology companies and policymakers to address these issues and work to reduce the potential for harm from these algorithms. Social media platforms need to actively encourage and facilitate community participation in the development and improvement of their algorithms. This would involve setting up forums for discussion and collaboration, providing documentation and support for developers, and engaging with the community to address their concerns and ideas. In order to ensure that the algorithms are fair and unbiased, tech companies need to be transparent about the data they collect and use to train their algorithms. This would involve releasing the data sets used to train the algorithms, along with information about how the data was collected, what it represents, and any limitations or biases it may contain.


Daasity builds ELT+ for Commerce on the Snowflake Data Cloud

Cloud Computing News

Modular data platform Daasity has launched ELT+ for Commerce, Powered by Snowflake.

ELT+ for Commerce is intended to benefit customers by enabling consumer brands selling via eCommerce, Amazon, retail, and/or wholesale to implement a full or partial data and analytics stack. 

Dan LeBlanc, Daasity co-founder and CEO, said: “Brands using Daasity and Snowflake can rapidly implement a customisable data stack that benefits from Snowflake’s dynamic workload scaling and Secure Data Sharing features.

“Additionally, customers can leverage Daasity features such as the Test Warehouse, which enables merchants to create a duplicate warehouse in one click and test code in a non-production environment. Our goal is to make brands, particularly those at the enterprise level, truly data-driven organisations.”

Building its solution on Snowflake has allowed Daasity to leverage Snowflake’s single, integrated platform to help joint customers extract, load, transform, analyse, and operationalise their data. With Daasity, brands only need one platform that includes Snowflake to manage their entire data environment.

Scott Schilling, senior director of global partner development at Snowflake, said: “Daasity’s ELT+ for Commerce, Powered by Snowflake, will offer our joint customers a way to build a single source of truth around their data, which is transformative for businesses pursuing innovation.

“As Snowflake continues to make strides in mobilising the world’s data, partners like Daasity give our customers flexibility around how they build data solutions and leverage data across the organisation.” 

Daasity enables omnichannel consumer brands to be data-driven. Built by analysts and engineers, the Daasity platform supports the varied data architecture, analytics, and reporting needs of consumer brands selling via eCommerce, Amazon, retail, and wholesale. Using Daasity, teams across the organisation get a centralised and normalised view of all their data, regardless of the tools in their tech stack and how their future data needs may change. 

ELT stands for Extract, Load, Transform: customers extract data from various sources, load it into Snowflake, and transform it into insights that marketers can act on. For more information about Daasity, its 60+ integrations, and how the platform drives more profitable growth for 1,600+ brands, visit Daasity.com.
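As an illustrative sketch of that Extract-Load-Transform flow, the snippet below stands in an in-memory SQLite database for Snowflake and invents the order data; Daasity’s actual pipelines and schemas are, of course, far richer. The key point is the ordering: data is landed unchanged first, and reshaping happens inside the warehouse with SQL.

```python
import sqlite3

# Extract: pull raw order rows from two hypothetical sales channels.
ecommerce_orders = [("o1", "ecommerce", 120.0), ("o2", "ecommerce", 80.0)]
amazon_orders = [("o3", "amazon", 45.0)]

warehouse = sqlite3.connect(":memory:")  # stand-in for the real warehouse
warehouse.execute("CREATE TABLE raw_orders (id TEXT, channel TEXT, total REAL)")

# Load: land the data as-is, with no reshaping yet.
warehouse.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                      ecommerce_orders + amazon_orders)

# Transform: aggregate inside the warehouse into an analytics-ready result.
rows = warehouse.execute(
    "SELECT channel, SUM(total) FROM raw_orders GROUP BY channel ORDER BY channel"
).fetchall()
print(rows)  # → [('amazon', 45.0), ('ecommerce', 200.0)]
```

Doing the transform after loading, rather than before as in classic ETL, keeps the raw data available in the warehouse so models can be rebuilt as needs change.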
