TECHNOLOGY

The Democratization of Artificial Intelligence


The democratization of artificial intelligence (AI) refers to the process of making AI tools, technologies, and knowledge more accessible and available to a broader range of individuals and organizations.

It aims to break down barriers to entry and empower people with varying levels of expertise to harness the potential of AI.

Here are key aspects of the democratization of AI:

1. Improved Accessibility: Democratization involves making AI tools and platforms more user-friendly, affordable, and widely available. This includes cloud-based AI services, open-source software, and low-cost AI hardware.

2. Simplified Interfaces: Designing AI interfaces that are intuitive and require minimal coding or technical skills, enabling non-experts to use AI effectively.

3. Better Education and Training: Providing training resources and educational materials to help individuals and businesses build AI competency. This includes online courses, tutorials, and certification programs.

4. Community and Collaboration: Encouraging knowledge sharing and collaboration among AI enthusiasts, professionals, and researchers through forums, open-source projects, and conferences.

5. Diverse Applications: Expanding AI applications across various sectors, from healthcare and finance to agriculture and education, making AI accessible for a wide range of industries and purposes.

6. Customization: Allowing users to tailor AI models and solutions to their specific needs, promoting adaptability and customization.

7. Ethical Considerations: Promoting ethical AI practices and raising awareness of potential biases and risks associated with AI to ensure responsible and fair AI development.

8. Promoting Startups and Innovation: Supporting AI startups and entrepreneurial initiatives, fostering innovation and competition in the AI industry.

9. Establishing Government and Regulatory Frameworks: Implementing policies and regulations that promote responsible AI development and address potential ethical concerns.

10. Improved Data Accessibility: Ensuring data availability and open data initiatives to fuel AI development and research.

Democratizing AI has the potential to democratize innovation, improve decision-making, and drive economic growth. It enables a wider range of individuals, organizations, and communities to benefit from AI’s capabilities, fostering a more inclusive and equitable future for technology and its applications. However, it also raises challenges related to ethics, privacy, and security that must be addressed as AI becomes more accessible.

Marc Andreessen, of Netscape and VC fame, has been receiving a fair amount of negative press after releasing a manifesto about Techno-Optimism, even from Wired magazine, which is admittedly almost the poster child for techno-optimism. For those unfamiliar with the term (as I was until he surfaced it), techno-optimism is the belief that technology, especially computer technology, is inherently good and desirable and should not be held back by Luddites and government regulators. Technology, in this view, moves beyond invention and instead becomes a secular religion, one that will ultimately prove to be for the betterment of all mankind, assuming that by mankind you mean those whose net worth can be measured in the billions or high millions.

I don’t know Andreessen (though I have worked sporadically with Netscape and Mozilla over the years). I was at the University of Illinois at Urbana in the math department a few years before Andreessen and his cohorts met there to work out the inner workings of Mosaic, and I laugh (while trying to sob) because it was just an accident of timing that I didn’t end up becoming a multi-millionaire because of that association. At the time, the web was still in the future, and I had been told by one guidance counselor that there was no real future in computers and that I’d be better off going into actuarial science (true story).

Well before Andreessen and Jim Clark made history with Netscape, I saw the effects of unbridled techno-optimism in the late 1980s. I had gone to work for a small typesetting company in Jacksonville, FL. The year I started, the company took in $15 million in revenue from businesses nationwide. By the time it closed its doors a year later, that revenue had dropped to about $500,000. The reason? A new program called Aldus PageMaker had appeared on the Macintosh, and with it companies could dispense with the whole typesetting ordeal and its attendant costs.

By 2035, the Office as we know it will not exist.

The Virtualization of Work

The lesson I learned that year was simple: you were only as good as your tech, and you were vulnerable if you didn’t stay up to date. That lesson was reinforced repeatedly. I remember sitting in on a meeting with a large clothing retailer’s accounting department, trying to argue that our consulting team wouldn’t put them out of a job by implementing automation, but I (and they) knew better. In the end, the decision was (wisely) made to automate, knowing full well that quite a few of the people working there had only months before they would have to find new jobs elsewhere.

At the time (the mid-90s), a lot of jobs went away, but so many new ones were being created that it didn’t really matter. That, of course, changed in 2000. The stock market, which had climbed to record highs, collapsed in the tech sector, and many paper millionaires (meaning they held stock options) found themselves sleeping in their families’ spare bedrooms and under bridges. After increasing dramatically year over year, the tech sector sank appreciably during that period, and even now tech makes up a smaller part of the economy than it did then.

What automation touches, it transforms, typically by virtualizing it. In the case of AI, it was only a matter of time before we went from providing more efficient processes and assistance to replacing the people who wrote the programs and the words, who drew the pictures and filmed the videos, usually in the name of efficiency and productivity. Productivity, translated, means the amount of money that a person generates versus the cost of employing that person in the first place. There’s a brutal calculus there: once the cost of the automation falls below the cost of employing a person, that person is let go to find yet another job, while the employer pockets the difference.
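To make that calculus concrete, here is a minimal sketch in Python of the break-even comparison described above; the function names and every figure in it are hypothetical assumptions for illustration, not data from this article.

```python
# Hypothetical break-even sketch: when does automation undercut a salary?
# All numbers are illustrative assumptions, not real costs.

def annual_automation_cost(license_fee: float, compute: float, upkeep: float) -> float:
    """Total yearly cost of an automated replacement for one role."""
    return license_fee + compute + upkeep

def annual_employment_cost(salary: float, benefits_rate: float = 0.3) -> float:
    """Salary plus a rough benefits/overhead multiplier."""
    return salary * (1 + benefits_rate)

if __name__ == "__main__":
    person = annual_employment_cost(salary=90_000)            # ~$117,000/yr
    machine = annual_automation_cost(24_000, 12_000, 6_000)   # ~$42,000/yr

    if machine < person:
        print(f"Automation saves ${person - machine:,.0f} per year; "
              "by this calculus, the role gets cut.")
    else:
        print("The human is still cheaper; the role survives (for now).")
```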

We’ve normalized this process. All the money pocketed goes into the forces that promulgate that normalization. When business demand slows, employers start firing employees who are too close to receiving pensions, who have expensive skill sets, or who don’t fit the company culture (usually as defined by management). Publicly, this same management and these corporate boards offer platitudes about how reluctant they are to do this. Privately, though, their mission has been a success: siphon up the expertise of those they employ so it can be turned into AIs, capture their knowledge, and, better yet, prevent those employees from taking their ideas to competitors.

This does not mean that every CEO of every tech company is engaged in some vast conspiracy. Most take their companies and their missions seriously. However, it has become increasingly common for some, often in positions of high visibility and responsibility, to express dismissive and disdainful opinions of their employees, customers, and even their peers. This Tech Bro attitude is especially pervasive in Silicon Valley, an abrasive “I’ve got mine” belief that would not have been out of place during the rail baron era of the 1890s.

The moat keeping competition out is shrinking.

AI Democratization and the Diminishing Moat

Ironically, this time around the hubris may be short-lived, primarily because the AI revolution is spawning a revolution of a different sort. In the winter of 2023, the big AI models seemed to leap out of nowhere, with an application that looked likely to completely up-end the established software industry. OpenAI became a household name, an epidemic of AI-generated school essays swamped the educational system, and anyone involved in any creative or professional field felt a chill as the specter of career death walked over their graves, especially since these companies had essentially used the public Internet (and billions of pages of content and imagery) to train their models (and by extension to reproduce content and images that borrowed heavily from that source). The assumption, presumably, was that by doing it fast enough, it would all be a fait accompli.

Unfortunately, it didn’t quite work out that way. The code escaped the lab, quickly becoming something akin to open source. For a while, everyone wanted their own large language model, until it became evident that such models were, in fact, truly large and monolithic. Then programmers with lots of time on their hands after being let go as surplus labor began reverse engineering what they saw, generally making generative AI more compact, more efficient, and easier to integrate with the rest of the world.

This has breathed new life into the hoary realm of semantic graphs, as developers in the machine learning space have begun to comprehend what the semantics people have been saying for a while: you can encode logical, inferential data into graphs and then use those graphs to generate and ground the associated language models (LLMs), which resolves several of the bigger headaches that working with LLMs incurs.
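As a rough illustration of that idea (a sketch under assumed names, not the author’s specific pipeline), the snippet below uses the rdflib library to encode a handful of facts as a graph and then flattens the triples into plain sentences of the kind that could ground or help generate a language model. The namespace, facts, and sentence template are hypothetical.

```python
# Sketch: encode facts as a small knowledge graph, then flatten the triples
# into plain-text statements that could ground or help generate an LLM.
# The namespace and facts are hypothetical examples.
from rdflib import RDF, Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.Aspirin, RDF.type, EX.Drug))
g.add((EX.Aspirin, EX.treats, EX.Headache))
g.add((EX.Aspirin, EX.maxDailyDoseMg, Literal(4000)))

def label(node) -> str:
    """Reduce a URI or literal to a readable token."""
    text = str(node)
    return text.split("#")[-1].split("/")[-1] if isinstance(node, URIRef) else text

# Flatten each triple into a sentence; a real pipeline would use richer templates.
for subj, pred, obj in g:
    print(f"{label(subj)} {label(pred)} {label(obj)}.")
```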

Once this happens, AI becomes a commodity, not an expensively priced service. Every company can build configurations that pull together different data sources, regardless of whether those sources are LLMs, knowledge graphs, PDFs, or any other data.
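What such a configuration might look like is sketched below; the source names, kinds, and handlers are purely hypothetical stand-ins meant to show heterogeneous sources (an LLM, a knowledge graph, a document store) sitting behind one interface, not any particular product’s API.

```python
# Hypothetical sketch of a configuration that pulls together different data sources.
# Source names, kinds, and handlers are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DataSource:
    name: str
    kind: str            # e.g. "llm", "knowledge_graph", "pdf_store"
    handler: Callable[[str], str]

def ask_llm(q: str) -> str:
    return f"[local LLM answer to: {q}]"      # stand-in for a model call

def query_graph(q: str) -> str:
    return f"[graph lookup for: {q}]"         # stand-in for a SPARQL query

def search_pdfs(q: str) -> str:
    return f"[matching passages for: {q}]"    # stand-in for a document index

SOURCES: Dict[str, DataSource] = {
    s.name: s for s in (
        DataSource("assistant", "llm", ask_llm),
        DataSource("ontology", "knowledge_graph", query_graph),
        DataSource("contracts", "pdf_store", search_pdfs),
    )
}

def answer(question: str, source: str = "assistant") -> str:
    """Route a question to whichever source the configuration names."""
    return SOURCES[source].handler(question)

if __name__ == "__main__":
    print(answer("What is our refund policy?", source="contracts"))
```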

This presents a dilemma for the Data Barons. The intent of several of them was to be the keeper of the only truth, as provided by a single master model, available only by subscription at twenty cents per person per hour. For the individual, this amount was trivial, but for companies, it was a direct hit to the bottom line. So, of course, the expectation was that a B2B rate would be worked out while still giving these data barons their tithe.

With AI now in the wild, that equation changes … dramatically. Companies can now create their own LLMs for far less than the original systems took to build, and those models can better reflect specialized content back to users. While DALL-E 3 and Midjourney have become the go-to platforms for everyday image generation, Stable Diffusion continues to establish itself as the go-to image experimentation community, and is becoming better about policing itself. Similar things are happening with code generation, video generation, music generation, and related toolsets.

Medieval castles were often designed with a moat, a deliberate sunken trench surrounding the castle. Most of the time, the moat was not filled with water, primarily because stagnant water encouraged mosquitoes and other nasties that the local inhabitants had to live with. Either way, the real purpose of the moat was to make it difficult for attacking forces to reach and scale the walls. Moats play a huge role in modern capitalism, typically by forcing competing companies to raise additional capital, hire from a limited workforce, and deal with the patents and licensing fees of the first adopter in question. The deeper your moat, the fewer competitors you’d face.

The Democratization of AI has destroyed the moats that companies use to entrench themselves in a market. The reality is that the development processes involved in creating a start-up require a relatively limited amount of capital. The expensive part comes when a company is forced to scale rapidly in order to achieve net returns as quickly as possible. However, as AI is increasingly disseminated and democratized, that model is changing in favor of one where data – declarative data – subsumes business logic, and where that explosive growth phase instead shifts into a more manageable (and sustainable) climb. It also lays the groundwork for a more comprehensive and equitable sharing of data rather than the distinctly asymmetric relationship that exists today.

The data barons have benefitted from a system where money speaks louder than talent, creativity, innovation, and hard work. Investment has been a necessary part of tech, but it is time to re-evaluate if it still is.

Limiting the Data Barons

The rub with all of this is that AI democratization is generally anti-monopolistic. Capitalism requires fair markets to remain viable, and when those markets become monopolistic, capitalism degenerates. To put it in slightly different terms: healthy capitalism is quasi-stable. When capital becomes too concentrated, it becomes monopolistic; when capital becomes too dilute, value cannot be established, and everything requires consensus. What you want instead is a system where some benefit accrues from investing, but where value is reflected in the transaction price.

Give everyone the tools to build data-centric AI, and you only need specialized services at the edges. This shouldn’t be a new concept; it is, in essence, what has been happening with open source for the last twenty years. We don’t need to recreate the stack every time. What we do need to do, however, is figure out how to compensate those who contribute to that stack and get the middlemen (the financiers) largely out of it.

No doubt this may be seen as heresy, but we don’t need the level of VC funding that now exists in the software industry. Yes, people want to get paid for the code they write, the documentation they create, and the images that make their way into specialized models for sale. They need to eat, clothe themselves, pay the rent, put their kids through school, take an occasional trip, or go out to the movies. They want to get compensated for their efforts and do not want to scramble to survive in case of a healthcare crisis or family emergency. These are not unreasonable expectations.

It’s time to re-evaluate the VC model. The cost of creating a piece of software has been dropping dramatically over the last several years, to the point where getting a viable product ready for market represents perhaps 20% of the overall costs associated with that product and typically requires a small creative development team for about four to six months. That doesn’t even factor in the no-code solutions that have emerged as part of contemporary AI efforts.

What this means in practice is that the cost of developing and deploying business solutions is dropping below the point where it makes sense to invest many millions, if not billions, of dollars in companies. Without those investments, much less money goes to VC firms, investment banks, and large investors, and more of it remains in the hands of the founders and creators. Moving to a model where developers also participate in a share of the final profits, with an ownership stake akin to equity in the company, would go a long way toward making the space more equitable.

There are ways of self-funding these projects, but right now most are blocked, because the only access to funding is through VCs that want an unhealthy return on their investments in perpetuity, just as access to these projects is blocked by recruiting agencies. Those same VCs want to arbitrage labor rates across different economies, reducing wages (and hence standards of living) in some countries while importing inflation into others, causing extreme disruptions and wealth imbalances. If that arbitrage is not reduced, then work from home must become the new normal: workers can arbitrage wages and job opportunities too, so long as labor is not limited to a geographically constrained region.

The other problem is that solutions generally transcend business sectors, and AI is no exception. We see the verticals (healthcare, media, transit, finance, agriculture, and so on) and believe each domain has unique problem sets. However, once you break a problem into data and delivery, the delivery is remarkably consistent across sectors, because the business logic is, and should be, in the data. This is what a graph solution provides, much as an LLM does. Yes, you need to identify and articulate what that business logic is, but any good semanticist fully understands that a knowledge graph is a perfect way to build applications, because rules are fundamentally declarative. Express those rules as metadata, and it doesn’t matter what vertical you’re in.
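As a minimal sketch of rules-as-metadata (hypothetical rules and field names, not a production rule engine), the snippet below keeps the business logic in data and applies it with one generic evaluator, so the same code serves any vertical.

```python
# Sketch: business rules expressed as declarative metadata, applied by a
# generic engine. The rules and record fields are hypothetical examples.
from typing import Any, Callable, Dict, List

Record = Dict[str, Any]
Rule = Dict[str, Any]

OPS: Dict[str, Callable[[Any, Any], bool]] = {
    ">": lambda a, b: a > b,
    "<": lambda a, b: a < b,
    "==": lambda a, b: a == b,
}

# The same engine works for healthcare, finance, agriculture, etc.;
# only the rule metadata changes per vertical.
RULES: List[Rule] = [
    {"field": "invoice_total", "op": ">", "value": 10_000, "action": "require_approval"},
    {"field": "region", "op": "==", "value": "EU", "action": "apply_gdpr_checks"},
]

def evaluate(record: Record, rules: List[Rule]) -> List[str]:
    """Return the actions triggered by a record, given declarative rules."""
    return [
        r["action"]
        for r in rules
        if r["field"] in record and OPS[r["op"]](record[r["field"]], r["value"])
    ]

if __name__ == "__main__":
    print(evaluate({"invoice_total": 12_500, "region": "EU"}, RULES))
    # -> ['require_approval', 'apply_gdpr_checks']
```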

Conclusion

This is a long post and admittedly covers a great deal of territory. The upshot, however, is that the current evolution of data technologies, which looked to favor a few large corporations, is increasingly being diverted to smaller companies and individuals as monolithic AI solutions give way to decentralized, distributed ones.

This, in turn, is raising questions about whether the multibillion-dollar investments being made in AI companies are doing more harm than good, even as these companies shed the very workers who are developing the technology in the first place. At the same time, those doing the investing are endangering potentially millions of people who will be displaced by this technology, not because the technology is truly that magical, but because it is increasingly used to justify draconian actions that enrich those who have already gamed the system heavily in their favor.

Democratization of AI may very well be one solution to this. Getting the generative components of AI into the hands of individuals and small organizations will open up opportunities for both tool builders and subject-matter experts and creators, primarily due to the much lower barrier to entry for independents compared with the established publishing giants.

 

Disclaimer: This is a tl;dr post. It’s an articulation of some of my frustrations about the tech field in general, the people who often perceive themselves to be the solution but may actually be the problem, and the troubling economics of an AI economy.


TECHNOLOGY

Next-gen chips, Amazon Q, and speedy S3

By Cloud Computing News

AWS re:Invent, which has been taking place from November 27 to December 1, has had its usual plethora of announcements: a total of 21 at the time of writing.

Perhaps not surprisingly, given the huge potential impact of generative AI – ChatGPT officially turns one year old today – a lot of focus has been on the AI side for AWS’ announcements, including a major partnership inked with NVIDIA across infrastructure, software, and services.

Yet there has been plenty more announced at the Las Vegas jamboree besides. Here, CloudTech rounds up the best of the rest:

Next-generation chips

This was the other major AI-focused announcement at re:Invent: the launch of two new chips, AWS Graviton4 and AWS Trainium2, for training and running AI and machine learning (ML) models, among other customer workloads. Graviton4 shapes up against its predecessor with 30% better compute performance, 50% more cores and 75% more memory bandwidth, while Trainium2 delivers up to four times faster training than before and will be able to be deployed in EC2 UltraClusters of up to 100,000 chips.

The EC2 UltraClusters are designed to ‘deliver the highest performance, most energy efficient AI model training infrastructure in the cloud’, as AWS puts it. With them, customers will be able to train large language models in ‘a fraction of the time’, as well as at double the energy efficiency.

As ever, AWS cites customers who are already utilising these tools: Databricks, Epic and SAP are among the companies named as using the new AWS-designed chips.

Zero-ETL integrations

AWS announced new Amazon Aurora PostgreSQL, Amazon DynamoDB, and Amazon Relational Database Service (Amazon RDS) for MySQL integrations with Amazon Redshift, AWS’ cloud data warehouse. The zero-ETL integrations – eliminating the need to build ETL (extract, transform, load) data pipelines – make it easier to connect and analyse transactional data across various relational and non-relational databases in Amazon Redshift.

A simple example of how zero-ETL functions can be seen in a hypothetical company which stores transactional data – time of transaction, items bought, where the transaction occurred – in a relational database, but uses a separate analytics tool to analyse data in a non-relational database. To connect it all up, companies would previously have had to construct ETL data pipelines, which are a time and money sink.
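For contrast, here is a minimal sketch of the kind of hand-rolled extract-transform-load step that such integrations are meant to make unnecessary; the record fields, transforms, and load target are hypothetical, and this is illustrative Python rather than AWS code.

```python
# Sketch of the hand-built ETL pipeline that zero-ETL integrations replace.
# Record fields, transforms, and the load target are hypothetical.
from datetime import datetime, timezone
from typing import Dict, List

def extract(transactions: List[Dict]) -> List[Dict]:
    """Pull raw rows from the transactional (relational) store."""
    return [t for t in transactions if t.get("status") == "completed"]

def transform(rows: List[Dict]) -> List[Dict]:
    """Reshape rows into the analytics schema the warehouse expects."""
    return [
        {
            "sold_at": datetime.fromtimestamp(r["ts"], tz=timezone.utc).isoformat(),
            "item": r["item"],
            "store": r["store"],
            "revenue_cents": r["amount_cents"],
        }
        for r in rows
    ]

def load(rows: List[Dict]) -> None:
    """Stand-in for a bulk COPY/INSERT into the analytics warehouse."""
    print(f"loading {len(rows)} rows into the warehouse")

if __name__ == "__main__":
    raw = [{"ts": 1_700_000_000, "item": "jacket", "store": "JAX-01",
            "amount_cents": 12999, "status": "completed"}]
    load(transform(extract(raw)))
```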

The latest integrations “build on AWS’s zero-ETL foundation… so customers can quickly and easily connect all of their data, no matter where it lives,” the company said.

Amazon S3 Express One Zone

AWS announced the general availability of Amazon S3 Express One Zone, a new storage class purpose-built for customers’ most frequently accessed data. Data access speed is up to 10 times faster and request costs up to 50% lower than standard S3. Companies can also opt to co-locate their Amazon S3 Express One Zone data in the same availability zone as their compute resources.
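As a minimal usage sketch (assuming boto3 credentials are configured and a bucket has already been created with the new storage class; the bucket and key names are hypothetical, and the calls shown are standard S3 operations rather than anything specific to Express One Zone):

```python
# Minimal boto3 sketch: writing and reading a frequently accessed object.
# The bucket and key are hypothetical; this assumes the bucket was already
# created as an S3 Express One Zone bucket in the same availability zone
# as the compute that reads it.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-express-bucket"        # hypothetical name
KEY = "hot-data/session-state.json"

s3.put_object(Bucket=BUCKET, Key=KEY, Body=b'{"user": 42, "step": 7}')

obj = s3.get_object(Bucket=BUCKET, Key=KEY)
print(obj["Body"].read().decode())
```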

Companies and partners who are using Amazon S3 Express One Zone include ChaosSearch, Cloudera, and Pinterest.

Amazon Q

A new product, and an interesting pivot, again with generative AI at its core. Amazon Q was announced as a ‘new type of generative AI-powered assistant’ which can be tailored to a customer’s business. “Customers can get fast, relevant answers to pressing questions, generate content, and take actions – all informed by a customer’s information repositories, code, and enterprise systems,” AWS added. The service also can assist companies building on AWS, as well as companies using AWS applications for business intelligence, contact centres, and supply chain management.

Customers cited as early adopters include Accenture, BMW and Wunderkind.


TECHNOLOGY

HCLTech and Cisco create collaborative hybrid workplaces

By Cloud Computing News

Digital comms specialist Cisco and global tech firm HCLTech have teamed up to launch Meeting-Rooms-as-a-Service (MRaaS).

Available on a subscription model, this solution modernises legacy meeting rooms and enables users to join meetings from any meeting solution provider using Webex devices.

The MRaaS solution helps enterprises simplify the design, implementation and maintenance of integrated meeting rooms, enabling seamless collaboration for their globally distributed hybrid workforces.

Rakshit Ghura, senior VP and global head of digital workplace services at HCLTech, said: “MRaaS combines our consulting and managed services expertise with Cisco’s proficiency in Webex devices to change the way employees conceptualise, organise and interact in a collaborative environment for a modern hybrid work model.

“The common vision of our partnership is to elevate the collaboration experience at work and drive productivity through modern meeting rooms.”

Alexandra Zagury, VP of partner managed and as-a-Service Sales at Cisco, said: “Our partnership with HCLTech helps our clients transform their offices through cost-effective managed services that support the ongoing evolution of workspaces.

“As we reimagine the modern office, we are making it easier to support collaboration and productivity among workers, whether they are in the office or elsewhere.”

Cisco’s Webex collaboration devices harness the power of artificial intelligence to offer intuitive, seamless collaboration experiences, enabling meeting rooms with smart features such as meeting zones, intelligent people framing, optimised attendee audio and background noise removal, among others.


TECHNOLOGY

Canonical releases low-touch private cloud MicroCloud

By Cloud Computing News

Canonical has announced the general availability of MicroCloud, a low-touch, open source cloud solution. MicroCloud is part of Canonical’s growing cloud infrastructure portfolio.

It is purpose-built for scalable clusters and edge deployments for all types of enterprises. It is designed with simplicity, security and automation in mind, minimising the time and effort to both deploy and maintain it. Conveniently, enterprise support for MicroCloud is offered as part of Canonical’s Ubuntu Pro subscription, with several support tiers available, and priced per node.

MicroClouds are optimised for repeatable and reliable remote deployments. A single command initiates the orchestration and clustering of various components with minimal involvement by the user, resulting in a fully functional cloud within minutes. This simplified deployment process significantly reduces the barrier to entry, putting a production-grade cloud at everyone’s fingertips.

Juan Manuel Ventura, head of architectures & technologies at Spindox, said: “Cloud computing is not only about technology, it’s the beating heart of any modern industrial transformation, driving agility and innovation. Our mission is to provide our customers with the most effective ways to innovate and bring value; having a complexity-free cloud infrastructure is one important piece of that puzzle. With MicroCloud, the focus shifts away from struggling with cloud operations to solving real business challenges.”

In addition to seamless deployment, MicroCloud prioritises security and ease of maintenance. All MicroCloud components are built with strict confinement for increased security, with over-the-air transactional updates that preserve data and roll back on errors automatically. Upgrades to newer versions are handled automatically and without downtime, with the mechanisms to hold or schedule them as needed.

With this approach, MicroCloud caters not only to on-premise clouds but also to edge deployments at remote locations, allowing organisations to use the same infrastructure primitives and services wherever they are needed. It is suitable for business-in-branch office locations or industrial use inside a factory, as well as distributed locations where the focus is on replicability and unattended operations.

Cedric Gegout, VP of product at Canonical, said: “As data becomes more distributed, the infrastructure has to follow. Cloud computing is now distributed, spanning across data centres, far and near edge computing appliances. MicroCloud is our answer to that.

“By packaging known infrastructure primitives in a portable and unattended way, we are delivering a simpler, more prescriptive cloud experience that makes zero-ops a reality for many industries.”

MicroCloud’s lightweight architecture makes it usable on both commodity and high-end hardware, with several ways to further reduce its footprint depending on your workload needs. In addition to the standard Ubuntu Server or Desktop, MicroClouds can be run on Ubuntu Core – a lightweight OS optimised for the edge. With Ubuntu Core, MicroClouds are a perfect solution for far-edge locations with limited computing capabilities. Users can choose to run their workloads using Kubernetes or via system containers. System containers based on LXD behave similarly to traditional VMs but consume fewer resources while providing bare-metal performance.

Coupled with Canonical’s Ubuntu Pro + Support subscription, MicroCloud users can benefit from an enterprise-grade open source cloud solution that is fully supported and with better economics. An Ubuntu Pro subscription offers security maintenance for the broadest collection of open-source software available from a single vendor today. It covers over 30k packages with a consistent security maintenance commitment, and additional features such as kernel livepatch, systems management at scale, certified compliance and hardening profiles enabling easy adoption for enterprises. With per-node pricing and no hidden fees, customers can rest assured that their environment is secure and supported without the expensive price tag typically associated with cloud solutions.
