TECHNOLOGY

Unlocking the Power of Generative AI with Qlik’s OpenAI Connectors

Generative AI tools have been publicly available for a couple of years, but this class of AI received huge fanfare towards the end of 2022 as OpenAI launched a chatbot called ChatGPT, based on a large language model (LLM). 

Impressive on release, it hinted at how companies large and small will benefit from the power of LLMs. But how do companies harness LLMs in combination with their existing data? That’s where connecting OpenAI capabilities with enterprise databases comes into play – and for that, you need an OpenAI connector.

What Are OpenAI and OpenAI Connectors?

OpenAI is an AI research laboratory founded in 2015, originally as a non-profit, by Elon Musk, Sam Altman, Ilya Sutskever, and others. The organization has developed several powerful artificial intelligence systems, including the chatbot ChatGPT and the image generator DALL-E – both examples of generative AI.

ChatGPT is an LLM chatbot trained on a massive dataset of text and code. It can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way. It is classified as generative AI – in other words, ChatGPT creates new, fresh content in response to a prompt the user enters into the ChatGPT console.

In turn, an OpenAI connector is any tool that links OpenAI’s capabilities with another technology platform. In other words, through an OpenAI connector, companies can integrate generative AI tools such as ChatGPT into their everyday workloads. 
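
To make the idea concrete, here is a minimal sketch, in Python, of what a connector does under the hood: it enriches a prompt with some enterprise data, sends it to OpenAI's public chat completions endpoint, and hands the generated answer back to the calling platform. This is not Qlik's implementation; the model name, helper name, and sample data are illustrative placeholders.

```python
# Minimal illustrative sketch of an "OpenAI connector": forward a prompt,
# enriched with enterprise data, to OpenAI's chat completions endpoint and
# return the generated text. Model name and sample data are placeholders.
import os
import requests

OPENAI_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # never hard-code credentials


def ask_openai(question: str, enterprise_context: str, model: str = "gpt-4") -> str:
    """Send a question plus internal data context to OpenAI and return the answer."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "Answer using only the data provided."},
            {"role": "user", "content": f"{question}\n\nData:\n{enterprise_context}"},
        ],
    }
    resp = requests.post(
        OPENAI_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    sales = "region,revenue\nEMEA,1.2M\nAPAC,0.9M\nAMER,1.5M"
    print(ask_openai("Which region had the highest revenue?", sales))
```

In production, a connector layers governance on top of this pattern: which data is allowed to leave the platform, which model is called, and how the response is stored.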

Users can then harness ChatGPT capabilities outside of the ChatGPT message box, and ChatGPT can provide answers based on prompts that contain internal enterprise data. We’ll examine why OpenAI connectors matter so much in the next section, but first let’s see what sets generative AI apart from the AI most likely already included in an enterprise analytics platform.

How Does Generative AI Differ From Other AI Such as ML?

Linking enterprise technology platforms to the external services that offer the leading generative AI models matters, because generative AI is fundamentally different from traditional AI in terms of objectives, use cases, and benefits.

Traditional AI is usually designed to perform specific tasks based on predefined rules and patterns, excelling at pattern recognition and analysis but not creating anything new. Examples include voice assistants, recommendation engines, and search algorithms. 

Generative AI, on the other hand, creates new and original content by generating fresh text, images, music, or computer code. For example, OpenAI’s GPT-4 can produce human-like text that is almost indistinguishable from text written by a person.

While traditional AI powers chatbots, recommendation systems, and predictive analytics, generative AI has vast potential in art and design, marketing, education, and research, revolutionizing fields where creation and innovation are key. Because these models work so differently, their use cases differ sharply too.

Generative AI in the Enterprise

Since ChatGPT’s release there has been intense focus on the benefits of generative AI in the business environment. Though some of the noise around generative AI is hype, there are also plenty of real-world use cases that can genuinely deliver in an enterprise setting.

McKinsey & Company’s analysis of a range of use cases suggests that generative AI will add trillions of dollars to the world’s economy. It will take time to see exactly where generative AI proves most impactful, but initial indications suggest it will contribute efficiency gains or entirely new capabilities across a broad range of business functions.

For many of these use cases, of course, companies need to find a way to expose generative AI capabilities to their internal data sets – going beyond the ability to paste simple information into a ChatGPT conversation. That is where OpenAI connectors come in.

Looking at the New AI Connectors for Qlik

Qlik has a long track record of offering cutting-edge tools to enhance the data analytics platform it offers businesses. As Qlik celebrates its 30th anniversary, the company is expanding its machine learning and AI capabilities to ensure customers can leverage the latest industry innovations.

In June, Qlik released two OpenAI connectors. The first, the Qlik OpenAI Analytics Connector, allows real-time access to generative content in Qlik Sense apps by securely integrating natural language insights from OpenAI into analytics apps. Users can incorporate third-party data into existing models, put questions to ChatGPT alongside Qlik data, and more.

The Qlik OpenAI Connector for Application Automation helps developers improve their workflows using AI and LLM-generated content for creating expressions, commands, or scripts. That includes sentiment analysis across Qlik data, translation, and summarizing external text for internal audiences – for market analysis, for example.
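
As an illustration of that kind of task, the sketch below runs a simple sentiment classification over a handful of text values via the OpenAI API. It is not the connector's own interface (the connector surfaces this inside Qlik Application Automation workflows); the model name, function name, and sample comments are assumptions made for the example.

```python
# Illustrative only: classify the sentiment of a batch of comments with an LLM,
# the kind of task an automation workflow can hand off to OpenAI.
# Uses the official openai Python package; model and comments are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_sentiment(comments: list[str], model: str = "gpt-4") -> list[str]:
    """Return one label per comment: positive, negative, or neutral."""
    labels = []
    for text in comments:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Reply with exactly one word: positive, negative or neutral."},
                {"role": "user", "content": text},
            ],
        )
        labels.append(response.choices[0].message.content.strip().lower())
    return labels


print(classify_sentiment([
    "The new dashboard is fantastic and easy to use.",
    "Support took three days to reply.",
]))
```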

How Do OpenAI Connectors Work in Practice?

Thanks to Qlik’s open platform and its integration capabilities – now including OpenAI – the company’s clients and partners have many ways to employ generative AI in their analytics, including the resources to create and develop their own cutting-edge solutions.

They also retain full governance and control over the interaction with OpenAI, including what data they transmit and how they choose to use it within Qlik Cloud.

[Image: snapshot from the YouTube demo showing how OpenAI connectors work in practice]

This YouTube video (snapshot above) demonstrates the power of generative AI by using ChatGPT to extract valuable insights directly from Qlik Cloud. The example looks at the top 10 agricultural disasters and shows how, through a few simple steps, a user can trigger a natural language query in which Qlik communicates with ChatGPT.

Qlik can turn the ChatGPT response into a data set and drop it straight into the catalog, fully available to anyone who has access to that space. In addition, because Qlik has imported the ChatGPT response, the new data set also carries a description of what that data is, so Qlik can understand it within the context of your organization. As the business use cases of generative AI become more tangible, it’s a fitting development to mark 30 years of innovation at Qlik.
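
For readers curious about the underlying pattern, here is a minimal standalone Python sketch of the same flow: ask the model for a small table, parse the reply into rows, and attach a description so the result can be catalogued. Qlik Cloud handles this natively; the prompt, model name, and field names below are placeholders.

```python
# Illustrative sketch of the idea shown in the demo: request a small table from
# the model, parse the reply into tabular records, and attach a human-readable
# description so the result can be catalogued as a data set.
import csv
import io

from openai import OpenAI

client = OpenAI()

prompt = ("List the top 10 agricultural disasters as CSV with the columns "
          "year,country,event,estimated_losses_usd. Return only the CSV.")

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

rows = list(csv.DictReader(io.StringIO(reply)))  # ChatGPT text -> rows of dicts

dataset = {
    "name": "top_10_agricultural_disasters",
    "description": "Top 10 agricultural disasters, generated via ChatGPT on request.",
    "rows": rows,
}
print(len(dataset["rows"]), "rows ready for the catalog")
```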

Generative AI Brings New Capabilities to the Enterprise

Writing database scripts can be challenging, and one of the advantages of OpenAI connectors is that users can now write queries in natural language instead. Apply that capability to enterprise data and the opportunities multiply quickly, with Qlik’s Insight Advisor generating a range of charts from the results.
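
The sketch below illustrates the general natural-language-to-query pattern using generic SQL. Qlik's connectors work with Qlik expressions, scripts, and Insight Advisor rather than raw SQL, so the schema, question, and model name here are all assumptions made for the example.

```python
# Generic illustration of natural-language-to-query generation: give the model
# a table schema and a plain-English question, ask it to return a SQL statement.
# Schema, question and model name are placeholders; review output before use.
from openai import OpenAI

client = OpenAI()

SCHEMA = "sales(order_id INT, region TEXT, amount NUMERIC, order_date DATE)"
question = "Total sales per region for 2023, highest first."

sql = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Translate the user's question into a single SQL query "
                    f"against this schema: {SCHEMA}. Return only the SQL."},
        {"role": "user", "content": question},
    ],
).choices[0].message.content

print(sql)  # inspect before running against a real database
```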

And this is all just a start, so I would encourage you to experiment with the OpenAI connectors yourself. Qlik provides a simple tutorial here, or if you’re familiar with JSON you can view a more technical example here. Happy experimenting! Many thanks, Sally

About the Author

A highly experienced chief technology officer, professor in advanced technologies, and a global strategic advisor on digital transformation, Sally Eaves specialises in the application of emergent technologies, notably AI, 5G, cloud, security, and IoT disciplines, for business and IT transformation, alongside social impact at scale, especially from sustainability and DEI perspectives.

An international keynote speaker and author, Sally was an inaugural recipient of the Frontier Technology and Social Impact award, presented at the United Nations, and has been described as the “torchbearer for ethical tech”, founding Aspirational Futures to enhance inclusion, diversity, and belonging in the technology space and beyond. Sally is also the chair for the Global Cyber Trust at GFCYBER. 

TECHNOLOGY

Next-gen chips, Amazon Q, and speedy S3

By Cloud Computing News

AWS re:Invent, which runs from November 27 to December 1, has had its usual plethora of announcements: a total of 21 at the time of writing.

Perhaps not surprisingly, given the huge potential impact of generative AI – ChatGPT officially turns one year old today – much of the focus of AWS’ announcements has been on AI, including a major partnership inked with NVIDIA across infrastructure, software, and services.

Yet there has been plenty more announced at the Las Vegas jamboree besides. Here, CloudTech rounds up the best of the rest:

Next-generation chips

This was the other major AI-focused announcement at re:Invent: the launch of two new chips, AWS Graviton4 and AWS Trainium2, for training and running AI and machine learning (ML) models, among other customer workloads. Graviton4 shapes up against its predecessor with 30% better compute performance, 50% more cores and 75% more memory bandwidth, while Trainium2 delivers up to four times faster training than before and can be deployed in EC2 UltraClusters of up to 100,000 chips.

The EC2 UltraClusters are designed to ‘deliver the highest performance, most energy efficient AI model training infrastructure in the cloud’, as AWS puts it. With them, customers will be able to train large language models in ‘a fraction of the time’, while doubling energy efficiency.

As ever, AWS points to customers who are already utilising these tools: Databricks, Epic and SAP are among the companies cited as using the new AWS-designed chips.

Zero-ETL integrations

AWS announced new Amazon Aurora PostgreSQL, Amazon DynamoDB, and Amazon Relational Database Service (Amazon RDS) for MySQL integrations with Amazon Redshift, AWS’ cloud data warehouse. The zero-ETL integrations – eliminating the need to build ETL (extract, transform, load) data pipelines – make it easier to connect and analyse transactional data across various relational and non-relational databases in Amazon Redshift.

A simple example of how zero-ETL works: a hypothetical company stores transactional data – time of transaction, items bought, where the transaction occurred – in a relational database, but uses another analytics tool to analyse data held in a non-relational database. To connect it all up, the company would previously have had to construct ETL data pipelines, which are a time and money sink.
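
To show the kind of work zero-ETL removes, here is a toy Python sketch of a hand-built pipeline. The in-memory lists stand in for the transactional database and the warehouse, and none of this is AWS-specific code.

```python
# Toy version of the extract-transform-load work that zero-ETL integrations
# eliminate. In reality each step means connectors, scheduling, retries and
# schema maintenance that teams have to build and keep running.

# Extract: pull raw rows from the transactional (relational) source.
transactions = [
    {"ts": "2023-11-28T10:41:00", "item": "keyboard", "amount": 49.0, "store": "SEA"},
    {"ts": "2023-11-28T11:02:00", "item": "monitor", "amount": 199.0, "store": "NYC"},
]


# Transform: reshape each row into the schema the analytics store expects.
def transform(row: dict) -> dict:
    return {
        "date": row["ts"][:10],
        "store": row["store"],
        "revenue": row["amount"],
    }


# Load: write the transformed rows into the analytics (warehouse) side.
warehouse: list[dict] = []
warehouse.extend(transform(r) for r in transactions)

print(warehouse)
# With zero-ETL, the warehouse sees new transactions without any of this code.
```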

The latest integrations “build on AWS’s zero-ETL foundation… so customers can quickly and easily connect all of their data, no matter where it lives,” the company said.

Amazon S3 Express One Zone

AWS announced the general availability of Amazon S3 Express One Zone, a new storage class purpose-built for customers’ most frequently-accessed data. Data access speed is up to 10 times faster and request costs up to 50% lower than standard S3. Companies can also opt to collocate their Amazon S3 Express One Zone data in the same availability zone as their compute resources.  

Companies and partners who are using Amazon S3 Express One Zone include ChaosSearch, Cloudera, and Pinterest.

Amazon Q

A new product, and an interesting pivot, again with generative AI at its core. Amazon Q was announced as a ‘new type of generative AI-powered assistant’ which can be tailored to a customer’s business. “Customers can get fast, relevant answers to pressing questions, generate content, and take actions – all informed by a customer’s information repositories, code, and enterprise systems,” AWS added. The service can also assist companies building on AWS, as well as companies using AWS applications for business intelligence, contact centres, and supply chain management.

Customers cited as early adopters include Accenture, BMW and Wunderkind.

TECHNOLOGY

HCLTech and Cisco create collaborative hybrid workplaces

By Cloud Computing News

Digital comms specialist Cisco and global tech firm HCLTech have teamed up to launch Meeting-Rooms-as-a-Service (MRaaS).

Available on a subscription model, this solution modernises legacy meeting rooms and enables users to join meetings from any meeting solution provider using Webex devices.

The MRaaS solution helps enterprises simplify the design, implementation and maintenance of integrated meeting rooms, enabling seamless collaboration for their globally distributed hybrid workforces.

Rakshit Ghura, senior VP and global head of Digital Workplace Services at HCLTech, said: “MRaaS combines our consulting and managed services expertise with Cisco’s proficiency in Webex devices to change the way employees conceptualise, organise and interact in a collaborative environment for a modern hybrid work model.

“The common vision of our partnership is to elevate the collaboration experience at work and drive productivity through modern meeting rooms.”

Alexandra Zagury, VP of Partner Managed and as-a-Service Sales at Cisco, said: “Our partnership with HCLTech helps our clients transform their offices through cost-effective managed services that support the ongoing evolution of workspaces.

“As we reimagine the modern office, we are making it easier to support collaboration and productivity among workers, whether they are in the office or elsewhere.”

Cisco’s Webex collaboration devices harness the power of artificial intelligence to offer intuitive, seamless collaboration experiences, enabling meeting rooms with smart features such as meeting zones, intelligent people framing, optimised attendee audio and background noise removal, among others.

TECHNOLOGY

Canonical releases low-touch private cloud MicroCloud

By Cloud Computing News

Canonical has announced the general availability of MicroCloud, a low-touch, open source cloud solution. MicroCloud is part of Canonical’s growing cloud infrastructure portfolio.

It is purpose-built for scalable clusters and edge deployments for all types of enterprises. It is designed with simplicity, security and automation in mind, minimising the time and effort to both deploy and maintain it. Conveniently, enterprise support for MicroCloud is offered as part of Canonical’s Ubuntu Pro subscription, with several support tiers available, and priced per node.

MicroClouds are optimised for repeatable and reliable remote deployments. A single command initiates the orchestration and clustering of various components with minimal involvement by the user, resulting in a fully functional cloud within minutes. This simplified deployment process significantly reduces the barrier to entry, putting a production-grade cloud at everyone’s fingertips.

Juan Manuel Ventura, head of architectures & technologies at Spindox, said: “Cloud computing is not only about technology, it’s the beating heart of any modern industrial transformation, driving agility and innovation. Our mission is to provide our customers with the most effective ways to innovate and bring value; having a complexity-free cloud infrastructure is one important piece of that puzzle. With MicroCloud, the focus shifts away from struggling with cloud operations to solving real business challenges.”

In addition to seamless deployment, MicroCloud prioritises security and ease of maintenance. All MicroCloud components are built with strict confinement for increased security, with over-the-air transactional updates that preserve data and roll back on errors automatically. Upgrades to newer versions are handled automatically and without downtime, with the mechanisms to hold or schedule them as needed.

With this approach, MicroCloud caters not only to on-premise clouds but also to edge deployments at remote locations, allowing organisations to use the same infrastructure primitives and services wherever they are needed. It is suitable for business-in-branch office locations or industrial use inside a factory, as well as distributed locations where the focus is on replicability and unattended operations.

Cedric Gegout, VP of product at Canonical, said: “As data becomes more distributed, the infrastructure has to follow. Cloud computing is now distributed, spanning across data centres, far and near edge computing appliances. MicroCloud is our answer to that.

“By packaging known infrastructure primitives in a portable and unattended way, we are delivering a simpler, more prescriptive cloud experience that makes zero-ops a reality for many industries.”

MicroCloud’s lightweight architecture makes it usable on both commodity and high-end hardware, with several ways to further reduce its footprint depending on your workload needs. In addition to the standard Ubuntu Server or Desktop, MicroClouds can be run on Ubuntu Core – a lightweight OS optimised for the edge. With Ubuntu Core, MicroClouds are a perfect solution for far-edge locations with limited computing capabilities. Users can choose to run their workloads using Kubernetes or via system containers. System containers based on LXD behave similarly to traditional VMs but consume fewer resources while providing bare-metal performance.

Coupled with Canonical’s Ubuntu Pro + Support subscription, MicroCloud users can benefit from an enterprise-grade open source cloud solution that is fully supported and comes with better economics. An Ubuntu Pro subscription offers security maintenance for the broadest collection of open source software available from a single vendor today. It covers over 30,000 packages with a consistent security maintenance commitment, and adds features such as kernel livepatch, systems management at scale, and certified compliance and hardening profiles, enabling easy adoption for enterprises. With per-node pricing and no hidden fees, customers can rest assured that their environment is secure and supported without the expensive price tag typically associated with cloud solutions.
