The Pathway to Artificial General Intelligence (AGI) and the Era of Broad AI

Artificial Intelligence (AI) is set to improve further in 2024, with large language models poised to advance significantly.

2023 was an exciting year for AI, with Generative AI gaining momentum and rapid adoption, in particular models employing the Large Language Model (LLM) architecture such as those from OpenAI (GPT-4), Anthropic (Claude), and the open-source community (Llama 2, Falcon, Mistral, Mixtral, and many more).

2024 may turn out to be an even more exciting year as AI takes centre stage everywhere, including at CES 2024, and large language models are expected to advance even further.

What is Artificial Intelligence (AI) and What Stage Are We At?


AI deals with developing computing systems capable of performing tasks that humans are very good at, for example recognising objects, recognising and making sense of speech, and making decisions in a constrained environment.

Narrow AI (ANI): the field of AI where the machine is designed to perform a single task, and gets very good at performing that particular task. However, once trained, the machine does not generalise to unseen domains. This form of AI (Google Translate, for example) represented the era of AI that we were in until recent times.


Broad AI (ABI): the MIT-IBM Watson AI Lab explains that “Broad AI is next. We’re just entering this frontier, but when it’s fully realized, it will feature AI systems that use and integrate multimodal data streams, learn more efficiently and flexibly, and traverse multiple tasks and domains. Broad AI will have powerful implications for business and society.”

IBM further explains that “Systems that execute specific tasks in a single domain are giving way to Broad AI that learns more generally and works across domains and problems. Foundation models, trained on large, unlabelled datasets and fine-tuned for an array of applications, are driving this shift.”

The emergence of Broad AI capabilities is recent with Francois Chollet (On the Measure of Intelligence) arguing in 2019 that “even the most advanced AI systems today do not belong to this (broad generalization) category…”

For more on the journey of Broad AI, see Sepp Hochreiter, Toward a Broad AI (2022): “A broad AI is a sophisticated and adaptive system, which successfully performs any cognitive task by virtue of its sensory perception, previous experience, and learned skills.”

However, the author clarifies that ABI models do not possess the overall general capabilities of the human brain.


Artificial General Intelligence (AGI): a form of AI that can accomplish any intellectual task that a human being can. It is more conscious and makes decisions in a way similar to how humans make decisions. It is also referred to as ‘strong AI’, and IBM describes AGI, or Strong AI, as possessing an intelligence equal to humans, with self-aware consciousness and an ability for problem solving, learning, and planning for the future. In effect it would result in ‘intelligent machines that are indistinguishable from the human mind’.

Advertisement

AGI remains an aspiration at this moment in time, with forecasts of its arrival ranging from 2025 to 2049, or even never. It may arrive within the next decade, but it faces challenges relating to hardware, in particular the energy consumption required by today’s powerful machines. The author personally believes that the 2030s is a more probable arrival time.

Artificial Super Intelligence (ASI): a form of intelligence that exceeds the performance of humans in all domains (as defined by Nick Bostrom), including aspects such as general wisdom, problem solving, and creativity. The author’s personal view is that humans will use human-computer interfaces (possibly a wireless cap or headset) to leverage advanced AI and themselves become the ASI (perhaps through a future merger of Neuromorphic Computing and Quantum capabilities, referred to as Quantum Neuromorphic Computing).

Where Are We in Terms of AI Today?

The arrival of GPT-4 from OpenAI triggered a great deal of debate across social media, with some commenting that since GPT-4 was not narrow AI it therefore had to be Artificial General Intelligence (AGI). The author will explain why that is not the case.

AGI is unlikely to magically appear overnight; rather, it is more likely to arrive via a process of ongoing evolutionary advancement in AI research and development.

We were in the era of Narrow AI until recently. However, many state-of-the-art (SOTA) models can now go beyond narrow AI (ANI): increasingly we are experiencing Generative AI models built on LLMs, which in turn apply the Transformer architecture with self-attention, and these models are able to demonstrate multimodal, multitasking capabilities.


Nevertheless, it would not be accurate to state that the current SOTA models are at the level of the human brain, or AGI, in particular on logic and reasoning tasks (including common sense).


We are in the era of Broad AI (or Artificial Broad Intelligence, ABI), whereby the Generative AI models are not narrow (Artificial Narrow Intelligence, or ANI), as they can perform more than one task, but nor are they AGI, as they are not at the level of the intelligence and capabilities of the human brain.


The advanced robots of the sci-fi movies are not yet present in our everyday lives. However, as AI technology advances, AI is increasingly being embedded into advanced robotics, and robot technology is rapidly advancing: for example, Stanford-based researchers introduced the Mobile ALOHA robot, which learns from humans to cook, clean, and do the laundry.

[Image: Mobile ALOHA, credit Tony Z. Zhao]

Furthermore, Machine Learning engineer Santiago Valdarrama demonstrated how to incorporate ChatGPT into Spot, the Boston Dynamics robot.


The Pathway Towards Advanced AI Capabilities

Even with all of the above accomplished, the author argues that we still would not have machine intelligence and capabilities to match a human brain, due to the challenge of computational and resource efficiency, in particular energy consumption.

Joey deVilla, in “The best social media post about AGI that you might have missed” (Jan 2024), flagged a post on Threads by Buzz Andersen that sums up the challenge for AGI that genuinely matches the capabilities of the human brain:

[Image: Buzz Andersen’s Threads post]

ChatGPT (GPT-3.5) and GPT-4 from OpenAI have been extremely impressive models in terms of capabilities. However, we need to address the issue of advancing the performance and capabilities of models while improving energy efficiency.


ChatGPT is reported to consume as much energy as 33,000 US households on a daily basis, and may have consumed as much electricity as 175,000 Danes did in January 2023!
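As a rough sense of scale, and assuming an average US household uses about 29 kWh per day (in line with EIA annual figures of roughly 10,500 kWh per year), the reported comparison implies on the order of a gigawatt-hour every day:

```python
# Back-of-envelope check of the household comparison above.
# Assumption: an average US household uses ~29 kWh/day (EIA annual figures).
households = 33_000
kwh_per_household_per_day = 29
daily_kwh = households * kwh_per_household_per_day
print(f"{daily_kwh:,} kWh/day, i.e. about {daily_kwh / 1e6:.2f} GWh per day")
```

The exact figure matters less than the order of magnitude: daily consumption in the hundreds of megawatt-hours is data-centre-scale, not device-scale.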

In an era of transitioning to a low-carbon economy, such consumption means we need to move towards greater computational resource efficiency. This is also desirable from an economic perspective as firms seek to scale AI on a cost-efficient basis.

Furthermore, looking ahead to the arrival of AGI, some researchers warn of the dangers of AGI’s energy consumption, in effect a competition with humans for energy supply.

For example, AI researchers have flagged the risk of advanced AI models competing with humans for energy and turning our planet into data centres, with solar panels everywhere.

Moreover, research scientists at Google DeepMind and the University of Oxford authored a paper arguing that an advanced AI could compete with humans for a limited energy supply.

However, the flip side as pointed out by the IEA is that AI may in fact play a key role in helping efficiently manage complex power grids.


Hence, the question is how society should manage the negative risks of powerful AI models against their upside benefits. A starting point would be to make the models more resource efficient, including from an energy consumption perspective, which in turn delivers economic cost benefits to companies and users too.

Techniques Being Employed to Make LLMs More Efficient (Non-Exhaustive List)

Both the tech majors and the open-source community have been advancing approaches that make LLMs more efficient. For the open-source community, finding efficiency solutions is important because many in the community lack the resources of a large major. However, even the tech majors are increasingly aware that scaling massive LLMs to vast numbers of users results in huge server and energy costs, and consequently a larger carbon footprint.

Examples of advances in making Generative AI models more efficient:

LoRA (Low-Rank Adaptation of Large Language Models) 

LoRA is a technique that materially reduces the number of parameters updated during training: a small number of new weights are inserted into the model, and only these new weights are trained. This results in training that is significantly faster and more memory efficient, and produces model weights that are easier to share and store due to their reduced size.
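A minimal NumPy sketch of the idea; the dimensions and rank here are illustrative, and real implementations (such as Hugging Face’s PEFT library) insert the low-rank factors into the model’s attention layers:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 512, 512, 8                 # weight is d x k; LoRA rank r << min(d, k)

W = rng.normal(size=(d, k))           # frozen pretrained weight (never updated)
A = rng.normal(size=(r, k)) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                  # zero-initialised, so training starts from W

def forward(x):
    # The effective weight is W + B @ A; only A and B receive gradient updates.
    return x @ (W + B @ A).T

full_params = W.size                  # what full fine-tuning would update
lora_params = A.size + B.size         # what LoRA updates instead
print(full_params, lora_params)       # 262144 vs 8192 (~3% of the parameters)
```

Because only A and B are trained and shipped, a fine-tune can be distributed as a few megabytes of adapter weights rather than a full model copy.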

FlashAttention is another innovation, providing fast and memory-efficient exact attention by making the algorithm IO-aware (accounting for reads and writes between levels of GPU memory).
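The heart of the approach is an “online softmax” that streams over blocks of keys and values while carrying a running maximum and denominator, so the full n x n score matrix is never materialised at once. A NumPy sketch of that core idea (the real kernel also tiles the queries and manages GPU memory movement explicitly):

```python
import numpy as np

def blocked_attention(Q, K, V, block=2):
    # Exact attention computed block-by-block over keys/values using a running
    # row-wise max and softmax denominator (the online-softmax trick behind
    # FlashAttention). Only one block of scores exists in memory at a time.
    n, d = Q.shape
    m = np.full(n, -np.inf)                 # running row-wise max
    l = np.zeros(n)                         # running softmax denominator
    acc = np.zeros((n, V.shape[1]))         # unnormalised output accumulator
    for s in range(0, K.shape[0], block):
        Kb, Vb = K[s:s+block], V[s:s+block]
        S = Q @ Kb.T / np.sqrt(d)           # scores for this block only
        m_new = np.maximum(m, S.max(axis=1))
        scale = np.exp(m - m_new)           # rescale earlier partial sums
        P = np.exp(S - m_new[:, None])
        l = l * scale + P.sum(axis=1)
        acc = acc * scale[:, None] + P @ Vb
        m = m_new
    return acc / l[:, None]

rng = np.random.default_rng(3)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(16, 8)), rng.normal(size=(16, 5))
out = blocked_attention(Q, K, V, block=4)
print(out.shape)   # (4, 5)
```

The result is numerically identical to standard attention; the savings come entirely from avoiding the quadratic intermediate.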


Model pruning: non-essential components of a model, such as weights that contribute little to its outputs, can be pruned, making the model more compact while largely maintaining its performance.
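One common variant is magnitude pruning, which zeroes the weights with the smallest absolute values. A minimal NumPy sketch, for illustration only:

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    # Zero out the `sparsity` fraction of weights with the smallest magnitudes.
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k]
    mask = np.abs(w) >= threshold
    return w * mask, mask

rng = np.random.default_rng(2)
w = rng.normal(size=(8, 8))                    # pretend layer weights
pruned, mask = magnitude_prune(w, sparsity=0.5)
print(f"{mask.mean():.0%} of weights kept")    # 50% of weights kept
```

In practice the pruned model is then fine-tuned briefly to recover any lost accuracy, and sparse storage or sparse kernels convert the zeros into real memory and compute savings.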

LLM quantization: quantization is a compression technique that converts a model’s parameters, typically stored as 16-bit or 32-bit floating-point numbers, into single-byte or smaller integers, significantly reducing the size of an LLM.
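A minimal sketch of symmetric 8-bit quantisation in NumPy; production LLM quantisers (GPTQ, AWQ, bitsandbytes and the like) are considerably more sophisticated, but the size-versus-precision trade-off is the same:

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantisation: map floats onto int8 via one scale.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64)).astype(np.float32)   # pretend layer weights
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32; the price is a bounded rounding
# error of at most half a quantisation step per weight.
max_err = np.abs(w - dequantize(q, scale)).max()
print(q.nbytes, w.nbytes)   # 4096 vs 16384 bytes
```

Real quantisers typically use per-channel or per-group scales and calibration data to keep the rounding error from degrading model quality.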

In addition, hardware solutions may offer computational resource efficiencies that also deliver energy, and hence carbon footprint, savings: for example, the 5th Gen Intel Xeon Scalable Processors, and the work that IBM is undertaking with an analog AI chip, among others. This will enable the rise of the AIoT, whereby AI scales across the edge of the network, across devices in power-constrained environments where efficiency and low latency are key.

[Image: Intel benchmark stats, 5th Gen Intel Xeon Processors vs 4th Gen]

Source for image above: Intel (stats 5th Gen Intel Xeon Processors vs 4th Gen).

Firms may wish to consider model architectures that balance performance capabilities against resource costs (computational costs, including energy and carbon footprint), and to weigh the net present value (NPV) or return on investment (ROI) that efficient hardware such as the 5th Gen Intel Xeon Scalable Processors may deliver, in particular for inferencing and/or fine-tuning models of less than 20Bn parameters with low latency, as proposed by the author previously.

The author believes that in the longer term Quantum Computing may provide a potential pathway to advancing AI towards ASI; however, a possible pathway towards AGI (that is also energy efficient) may be provided by Spiking Neural Networks combined with Dendritic Computing, within Neuromorphic Computing.


Spiking Neural Networks (SNNs), in particular when combined with Dendritic Computing, are more closely aligned with our own human brains than the typical Artificial Neural Network (ANN) architectures of Deep Learning. SNNs are more energy efficient than ANNs, can be architected for ultra-low latency, may engage in continuous learning, and, as they can be deployed at the edge of the network (on the device itself), they can be more data secure too.
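A minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic unit of an SNN, with illustrative parameter values; the neuron is silent most of the time and only “fires” occasionally, which is the source of the energy savings on event-driven hardware:

```python
def lif_neuron(input_current, beta=0.9, threshold=1.0):
    # Leaky integrate-and-fire: the membrane potential leaks by `beta` each
    # time step, integrates the input current, and emits a spike (then resets)
    # when it crosses the threshold.
    v, spikes = 0.0, []
    for i in input_current:
        v = beta * v + i
        if v >= threshold:
            spikes.append(1)
            v = 0.0                    # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input yields sparse, event-driven output:
# the neuron fires on only 5 of 20 time steps here.
spikes = lif_neuron([0.3] * 20)
print(spikes)
```

Libraries such as snnTorch and Norse build trainable networks from exactly this kind of unit, communicating in binary spikes rather than dense floating-point activations.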

Neuroscientists have found that dendrites help explain our brain’s unique computing power. Scientists have also reported observing, for the first time, a form of cell messaging within the human brain that was considered unique, potentially indicating that the human brain possesses greater computational power than previously believed.


“Dendrites are central to understanding the brain because they are at the core of what determines the computational power of single neurons,” neuroscientist Matthew Larkum (Humboldt University) told Walter Beckwith of the American Association for the Advancement of Science (Dendrite Activity May Boost Brain Processing Power, January 2020).

Research has set out the potential computational advantages of dendritic amplification and the potential for leveraging dendritic properties to advance Machine Learning and neuro-inspired computing.

Moreover, research has also shown that a dendrite alone can perform complex computations; hence the multi-parallel processing power of a single neuron is far beyond what was conventionally assumed.

In addition, researchers are seeking to better understand how memory is stored in dendritic spines within the brain, and the potential to treat diseases such as Alzheimer’s. This points to dendrites playing an important role in the human brain, and yet ANN architectures do not possess dendrites (nor do a number of the SNN architectures that have emerged in recent years, albeit the author believes this will change going forwards).


Furthermore, dendritic pre-processing has been shown to reduce the size of network required for a threshold level of performance, and SNNs with Dendritic Computing could entail running on watts instead of megawatts.

By leveraging analog signals and continuous dynamics, neuromorphic computing can improve the speed, accuracy, and adaptability of AI applications while overcoming traditional computing’s limitations, such as latency, power consumption, and scalability.

This will lead to the Internet of Everything (IoE), where efficient AI agents run locally across all internet-connected devices, providing intelligent responses and hence mass hyper-personalisation at scale across all interactions, referred to in turn as the AIoE.

The AIoT, and the AIoE that follows it, describe a world where devices communicate with each other and dynamically interact with humans.

Examples of Neuromorphic Computing hardware include Intel’s Loihi 2 chip, IBM’s TrueNorth and NorthPole chips, and the SpiNNaker platform.

Moreover, Neuromorphic Computing may play an essential role in enabling mixed reality glasses, given their powerful computational needs alongside energy constraints and the need for low latency.



The author applauds the work of the likes of Jason Eshraghian et al. (2023) on SpikeGPT, and this article also serves as an invitation to Jason Eshraghian and other researchers to explore collaborations in this area.

The author believes that in time Neuromorphic Computing, entailing Spiking Neural Networks (SNNs) with Dendritic Computing, will enable the emergence of efficient, advanced AI that is closer to the human brain and human-level intelligence. The author suggests that a realistic timeline for the emergence of ‘genuine AGI’ (that truly matches the capabilities of the human brain) is the 2030s. Foresight Bureau predicts the arrival of AGI in 2030, and Demis Hassabis of DeepMind stated that “I don’t see any reason why that progress is going to slow down. I think it may even accelerate. So I think we could be just a few years, maybe within a decade away.”

However, AI pioneer Herbert A. Simon predicted that AGI would arrive by 1985, so it is worth noting that no one can really state with certainty when AGI will actually arrive, with Geoffrey Hinton stating that “I now predict 5 to 20 years but without much confidence… Nobody really knows…”

This article is a strategic analysis of the current state of AI and the author’s strategic vision of what may come next. It is not intended to be relied upon, nor does it provide any representations or warranties, implicit or explicit. Those seeking to venture into the field of AI and LLMs should conduct their own due diligence and assessments as to which models, pipelines, and technical hardware solutions are appropriate for their use case and situation, as results may vary.

The next article in this series will consider how we may lay the foundations for the successful emergence of AGI and indeed for scaling ABI in terms of ethics and ensuring ESG values (including data governance).

About the Author

Imtiaz Adam is a Hybrid Strategist and Data Scientist. He is focussed on the latest developments in artificial intelligence and machine learning techniques, with a particular focus on deep learning. Imtiaz holds an MSc in Computer Science with research in AI (Distinction) from the University of London, an MBA (Distinction), a Sloan Fellowship in Strategy from London Business School, and an MSc in Finance with Quantitative Econometric Modelling (Distinction) from Cass Business School. He is the Founder of Deep Learn Strategies Limited, and served as Director & Global Head of a business he founded at Morgan Stanley in Climate Finance & ESG Strategic Advisory. He has strong expertise in enterprise sales & marketing, data science, and corporate & business strategy.







Why Malia Obama Received Major Criticism Over A Secret Facebook Page Dissing Trump


Given the divisive nature of both the Obama and Trump administrations, it’s unsurprising that reactions to Malia Obama’s alleged secret Facebook account would be emotional. Many online users were quick to jump to former President Donald Trump’s defense, with one user writing: “Dear Malia: Do you really think that anyone cares whether you and/or your family likes your father’s successor? We’re all trying to forget you and your family.”

Others pointed out the double standard of those who condemn Trump for hateful rhetoric but praise Malia for speaking out against her father’s successor in what they believe to be similarly hateful rhetoric. Some users seemed bent on criticizing Malia simply because they don’t like her or her father, proving that the eldest Obama daughter couldn’t win either way regarding the public’s perception of her or her online presence.

The secret Facebook situation is not all that dissimilar to critics who went after Malia for her professional name at the 2024 Sundance Film Festival. In this instance, people ironically accused Malia of using her family’s name to get into the competitive festival while also condemning her for opting not to use her surname, going by Malia Ann instead.




Best Practices for Data Center Decommissioning and IT Asset Disposition


Data center decommissioning is a complicated process that requires careful planning and experienced professionals.

If you’re considering shutting down or moving your data center, here are some best practices to keep in mind:

Decommissioning a Data Center is More than Just Taking Down Physical Equipment


Decommissioning a data center is more than just taking down physical equipment. It involves properly disposing of data center assets, including servers and other IT assets that can contain sensitive information. The process also requires a team with the right skills and experience to ensure that all data has been properly wiped from storage media before they’re disposed of.

Data Centers Can be Decommissioned in Phases, Which Allows For More Flexibility

When you begin your data center decommissioning process, it’s important to understand that it is not a one-time event. Instead, it’s a process that takes place over time and in phases. This flexibility allows you to adapt as circumstances change and to make adjustments based on your unique situation. For example:

  • You may start by shutting down parts of the facility (or all) while keeping others running until they are no longer needed or cost-effective to keep running.

  • When you’re ready for full shutdown, there could be some equipment still in use at other locations within the company (such as remote offices). These can be moved back into storage until needed again.

Data Center Decommissioning is Subject to Compliance Guidelines

Data center decommissioning is subject to compliance guidelines. Compliance guidelines may change, but they are always in place to ensure that your organization is following industry standards and best practices.

  • Local, state and federal regulations: You should check local ordinances regarding the disposal of any hazardous materials that were used in your data center (such as lead-based paint), as well as any other applicable laws related to environmental impact or safety issues. If you’re unsure about how these might affect your plans for a decommissioned facility, consult an attorney who specializes in this area of law before proceeding with any activities related to IT asset disposition or building demolition.

  • Industry standards: There are many industry associations dedicated to helping businesses stay compliant with legal requirements when moving forward with projects such as data center decommissioning.

  • Internal policies & procedures: Make sure everyone on staff understands how important compliance is, not just from a regulatory standpoint but also from an ethical one; nobody wants their name associated with anything inappropriate!

Companies Should Consider Safety and Security During the Decommissioning Process

Data center decommissioning is a complex process that involves several steps. Companies need to consider the risks associated with each step, and they should have a plan in place to mitigate those risks. The first step is identifying all assets and determining which ones will be reused or repurposed. At this point, you should also determine how long it will take for each asset to be repurposed or recycled, so that you can estimate the cost of this part of your project (for example, based on previous experience).
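The inventory step described above can be sketched as a simple data structure; the field names and the wipe rule below are illustrative, not an industry standard:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    asset_id: str
    kind: str          # e.g. "server", "switch", "storage array"
    disposition: str   # "reuse", "recycle", or "destroy"
    data_wiped: bool = False

DATA_BEARING = {"server", "storage array"}

def ready_for_removal(assets):
    # Data-bearing assets must be wiped before leaving the facility.
    return [a for a in assets if a.kind not in DATA_BEARING or a.data_wiped]

inventory = [
    Asset("srv-001", "server", "reuse", data_wiped=True),
    Asset("sto-002", "storage array", "destroy"),   # not yet wiped
    Asset("net-003", "switch", "recycle"),
]
print([a.asset_id for a in ready_for_removal(inventory)])   # ['srv-001', 'net-003']
```

Even a lightweight record like this makes it straightforward to report which assets are cleared to leave the site and which are still blocked on data sanitisation.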

The second step involves removing any hazardous materials from electronic equipment before it is sent off-site for recycling; this includes residues of chemicals used in manufacturing, such as solder pastes and adhesives on circuit boards. Once those materials have been removed, the devices can safely go through the remaining steps, such as shredding or grinding away the plastic housing until only the bare frame remains.

With Proper Planning and an Effective Team, You’ll Help Protect Your Company’s Future

Data center decommissioning is a complex process that should be handled by a team of experts with extensive experience in the field. With proper planning, you can ensure a smooth transition from your current data center environment to the next one.

The first step toward a successful data center decommissioning project is to create a plan for removing hardware and software assets from the building, and to document how these assets were originally installed in the facility. This will allow you, or another team member who may later inherit some of these assets, to easily find out where they need to go when it’s time for them to be moved again (or disposed of).

Use Professional Data Center Decommissioning Companies

In order to ensure that you get the most out of your data center decommissioning project, it’s important to use a professional data center decommissioning company. A professional data center decommissioning company has experience with IT asset disposition and can help you avoid mistakes in the process. They also have the tools and expertise needed to efficiently perform all aspects of your project, from pre-planning through finalizing documentation.

Proper Planning Will Help Minimize the Risks of Data Center Decommissioning


Proper planning is the key to success when it comes to the data center decommissioning process. It’s important that you don’t wait until the last minute and rush through this process, as it can lead to mistakes and wasted time. Proper planning will help minimize any risks associated with shutting down or moving a data center, keeping your company safe from harm and ensuring that all necessary steps are taken before shutdown takes place.


To Sum Up

The key to a successful ITAD program is planning ahead. The best way to avoid unexpected costs and delays is to plan your ITAD project carefully before you start. The best practices described in this article will help you understand what it takes to decommission an entire data center or other large facility, as well as how to dispose of its assets in an environmentally responsible manner.




Massive Volatility Reported – Google Search Ranking Algorithm Update


I am seeing some massive volatility being reported today after seeing a spike in chatter within the SEO community on Friday. I have not seen the third-party Google tracking tools show this much volatility in a long time. I will say the tracking tools are way more heated than the chatter I am seeing, so something might be off here.

Again, I saw some initial chatter from within the SEO forums and on this site starting on Friday. I decided not to cover it on Friday because the chatter was not at the levels that would warrant me posting something. Plus, while some of the tools started to show a lift in volatility, most of the tools did not yet.

To be clear, Google has not confirmed any update is officially going on.

Well, that changed today, and the tools are all superheated today.

Google Tracking Tools:

Let’s start with what the tools are showing:

[Charts: volatility trackers from Semrush, SimilarWeb, Mozcast, SERPmetrics, Advanced Web Rankings, Accuranker, Wincher, Mangools, SERPstat, Cognitive SEO and Algoroo, each showing elevated activity]

So most of these tools are incredibly heated, signaling that they are showing massive changes in the search result positions in the past couple of days.

SEO Chatter

Here is some of the chatter from various comments on this site and on WebmasterWorld since Friday:

Advertisement

Speaking of, is anyone seeing some major shuffling going on in the SERPs today? It’s a Friday so of course Google is playing around again.

Something is going on.

Pages are still randomly dropping out of the index for 8-36h at a time. Extremely annoying.

In SerpRobot I’m seeing a steady increase in positions in February, for UK desktop and mobile, reaching almost the ranks from the end of Sep 2023. Ahrefs shows a slight increase in overall keywords and ranks.

In the real world, nothing seems to happen.

yep, traffic has nearly come to a stop. But exactly the same situation happened to us last Friday as well.

USA traffic continues to be whacked…starting -70% today.

In my case, US traffic is almost zero (15 % from 80%) and the rest is kind of the same I guess. Traffic has dropped from 4K a day to barely scrapping 1K now. But a lot is just bots since payment-wise, the real traffic seems to be about 400-500. And … that’s how a 90% reduction looks like.

Something is happening now. Google algo is going crazy again. Is anyone else noticing?

Since every Saturday at 12 noon the Google traffic completely disappears until Sunday, everything looks normal to me.

This update looks like a weird one and no, Google has not confirmed any update is going on.

What are you all noticing?

Forum discussion at WebmasterWorld.


