Lars Nordenlund, CEO

Mind Amplifiers: The Cognitive Revolution Ascending to New Frontiers Beyond AI

Updated: Aug 2

This is a summary of Chapter 2: Disruptive Shifts in Technology: AI Automation & Generation. Published in Survival of The Strategic Fittest on Medium. Read the full version HERE

Few would dispute that Artificial Intelligence is completely changing the business landscape across most industries. To better understand the impact and radical shifts that transformative AI can have on businesses and customer behavior in the global marketplace, we need to understand the fundamental drivers behind AI technologies and the intent with which they are created, developed, and applied through decades of evolution.

Let’s get to it.

AI’s Evolution To Get To Today’s Disruptive Revolution

AI's long history has been marked by significant milestones, leading us to today’s era of AI Cognitive computing. AI emerged as a field of research back in the 1950s, focusing initially on rule-based systems and symbolic reasoning. Over the years, advancements in computer processing power, data availability, and algorithmic breakthroughs have propelled AI into new groundbreaking territories.

The convergence of these factors has set the stage for the tech revolution we are witnessing today. AI Cognitive computing has the potential to transform industries and societies, with applications ranging from personalized healthcare and autonomous vehicles to smart cities and virtual assistants. As research and development in AI continue to progress, we can expect further advancements that will shape our future and drive the next wave of innovation.

AI Cognitive Computing Is Mimicking Human Functions Like Mind Amplifiers

AI Cognitive Computing aims to replicate human thought processes and decision-making capabilities. It involves the development of algorithms and models that can perceive, reason, learn, and interact with the environment in a manner similar to human cognition. In effect, they act as cognitive mind amplifiers.

In recent years, the rise of deep learning and neural networks has revolutionized the field. Deep learning algorithms, inspired by the structure and function of the human brain, enable machines to learn from vast amounts of data and make complex recommendations. This breakthrough has fueled the development of AI Cognitive computing, where machines possess the ability to understand, reason, and learn like humans.

A prominent example of AI cognitive computing is Generative AI, which includes natural language processing systems that can understand and generate human language, such as ChatGPT. In its own words, ChatGPT is “an AI-powered chatbot developed by OpenAI, based on the GPT (Generative Pretrained Transformer) language model. It uses deep learning techniques to generate human-like responses to text inputs in a conversational manner.”

Cognitive Computing vs Traditional AI/Machine Learning Approaches

Cognitive computing is all about interaction with humans, often providing insights that are discovered or “learned” autonomously (we call these insights “unsupervised” — more on that later) which can then be used by humans to make better decisions. These could be broad-reaching business decisions, such as choosing an emerging market to increase portfolio investment; or decisions related to safety, such as locking down accounts that appear fraudulent; or even life-saving decisions, such as choosing the right treatment for a cancer patient.

Traditional approaches to machine learning and artificial intelligence (which do not involve AI cognitive computing) learn from labeled training data. Humans must manually add these labels and then test whether the model can use them to accurately predict the labels on new unlabeled data. We call this type of learning “supervised”, because humans are supplementing the data provided to the machine, rather than the machine supplementing the data a human needs to make a decision.

In the case of fraud analysis, humans would supervise the model by manually labeling transactions as either fraudulent or non-fraudulent, and then test whether the model can accurately predict the label for new transactions. A recent example of this in the medical field involved researchers labeling thousands of breast biopsy slides as benign or malignant in order to train a neural network to recognize cancerous tissue on new patient biopsies. While this traditional approach may help detect cancer earlier and lead to better treatment, it is also tedious. New cognitive computing approaches are proving to be even more helpful because they don’t require labeled data.
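The supervised workflow described above can be sketched in a few lines of Python. Everything here is hypothetical and deliberately simplified: a handful of human-labeled transactions (amount, hour of day) stand in for real training data, and a nearest-centroid rule stands in for a real model. The point is only the shape of supervised learning: humans supply the labels, the model predicts them on new, unlabeled data.

```python
# Toy sketch of "supervised" learning on human-labeled transactions.
# Features and labels are invented for illustration only.

def train_centroids(examples):
    """examples: list of (features, label). Returns per-label mean vectors."""
    sums, counts = {}, {}
    for x, y in examples:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, x):
    """Assign the label whose centroid is closest (squared distance)."""
    return min(centroids, key=lambda y: sum((a - b) ** 2
                                            for a, b in zip(x, centroids[y])))

# Humans supply the labels (the "supervision"): [amount, hour-of-day]
labeled = [
    ([12.0, 14], "legit"), ([30.0, 10], "legit"), ([25.0, 16], "legit"),
    ([900.0, 3], "fraud"), ([1200.0, 2], "fraud"), ([850.0, 4], "fraud"),
]
model = train_centroids(labeled)
print(predict(model, [1000.0, 3]))  # a new, unlabeled transaction → fraud
```

A production fraud model would of course use far richer features and a trained classifier rather than centroids, but the supervised contract is the same: labeled examples in, label predictions out.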

“Unsupervised” Neural Networks Shouldn’t Make Decisions But Provide Strong Data-driven Recommendations

Cognitive computing models are commonly implemented using neural networks, because such systems learn to perform tasks by considering examples, similar to the way the human brain learns to perform a task. The difference is that AI can scale in data complexity and processing speed far beyond human limits.

Complex data sets are fed in, and these models must automatically learn characteristics of the data that may be too complex, tedious, or costly for humans to identify themselves. Without labels, the neural network discovers information about the underlying structure of the data. We call this “unsupervised” learning. Just as a human observes situations and correlates present observations with historic data (experience), a high-throughput AI model can handle and correlate a far greater amount of data than any human. Again, just like a human mind amplifier.

Going back to our fraud detection example, it might look like this: the neural network learns the underlying structure of data that represents “regular” behavior in a dataset of transactions. When abnormal behavior occurs, the neural network does not recognize it as regular. Since these abnormal transactions don’t fit the learned structure, it automatically labels them as fraud (or as suspicious, to be reviewed). In the case of biopsy tissue analysis, a neural network would learn the “structure” of healthy tissue slides, so when it sees tissue it doesn’t recognize, it automatically labels that tissue as cancerous (or as irregular, to be reviewed).
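A minimal sketch of that unsupervised idea, under strong simplifying assumptions: instead of a neural network learning a full representation of “regular” behavior, this toy model learns only the mean and spread of transaction amounts from unlabeled data, then flags anything far outside that learned structure for human review. The threshold and data are invented for illustration.

```python
import statistics

# "Unsupervised": no labels are given. The model learns the structure of
# regular transactions (here, just mean and spread of amounts) and flags
# anything that doesn't fit that structure as suspicious.

def fit(amounts):
    """Learn a crude model of 'regular' behavior from unlabeled amounts."""
    return statistics.mean(amounts), statistics.pstdev(amounts)

def flag_suspicious(model, amount, z_threshold=3.0):
    """Flag amounts far outside the learned structure (hypothetical cutoff)."""
    mean, std = model
    return abs(amount - mean) > z_threshold * std

regular = [12.0, 30.0, 25.0, 18.0, 22.0, 27.0, 15.0, 20.0]  # no labels
model = fit(regular)

print(flag_suspicious(model, 21.0))   # fits the learned structure → False
print(flag_suspicious(model, 950.0))  # abnormal → True, queued for review
```

Note that, exactly as the text says, the output is a suggestion for review, not a decision: a human still judges whether a flagged transaction is actually fraud.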

As you can imagine, this approach is not foolproof. The accuracy of the labels, predictions, or classifications that a neural network can assign to data in an unsupervised way is often lower than with supervised methods, prompting the need for human review or decision-making. While removing the need for tedious human labeling of thousands (or even millions) of data points makes unsupervised approaches enticing, the outcome of their analysis can often only be a suggestion to the user, who must then make a decision. This is where cognitive computing comes in.

AI Cognitive Computing Brings It All Together

Let’s look at a very practical example of cognitive computing, using anomaly detection in physical spaces.

Consider a large oil refinery, where hard hats are worn around most spaces of the site. Cameras placed around the refinery could use machine vision models to detect whether people within each space are wearing hard hats, and then feed that information into a neural network which learns whether each space requires a hard hat and gathers historic data correlating accident occurrence with each area.

Then, when people are recognized not wearing hard hats in spaces that generally require them, or in a high-risk area, the neural network alerts users, who decide how to ensure the regulations are met within those spaces, minimize risk for the workers, and lower insurance costs for the business.
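The refinery pipeline above can be sketched as follows. This is a hypothetical simplification: a real system would use a trained vision model for detection, whereas here detections are stubbed as `(space, wearing_hardhat)` events, and the learned “requires hard hat” rule is reduced to a simple frequency threshold over historic sightings.

```python
from collections import defaultdict

def learn_requirements(history, threshold=0.9):
    """Learn which spaces require hard hats: those where hats were worn
    in at least `threshold` of historic sightings (hypothetical rule)."""
    worn, seen = defaultdict(int), defaultdict(int)
    for space, wearing in history:
        seen[space] += 1
        worn[space] += int(wearing)
    return {s for s in seen if worn[s] / seen[s] >= threshold}

def alerts(required_spaces, live_detections):
    """Flag sightings of people without hard hats in spaces that require one,
    so a human can decide how to respond."""
    return [space for space, wearing in live_detections
            if space in required_spaces and not wearing]

# Stubbed camera history: hats are nearly always worn on the refinery
# floor, never in the office.
history = ([("refinery-floor", True)] * 19 + [("refinery-floor", False)]
           + [("office", False)] * 10)
required = learn_requirements(history)
print(alerts(required, [("refinery-floor", False), ("office", False)]))
# → ['refinery-floor']
```

As in the fraud case, the system only surfaces an alert; the decision on how to enforce the rule stays with the human operators.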

Now let’s look at a much more radical and game-changing AI cognitive computing use case in the field of entertainment: the development of highly realistic virtual actors. With AI algorithms capable of analyzing and understanding human behavior, speech patterns, and facial expressions, filmmakers can now create entirely digital characters that are nearly indistinguishable from real actors.

This AI technology has opened up a world of possibilities in filmmaking, allowing directors to bring back beloved actors from the past or create entirely new characters that push the boundaries of imagination.

How Do You Get On This Big Wave of AI Cognitive Computing?

The AI revolution is already high-impact and disruptive for most industries. The strategic opportunity opens up when you understand the underlying patterns of a big wave like AI, so you can ride it instead of being crushed.

First, it is about an impact assessment of your specific context of industry, business model, and organizational capabilities, to understand the near horizon for emerging AI technologies and what the disruptive shifts mean for the cognitive behaviors of customers and markets.

Secondly, you can start exploring the transformational vision for what the new market position and business architecture look like, rather than traditional strategy focusing on yesterday’s obvious or tomorrow’s distant future.

Then you are ready to design the business model by translating insights into winning strategies, innovation excellence, and pragmatic, achievable plans for transformation, to become the next category leader.

Let's start with a conversation.

Read more and follow me in my full Medium Publishing series "Survival of The Strategic Fittest" HERE
