
“Dad, Alexa is acting up again”


Alexa did what? Did Alexa even know what it was doing?

Not really. Are you disappointed by that answer? You should not be. We have simply stretched the reality of what these voice assistants are, to accommodate our fundamental need for companionship from these technologies. But is Alexa equipped with everything a real human companion has? Did you hear about the most recent debacle?

For those who do not know what we are talking about, here is the recent story…

CNN reported on May 24, 2018, that an Amazon Echo user in Portland, Oregon, was shocked to learn her Echo had recorded a conversation with her husband without their knowledge and then sent the audio file to one of his employees in Seattle. The recording captured the couple talking about hardwood floors. The user said they turned off their multiple Echo smart speakers, contacted Amazon, and spoke to an Alexa engineer, who apologized repeatedly. Amazon confirmed the error in a statement and explained the improbable series of events that had to occur for it to happen. It was not a hack or a bug in the device, but a case of Alexa's always-listening microphones mishearing a series of words and mistakenly sending a voice message. While voice technology is increasingly popular, there are lingering concerns about the privacy implications of having an internet-connected microphone in the home.

As much as we humans yearn for companionship in this recent surge of AI technology from companies like Amazon, Apple, Google, Microsoft, and IBM, these systems are nowhere close to the levels of human cognition we expect of them. Another recent CNN clip showed the Boston Dynamics robots Atlas and SpotMini showing off new features like running and autonomous navigation. They are amusing to watch, but they are nowhere close to doing all that humans can do. It may be possible to train one of them to go and fetch a drink from the refrigerator, or some other specific task. And then again, at what cost? Looking at these robots, they can hardly be called cost-effective if all they do is fetch a drink from the refrigerator. A significant gap still exists between AI and human beings in the area of cognition.

What is cognition, really?

Cognition has many meanings, in common parlance as well as in the academic circles of human psychology. Here we will use it in its everyday sense: intelligence, a process of knowing and perceiving, the individual's accumulation and use of knowledge about the world, both outside and inside of themselves. The Latin roots are co (together) and gnoscere (to know). Concepts such as reason, self-reflection, and mindfulness are related to cognition. Although our brain is a hub of enormously varied connections, understanding the decision-making process the brain uses to accept or reject various types of information comes down to the relationship between sense and meaning. Sense refers to how we make sense of what we learn and how it fits with what we already know. Meaning reflects whether the item is relevant and personal to the individual. Attention plays a role in how working memory draws on our internal reservoir of retained information. It is held in a delicate balance between the enormous amount of sensory information in the environment (both conscious and unconscious) and the cognitive functions that turn stored material into cognitions of various kinds. Attention also allows room for cognitions to monitor, bridge, and control individual actions and behaviors.

AI, ML, DL, and Cognitive Computing

Artificial Intelligence (AI) is typically defined as the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem solving, and even exercising creativity. Machine-learning (ML) algorithms detect patterns and learn how to make predictions and recommendations by processing large data sets and experiences, rather than by receiving explicit programming instructions. The algorithms also adapt in response to new data and experiences, improving their efficacy over time.
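To make the contrast with explicit programming concrete, here is a minimal sketch of the idea using scikit-learn; the library choice, the tiny data set, and the "needs support" scenario are our illustrative assumptions, not anything from the story above. We hand the algorithm labeled examples, and it infers the pattern itself instead of following hand-coded rules.

```python
# A minimal sketch of "learning from data" rather than explicit programming.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical labeled examples: [hours_of_use, error_count] -> needs support?
X = [[1, 0], [2, 1], [8, 5], [10, 7], [3, 0], [9, 6]]
y = [0, 0, 1, 1, 0, 1]  # 1 = customer likely needs support

model = DecisionTreeClassifier()
model.fit(X, y)  # the model infers the pattern; we never hand-code a rule

print(model.predict([[7, 4]]))  # a prediction for an unseen case
```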

Deep learning (DL) is a type of machine learning that can process a wider range of data resources, requires less data preprocessing by humans, and can often produce more accurate results than traditional machine-learning approaches (although it requires a larger amount of data to do so). In deep learning, interconnected layers of software-based calculators known as "neurons" form a neural network that processes information in layers, where the output of one layer becomes the input for the next. The network can ingest vast amounts of input data and process them through multiple layers that learn increasingly complex features of the data at each layer. The network can then make a determination about the data, learn whether its determination is correct, and use what it has learned to make determinations about new data.
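A toy sketch can show that layered structure directly: each layer's output becomes the next layer's input. Everything here (the layer sizes, the random weights, and the use of plain NumPy instead of a deep-learning framework) is an illustrative assumption.

```python
# A toy forward pass through the layered "neurons" described above.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # One layer of software-based calculators: weighted sum + nonlinearity.
    return np.maximum(0, inputs @ weights + biases)  # ReLU activation

x = rng.random(4)                                    # input data (4 features)
h1 = layer(x, rng.random((4, 8)), rng.random(8))     # layer 1's output...
h2 = layer(h1, rng.random((8, 8)), rng.random(8))    # ...is layer 2's input
score = h2 @ rng.random(8)                           # final determination
print(score)
```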

Essentially, cognitive computing systems analyze the huge amounts of data created by connected devices (not just the IoT) with diagnostic, predictive, and prescriptive analytics tools that observe, learn, and offer insights, suggestions, and even automated actions. Strictly speaking, the term "cognitive computing" is a misnomer: cognition, for instance, also includes the subconscious, which is in fact a major part of cognition. Although a full discussion would take us too far afield, it needs to be said that IBM makes exaggerated claims about what its "cognitive" platform Watson can do. Marketing indeed!
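For a sense of what that "observe, learn, and offer insights" loop can look like in practice, here is a rough sketch using an off-the-shelf anomaly detector from scikit-learn; the sensor readings are fabricated for illustration, and real systems are far more elaborate.

```python
# A rough sketch of observe -> learn -> flag, with fabricated sensor data.
from sklearn.ensemble import IsolationForest

readings = [[20.1], [20.3], [19.9], [20.2], [35.7], [20.0]]  # one spike

detector = IsolationForest(random_state=0).fit(readings)  # learn "normal"
flags = detector.predict(readings)                        # -1 marks anomalies
print(flags)  # an insight a system could surface or act on automatically
```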

Do AI, ML, DL, and/or cognitive computing deliver anything close to the real thing, human cognition?

Not even close; at least not yet. None of these technologies, by themselves or in combination, can derive sense and meaning with the reasoning, self-reflection, and mindfulness that characterize the level of cognition the human brain achieves. With better algorithms and increased stores of data, the error rate of computer calculations is now often similar to or better than that of human beings in some areas, such as image recognition and several other cognitive functions. Hardware performance has also improved drastically, allowing machines to process this unprecedented amount of data, and that has been a major driver of the improvement in the accuracy of AI models. However, knowing where the pitfalls are is critical. AI models are statistical representations of the world: they provide answers based on their learning, but they are imperfect. Algorithms can be, and are being, exploited. As algorithms take on broader roles, such as setting a price on an e-commerce site, determining a car insurance rate, or hiring someone, the cause for concern increases. Managers must now anticipate how an algorithm might be manipulated and adjust accordingly.
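Because models are statistical, the honest way to talk about them is through a measured error rate on data they have not seen. The sketch below trains a simple model on a small public image-recognition data set and reports the fraction of wrong answers on held-out data; the particular data set and model are our choices for illustration, not a claim about any production system.

```python
# Measuring a model's error rate on held-out data, never trusting it blindly.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # a small image-recognition task
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
error_rate = 1 - model.score(X_te, y_te)  # fraction of wrong answers
print(f"held-out error rate: {error_rate:.3f}")  # imperfect, never zero
```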

The current generation of AI applications is based on what we call machine learning, in the sense that we are not just programming computers but training them; we are actually teaching them using large volumes of data. Most AI machines learn by studying examples in curated data sets, and the way we train them is to give them this labeled data. We use the data to teach a computer to recognize an object within an image, or to recognize an anomaly within a data stream. Ask questions like: Is the data actually available? Is it labeled? How good is the quality of the data? AI experts may understand how an algorithm reached its conclusion, or it may be a black box that is mysterious even to experts in the field. This lack of transparency raises concerns about bias, since any algorithm trained on historical data will logically come to conclusions that reflect the bias present in that data.
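Those three questions can be put to a data set directly before any training begins. Here is a minimal sketch with pandas; the tiny data set and the column names are illustrative assumptions.

```python
# Asking the data-readiness questions above of a (fabricated) data set.
import pandas as pd

df = pd.DataFrame({
    "image_id": [1, 2, 3, 3, 4],
    "label":    ["cat", "dog", None, None, "cat"],  # None = unlabeled
})

print("rows:", len(df))                                      # available?
print("unlabeled:", df["label"].isna().sum())                # labeled?
print("duplicate ids:", df["image_id"].duplicated().sum())   # quality?
print(df["label"].value_counts())                            # class balance
```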

As more companies use bots and other machines for consumer interactions, organizations run the risk of losing touch with their customers: AI can contribute to a loss of empathy. The convenience and speed of AI-driven decision making are attractive, but sometimes humans need to be involved. Top executives need to be involved in establishing the goals and guardrails around the AI that is increasingly enabling their businesses. A human must review and test the algorithm, audit its outcomes, and assess and improve its performance. Ask questions like: Can we actually explain what the algorithm is doing? Can we interpret why it is making the choices, outcomes, and predictions that it is making? Empathy must guide the management and deployment of any algorithm, and the organization must be able to recognize when a reset is necessary.
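As one hedged sketch of what "auditing its outcomes" might look like, the snippet below compares a model's approval rate across two groups; the column names, the groups, and the numbers are all fabricated for illustration. A sharp divergence would be the kind of signal that tells a human reviewer a reset is needed.

```python
# A toy outcome audit: compare approval rates across groups (fabricated data).
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1,    0,   0,   0,   1,   1],
})

# If these rates diverge sharply, the algorithm's outcomes deserve scrutiny.
print(results.groupby("group")["approved"].mean())
```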

Conclusion

We have no doubt that AI is positioned to disrupt our world. Well aware of AI's massive potential, leading high-tech companies have taken early steps to win across several markets. But the industry is still nascent, and a clear recipe for success has not emerged. Staying ahead in the accelerating artificial-intelligence race requires executives to make nimble, informed decisions about where and how to employ AI in their business. The pervasiveness and scalability of AI mean that algorithms can rapidly affect millions. Competition and progress require its use, but technology is neither necessarily moral nor intrinsically improving; that is up to the humans who leverage it. In a world shaped by AI, human leadership matters more than ever. You cannot have any version of an "Alexa" going off and doing its own thing in ways that might be detrimental to its users. The implications of experiencing anything like that with similar AI-based solutions in business could be catastrophic.
