AI’s Buzz Is Louder Than Its Critics. Here’s How To Cut Through The Noise


AI has stepped into the limelight, with recent developments like the rollout of ChatGPT and countless AI art generators (including OpenAI's other darling, DALL-E) alongside more tenured tools like chatbots and facial recognition software. In fact, there are numerous real-life, everyday uses for AI, some of which have been gaining popularity for years.

But, like the terms "Cloud" and "VoIP" before it, "AI" has different meanings and connotations for everyone. We think it's important to level-set what AI is before you get excited about tools and technologies that boast an AI component.

The first thing you should know is that there are different types of AI, each with its own strengths, flaws, use cases, and applications. For example, ChatGPT relies on natural language processing technology, and while it falls under the "artificial intelligence" umbrella, one could argue that ChatGPT is not really an example of intelligence at all, but rather a synthesis of the content and information provided to it by its developers. "AI" is an umbrella term, and possibly a misnomer, given the gap between our expectations and the realities of its capabilities and processes.


Artificial intelligence vs. machine learning – is there a difference? 

Artificial intelligence is defined by Coursera as "computer software that mimics the ways that humans think in order to perform complex tasks, such as analyzing, reasoning, and learning." The keyword here is "mimics": while AI can skillfully imitate human cognition, it cannot perform said cognition.

Machine learning is often a function or tool within AI, not a different type of intelligence. It involves training algorithms on data so that they "learn" to perform a task.
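To make that definition concrete, here's a deliberately toy sketch of the machine-learning idea: the program is never given an explicit rule, it extracts one from labeled examples. Everything here (the sample messages, the `classify` helper) is hypothetical and illustrative, not any vendor's actual system.

```python
# "Training data": examples with labels, but no rule telling the
# program WHAT makes a message spam.
training_data = [
    ("win free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting moved to noon", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# "Training": count how often each word appears under each label.
counts = {"spam": {}, "ham": {}}
for text, label in training_data:
    for word in text.split():
        counts[label][word] = counts[label].get(word, 0) + 1

def classify(text):
    # "Inference": score a new message by which label's vocabulary
    # it overlaps with most. No understanding, just counting.
    scores = {
        label: sum(words.get(w, 0) for w in text.split())
        for label, words in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize inside"))  # prints "spam"
```

The model "learned" that "free" and "prize" signal spam only because those words happened to appear in the spam examples. It would fail on anything its training data didn't cover, which is exactly the limitation that scales up with real systems.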

Other subsets of AI include:

  • Deep learning, the systematic training of a machine to “think” like a human using artificial neural networks.
  • NLP (natural language processing), training machines to process and understand language similarly to how humans do. 
  • AGI (artificial general intelligence), a machine that can process a wide breadth of information, mimicking human brain function. 

The latter is likely what most people assume AI actually is, but most artificial intelligence is not AGI. It's also not deep learning. ChatGPT is a highly sophisticated NLP tool, but that's precisely what it is. ChatGPT isn't writing essays (hopefully… do your own homework, kids!) or providing intelligent insights. It's chopping, mashing, and reforming responses from pre-existing bodies of text, using a transformer-based neural network that doesn't approach the nuance of human linguistics.
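That "chopping and mashing" can be illustrated with a drastically simplified next-word predictor. Real models like ChatGPT use transformer networks trained on enormous corpora, but the core behavior, reproducing statistical patterns of the text they were fed, can be sketched in a few lines. The corpus and the `next_word` helper are hypothetical, chosen purely for illustration.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the vast text real models ingest.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": record which word follows each word, and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Predict the continuation seen most often in the training text.
    # There is no meaning here, only frequency.
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # prints "cat": the most common word after "the"
```

The predictor can only ever emit words it has already seen in context, which is why the article's framing matters: such a system synthesizes its inputs rather than reasons about the world.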

As AI tools gain popularity, these nuances become increasingly important. By understanding what AI can and can't do, and how it works and doesn't work, you can make clear-eyed decisions about the technology you're considering acquiring without being wowed by what could be a meaningless marketing term.

1. AI Is a Great Talking Point… But for Many, It's Not Much More Than That

It's very en vogue for IT vendors and providers to include a statement about their platforms or services leveraging "AI." But they often aren't transparent about their technology's current capabilities, what it has learned, or where it learned those things. There's a saying I heard a long time ago, "there's margin in magic," so it's no wonder companies are quick to include AI in their message without too much detail. So… how do you evaluate its value to the product you are buying and the outcome you are trying to achieve?

The short answer is that you can't, and most of the companies touting AI as a strength are very much hoping that you won't. Providers tout how much internet data they see and scour as a point of advantage for their AI. But the internet is a big place, full of misinformation, bias, prejudice, and varied opinions… and without understanding how, or even whether, an AI discerns value, we in turn can't ascribe value to what it produces. In fact, ChatGPT specifically has been known to lie and to display highly offensive prejudice and bias. Not to mention, it could really use a math tutor.

Many vendors tout AI solutions as a point of strength, but might they be a weakness? We don't know for sure, because most people stop short of asking the hard questions, possibly because they're simply dazzled by the technology itself. And even if you ask, prepare to be met with a vague answer. Regardless of the promises vendors make, transparency is necessary, and when it comes to IT services, it's in your best interest to demand the full story.

What should you ask? If a provider is pitching AI as a cornerstone value proposition of their service, ask them:

  • What kind of AI is it?
  • What is its purpose?
  • How does it work?
  • Why is this solution better with AI than without?

Use this information to help you evaluate the solution holistically. For example: is this really a value-add over a comparable solution? Or am I paying a premium for this technology, and is it ultimately worth it?

2. AI May "Learn" the Wrong Lessons

Do you remember when social media platforms first gained popularity? The algorithms they used were primitive compared to how they function today. Back then, you followed people or businesses and saw their content in your stream. The platforms started to identify your interests, simplistically, based on your activity, and would recommend new content or accounts for you to see.

What they evolved into is unrecognizable from those early days. The algorithms are now oriented toward getting you to engage, often with content or accounts that you don't follow and don't even like, because the algorithm has learned that anger, hate, and disagreement trigger engagement more reliably than things you like or agree with.

Or maybe you've accidentally or curiously clicked on something outside your scope of interest, only to find the algorithm quickly shifting to serve you more of that content even though you don't want to see it.

AI in the context of IT solutions could have similar issues. What is it really learning from the behaviors it monitors? We don't know. We have no way of knowing whether, for example, the call flow changes it makes in a CCaaS context, based on data aggregated across many call centers, are actually going to be appropriate for your specific business. And if its learning is localized to your specific activity, we don't know how heavily it weighs anomalous behaviors.

In cybersecurity, much of the perceived benefit of utilizing AI stems from its pattern-matching and recognition capabilities. This sounds promising for threat detection and malware prevention, but there's one glaring hitch: AI generally can't learn from seeing something once; it has to see a pattern multiple times to grasp it. By then, it could be too late to recognize a threat. Or perhaps it predicts threats based on the past behavior of bad actors or bots, but fails to adapt to changes in legitimate user behavior and creates too much friction for your users.
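A toy sketch of that limitation: a detector that only "learns" from repetition will flag a pattern it has seen several times but wave a brand-new attack straight through. The event names and the two-sighting threshold below are entirely hypothetical, purely to illustrate the point.

```python
# Hypothetical repetition-based detector: it can only flag patterns
# it has already observed enough times.
seen = {}
THRESHOLD = 2  # assumed: a pattern must be seen twice before it's "known"

def observe(event):
    # Record one sighting of an event pattern.
    seen[event] = seen.get(event, 0) + 1

def is_threat(event):
    # Flag only patterns that have crossed the repetition threshold.
    return seen.get(event, 0) >= THRESHOLD

for _ in range(3):
    observe("port-scan")           # a familiar, repeated pattern
observe("novel-exploit")           # a brand-new attack, seen only once

print(is_threat("port-scan"))      # prints True: learned from repetition
print(is_threat("novel-exploit"))  # prints False: too new to recognize
```

By the time "novel-exploit" has recurred enough to be recognized, the damage may already be done, which is the gap the paragraph above describes.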

None of this is to say that AI can't be beneficial in an IT service as a tool or an input in the larger scheme of things. It is to say: let's not put too much emphasis on AI as a feature, and let's consider the solution holistically, with open eyes not clouded by the hazy hype of AI.

3. AI Isn’t as Intelligent As You May Think 

We know that ChatGPT has serious math deficiencies and that it lies, sometimes egregiously. Other AI tools and applications have made mistakes ranging from offensive, like the time Amazon's recruitment AI was discovered to be biased against women, to deadly, as was the case in the fatal crash of a self-driving Tesla. When we trust AI too much, the consequences can be irreparable. A missed turn, an implicit bias, or an even simpler error can lead to large-scale, highly damaging consequences.

AI technology, and NLP specifically, is not comparable to the neural capabilities of a human, and thus lacks human sensitivities. Machines commit faux pas, spread misinformation, and display insensitivities, an issue that compounds as the technology is increasingly used for email generation, simple conversation, and even journalism, all of which are vectors for the consequences of factual errors and the spread of hateful ideas. A machine has no emotional sensitivity because it is not sentient, and no code of ethics to prevent it from spreading misinformation and dangerous conspiracies. Without that emotional element, any "intelligence" a machine possesses is incomplete, especially compared with the intelligence built through socialization, perspective, and a prefrontal cortex.

Image-based AI has its own host of issues, especially around copyright. An AI art generator tool recently went viral on TikTok. The tool allowed users to upload selfies and then generated portraits of the user across different time periods and art styles. However, the tool didn't "create" these images; it stole the works of independent artists and sold the Frankensteined creations for spare change.

While the saying goes that "all art is derivative," how do we know exactly how derivative AI-generated images are, and what constitutes copyright infringement in these intricate scenarios? Whether it's books, articles, or images, anyone planning to use AI-generated content needs to understand where these creations come from. It's not thin air, but pre-existing, copyrighted works… and you don't want to face the liability, legal or in brand equity, of their misuse.

The lesson here is that AI tools deployed by lines of business need oversight and understanding to mitigate current and future risks, whether legal exposure or mistakes and errors that could cause embarrassment or damage to the brand.

The Bottom Line: Buyer Beware

AI isn't really smart, and it's far from prodigious. It's riddled with biases, factual errors, and security risks. But it's also a helpful tool when applied by, and metered by, real humans.

Technologies like chatbots and call monitoring are applications of AI that have elevated the capabilities of CCaaS, providing insights and allowing customers to get answers faster and more efficiently than ever before. In the world at large, AI tools have revolutionized healthcare by providing potentially life-saving insights and alerts, aided aerospace professionals in mapping planets, and lent a helping hand to those with writer's block (though not without red flags).

The common denominator in AI’s triumphs has been, perhaps disappointingly, the monitoring and proper use of the tools by a real-life human being. Though AI has numerous uses, impressive developments on the books, and the potential to be a powerful tool across industries, it’s just that– a tool for humans to use, not a replacement for their work. 

You don't need to eschew AI altogether for its flaws, and you probably shouldn't. To really maximize the potential of AI in your business, you need to get curious and get patient. Don't be dazzled or take claims at face value. Ask the hard questions, do the lengthy research, and most importantly, don't put too much faith in the technology. Even the most sophisticated tools are only useful in the hands of an educated user, and most of us have a long way to go to understand what AI knows and how we can actually use it without, in the worst case, risking security breaches, offense, or copyright violations. In a less severe mishap, you may just end up with a plain old "epic fail."

Curious which IT providers and services have deployed AI in a beneficial way? Tap TMG's experts to find out.
