7 AI Terms Every Internal Communicator Must Know

In the past year, the popularity and awareness of Artificial Intelligence (AI) have skyrocketed, but many corporate communicators do not have a good handle on what it is, how it works, or what the terminology actually means.

Like any new or emerging technology, AI has both positive and negative potential, but before either can be discussed, it’s important to understand the basic terms and definitions.

AI

Artificial Intelligence is a computer simulation of human intelligence, produced by a software program. When a computer system is designed to mimic the human mental processes of thinking, knowing, learning, and understanding, and performs work that normally requires a human, such as producing content, reasoning, making decisions, and solving problems, it is considered an AI.

Generative AI

A form of artificial intelligence which produces content such as text, images, audio, or code.

Traditionally, AI software was preprogrammed with rules to perform a specific task, usually identifying patterns and relationships by processing, categorizing, and indexing large data sets. This is how Netflix makes movie recommendations: the AI predicts what you may like to watch by comparing what you have watched to what others have watched.
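
To make the “compare your history to everyone else’s” idea concrete, here is a toy sketch in Python. The viewers and titles are invented, and real recommendation systems (Netflix’s included) are far more sophisticated, but the core comparison looks something like this:

```python
# Toy sketch: recommend titles by finding the viewer most similar to you.
watch_history = {
    "you":   {"Drama A", "Thriller B", "Comedy C"},
    "user2": {"Drama A", "Thriller B", "Sci-Fi D"},
    "user3": {"Reality E", "Comedy C"},
}

def shared_titles(a, b):
    # How many titles two viewers have in common.
    return len(watch_history[a] & watch_history[b])

# Find the viewer whose history most overlaps with yours,
# then suggest whatever they watched that you have not.
most_similar = max(
    (viewer for viewer in watch_history if viewer != "you"),
    key=lambda viewer: shared_titles("you", viewer),
)
suggestions = watch_history[most_similar] - watch_history["you"]
print(suggestions)  # -> {'Sci-Fi D'}
```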

A generative AI also processes and analyzes a large set of data, yet it isn’t limited to a single predefined task. It uses machine learning techniques to build models that can respond to inputs and output predictions in the same format as the data itself. A generative AI will produce text and images based upon the large volume of text and images it has processed and learned from.

Machine Learning

This is the development of computer programs that can adapt and change their outputs based on new data inputs, without a human reprogramming them.

A model is a logical framework that mimics reality.

Think of an architect’s scale model of a building, used to study traffic patterns, structure, and building requirements. Having a model allows stakeholders to make more accurate predictions about how the building will look and function once built. In the same way, an AI model takes new data inputs and predicts a logical outcome based on what it has learned.

Training is the education process that informs the model.

We’ve all likely completed a Captcha to prove we are human by clicking boxes within a picture. This is a form of directed learning: we are training the machine to better recognize images. A model can be trained with labeled or categorized data that tells it which answers are correct or incorrect, enabling it to learn the difference. This is how a program learns to tell a picture of a cat from a picture of a dog, after processing thousands of such images and being told which is which. Another form of training is essentially trial and error. The data is not categorized beforehand, but as the system makes guesses, it is told which answers are correct or incorrect, reinforcing the right ones so the guesses improve over time. This is how AI systems learn to play games like chess or Go.
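
As a concrete illustration of training on labeled data, here is a minimal sketch using Python’s scikit-learn library. The two “features” per image are invented stand-ins for this example; real systems extract thousands of them automatically.

```python
from sklearn.neighbors import KNeighborsClassifier

# Labeled training data: each row is one image's (made-up) features,
# each label says what the image actually showed.
features = [
    [0.9, 0.2],  # cat
    [0.8, 0.3],  # cat
    [0.2, 0.9],  # dog
    [0.3, 0.8],  # dog
]
labels = ["cat", "cat", "dog", "dog"]

# "Training": the model learns the pattern from the labeled examples.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(features, labels)

# A new, unlabeled image arrives; the model makes its best guess.
print(model.predict([[0.85, 0.25]]))  # -> ['cat']
```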

Machine learning algorithms apply statistics and probability, linear algebra, and multivariate calculus to the training data in order to make educated guesses (predictions or inferences) whenever presented with new data. All this math requires fast, efficient computing power, which is why graphics chip companies like Nvidia have boomed with AI: their math-optimized GPUs are ideal for both graphics and AI calculations.
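
Much of that “educated guess” boils down to large amounts of matrix arithmetic, which is exactly the kind of work GPUs excel at. Here is a toy sketch with invented numbers:

```python
import numpy as np

# Invented for illustration: the weights are what training produced,
# and each row of new_inputs is one new example to score.
learned_weights = np.array([0.7, -0.4, 1.2])
new_inputs = np.array([
    [1.0, 0.5, 0.2],
    [0.1, 0.9, 0.8],
])

# One matrix multiplication = a batch of educated guesses.
# Real models chain millions or billions of these operations.
scores = new_inputs @ learned_weights
print(scores)  # higher score -> more confident prediction
```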

You experience machine learning AIs when your credit card company detects fraudulent transactions or when your iPhone recognizes your face to unlock.

Large Language Model

This is a computer program that understands and generates human language. Imagine you have a friend with a photographic memory who has read the entire internet: every word, every phrase, and every paragraph ever recorded online. When you ask this friend a question, you would expect a very good, well-articulated answer.

You could also ask your friend to guess the rest of a sentence from only its beginning, given their vast knowledge and recall of words and their associations. It’s like an autocorrect feature, but instead of correcting words, an LLM predicts them. An LLM can even complete entire paragraphs, because it has read so many books and papers and knows which words and sentences commonly come next. It can quickly edit your work, instantly producing variations and replacing words with appropriate synonyms, or rewrite it in the style of a specific author or genre. It can even translate work from one language to another with relative ease.
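
Here is a minimal sketch of that next-word prediction, assuming the open-source Hugging Face transformers library and the small, freely downloadable GPT-2 model (far less capable than today’s commercial LLMs, but the mechanism is the same):

```python
from transformers import pipeline

# Load a small language model that predicts likely next words.
generator = pipeline("text-generation", model="gpt2")

# The model extends the sentence by repeatedly predicting the next word.
result = generator("The quarterly all-hands meeting will cover", max_new_tokens=20)
print(result[0]["generated_text"])
```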

It’s like doing a Google search where, instead of getting a list of related links, you get an answer. This is in fact what search engines using LLMs now provide.

Prompts

This is like texting that super-smart friend of yours. You communicate in natural language and provide a set of instructions and constraints. A prompt both selects the topic and filters the results. It’s like asking a robot to follow a recipe. The more detail you provide about what to do or not do, what ingredients to use or not use, and how you want it cooked, the more the dish will be to your liking.
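
Here is what a detailed prompt might look like in practice. This sketch assumes the OpenAI Python client with an API key already configured, and the model name is just an example; the same idea applies when typing into any chat-style AI tool.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# The prompt names the task, the audience, the tone, and the constraints:
# the "recipe" the model should follow.
prompt = (
    "Write a three-sentence announcement for employees about our new hybrid work policy. "
    "Use a warm, plain-English tone, avoid legal jargon, and end by telling people "
    "where to send questions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```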

It is essential for internal communicators to familiarize themselves with key AI terms such as Artificial Intelligence, Generative AI, Machine Learning, and Large Language Models to understand the potential impact and applications of these technologies in the corporate world. By grasping these fundamental concepts, communicators can better navigate the evolving landscape of AI and harness its capabilities for effective communication strategies.
