
Is AI Really the New, New Thing?

Artificial intelligence is the new, new thing — or is it? While seemingly every software solution touts the wonders of AI, it’s only now starting to deliver on the promises that have been made for decades.

Consider this sentence from a Fortune 100 company lauding its power: “Artificial intelligence promises to open new dimensions in the ways that machines serve people.” Sounds current, right? However, it’s from a 1984 brochure on the Texas Instruments Explorer Computer System.

My agency worked for Texas Instruments early in my career, and AI was among the key technologies we focused on. We produced a quarterly newsletter, successfully pitched AI stories to the media, wrote press releases on AI-based solutions and developed a case study on one of TI's early adopters, Campbell Soup. TI and Campbell's developed an expert system (an early AI application) to capture the knowledge of valuable engineers who were nearing retirement. Without the system, that knowledge could have vanished forever.

Unfortunately, those early applications of AI were ahead of their time, and the market faded for decades in what became known as the AI winter. Why did it take so long for the technology to catch on? Here are some personal observations from someone who was captivated by the promise of AI three decades ago.

Early AI solutions were expensive and slow.

Early artificial intelligence systems were typically written in a language called LISP and ran on specialized, expensive workstations known as Lisp machines (the TI Explorer was one). Meanwhile, slow processors and scarce memory and storage constrained early PC versions of AI software. The computing power necessary for robust AI applications simply didn't exist. However, as Moore's Law predicted, computing power roughly doubled every two years while costs fell, ultimately delivering the raw horsepower that AI requires. But that alone couldn't solve the problem.
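
To put that compounding in perspective, here's a back-of-the-envelope sketch in Python. It assumes an idealized, clean doubling every two years (Moore's Law is a rough trend, not an exact law), and the function name and time spans are purely illustrative:

# Rough illustration of Moore's Law compounding.
# Assumes an idealized doubling of computing power every two years.

def moores_law_factor(years, doubling_period=2.0):
    """Relative computing power after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# From the TI Explorer era (1984) forward:
for span in (10, 20, 35):
    print(f"{span} years -> roughly {moores_law_factor(span):,.0f}x the computing power")

# Output:
# 10 years -> roughly 32x the computing power
# 20 years -> roughly 1,024x the computing power
# 35 years -> roughly 185,364x the computing power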

The cloud didn’t exist.

Can you imagine (or remember) a time before the Internet was part of everyday life? In the early-to-mid 1980s, it was still nearly a decade away from becoming a household word.

That meant there was no way to deliver AI or any other applications via the cloud, because there was no cloud! Today, most such applications run on cloud platforms such as Amazon Web Services or Microsoft Azure, which can provide thousands of processors on demand. Without that infrastructure, AI had no way to reach its full potential.

Storage was limited.

A lack of data storage was yet another limiting factor. The first gigabyte-capacity hard drive was introduced in 1980, and the era of terabytes, petabytes and exabytes was still decades away. Without immense amounts of affordable storage, there was no way to house the data that robust AI applications require.

Thankfully, that's all changed, and we're entering the golden age of artificial intelligence. As real-world AI applications continue to enter the mainstream, Ketner Group is fortunate to represent several clients whose AI applications solve tough business problems and deliver significant savings in time and cost.

None of this would be possible, though, without the pioneering companies that helped pave the way decades ago. So here's a salute to Texas Instruments, Intel and the other visionaries who are finally seeing their ideas come to life.