
Why are we seeing a buzz in AI recently? Is this all hype?

It is true that AI has been around for a long time. The term 'Artificial Intelligence' was coined by John McCarthy in 1956 as "the science and engineering of making intelligent machines".


AI progressed slowly in the 1950s and 1960s. Most of the work during this period was theoretical, but it laid much of the mathematical foundation for the field. Still, many were skeptical about whether AI would ever live up to its promise.


In the 1990s, a few breakthrough applications of AI emerged. For example, in 1997, IBM's Deep Blue beat the world chess champion Garry Kasparov. People started to take notice.


From 2000 onwards, many important breakthrough applications of AI emerged from industry: iRobot's Roomba robotic vacuum cleaner in 2002, Google's self-driving car project in 2009, the voice assistants Siri, Alexa, Cortana and Google Now between 2011 and 2014, DeepMind's AlphaGo defeating Go champions in 2015-2016, and so on. In 2022, ChatGPT brought AI into the mainstream.


There are several reasons why AI has grown in popularity:

  • Faster and more powerful processors, which made it practical to train and run far more complex AI models

  • Cheaper storage, which made it economical to keep massive datasets

  • The rise of the Internet, social networks, digitization and so on, which made far more data available for AI to learn from.

First, let us look at the growth of processors and memory.

In 1971, the Intel 4004 microprocessor was released with 2,300 transistors.

In 2020, NVIDIA's A100 AI chip was released with 54 billion transistors!


The number of transistors on a chip has been doubling approximately every two years, following Moore's Law (Fig 1).
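As a quick back-of-the-envelope check, here is a minimal Python sketch that projects the 4004's transistor count forward at one doubling every two years, using only the figures quoted above:

    # Rough sanity check of Moore's Law: transistor counts doubling every ~2 years.
    start_year, start_count = 1971, 2_300              # Intel 4004
    end_year, end_count = 2020, 54_000_000_000         # NVIDIA A100

    doublings = (end_year - start_year) / 2            # one doubling per two years
    predicted = start_count * 2 ** doublings

    print(f"Predicted for {end_year}: ~{predicted:,.0f} transistors")
    print(f"Actual A100 count: {end_count:,}")

The projection lands at roughly 55 billion transistors, strikingly close to the A100's actual count.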


Why is this useful? To put things in perspective, the processor in your phone today is roughly 100,000 times more powerful than the Apollo Guidance Computer that took astronauts to the moon in 1969!


Faster and more powerful processors mean better algorithms and more number-crunching capability. These are particularly useful for building complex neural networks, image recognition systems and so on, and they led to the rise of Deep Learning and Generative AI. You now have an app on your phone that can recognize your face.


Besides processors, semiconductors are also used for memory chips. Thanks to Moore's Law, we now have machines with hundreds of gigabytes of RAM!


This has enabled the growth of Large Language Models, which need billions of parameters in memory. For example, GPT-3.5, the model behind the original ChatGPT, has about 175 billion parameters. GPT-4 may have more than a trillion parameters!
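To see why that much RAM matters, multiply the parameter count by the bytes each parameter occupies. The sketch below is a rough estimate; the bytes-per-parameter values are standard numeric precisions, not official figures for any particular model:

    # Back-of-the-envelope memory footprint for a 175-billion-parameter model.
    params = 175_000_000_000

    for precision, bytes_per_param in [("float32", 4), ("float16", 2), ("int8", 1)]:
        gigabytes = params * bytes_per_param / 1e9
        print(f"{precision}: ~{gigabytes:,.0f} GB just to hold the weights")

At 16-bit precision, that is about 350 GB for the weights alone, which is why such models are served from banks of accelerators rather than a single machine.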


Next, let us look at the decreasing cost of storage (Fig 2).


In 1970, 1 TB of storage cost about $1 million.

In 2023, 1 TB of storage costs less than $75!
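Those two data points also imply a remarkably steady rate of decline. A couple of lines of Python, assuming a constant annual rate between the two quoted prices, recover it:

    # Implied annual price decline in storage, from $1M/TB (1970) to $75/TB (2023).
    cost_1970, cost_2023 = 1_000_000, 75   # US dollars per terabyte
    years = 2023 - 1970

    annual_decline = 1 - (cost_2023 / cost_1970) ** (1 / years)
    print(f"Price fell ~{annual_decline:.1%} per year for {years} years")

That works out to a price drop of roughly 16% per year, sustained for more than five decades.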


Why is this useful?


Cheaper storage means more data for AI to train on. With technologies like OCR (Optical Character Recognition) and analog-to-digital conversion, we are able to digitize vast quantities of information and store it. And of course, the Internet and social networks generate data at petabyte scale.


With more data, AI systems can become more accurate and provide better responses.

For example, ChatGPT was trained on a vast corpus of digitized text, covering much of the human knowledge available online up to about September 2021.


Today's AI systems can read, write, speak, recognize objects and images, draw pictures, beat humans at board games, drive cars and so on. ChatGPT has already passed many medical and law exams. Most importantly, with LLMs (Large Language Models), machines have grown by leaps and bounds in their ability to understand human language.


So we are seeing a buzz in AI recently because of some very real advancements and tangible results.


This time, it looks like more than just another hype cycle. We may be at an inflection point in AI. But no one can say for sure how big this is going to be.



Fig 1: Moore's Law (transistor counts per chip, doubling roughly every two years)


Fig 2: The cost of 1 TB of storage over time


