AI in 2023

Just to get a feel for how important the year 2023 was for Artificial Intelligence, one can look at the number of articles Time Magazine published on AI. For the first time, Time Magazine published a list of the 100 most influential people in AI. In the last few months alone, it published articles on how the AI landscape shifted during the year, the three most important innovations in AI in 2023, and the biggest AI policy developments of 2023, underscoring how big an influence AI has had on the technology landscape. The pace has been so frantic that it has been nearly impossible to keep up with the updates and new entrants in each specific domain of AI. This has created a lot of buzz, and concern, among the general public about AI. The three most interesting debates to emerge are around AI stepping out of the lab and into the real world, fear of AI, and the difference between how generalists and specialists perceive the power of current state-of-the-art AI systems.

Graduation of AI into the real world:

Up until 2022, most of the news in AI came from articles published by research groups around the world. However, the release of OpenAI’s ChatGPT gave the world a taste of AI in practical use. It was quickly followed by AI models and systems covering the complete gamut of multimedia technologies, including but not limited to text-to-speech, text-to-image, text-to-video, speech-to-text, voice cloning, image-to-image, in-painting, and creative content generation. In the domain of large language models (LLMs) alone, OpenAI’s ChatGPT, Meta’s LLaMA 2, Google’s Bard and Gemini, Baidu’s Ernie Bot, Mistral, and Falcon were introduced. Some of these models outperform others in particular applications; however, the general consensus is that ChatGPT and Mistral produce excellent results, with LLaMA 2 not far behind. These LLMs have found applications in chatbots that answer user questions and can be further enhanced with Retrieval-Augmented Generation (RAG) to answer questions on specialized topics. All of these models have great potential to power the systems of the future.
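
For readers curious what RAG looks like in practice, here is a minimal sketch (in Python) of the retrieve-then-generate pattern. Everything in it is illustrative: the bag-of-words embedding stands in for a real embedding model, the document list stands in for a knowledge base, and the final prompt would be sent to whichever LLM API is actually in use.

    # Minimal sketch of Retrieval-Augmented Generation (RAG): retrieve the
    # passages most relevant to a question, then hand them to an LLM as context.
    # The embedding below is a toy word-count vector, not a neural embedding.
    from collections import Counter
    import math

    def embed(text: str) -> Counter:
        # Toy embedding: bag-of-words count vector.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        # Cosine similarity between two count vectors.
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
        # Rank documents by similarity to the question and keep the top k.
        q = embed(question)
        ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
        return ranked[:k]

    def build_prompt(question: str, documents: list[str]) -> str:
        # Prepend the retrieved passages so the LLM can ground its answer in them.
        context = "\n".join(f"- {d}" for d in retrieve(question, documents))
        return f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {question}"

    docs = [
        "The warranty covers battery replacement for two years.",
        "Shipping to Europe takes five to seven business days.",
        "Returns are accepted within 30 days of purchase.",
    ]
    prompt = build_prompt("How long does the warranty cover the battery?", docs)
    print(prompt)  # In a real system, this prompt would be sent to the chosen LLM.

The point of the pattern is simply that the model answers from retrieved, domain-specific text rather than from its general training data alone, which is what lets a general-purpose LLM handle specialized topics.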


Doomsday and Job Losses:

Discussions around doomsday scenarios and job losses took center stage during the year. Hundreds of researchers joined Geoffrey Hinton and Yoshua Bengio in raising concerns about the impact of AI on jobs and the safety of current AI systems. Some of these concerns were about actual doomsday scenarios, with discussions focused on AI taking over the world; others focused on job losses, retraining the workforce, and how to prepare for a future with AI. The idea that AI systems will one day be smarter than humans is accepted by most researchers, including Ilya Sutskever of OpenAI and Yann LeCun of Meta. However, Yann LeCun believes that the risk is not of machines taking over but of people having to retrain themselves for the new jobs that would be created. Nevertheless, a lot of chatter has been around AI policy to protect jobs and people from the risks posed by rapid AI adoption. This was the case when the Writers Guild of America went on strike in Hollywood to ensure that generative AI did not replace writers and artists. The threat from generative AI is real: rumors have recently surfaced that Google might lay off 30,000 employees in the wake of upcoming generative-AI-based solutions.

A long way to go:

Although the general perception is that AI is smart and will quickly overtake humans in capability and efficiency, researchers are still arguing that the two are not comparable. The biggest argument concerns the effort required to train a single LLM. Researchers point out that humans need only a few examples to learn, and can learn a wide variety of things rather than just one specific task, whereas current state-of-the-art AI systems require thousands of examples, if not millions, to learn a specific task and cannot readily be extended to learn and solve a variety of tasks. Therefore, the current methodology might not be suitable for producing unified systems that surpass human performance levels or achieve Artificial General Intelligence (AGI).

In summary, it was an intriguing year for AI enthusiasts. The pace of development was unprecedented, sparking both excitement and fear. The momentum shows no signs of slowing down, and 2024 promises even more AI advancements, with an increasing number of applications embracing the technology. However, the formulation of AI policies worldwide is expected to slow the overall impact of AI technology. This is a positive development for those concerned about data privacy, but it doesn’t negate the inevitability of AI permeating various aspects of our lives.