Artificial intelligence (AI) has transformed the way people work in recent years, allowing machines to learn from data and perform tasks that previously required human intelligence.
2023 was a banner year for AI, with significant advances in the development of sophisticated language processing systems, generative AI models and edge computing solutions. However, the 12-month period also brought concerns around AI’s ethical, legal and social implications, leading to important conversations about responsible usage.
OpenAI’s ChatGPT makes generative AI mainstream
“The key trend last year was the rise of generative AI, and 2023 will go down as one of the most exciting years for AI yet! With the launch of ChatGPT late in 2022, the true scale of its disruptive potential was more realized across the world in 2023,” Husain said. “Its success has sparked a wave of generative and chat AI models, from Midjourney to Grok.”
The rapid adoption of AI can largely be attributed to the benefits organizations have experienced from using generative AI models, which can create original output using machine-learning algorithms. These models have allowed businesses to automate various processes, improve decision making and gain deeper insights into their data.
AI also became more accessible and contextual in 2023. Within healthcare, AI is increasingly being used to improve patient care and diagnosis and to develop new drugs. Meanwhile, AI-enabled tools are helping financial institutions detect fraud and offer personalized financial advice. According to a December 2023 survey conducted by the Data Foundation in collaboration with Deloitte, over half of the federal chief data officers polled were already using some form of AI technology, and 95 percent were planning to implement AI solutions within the next year.
Edge computing, which involves processing and analyzing data at the source instead of at a data center, has played a vital role in letting more organizations adopt and implement AI solutions by providing more scalable, efficient and cost-effective ways to deploy AI systems. In August, Meta Platforms (NASDAQ:META) and Qualcomm (NASDAQ:QCOM) partnered to integrate the LLaMA 2 AI model directly into edge devices, reducing latency and dependency on the cloud.
Just a month later, Australia’s BrainChip Holdings (ASX:BRN,OTCQX:BCHPY) and VVDN Technologies developed the Edge Box, a hardware platform that can process data and perform computations using neuromorphic technology, which is digital architecture inspired by the action potential of real-life neurons.
Natural language processing takes big strides
In 2023, natural language processing technology continued to evolve at a rapid pace, with many leading tech companies driving innovation through their investments in AI startups and their own research efforts.
According to CB Insights’ State of AI Q3’23 report, the average deal size for AI companies increased in 2023 by over a third compared to 2022. The year also saw the emergence of new AI unicorns, two of which were generative AI companies that develop large language models: AI21 and Imbue each topped valuations of US$1 billion in 2023.
Advances in natural language processing technology can largely be attributed to the powerful combination of graphics processing units (GPUs) and machine-learning algorithms, which have been used to build sophisticated language processing systems that can understand and interact with human language in a more “natural” way.
NVIDIA (NASDAQ:NVDA) has been at the forefront of AI innovation, providing powerful GPU solutions that enable researchers and developers to train and deploy language processing models. In May, the company released its DGX GH200 AI Supercomputer, which is powered by its Grace Hopper Superchips and NVLink Switch System.
The DGX GH200 offers an impressive amount of memory, 500 times more than the supercomputer NVIDIA released in 2020. This capacity will help researchers develop and enhance AI applications, making the system more efficient, accurate and effective at real-time language processing.
At the same time, the China-US chip war has created new challenges and uncertainties for AI companies and investors.
“In 2023, the Biden administration has increased restrictions on semiconductor sales, further upping the ante between the US and China on chips,” Husain said. “This approach could mean that the early lead on AI development might remain firmly with Silicon Valley. However, just like the space race of the 20th century, it could ultimately spur advancements by both sides that could catalyze the broader AI space and opportunity.”
AI ethics conversations increase
Unsurprisingly, 2023 also saw an uptick in discourse around responsible AI, with stakeholders from various sectors engaging in conversations about the ethical, legal and social implications of this rapidly evolving technology.
Husain pointed to the Hollywood writers’ strike as a major development that reflects the growing tension between the pursuit of efficiency through AI and society’s desire to protect the rights and interests of individuals.
Ethical conversations about the development and use of AI can be split into two camps. On one side, there are generative AI accelerationists, who advocate for rapid innovation and deployment of AI systems even if it means potentially disregarding ethical considerations such as bias, transparency and privacy. On the other side are the generative AI decelerationists, who argue for more caution and increased AI regulation in order to prevent potential harm to society.
In response to these concerns, governments and regulatory bodies around the world have increased their scrutiny of AI technology and how it is applied. In October, the Biden administration issued an executive order for the responsible development of AI, the first of its kind in history. The year also brought the first leader-level summits dedicated specifically to addressing AI safety risks. The Responsible AI Leadership Summit was held in April, and the AI Governance Summit, hosted by the World Economic Forum, took place in November.
Barely a month after the November summit, the EU reached a provisional agreement on the Artificial Intelligence Act, a comprehensive framework for the development and use of AI in the EU. The act covers a wide range of applications and establishes rules for transparency, accountability and risk management; its intent is to ensure that AI is developed in a way that is safe and ethical, while respecting fundamental human rights such as privacy.
2023 was a pivotal year that brought AI to the forefront, with advances in machine learning and natural language processing technologies taking the spotlight. The increased adoption of AI models and edge computing solutions has enabled businesses to automate processes and improve decision making, among other positives.
However, with the benefits of AI come concerns around its ethical, legal and social implications. Discussions about responsible AI and decisions supporting increased regulation of the sector have begun and look set to continue.
Securities Disclosure: I, Meagen Seatter, hold no direct investment interest in any company mentioned in this article.