Four Trends That Changed Artificial Intelligence in 2023: A Wild Year for AI and Future Development Trends
2023 was a unique and turbulent year for artificial intelligence (AI). It witnessed countless product launches, internal power shifts within companies, intense policy debates about AI disasters, and a race to find the next major innovation. However, we also saw concrete tools and policies emerge aimed at making the AI industry more responsible and holding powerful players accountable. All of this brings great hope for the future of AI.
Here are the key takeaways from AI in 2023:
At the beginning of the year, major tech companies invested heavily in generative AI research and development. The enormous success of OpenAI's ChatGPT prompted every major tech company to release its own version. Yet despite a steady stream of new models, such as Meta's LLaMA 2, Google's Bard chatbot and Gemini, Baidu's Ernie Bot, and OpenAI's GPT-4, no AI application has become an overnight sensation. The AI-driven search features launched by Microsoft and Google have not turned out to be the killer applications many expected.
Although tech companies are rapidly rolling out large language model products, we still know very little about how these models work. They often fabricate information and exhibit serious gender and racial biases. Research this year has also found that different language models carry different political biases, and that they can be exploited to extract people's private information.
Discussions about the potential existential risks AI may pose to humanity became prevalent this year. Figures ranging from deep learning pioneers Geoffrey Hinton and Yoshua Bengio, to CEOs of top AI companies such as Sam Altman and Demis Hassabis, to scientists, business leaders, and policymakers including California Congressman Ted Lieu and former Estonian President Kersti Kaljulaid, have all joined the debate.
Thanks to ChatGPT, this year saw discussions of AI policy and regulation everywhere from the U.S. Senate to the G7. European legislators reached agreement on the AI Act this year, introducing binding rules and standards for developing higher-risk AI more responsibly while prohibiting certain 'unacceptable' AI applications.
One policy proposal that has drawn particular attention is watermarking: invisible signals embedded in text and images that computers can detect, marking content as AI-generated. Watermarks can be used to trace plagiarism or help combat misinformation, and this year saw successful research applying watermarks to AI-generated text and images.
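As a rough illustration of how a text watermark can work, here is a minimal sketch of one scheme explored in recent research: the previous token pseudorandomly selects a "green" half of the vocabulary, a watermarking generator favors tokens from that half, and a detector simply measures how often tokens land on their green list. The function names and the toy hashing scheme below are illustrative assumptions, not any vendor's actual implementation.

```python
import hashlib

def green_list(prev_token, vocab, fraction=0.5):
    """Pseudorandomly select a 'green' subset of the vocabulary,
    seeded by the previous token (deterministic, via SHA-256)."""
    k = int(len(vocab) * fraction)
    ranked = sorted(
        vocab,
        key=lambda w: hashlib.sha256(f"{prev_token}|{w}".encode()).hexdigest(),
    )
    return set(ranked[:k])

def green_fraction(tokens, vocab):
    """Fraction of tokens drawn from their predecessor's green list.

    Unwatermarked text lands near the chance rate (here ~50%); a
    watermarked generator that favors green tokens scores much higher."""
    hits = sum(cur in green_list(prev, vocab)
               for prev, cur in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

A detector would flag text whose green fraction is statistically too high to be chance; because the partition is keyed only by the preceding token, detection needs no access to the model itself.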
It's not just legislators who have been busy; lawyers have too. Arguing that AI companies used their intellectual property without consent or compensation, artists and writers filed a record number of lawsuits this year.
In an exciting counterattack, researchers at the University of Chicago developed 'Nightshade', a new data poisoning tool that lets artists fight back against generative AI by corrupting training data, potentially doing serious damage to image-generating models. A rebellion is brewing, and we can expect more grassroots efforts next year to shift the balance of power in technology.
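Nightshade's actual technique adds imperceptible perturbations to images; as a much looser illustration of the underlying idea, that a small amount of corrupted training data can degrade a model, the toy sketch below injects mislabeled points into a nearest-centroid classifier's training set. The 2-D data and the classifier are invented for illustration and have nothing to do with Nightshade's real method.

```python
def centroid(points):
    """Mean of a list of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(p, c0, c1):
    """Assign p to the nearer class centroid (class 0 or class 1)."""
    d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
    d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
    return 0 if d0 <= d1 else 1

def accuracy(train0, train1, test):
    """'Train' a nearest-centroid classifier and score it on (point, label) pairs."""
    c0, c1 = centroid(train0), centroid(train1)
    return sum(classify(p, c0, c1) == y for p, y in test) / len(test)
```

Injecting points that look like class 1 into class 0's training set drags class 0's centroid across the decision boundary, so test points are silently misclassified, which is the same failure mode poisoning aims to induce, at scale, in image generators.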
As 2023 comes to a close, we look forward to the future of AI with anticipation. Despite numerous challenges, this year has deepened our understanding of AI and prompted more reflection on how to better utilize this technology. The coming year will be crucial in determining the true value of generative AI.