AI Futures and Trends 2024
In the ever-evolving realm of artificial intelligence, the year 2023 has been nothing short of extraordinary. From groundbreaking product launches to intense policy debates, the AI landscape has witnessed a flurry of activity that has kept tech enthusiasts and policymakers alike on the edge of their seats. As the year unfolded, it became increasingly apparent that the AI sector was at a critical juncture, grappling with both unprecedented opportunities and daunting challenges. In this comprehensive review, we delve into the key developments that shaped the narrative of AI in 2023.
The year kicked off with a resounding focus on generative AI, with major tech players placing significant bets on this transformative technology. OpenAI’s ChatGPT set the stage for a series of launches from industry giants, including Meta’s LLaMA 2, Google’s Bard chatbot and Gemini, Baidu’s Ernie Bot, and OpenAI’s GPT-4, among others. The proliferation of AI models prompted discussions about their potential applications, but the initial excitement was met with a reality check. Despite the plethora of launches, the quest for a breakthrough AI application that resonates with users remained elusive. Microsoft and Google, for instance, touted powerful AI-powered search capabilities, only to face setbacks due to language model flaws, leading to comical and, at times, embarrassing outcomes.
The focus now shifts to the next phase as OpenAI and Google experiment with enabling companies and developers to create customised AI chatbots. The potential integration of generative AI into practical, productivity-enhancing tools, such as AI assistants with voice capabilities and coding support, could define the technology’s true value in the coming year.
The rapid deployment of large language models into products has prompted a surge in research aimed at understanding their intricacies. These models, while powerful, exhibit flaws ranging from making things up to severe gender and ethnic biases. The year 2023 saw revelations about the varying political biases present in different language models and their susceptibility to hacking for the extraction of private information.
Efforts to curb these issues included techniques like reinforcement learning from human feedback, aiming to guide language models towards more desirable outputs. However, many of these attempts proved to be quick fixes rather than permanent solutions. The year also shed light on the environmental impact of generative AI, with studies revealing the considerable energy consumption associated with generating images using powerful AI models.
AI doomerism, the idea that AI could pose existential risks to humanity, gained prominence in 2023. Leading figures in the AI community, including Geoffrey Hinton, Yoshua Bengio, and top AI CEOs, voiced concerns about the potential consequences of developing superintelligent AI. While some dismissed these fears as “ridiculous,” the increased attention on AI’s potential to cause extreme harm prompted crucial conversations about AI policy. Lawmakers worldwide took action, with European lawmakers finalising the AI Act, introducing binding rules and standards to ensure responsible AI development and banning certain “unacceptable” AI applications.
The widespread adoption of ChatGPT brought AI policy and regulation to the forefront in 2023. The U.S. Senate and the G7 engaged in discussions on AI policy, while European lawmakers concluded a busy year by agreeing on the AI Act. The White House introduced an executive order on AI, emphasising transparency and standards. Notably, the proposal of watermarks in AI-generated content for tracking and accountability gained attention. Simultaneously, the legal landscape saw a surge in lawsuits as artists and writers challenged AI companies over alleged intellectual property infringements.
Amidst these developments, a notable resistance emerged, epitomised by the University of Chicago’s Nightshade, a data-poisoning tool empowering artists to combat generative AI by disrupting training data. The year witnessed a shift from the AI Wild West to a more regulated environment, signifying a growing awareness of the need for accountability.
OpenAI’s dedication to preventing a superintelligence from going rogue became apparent with the results from its super alignment team. Led by Chief Scientist Ilya Sutskever, the team introduced techniques allowing less powerful language models to supervise more potent ones, offering a glimpse into potential methods for human oversight of advanced AI systems.
Google made significant strides in AI integration in 2023, unveiling PaLM 2, its latest AI language model, integrated into over 25 products, including Maps, Docs, Gmail, and the Bard chatbot. This move, albeit high-risk, was deemed necessary in the face of fierce competition. Despite the known flaws in AI language models, Google’s strategy aims to position itself as a one-stop-shop for AI-powered products and features. The introduction of Bard to the general public and the promise of image-based prompts underscore Google’s determination to dominate the AI landscape.
However, the risks associated with AI language models, including making things up, prompt injection attacks, and hallucinations, pose challenges for Google. Critics argue that the emphasis on product releases may compromise the scientific rigour needed to ensure AI safety and reproducibility. As regulatory scrutiny increases, the delicate balance between innovation and responsible AI deployment remains a focal point for Google.
The race to find the killer app for generative AI led major players like OpenAI, Meta, and Google to enhance their AI chatbots’ capabilities, turning them into more versatile AI assistants. OpenAI introduced features enabling ChatGPT to engage in lifelike spoken conversations and search the web, while Google’s Bard seamlessly integrated into its ecosystem, offering users the ability to ask questions and retrieve information across various platforms. Meta joined the fray, extending AI chatbots’ reach to WhatsApp, Messenger, and Instagram.
Despite the promises of enhanced user experiences, concerns loom over the security and privacy implications of AI assistants accessing sensitive information like emails, calendars, and private messages. Indirect prompt injection attacks, a known vulnerability, pose a significant threat, raising questions about the robustness of the safeguards implemented by tech giants.
In a noteworthy experiment, Anthropic’s AI lab explored the self-correction potential of large language models. By simply prompting these models to produce unbiased content without explicitly defining bias, researchers observed a positive impact on the models’ outputs. The phenomenon, observed in models with over 22 billion parameters, suggests a form of self-correction influenced by the expansive training datasets containing both biassed and counteracting examples.
The implications of this self-correction mechanism prompt discussions about integrating such capabilities into language models from the outset. The concept of “constitutional AI,” where models automatically test outputs against human-written ethical principles, emerges as a potential avenue to embed ethical considerations into AI systems.
As 2023 draws to a close, the AI landscape stands at the intersection of innovation, accountability, and ethical considerations. The coming year holds the promise of unveiling the true value of generative AI, navigating the evolving policies and regulations, and addressing the persistent challenges associated with language models. As we step into the uncharted territory of AI in 2024, the industry’s ability to embrace responsible practices, transparency, and sustainability will play a pivotal role in shaping the future of artificial intelligence. The journey continues, and the narrative of AI unfolds with each breakthrough, setback, and ethical consideration, charting a course toward a more nuanced and conscientious AI landscape.
for all my daily news and tips on AI, Emerging technologies at the intersection of humans, just sign up for my FREE newsletter at www.robotpigeon.beehiiv.com