Experts Predict 4 Possible AI Outcomes in 2024, and Why OpenAI Might Face Challenges

Written By Blog Lit

  • From OpenAI to Gemini, AI has taken over Silicon Valley and the world in 2023.
  • AI experts gave Business Insider their predictions for what 2024 has in store for the technology.

The AI industry has had a wild 2023.

The year started with ChatGPT becoming the fastest-growing app ever. Now, Google has introduced Gemini as its response to OpenAI’s successful AI model. Throughout the year, AI has changed many aspects of the tech industry and raised concerns about existential threats.

Experts say that in 2024, AI will probably become an even more significant part of our lives. Companies like Google and individuals like Elon Musk are competing to outdo OpenAI.

Here are their big predictions for the next 12 months:

1. AI will be everywhere
Google finished the year strongly by introducing Gemini, an AI model it claims can compete with OpenAI's GPT-4, and it plans to release more advanced versions in the coming months.


Google CEO Sundar Pichai has launched Gemini as a direct competitor to OpenAI. In response, OpenAI plans to open a GPT store in early 2024, allowing users to create and sell their own versions of ChatGPT.

Experts believe AI will become more prevalent in daily life as tech companies build it into a wide range of products, driving widespread adoption in 2024. Charles Higgins, co-founder of the AI-training startup Tromero, sees accessibility as the key factor: with tools like Gemini already integrated into familiar products, using AI tools will become the norm rather than the exception.


Meta’s top AI scientist, Yann LeCun, is a prominent champion of open-source models such as Meta’s Llama 2, which anyone can use and modify free of charge. Meta has leaned heavily into open-source AI, releasing Llama 2 publicly and partnering with other companies in the name of “open science.”

But training AI models is enormously expensive, so even as open-source models spread, it is still mainly large companies like Meta that can afford to train them from scratch.

“Training models is really, really expensive,” said Sophia Kalanovska, who also works at Tromero. She believes the open-source community will continue to depend on big companies like Meta to release their models, because only they have the resources to train them.

2. OpenAI will feel the heat

OpenAI has been riding high on ChatGPT since the chatbot took off in late 2022. Recently, however, users have noticed problems: complaints that ChatGPT’s performance has degraded and that it refuses some tasks outright. OpenAI is investigating reports that the chatbot has become “lazier.”

“I think ChatGPT has not been good for the past three weeks. There have been constant network errors, and the responses are much shorter,” said Kalanovska.

The chatbot’s strange behavior shows that there’s still a lot we don’t know about how large language models like ChatGPT work. This has also added more pressure on OpenAI, which has already been dealing with chaos in recent weeks.


Sam Altman’s sudden departure and return as CEO have made OpenAI’s leadership in the AI competition seem shaky. Some startup customers are switching to competitors, and Microsoft is developing its own AI systems to reduce dependence on OpenAI. With rivals like Gemini and Elon Musk’s Grok entering the scene, OpenAI might face more challenges next year in the crowded AI industry.

“There’s a weakness in their position at the moment due to the recent drama,” said Higgins. “It caused disruption, and I believe other major players are eager to seize the opportunity.”

3. AI companies face looming copyright battle

There’s a major open legal question: is it permissible to train AI models on copyrighted content? Lawsuits such as Getty Images v. Stability AI in the UK and Sarah Silverman’s suit against OpenAI in the US center on exactly this issue.

Dr. Andres Guadamuz, an expert in intellectual property law, says the question remains unsettled in most countries. He predicts that 2024 may bring a ruling or two that clarifies matters, but that it could take four to five years to settle fully.

The legal uncertainty is a serious problem for the AI industry. Tech companies admit that paying for copyrighted data could make it impossible to train large, complex models like GPT-4, and if AI companies lose these lawsuits, they could owe significant damages. Even so, a loss might set back the AI revolution without stopping it entirely: Guadamuz suggests AI development might simply move to countries with more relaxed rules.

4. Regulation urgently needed

US politicians famously failed to introduce new laws curbing the influence of social media — and now history seems to be in danger of repeating itself with AI.


The EU recently agreed on rules governing AI tools, but the US, despite plenty of discussion, has made little progress on regulation.

Experts say 2024 must bring a serious focus on regulating AI, given its impact on jobs and growing concerns about AI-generated content. Vincent Conitzer, a computer science professor, stresses the need for detailed regulation while acknowledging how hard it is to keep pace with fast-moving AI developments. Guadamuz argues that regulators should act promptly rather than wait for courts to decide AI's legal questions.