Thursday, 21 November 2024

Major Players Seek to Forge New AI Development Deals

As we enter the next stage of AI development, pressing questions about the safety implications of AI systems are emerging. Simultaneously, companies are scrambling to establish exclusive data deals to ensure their models are well-equipped to meet expanding use cases.

Meta, for its part, has established its own AI product advisory council, made up of external experts who will advise on evolving AI opportunities.

The Importance of AI Safety

With many large, well-resourced players looking to dominate the next stage of AI development, it is crucial that safety implications remain a primary concern. Safety agreements and accords provide additional protections, based on assurances from participants and collaborative discussions on next steps.

The looming fear is that AI could eventually surpass human intelligence and potentially enslave the human race. However, we are not close to that stage yet. While the latest generative AI tools are impressive, they do not "think" for themselves. They are essentially sophisticated mathematical models without consciousness.

Understanding AI Limitations

Meta’s chief AI scientist, Yann LeCun, one of the most respected voices in AI development, recently explained:

"[LLMs] have a very limited understanding of logic, don’t understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term, and cannot plan hierarchically."

In other words, these systems cannot replicate human or even animal brains, despite producing increasingly human-like content. It’s mimicry and smart replication within the parameters of their systems.

The Pursuit of AGI

Several groups, including Meta, are working on Artificial General Intelligence (AGI), which aims to replicate human-like thought processes. However, we are still far from achieving this. As LeCun noted in an interview earlier this year:

"Once we have techniques to learn “world models” by just watching the world go by and combine this with planning techniques and short-term memory systems, then we might have a path towards cat-level intelligence. Before we reach human-level intelligence, we will have to go through simpler forms of intelligence. And we’re still very far from that."

The Importance of Accurate Data

Even though AI systems do not understand their outputs, they are increasingly embedded in informational surfaces like Google Search and X trending topics. This makes AI safety critical, as these systems can produce wholly false reports. Ideally, all AI developers would commit to shared safety accords, yet not every platform building AI models has signed on. X, for instance, despite its heavy focus on AI, is notably absent from several initiatives, preferring to pursue its projects independently.

The Data Race

Given the need for more data to fuel evolving AI projects, platforms are now seeking agreements to access human-created information. The alternative, using AI models to generate content and then training those same models on it, could lead to a diluted internet filled with derivative, repetitive, and non-engaging bot-created junk.

Human-created data is a hot commodity. Reddit, for example, has restricted access to its API but has made deals with Google and OpenAI. X, on the other hand, is keeping its user data in-house to power its own AI models. Meta is also seeking deals with big media entities, while OpenAI has reached an agreement with News Corp., the first of many expected publisher deals in the AI race.

The Future of AI Development

The current wave of generative AI tools is only as good as the language model behind it. It will be interesting to see how these agreements evolve as each company tries to secure its future data stores. Larger players that can afford to cut deals with data providers will likely lead the way, making it increasingly difficult for smaller projects to compete. With more AI safety regulations being enacted, it is crucial to monitor these shifts as we progress towards the next stage of AI development.
