The Myth of ‘Open Source’ AI


A new analysis shows that “open source” AI tools like Llama 2 are still controlled by big tech companies in several important ways.

ChatGPT made it possible for anyone to play with powerful artificial intelligence, but the inner workings of the world-famous chatbot remain a closely guarded secret.

In recent months, however, efforts to make AI more “open” have gained momentum. In March, Meta’s Llama model leaked online, giving outsiders access to its underlying code as well as the “weights” that determine how it behaves. Then, in July, Meta made an even more powerful model, Llama 2, available for anyone to download, modify, and reuse. Meta’s models have since become a popular foundation for companies, researchers, and hobbyists building tools and applications with ChatGPT-like capabilities.

“We have a broad range of supporters around the world who believe in our open approach to today’s AI … researchers committed to doing research with the model, and people across tech, academia, and policy who see the benefits of Llama and an open platform as we do,” Meta said when announcing Llama 2. This morning, Meta released another model, Code Llama, that is fine-tuned for coding.

It might seem as if the open source approach, which has democratized access to software, ensured transparency, and improved security for decades, is now poised to have a similar impact on AI.

Not so fast, say the researchers behind a paper that examines the reality of Llama 2 and other AI models described, in one way or another, as “open.” The authors, from Carnegie Mellon University, the AI Now Institute, and the Signal Foundation, argue that models branded “open” may come with significant catches.

Read the rest at Wired