Originally published in October 2023 and written by John White
42% of CEOs say artificial intelligence could destroy humanity in the next five to ten years. Elon Musk believes that “there should be a regulatory body established for overseeing AI to make sure that it does not present a danger to the public.” And Sam Altman, CEO of OpenAI, is on record stating that “if this technology goes wrong, it can go quite wrong…”
So it sounds like the AI apocalypse is around the corner…
Except, despite the headlines, that’s not everyone’s view. It may not sell as many newspapers, but there are plenty of experts who have a much more optimistic opinion of the technology and its potential impacts.
One of these is Marc Andreessen, co-founder of the VC firm Andreessen Horowitz (a16z).
“The opportunities are profound. AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.
The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and our future.”
VC and Founder of a16z
In Andreessen’s view – and he’s far from alone – AI could make everything better. He argues that human intelligence is responsible for the progress that we’ve made thus far, and AI isn’t going to undo all that. It’s only going to supercharge human intelligence.
Concrete examples of how AI could augment human intelligence and bring about unforeseen progress are already plentiful. A quick look at the latest batch of start-ups to receive Y Combinator backing is replete with a host of AI-related firms.
Reworkd enables anyone to create a personalised “AI Agent” that can assist them with daily or mundane tasks. A variety of AI co-pilots are being developed that will help us quickly learn any new software, aid in managing international shippers and carriers, assist in contract negotiation, facilitate viral content creation, monitor your pregnancy for complications, and even help manage your entire life.
“You’ll notice AI has a big presence in the [Summer 2023] class — this is no accident. Recent developments in AI have unlocked an entire universe of possibilities, presenting a resoundingly clear answer to the question of ‘Why now?’. There has never been a better time to start an AI company than now.”
Garry Tan, CEO of Y Combinator
Even beyond YC, it’s hard to argue with Tan’s claim, and there is plenty of evidence to support Andreessen’s positive outlook on AI.
He believes that soon every child will have a personalized AI tutor to support them throughout their academic career. Recent announcements, such as Khan Academy’s launch of its AI tutor and teaching assistant, Khanmigo, back up this claim. Other AI-focused education initiatives around the world, such as China’s Squirrel AI and the UK’s CENTURY Tech, further validate Andreessen’s perspective.
Andreessen also argues that “the creative arts will enter a new golden age.” Anyone who’s heard some of the AI-generated tracks on YouTube or read about the “thousands of songs” that Spotify has removed from its platform may be sceptical – especially given existing intellectual property rights. But, the technological and regulatory landscape is likely to move fast around AI-generated art. At least, that’s the belief of Scott Belsky of Adobe.
“The level of creative confidence in all of us is going to go up materially. If you think about it, our peak creative confidence for most of us is kindergarten – when you’re proudly showing whatever drawing you’ve made to whoever is watching and you feel so confident in what you’re capable of.
It’s really exciting to see technology that raises the tide of creative confidence for everyone. We all have creativity and a desire to express ourselves but we struggle to do so. [AI] is going to be great for humanity.”
Chief Product Officer and Executive Vice President of Creative Cloud at Adobe
Andreessen’s article “Why AI Will Save the World” provides example upon example of how AI will lead to significant advancements in scientific discoveries, productivity, and even the decision-making power of our elected officials.
One of the most significant predictions he makes is the potential for AI to support us all on a very personal level.
“Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.”
VC and Founder of a16z
An open-and-shut case?
If Andreessen is right, it feels crucial to understand the foundations on which these models are built.
Scratch the surface and a conflict between two approaches emerges: on one side, closed and extensively monitored systems developed and managed by a single organisation (such as OpenAI and Microsoft); on the other, fully open-source models (like Meta’s Llama 2), where the code is accessible to all – software developers, scientists, and academics – to use and build on as they choose.
Many have assumed that open source – where code is publicly available to all – will win the day over closed proprietary platforms. The reality, however, is a little more nuanced. Although a number of the best-known AI models appear to be open, accessible and malleable to the whims of the average user, the foundations that underpin them are often closed-off and impenetrable – even when the AI tool itself is described as “open.”
A recent report by WIRED’s Will Knight found that Meta’s open-source large language model, Llama 2, and others like it may not be as open as they initially appear. Llama 2’s Community Licence Agreement is not a conventional open-source one. Instead, it stipulates that Llama 2 cannot be used to train other language models, and if developers want to use it within an app or service with more than a certain amount of daily users, a special licence will be required.
The added control that Llama 2’s licence gives Meta hints at the fact that “open” AI is not always what it seems. Although some AI platforms, such as the non-profit EleutherAI, run under a standard open-source licence, many others – especially those owned and operated by the tech giants – retain control while still benefiting from the input of outside developers. Given the scale of these organisations, this could end up harming competition and innovation.
“What our analysis points to is that openness not only doesn’t serve to ‘democratise’ AI. Indeed, we show that companies and institutions can and have leveraged ‘open’ technologies to entrench and expand centralised power.”
Researcher into the political economy of open AI
The reasons why open-source AI may not be as open as it seems are many. The first is cost. Running an open-source AI platform is a major financial commitment – the kind that Google or Microsoft can easily absorb but which is a tougher ask for a start-up.
Similarly, the computing power required to house and process the quantities of data needed to train these AI programs is vast. Sourcing the human expertise required is another hurdle. Coming regulations, which big tech has the resources to influence politically, could further undermine the potential of open-source AI.
“This is a really big issue, especially as larger companies, organisations with a lot of political power and money are typically centred in these conversations. It’s really a challenge for open-source developers, for academic researchers and for hobbyists. To do the kind of advocacy in the political arena, I think is really essential to having the kind of robust legal structures and support that we need to be able to continue to make these technologies widely accessible to the public.”
Lead scientist at Booz Allen Hamilton
Keeping AI behind closed doors may be self-defeating in the end, however. As the recently leaked Google doc indicates, “directly competing with open source is a losing proposition” for major corporations. One where consumers lose out, too.
Unlocking the true innovative potential of AI will happen much quicker if the technology is open to developers, scientists and technology hobbyists everywhere – not just a closed group.
Understanding the bias
There have already been many examples of AI displaying political bias, and the research into it is deepening.
Researchers from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University recently found that some large language models (LLMs) lean significantly more liberal and progressive, while others lean more socially conservative.
Interestingly, BERT models – the AI language models created by Google – exhibited more socially conservative tendencies compared to OpenAI’s GPT models. The researchers suggest that BERT’s social conservatism may be attributed to the fact that the older models were trained on books, which tend to be more conservative, while the newer GPT models are trained on more liberal internet texts.
Bias is unavoidable in humans, of course, so it stands to reason that it would crop up in AI models, too. But if it is present in AI, the workings that deliver that bias shouldn’t be a black box. Fortunately, the fact that research in this area is already being undertaken is a good sign – particularly as this is such a fast-moving field.
Just a few months after Llama 2 was released as one of the largest openly available LLMs, with 70 billion parameters, it was overtaken by Falcon 180B, an open-source model with 180 billion parameters. That places it somewhere between GPT-3.5 and GPT-4, depending on the evaluation benchmark you’re using. So, open-source AI may not quite match commercial models just yet, but the gap is closing. And the possibilities this will deliver are worth getting excited about.
As Andreessen points out, tech of all shapes and sizes has always had to deal with naysayers. Sure, there will be missteps along the way as AI develops, but that shouldn’t take away from the significant benefits that AI will unlock.
“The AI cat is obviously already out of the bag. You can learn how to build AI from thousands of free online courses, books, papers, and videos, and outstanding open-source implementations are proliferating by the day. AI is like air – it will be everywhere.”
VC and Founder of a16z