OpenAI now says that it has made progress toward solving the alignment problem by creating a new version of GPT.
The term refers to the difficulty of making sure that an A.I. system does what humans want it to do. In traditional software, alignment wasn’t much of an issue, because humans both chose the goal they wanted the software to accomplish and wrote a very specific instruction set, or code, detailing every step the computer should take to achieve it. If the program did something wrong along the way, it was because the instructions were faulty.
With A.I., alignment is harder. While humans might specify the goal, the software itself now learns how best to achieve it. Often, the logic behind the software’s decision in any particular case is opaque, even to the person who created the software. And this problem becomes more challenging the more capable an A.I. system becomes.
OpenAI is interested in alignment because its founding mission is the creation of artificial general intelligence (AGI). That’s the kind of super-intelligent software that, for now, remains the stuff of science fiction—a single system that can perform most cognitive tasks as well as or better than a human.
Read More at Fortune