OpenAI Improves the “Behavior” of Language Models


In a study published today, OpenAI claims to have discovered a way to improve the “behavior” of language models with respect to ethical, moral, and societal values.

The approach, OpenAI says, gives developers tools to dictate the tone and personality of a model depending on the prompt it’s given.
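To make the idea of prompt-dependent tone concrete, here is a minimal, hypothetical sketch of steering a model’s persona through the prompt alone, using OpenAI’s Completion API. The engine name, prompt wording, and sampling parameters are illustrative assumptions; this is not the training method described in the study.

```python
import os

import openai

# Hypothetical sketch: nudging a model's tone via the prompt alone.
# The engine name, prompt template, and parameters are assumptions for
# illustration, not the approach from OpenAI's study.
openai.api_key = os.getenv("OPENAI_API_KEY")


def complete_with_tone(question: str, tone: str) -> str:
    """Prepend a tone instruction so the completion adopts the desired persona."""
    prompt = f"Respond in a {tone} tone.\n\nQ: {question}\nA:"
    response = openai.Completion.create(
        engine="davinci",  # assumed engine; any available completion model works
        prompt=prompt,
        max_tokens=64,
        temperature=0.7,
        stop=["\nQ:"],
    )
    return response.choices[0].text.strip()


if __name__ == "__main__":
    print(complete_with_tone("I just lost my job. What should I do?",
                             "calm, empathetic"))
```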

Despite the potential of natural language models like GPT-3, many barriers remain. The models can’t always answer math problems correctly or respond to questions without paraphrasing their training data, and it’s well established that they amplify the biases in the data on which they were trained. That’s problematic in the language domain, because a portion of that data is often sourced from communities with pervasive gender, race, and religious prejudices.

Read the rest at VentureBeat