Sophisticated natural language processing systems like OpenAI’s GPT-2 can craft text that’s impressively humanlike, but those same AI models often struggle to generate coherent stories.
This shortcoming motivated scientists at Carnegie Mellon University’s School of Computer Science to devise a method that creates more “diverse” endings for a given story. The key, they say, was training models to focus attention on important phrases in the story and to promote the generation of non-generic words.
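The article does not detail the researchers’ implementation, but the second idea, steering a decoder away from generic words, can be illustrated with a minimal sketch. The code below is a hypothetical toy example, not the CMU method: names like `adjust_logits`, `corpus_freq`, and the `strength` parameter are assumptions for illustration only.

```python
import numpy as np

def adjust_logits(logits: np.ndarray, corpus_freq: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Penalize tokens that are very common in the training corpus.

    logits      -- raw decoder scores, one per vocabulary item
    corpus_freq -- relative frequency of each vocabulary item (sums to 1)
    strength    -- assumed weight of the penalty (hypothetical)
    """
    # Subtracting a term proportional to log frequency pushes down very common
    # ("generic") tokens and leaves rarer, more specific tokens relatively higher.
    return logits - strength * np.log(corpus_freq + 1e-9)

def sample_token(logits: np.ndarray) -> int:
    """Sample one token index from softmax-normalized logits."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Tiny demo vocabulary: a generic word vs. two more specific ones.
vocab = ["said", "whispered", "confessed"]
logits = np.array([2.0, 1.5, 1.4])          # decoder slightly prefers "said"
corpus_freq = np.array([0.30, 0.01, 0.005])  # "said" is far more frequent

adjusted = adjust_logits(logits, corpus_freq)
print(vocab[sample_token(adjusted)])  # now more likely to pick a non-generic word
```

In practice, a reranking or training-time objective would serve the same purpose; this sketch only shows why down-weighting high-frequency tokens tilts generation toward less generic endings.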
“A story context is a sequence of sentences connecting characters and events. This task is challenging as it requires modeling the characters, events, and objects in the context, and then generating a coherent and sensible ending based on them. Generalizing the semantics of the events and entities and their relationships across stories is a non-trivial task,” wrote the coauthors. “We show that the combination of the two leads to more diverse and interesting endings.”
Read the rest at Venture Beat