This is part six of a six-part series on the history of natural language processing.

In February of this year, OpenAI, one of the foremost artificial intelligence labs in the world, announced that a team of researchers had built a powerful new text generator called the Generative Pre-trained Transformer 2, or GPT-2 for short. The researchers trained their system as an unsupervised language model on a vast corpus of web text, simply teaching it to predict the next word, and found that it acquired a broad set of natural language processing (NLP) capabilities, including reading comprehension, machine translation, and the ability to generate long strings of coherent text.

But as is often the case with NLP technology, the tool…

This is only a snippet of an article written by Oscar Schwartz.
