Google AI’s ALBERT claims top spot in multiple NLP performance benchmarks


Researchers from Google AI (formerly Google Research) and the Toyota Technological Institute at Chicago have created ALBERT, an AI model that achieves state-of-the-art results exceeding human performance. ALBERT now claims first place on major NLP leaderboards for benchmarks like GLUE and SQuAD 2.0, along with a top score on RACE.

On the Stanford Question Answering Dataset (SQuAD 2.0) benchmark, ALBERT achieves a score of 92.2; on the General Language Understanding Evaluation (GLUE) benchmark, it achieves a score of 89.4; and on the ReAding Comprehension from English Examinations (RACE) benchmark, it scores 89.4%.

ALBERT is a version of the Transformer-based BERT that “uses parameter reduction techniques to lower memory consumption and increase the training speed of BERT,” according to a paper published Wednesday on OpenReview.net. The paper was published alongside other papers accepted for the International Conference on Learning Representations (ICLR), which will take place in April 2020 in Addis Ababa, Ethiopia. ICLR will be the first international AI community conference held in Africa.
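One of the parameter-reduction techniques the paper describes, factorized embedding parameterization, splits the large vocabulary-embedding matrix into two smaller ones. A back-of-the-envelope sketch of the savings (the vocabulary, hidden, and embedding sizes below are illustrative, roughly BERT-like values, not figures from this article):

```python
# Factorized embedding parameterization: instead of mapping the
# vocabulary directly into the hidden space (V x H parameters),
# map into a small embedding space first (V x E), then project up
# to the hidden size (E x H). Parameters shrink when E << H.

V = 30_000  # vocabulary size (illustrative)
H = 768     # Transformer hidden size (illustrative)
E = 128     # factorized embedding size (illustrative)

unfactorized = V * H        # BERT-style single embedding table
factorized = V * E + E * H  # ALBERT-style two-step embedding

print(f"unfactorized: {unfactorized:,} parameters")  # 23,040,000
print(f"factorized:   {factorized:,} parameters")    # 3,938,304
print(f"reduction:    {unfactorized / factorized:.1f}x")
```

The embedding table dominates BERT's parameter count at small model sizes, which is why factorizing it (together with sharing parameters across layers) lets ALBERT scale to wider hidden sizes within the same memory budget.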

“Our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs,” the paper reads.

ALBERT is the latest derivative of BERT to claim a top spot in major benchmark tests. In late July, Facebook AI Research introduced RoBERTa, a model that achieved state-of-the-art results, and in May, Microsoft AI researchers introduced Multi-Task Deep Neural Network (MT-DNN), a model that achieved top marks in 7 of 9 GLUE benchmark tasks.

Each of these models outpaces average human performance.


In other Transformer-related news, Hugging Face, a startup whose PyTorch library makes it easy to use major Transformer models like BERT, OpenAI’s GPT-2, and Google’s XLNet, today made that library available for TensorFlow. PyTorch-Transformers has seen more than 500,000 Pip installs since the start of the year, Hugging Face CEO Clément Delangue told VentureBeat.

More to come.


