Nearest Michelin-starred restaurant to your hotel? Check. The meaning of life? Check. When it comes to the all-encompassing task of answering any question we could have, search engines such as Google and Baidu never fail to deliver. At the heart of this all-knowingness is the ability to read and understand both the question and the encyclopaedic volume of information on the internet. While reading comprehension comes naturally to humans, designing machines that achieve similar capabilities remains a challenging task.
Current reading comprehension technology hinges on artificial neural networks, computer programs loosely modeled on the human brain, which comprise several ‘layers.’ More layers enable a network to produce more complex and informative results; for example, successive layers allow a network to recognize a person’s face by first detecting clusters of pixels, then edges and facial features. Having more layers comes at a price, however: slower information flow and a need for more computational power.
To overcome the constraints associated with machine reading comprehension, researchers from the Institute of Infocomm Research (I2R) and Nanyang Technological University, Singapore, have designed a new architecture for training neural networks to read. Named Densely Connected Attention Propagation for Reading Comprehension, or DECAPROP for short, their model could yield faster and improved learning, producing more accurate and efficient machine reading.
The study’s lead authors, Yi Tay and his supervisor, Anh Tuan Luu, highlighted one of the key elements implemented in DECAPROP: the bidirectional attention connector (BAC), which enables the network to build contextual relationships between words in a given text. For instance, the word ‘cold’ could refer to an illness, the temperature or someone’s behavior. Arriving at the correct interpretation would require a machine to recognize the context by processing the entire text.
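The idea behind such context-building can be sketched in a few lines. The snippet below is a generic self-attention pass in NumPy, a deliberate simplification rather than DECAPROP's actual BAC: each word's representation becomes a weighted mix of every word in the sentence, so an ambiguous word like 'cold' ends up blended with its surrounding context. The embeddings are random placeholders for illustration.

```python
import numpy as np

def attention(query, keys, values):
    """Generic scaled dot-product self-attention (a sketch, not DECAPROP's BAC).

    Each word's output vector is a weighted average of all words' value
    vectors, with weights given by how similar the words are.
    """
    d = keys.shape[-1]
    scores = query @ keys.T / np.sqrt(d)            # similarity of each word to every other
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)       # softmax: rows sum to 1
    return weights @ values                         # context-mixed representations

# Toy 4-word "sentence" with random 8-dimensional embeddings (illustrative only)
rng = np.random.default_rng(0)
words = rng.normal(size=(4, 8))
contextual = attention(words, words, words)         # self-attention over the sentence
print(contextual.shape)                             # (4, 8): one context-aware vector per word
```

In a full model such weights are learned, and DECAPROP applies attention bidirectionally between question and passage, but the core operation, mixing representations by relevance, is the same.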
“DECAPROP also increases the number of interaction interfaces, by matching layers in an asynchronous, cross-hierarchical fashion that leads to an improvement in performance,” added Luu.
The researchers then put the model to the test using five datasets—NewsQA, Quasar-T, SearchQA, NarrativeQA and SQuAD—comprising hundreds of thousands of question-answer pairs. These datasets allow scientists to assess a neural network’s ability to extract accurate answers to questions about long and complex texts, such as news articles, books and movie scripts.
“DECAPROP achieved exceptional performance on four datasets, achieving a significant gain of 2.6–14.2 percent absolute improvement in F1 score over the existing state-of-the-art,” said Luu. The F1 score is a single measure combining a network’s precision, the fraction of its answers that are correct, with its recall, the fraction of the correct answers it finds.
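Concretely, the F1 score is the harmonic mean of precision and recall. The short function below illustrates the calculation; the numbers in the example are hypothetical and are not taken from the paper.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical numbers for illustration only: a reader whose predicted
# answers are 90% correct (precision) and that finds 80% of the true
# answers (recall) scores:
print(round(f1_score(0.9, 0.8), 3))  # 0.847
```

Because it is a harmonic mean, the F1 score punishes imbalance: a model with perfect precision but very low recall still scores poorly, which is why it is a standard yardstick for question answering.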
The results, which have been presented in a paper at the 32nd Conference on Neural Information Processing Systems, pave the way for enhanced reading comprehension skills in machines, which could be applied in diverse fields such as healthcare, customer service and language translation.
“The modularity of the BAC allows easy equipping to other models and domains, thus enabling a wider usage of this model in reading comprehension applications,” explained Luu. “DECAPROP can, therefore, be used for any application that requires machine comprehension or question answering.”
The A*STAR-affiliated researchers contributing to this research are from the Institute of Infocomm Research (I2R).