Knowledge Graphs in the Era of Neural Nets
Abstract: When I started working on the knowledge graph ConceptNet, the pitch for it was easy to make: Computers don't understand the meanings behind the words we use. They don't understand the common-sense assumptions that we rely on to communicate with each other. They need a knowledge resource where they can look up the basic things people know that aren't in their input data. A crowdsourcing and linked-data project was needed to provide that knowledge.
Now, the landscape of NLP has been changed by neural models with enormous amounts of input data, such as contextualized word embeddings and Transformer models, and the evidence is that these systems do know a thing or two about common sense and the human experience. They can even generate mostly-coherent stories and articles. This makes the need for external knowledge less clear, as similar knowledge appears implicitly in trained models. But in this new landscape, knowledge graphs have taken on new roles. We can now see a duality between neural models and knowledge graphs, where each representation can partially describe the other, and each also appears to fill in knowledge that is difficult to represent in the other.
Knowledge graphs are particularly helpful in domains where data is scarce and in low-resource languages. Recent results, which I'll discuss, have shown the benefits of systems that combine neural nets and knowledge graphs, and the potential for further exploration of such systems.
Bio: Robyn Speer is Chief Scientist at Luminoso and lead maintainer of ConceptNet.