Introduction to Wikidata using SPARQL

You’re certainly familiar with Wikipedia, but you may not be aware of Wikidata, an ongoing effort to structure some of the data underlying Wikipedia. Traditionally, facts (e.g. the population of New York City) are embedded in the text of a wiki, and there’s no easy way to extract them automatically. Wikipedia has a little more structure than this (infoboxes, for example), but it’s still really designed for humans rather than machines.

Wikidata is the opposite – designed for machines, not humans.

It’s part of the broader semantic web movement, which aims to make the web more and more machine-readable. Most of the time you don’t notice this, but when you run a search like “spouse of George Washington” and see a direct answer rather than just a collection of links, that’s Google taking advantage of semantic web data (probably – they might also be using machine learning to infer it from unstructured text).
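To make this concrete, here is roughly what that lookup looks like as a SPARQL query against the Wikidata Query Service (query.wikidata.org). Wikidata identifies items by Q-numbers and properties by P-numbers: Q23 is the item for George Washington, and P26 is the “spouse” property. This is a minimal sketch, not a tour of the query language:

```sparql
# Find the spouse(s) of George Washington (item Q23)
SELECT ?spouse ?spouseLabel WHERE {
  # Pattern: Q23 --spouse (P26)--> ?spouse
  wd:Q23 wdt:P26 ?spouse .
  # Ask Wikidata's label service for human-readable English names
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
```

Pasted into query.wikidata.org, this should return Martha Washington – the same fact the search engine surfaces, but retrieved from structured data rather than parsed out of prose.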

[Screenshot: Google search results for “spouse of George Washington”, showing a direct answer above the usual list of links]