ELEVANT: Entity Linking Evaluation and Analysis Tool

Complete documentation can be found on GitHub.
Also check out our demo video.
The paper introducing ELEVANT can be found here, and a paper using ELEVANT for an in-depth evaluation can be found here.

Evaluation results

Evaluation mode

Result categories

Select a checkbox to see evaluation results for that category. Hover over a checkbox label for an explanation of the category.

Mention types:
Error categories:
Entity types:

Group table rows

Filter table rows

Linking results

Select a cell in the evaluation results table above to see the linking results of the selected experiment here.

Select article:
Underlined text corresponds to a groundtruth mention, highlighted text to a predicted mention.
If a groundtruth mention and a predicted mention overlap, these annotations overlap, too.
Mention Text True positive: Predicted mention span and entity match a groundtruth mention span and entity
Mention Text False positive: Predicted mention either does not match a groundtruth mention span or the predicted entity does not match the groundtruth entity
Mention Text False negative: Groundtruth mention either does not match a predicted mention span or the predicted entity does not match the groundtruth entity
Mention Text Unknown predicted entity: Predicted entity that could not be mapped to an entity in the knowledge base (= unknown); not evaluated
Mention Text Unknown groundtruth entity: Groundtruth entity that could not be mapped to an entity in the knowledge base (= unknown); not evaluated
Mention Text Unevaluated prediction: Prediction that either matches an optional groundtruth mention span and entity or lies outside of the evaluation span
Mention Text Optional groundtruth mention
Some Text Hyperlink in the groundtruth text
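The legend above classifies annotations by matching both the mention span and the linked entity: a prediction counts as a true positive only if span and entity both agree with a groundtruth mention. As a minimal sketch of this matching logic (not ELEVANT's actual implementation; the (span, entity) pair representation and the metric helper are assumptions for illustration):

```python
def classify_mentions(groundtruth, predicted):
    """Split mentions into true positives, false positives and false negatives.

    Both arguments are sets of (span, entity) pairs, where span is a
    (start, end) character-offset tuple and entity is an entity identifier.
    A pair counts as a true positive only if span AND entity match exactly.
    """
    true_positives = groundtruth & predicted
    # Prediction whose span or entity matches no groundtruth mention.
    false_positives = predicted - groundtruth
    # Groundtruth mention with no exactly matching prediction.
    false_negatives = groundtruth - predicted
    return true_positives, false_positives, false_negatives


def precision_recall_f1(tp, fp, fn):
    """Standard precision/recall/F1 from the category counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, if the linker predicts the right span but the wrong entity for one of two groundtruth mentions, that mention yields both a false positive (the wrong prediction) and a false negative (the missed groundtruth mention), giving precision and recall of 0.5 each.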