
Krippendorff's Alpha for position - implausible results #5305

@bgittel

Description
Describe the bug
I tried to understand how Krippendorfs Alpha unitizing for position is implemented and annotated a test doc by two annotators. One annotator has 5 annotations, the other just one. If I have 1 span with exact match, the score calculated is 0,4, if I have one span with overlap match (the span differs by one token) I get 0,42. How is this possible?
In fact, I would like to understand better how KA is implemented, especially how the agreement matrix is calculated, because I observed implausible results for other documents in my corpus as well. I would also like to know whether it would be possible to implement another metric (e.g. gamma) that seems better suited to dealing with overlapping spans.
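For context on the alpha family in general: all variants follow α = 1 − Do/De (observed over expected disagreement). Below is a minimal sketch of the simple nominal, two-annotator *coding* case only; it is not the unitizing variant for positions discussed above, which replaces the nominal 0/1 distance with a position-based distance over spans and gaps. The function name and toy data are illustrative assumptions.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for two annotators, nominal data.

    units: list of (value_a, value_b) pairs, one tuple per coded unit,
    with both annotators having coded every unit.
    """
    # Coincidence matrix: each unit contributes both ordered value pairs.
    o = Counter()
    for a, b in units:
        for x, y in permutations((a, b)):
            o[(x, y)] += 1
    n = sum(o.values())  # total pairable values = 2 * number of units
    # Marginal frequency of each value across both annotators.
    n_c = Counter()
    for (x, _), cnt in o.items():
        n_c[x] += cnt
    # Observed disagreement: share of value pairs that do not match.
    d_o = sum(cnt for (x, y), cnt in o.items() if x != y) / n
    # Expected disagreement derived from the marginal value frequencies.
    d_e = sum(n_c[x] * n_c[y] for x in n_c for y in n_c if x != y) / (n * (n - 1))
    if d_e == 0:  # all values identical: no disagreement possible by chance
        return 1.0
    return 1 - d_o / d_e
```

With perfect agreement (e.g. `[("a", "a"), ("b", "b")]`) this returns 1.0; mixed data yields a value below 1. In the unitizing variant, because the distance function is computed over span positions and the unannotated gaps between them, two situations that look very different locally (exact match vs. one-token overlap) can produce counter-intuitively close scores.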

Please complete the following information:

  • Version and build ID: 35.2 (2025-02-04 07:13:24, build 18f5fdc)
  • OS: Windows
  • Browser: Chrome

Thanks!
