NLP Tasks

Natural Language Processing

Task 1

Main evaluation results

Participant    S       I    D    M       P    SER    Recall  Precision  F1
LIPN           98.92   136  100  308.08  507  0.661  0.61    0.61       0.61
Boun           112.70  141  89   305.30  520  0.676  0.60    0.59       0.60
LIMSI          187.66  12   144  175.34  283  0.678  0.35    0.62       0.44
IRISA-TexMex   95.38   331  46   365.62  767  0.932  0.72    0.48       0.57

Legend

  • S: substitutions; see Evaluation algorithm below
  • D: deletions; there is no predicted habitat corresponding to the reference habitat (false negative)
  • I: insertions; there is no reference habitat corresponding to the predicted habitat (false positive)
  • M: matches; see Evaluation algorithm below
  • P: predicted; number of predicted habitats
  • SER = (S + D + I) / N, where N is the number of habitats in the reference
  • Recall = M / N
  • Precision = M / P
  • F1: harmonic mean of Precision and Recall
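
For concreteness, the legend formulas can be turned into a short script. This is an illustrative sketch, not the official scorer; the value N = 507 is inferred from the main table itself (M + S + D sums to 507 on every row) rather than stated in the text.

```python
def metrics(S, D, I, M, P, N):
    """SER, Recall, Precision and F1 exactly as defined in the legend."""
    ser = (S + D + I) / N
    recall = M / N
    precision = M / P
    f1 = 2 * precision * recall / (precision + recall)
    return ser, recall, precision, f1

# LIPN row of the main table; N = 507 inferred from M + S + D.
ser, recall, precision, f1 = metrics(S=98.92, D=100, I=136, M=308.08, P=507, N=507)
print(round(ser, 3), round(recall, 2), round(precision, 2), round(f1, 2))
# → 0.661 0.61 0.61 0.61, matching the published LIPN row
```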
Participant    S       M       SER   Recall  Precision  F1
IRISA-TexMex   36.07   424.93  0.35  0.84    0.55       0.67
Boun           44.90   373.10  0.35  0.74    0.72       0.73
LIPN           38.77   368.23  0.37  0.73    0.73       0.73
LIMSI          156.76  206.24  0.60  0.41    0.73       0.52

Participant    S       M       SER   Recall  Precision  F1
IRISA-TexMex   46.91   414.09  0.37  0.82    0.54       0.65
Boun           70.78   347.22  0.40  0.68    0.67       0.68
LIPN           57.35   349.65  0.41  0.69    0.69       0.69
LIMSI          187.80  175.20  0.66  0.35    0.62       0.44

Participant    S       M       SER   Recall  Precision  F1
Boun           38.64   379.36  0.34  0.75    0.73       0.74
IRISA-TexMex   30.72   430.28  0.34  0.85    0.56       0.68
LIPN           33.19   373.81  0.36  0.74    0.74       0.74
LIMSI          142.05  220.95  0.57  0.44    0.78       0.56

Participant    S       M       SER    Recall  Precision  F1
LIPN           42.88   364.12  0.550  0.72    0.72       0.72
Boun           50.95   367.05  0.554  0.72    0.71       0.71
LIMSI          167.13  195.87  0.637  0.39    0.69       0.50
IRISA-TexMex   35.68   425.32  0.814  0.84    0.55       0.67

Participant    S       M       SER   Recall  Precision  F1
LIMSI          80.91   282.09  0.47  0.56    1.00       0.71
Boun           82.71   335.29  0.62  0.66    0.64       0.65
LIPN           82.91   324.09  0.63  0.64    0.64       0.64
IRISA-TexMex   76.77   384.23  0.90  0.76    0.50       0.60

Evaluation algorithm

The evaluation pairs each reference habitat with a predicted habitat. The pairing maximizes a score defined as:

score = J · W

J is the Jaccard index between the reference and predicted entities as defined in [Bossy et al, 2012]. J measures the boundary accuracy of the predicted entity.

W is the semantic similarity between the ontology concepts attributed to the reference entity and to the predicted entity. We use the semantic similarity described in [Wang et al, 2006]. This similarity is based exclusively on the is-a relationships between concepts; we set the w_is-a parameter to 0.65 in order to favor ancestor/descendant predictions over sibling predictions.
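
A minimal sketch of the Wang et al. similarity on a hypothetical mini-ontology (the concept names and the flat parent map are illustrative assumptions, not the task ontology):

```python
def s_values(term, parents, w=0.65):
    """Wang et al. S-values: 1 for the term itself, multiplied by the
    is-a weight w at each step towards the root (keeping the maximum
    when several paths reach the same ancestor)."""
    s = {term: 1.0}
    stack = [term]
    while stack:
        t = stack.pop()
        for p in parents.get(t, ()):
            if w * s[t] > s.get(p, 0.0):
                s[p] = w * s[t]
                stack.append(p)
    return s

def wang_similarity(a, b, parents, w=0.65):
    """Shared-ancestor S-values over the total S-values of both concepts."""
    sa, sb = s_values(a, parents, w), s_values(b, parents, w)
    common = sa.keys() & sb.keys()
    return sum(sa[t] + sb[t] for t in common) / (sum(sa.values()) + sum(sb.values()))

# "soil" and "water" are siblings under "habitat" in this toy hierarchy.
parents = {"habitat": ["root"], "soil": ["habitat"], "water": ["habitat"]}
print(round(wang_similarity("soil", "habitat", parents), 2))  # ancestor: 0.73
print(round(wang_similarity("soil", "water", parents), 2))    # sibling:  0.52
```

With w = 0.65 an ancestor prediction ("habitat" for "soil") scores higher than a sibling prediction ("water" for "soil"), which is the intended bias.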

Habitat entities in the reference that have no corresponding entity in the prediction are Deletions (D column).

Habitat entities in the prediction that have no corresponding entity in the reference are Insertions (I column).

The sum of the scores for all successful pairings is the Matches (M column). The difference between the number of pairings and the Matches is the Substitutions (S column).
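
The pairing and the derived columns can be sketched as follows. This is an illustrative brute-force version under simplifying assumptions: entities are reduced to character spans scored by Jaccard alone (the real score is J · W over full entities), zero-score pairs count as unpaired, and there are at least as many predictions as references.

```python
from itertools import permutations

def jaccard(a, b):
    """Jaccard index of two character spans given as (start, end)."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def evaluate(refs, preds, score=jaccard):
    """Pair references with predictions so the total score is maximal,
    then derive the M, S, D and I columns. Assumes len(refs) <= len(preds)."""
    best_total, best_pairs = 0.0, []
    for perm in permutations(range(len(preds)), len(refs)):
        pairs = [(i, j) for i, j in enumerate(perm) if score(refs[i], preds[j]) > 0]
        total = sum(score(refs[i], preds[j]) for i, j in pairs)
        if total > best_total:
            best_total, best_pairs = total, pairs
    M = best_total                     # Matches: sum of the pairing scores
    S = len(best_pairs) - M           # Substitutions: pairings minus Matches
    D = len(refs) - len(best_pairs)   # Deletions: unpaired references
    I = len(preds) - len(best_pairs)  # Insertions: unpaired predictions
    return M, S, D, I

# Two reference spans against three predicted spans:
M, S, D, I = evaluate([(0, 10), (20, 30)], [(0, 10), (22, 30), (40, 50)])
```

Here the second prediction overlaps its reference with J = 0.8, giving M = 1.8, S ≈ 0.2 of substitution mass, no deletions, and one insertion.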


Entity boundaries evaluation

In this evaluation, the Matches are re-defined as the sum of the J component of the score for each pairing. In this way the scores measure the boundary accuracy of predicted entities, without taking the semantic categorization into account.

Note however that the pairing still maximizes J · W. Therefore the columns I, D and P remain unchanged.


Ontology categorization evaluation

In this evaluation, the Matches are re-defined as the sum of the W component of the score for each pairing. In this way the scores measure the semantic categorization accuracy of predicted entities, without taking the entity boundaries into account.

Note however that the pairing still maximizes J · W. Therefore the columns I, D and P remain unchanged.

In the following evaluations, the semantic weight attributed to the is-a relation has been altered:

w = 1 --> With a weight of 1, the score approaches a "Manhattan distance" between the reference category and the predicted category; it is nearly equivalent to step-counting semantic distances. It is more forgiving if the prediction is "in the vicinity" of the reference, even though it is not an ancestor or descendant. It is more severe for predictions that are further from the reference.

w = 0.1 --> With a weight of 0.1, the score favours predictions in the "lineage" of the reference, that is to say ancestors and descendants. It severely penalizes sibling predictions. However, since the ontology root is the ancestor of all possible concepts, this score does not penalize predictions that are too general.

w = 0.8 --> 0.8 is the value recommended by the authors of the semantic similarity measure. It is shown for reference and bears no particular interest for the task.
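
The effect of the weight can be checked numerically. Below is a self-contained sketch on a hypothetical mini-ontology (the concept names and the flat parent map are assumptions for illustration, and the two functions re-state the Wang et al. computation so the snippet runs on its own):

```python
def s_values(term, parents, w):
    # Wang et al. S-values: 1 at the term, times w per is-a step up.
    s = {term: 1.0}
    stack = [term]
    while stack:
        t = stack.pop()
        for p in parents.get(t, ()):
            if w * s[t] > s.get(p, 0.0):
                s[p] = w * s[t]
                stack.append(p)
    return s

def sim(a, b, parents, w):
    sa, sb = s_values(a, parents, w), s_values(b, parents, w)
    shared = sa.keys() & sb.keys()
    return sum(sa[t] + sb[t] for t in shared) / (sum(sa.values()) + sum(sb.values()))

# "soil" and "water" are siblings; "habitat" is their common parent.
parents = {"habitat": ["root"], "soil": ["habitat"], "water": ["habitat"]}
for w in (1.0, 0.65, 0.1):
    anc = sim("soil", "habitat", parents, w)  # ancestor prediction
    sib = sim("soil", "water", parents, w)    # sibling prediction
    print(w, round(anc, 2), round(sib, 2))
```

Lowering w widens the gap between ancestor and sibling predictions: on this toy hierarchy the sibling score drops from 0.67 at w = 1 to about 0.10 at w = 0.1, while the ancestor score stays comparatively high.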