=== Evaluating All Scenarios ===
  
  * For constituent evaluation on gold word segmentation of bracketed output (e.g. PTB), we will use a modified version of Parseval's evalb: [[http://pauillac.inria.fr/~seddah/evalb_spmrl2013.tar.gz]]. Add -fPIC to the gcc flags to compile it on Linux.
**Update, February 2014: the evalb package that was available on Djamé's site was not the correct one. If your version does not have the -X switch, it is the buggy one.**
  * For dependency evaluation on gold word segmentation, we will use the CoNLL 2007 evaluation script: [[http://nextens.uvt.nl/depparse-wiki/SoftwarePage#eval07.pl]]
  * For the fully raw scenario, we will use tedeval in the unlabeled condition ([[http://www.tsarfaty.com/unipar/index.html]]); a wrapper is available here: {{:dldata:tedeval_wrapper_08192013.tar.bz2}}
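As a rough illustration of what the constituent scorer measures, a Parseval-style labeled bracket F1 — the figure evalb reports — can be sketched as below. This is a simplified sketch, not the official scorer: the real evalb also handles parameter files, label equivalences, punctuation filtering, and error cutoffs, and the toy trees are invented for illustration.

```python
# Simplified Parseval bracket scoring, in the spirit of evalb.
# Trees are nested (label, children) tuples; a leaf's "children"
# field is its terminal string.

def brackets(tree):
    """Collect labeled spans (label, start, end) from a tree."""
    spans = []
    def walk(node, start):
        label, children = node
        if isinstance(children, str):   # leaf covers one terminal
            return start + 1
        pos = start
        for child in children:
            pos = walk(child, pos)
        spans.append((label, start, pos))
        return pos
    walk(tree, 0)
    return spans

def parseval(gold, pred):
    """Return (precision, recall, f1) over labeled brackets."""
    g, p = brackets(gold), brackets(pred)
    remaining, matched = list(g), 0
    for b in p:                          # multiset intersection
        if b in remaining:
            remaining.remove(b)
            matched += 1
    prec, rec = matched / len(p), matched / len(g)
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Toy example: the predicted parse misattaches one word,
# so only the S bracket matches (P = R = F1 = 1/3).
gold = ("S", [("NP", [("D", "the"), ("N", "cat")]),
              ("VP", [("V", "sat")])])
pred = ("S", [("NP", [("D", "the")]),
              ("VP", [("N", "cat"), ("V", "sat")])])
print(parseval(gold, pred))
```

The span-based comparison is why evaluating on gold word segmentation matters: once the terminal sequence differs between gold and prediction, span indices no longer align, which is the case tedeval is designed to handle.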
  
  * French MWE Evaluation
On top of the classical evalb and eval07.pl evaluations, we will also provide results on multiword expressions.
Thanks to Marie Candito, the evaluator for dependency output is provided in the tools package (see test/tools/do_eval_dep_mwe.pl).
    [ -mwe_pos_feat <MWE_POS_FEAT> ] used to define the feature name that marks heads of MWEs. Default = mwehead
    [ -help ]
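To illustrate the convention behind the -mwe_pos_feat option, the sketch below scans CoNLL-format lines for tokens whose FEATS column carries the head-marking feature (mwehead by default). This is only a hedged illustration of the annotation convention, not the do_eval_dep_mwe.pl scorer itself, and the sample token lines are invented.

```python
# Sketch: locate tokens marked as MWE heads in CoNLL-format lines,
# i.e. tokens whose FEATS column (column 6) contains the feature
# named by -mwe_pos_feat, as "mwehead=VALUE". Illustrative only;
# the sample lines below are invented, not real treebank data.

def mwe_heads(conll_lines, feat_name="mwehead"):
    """Return (token_id, form, mwe_pos) for each MWE-head token."""
    heads = []
    for line in conll_lines:
        cols = line.split("\t")
        if len(cols) < 6:                # skip blank/short lines
            continue
        for feat in cols[5].split("|"):
            if feat.startswith(feat_name + "="):
                heads.append((cols[0], cols[1],
                              feat.split("=", 1)[1]))
    return heads

sample = [
    "1\tcoup\tcoup\tN\tNC\tmwehead=N+|g=m",
    "2\tde\tde\tP\tP\tg=m",
    "3\tfoudre\tfoudre\tN\tNC\t_",
]
print(mwe_heads(sample))   # [('1', 'coup', 'N+')]
```

A scorer built on this convention can then compare the set of predicted MWE heads (and their MWE part-of-speech values) against the gold set, which is the kind of figure the MWE evaluation reports on top of the standard attachment scores.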
shared_task_description · Last modified: 2015/02/20 19:10 by dseddah