
Commit 5eefe50

Update README.md
fixing .md
1 parent 09bc558 commit 5eefe50

File tree: 1 file changed, +5 -8 lines

wmt20/README.md

@@ -14,24 +14,21 @@ for **Task 2**, use the following scripts:
 * sentence-level HTER: `python sent_evaluate.py -h`
 * word-level HTER: `python word_evaluate.py -h`
 
-for **Task 3**, use the following scripts[^1]:
+for **Task 3**, use the following scripts (under MT License, source: [Deep-Spin](https://github.com/deep-spin/qe-evaluation)):
 * MQM **score**: `python eval_document_mqm.py -h`
 * MQM **annotations**: `python eval_document_annotations.py -h`
 
-Once you have checked that your system output on the dev data is correctly read by the right script, you can submit it using the CODALAB page corresponding to your subtask.
+Once you have checked that your system output on the dev data is correctly read by the right script, you can submit it using the CODALAB page corresponding to your subtask (see below).
 
 ### Submission platforms
 
 Predictions should be submitted to a CODALAB page for each subtask:
 
-Task 1, [sentence-level DA](https://competitions.codalab.org/competitions/24447)
+Task 1, [sentence-level DA](https://competitions.codalab.org/competitions/24447)
 Task 1, [sentence-level DA **multilingual**](https://competitions.codalab.org/competitions/24447)
 
-Task 2, [sentence-level HTER](https://competitions.codalab.org/competitions/24515)
+Task 2, [sentence-level HTER](https://competitions.codalab.org/competitions/24515)
 Task 2, [word-level HTER](https://competitions.codalab.org/competitions/24728)
 
-Task 3, [doc-level MQM **score**](https://competitions.codalab.org/competitions/24762)
+Task 3, [doc-level MQM **score**](https://competitions.codalab.org/competitions/24762)
 Task 3, [doc-level **annotations**](https://competitions.codalab.org/competitions/24763)
-
-[^1]: under MT License (source: [Deep-Spin](https://github.com/deep-spin/qe-evaluation))
-
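For reference, the evaluation scripts touched by this README are run from the command line, and the only invocation the README documents is the `-h` help flag. A minimal sketch of checking a dev-set prediction file before a CODALAB submission follows; the file names and argument names other than `-h` are assumptions for illustration, not the confirmed interface.

```bash
# Documented in the README: print each script's help text to see its real arguments.
python sent_evaluate.py -h

# Hypothetical sentence-level HTER check on the dev data (assumed flags and
# file names -- replace them with whatever the -h output actually specifies).
python sent_evaluate.py --system predictions.hter --gold dev.hter
```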
