Best Practice Policy
The provided training data of each challenge may be used for learning the parameters of the algorithms. The test data should be used strictly for reporting results - it should not be used in any way to train or tune systems, for example by evaluating multiple parameter or feature choices and reporting the best results obtained. It is the participants' responsibility to divide the training set into proper training and validation splits, e.g., using 10-fold cross-validation. The tuned algorithms should then be run only once on the test data.
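As one way to set up the recommended validation protocol, the training set can be partitioned into 10 folds, tuning on nine folds and validating on the held-out tenth. The sketch below is a minimal, hypothetical helper (not provided by the benchmark) using only the Python standard library:

```python
import random

def kfold_splits(n, k=10, seed=0):
    """Yield (train_indices, val_indices) for k-fold cross-validation
    over n training samples, with a fixed seed for reproducibility."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # round-robin fold assignment
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

# Example: 100 training samples split into 10 disjoint validation folds.
for train_idx, val_idx in kfold_splits(100, k=10):
    assert len(val_idx) == 10 and len(train_idx) == 90
    assert set(train_idx).isdisjoint(val_idx)
```

Parameter and feature choices are then compared by their average validation score across the folds; only the single best configuration is trained on the full training set and submitted to the server.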
We strongly discourage multiple submissions to the server (and indeed the number of submissions for the same algorithm is strictly controlled), as the evaluation server should not be used for parameter tuning. If participants would like to report results in their papers for multiple versions of their algorithm (e.g., different parameters or features), they can do so on the training data and submit only the best-performing setting to our server for evaluation.
We require a minimum of 72 hours between submissions. If a single person would like to submit earlier, e.g., due to a corrupt upload or to evaluate multiple algorithms, we kindly ask them to contact us by email and we will unlock the account. It is NOT allowed to register multiple times to the server using different email addresses. When registering, we ask all participants to provide their full name, institution and institutional email address (e.g., .edu).
If you have any questions regarding this best practice policy, please don't hesitate to get in contact with us!
No account yet? Register here!