Reproducing experiments and finding different scores after Stage 1 #182

Open
suttergustavo opened this issue Jan 23, 2023 · 0 comments

@suttergustavo

Hello,

I've been trying to reproduce the results presented in the paper with the provided code, but the results I obtained after Stage I are slightly different from the reported ones. These are my results on BEA-2019:

Model                              Precision  Recall  F0.5
RoBERTa from the paper (Table 10)  40.8       22.1    34.9
RoBERTa from my run                42.7       19.8    34.7
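
For what it's worth, both F0.5 values are consistent with their precision and recall under the standard F-beta formula (beta = 0.5), so the gap seems to come from the precision/recall trade-off rather than from the scoring itself. A quick check:

# Sanity check of the F0.5 scores above using the standard F-beta formula:
# F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
def f_beta(precision, recall, beta=0.5):
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_beta(40.8, 22.1), 1))  # 34.9 -> paper, Table 10
print(round(f_beta(42.7, 19.8), 1))  # 34.7 -> my run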

It was mentioned in previous issues that your best model came from epoch 18 in Stage 1, but my best epoch was epoch 16. In addition, my training was considerably faster than what you reported in other issues, taking about 2.5 days on one RTX 6000.

I'm wondering whether these differences are to be expected given the randomness in initialization and data order, or whether there is something wrong with how I'm running the code.
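
In case it helps narrow things down, for future runs I plan to pin the seeds so that at least my own runs are repeatable. This is only a rough sketch of what I mean (hypothetically placed at the top of train.py; I haven't checked whether the script already exposes a seed option):

import random

import numpy as np
import torch

def set_seed(seed: int = 42):
    # Pin the usual RNG sources so initialization and data order are repeatable.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(42)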

Please find my training command:

python3 train.py --train_set=../PIE/a1/a1_train.gector \
                 --dev_set=../PIE/a1/a1_val.gector \
                 --model_dir="$ckpt" \
                 --cold_steps_count=2 \
                 --accumulation_size=4 \
                 --updates_per_epoch=10000 \
                 --tn_prob=0 \
                 --tp_prob=1 \
                 --transformer_model=roberta \
                 --special_tokens_fix=1 \
                 --tune_bert=1 \
                 --skip_correct=1 \
                 --skip_complex=0 \
                 --n_epoch=20 \
                 --patience=3 \
                 --max_len=50 \
                 --batch_size=64 \
                 --tag_strategy=keep_one \
                 --cold_lr=1e-3 \
                 --lr=1e-5 \
                 --predictor_dropout=0.0 \
                 --lowercase_tokens=0 \
                 --pieces_per_token=5 \
                 --vocab_path=data/output_vocabulary \
                 --label_smoothing=0.0

Thank you for your time :)
