
Hello, there's a particular metric (perplexity) I'd like to track, but ClearML didn't seem to catch it.

Specifically, this "Evaluation" section of run_mlm.py in the transformers repo: https://github.com/huggingface/transformers/blob/f6e254474cb4f90f8a168a599b9aaf3544c37890/examples/pytorch/language-modeling/run_mlm.py#L515

I ran a training job on ClearML, and it tracked loss curves, iterations, etc., but I can't find whether or where it tracked perplexity. I tried searching through the metrics (see attached picture; I searched for "perplexity" and "ppl" as well) and cannot find it, nor is it listed among the scalars.

I can confirm that the section of code ran, because in the console it printed out
07/20/2021 23:44:24 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:521] 2021-07-20 23:44:24,496 >> The following columns in the evaluation set don't have a corresponding argument in `RobertaForMaskedLM.forward` and have been ignored: special_tokens_mask.
[INFO|trainer.py:2154] 2021-07-20 23:44:24,498 >> ***** Running Evaluation *****
[INFO|trainer.py:2156] 2021-07-20 23:44:24,498 >> Num examples = 3371
[INFO|trainer.py:2159] 2021-07-20 23:44:24,498 >> Batch size = 8
6% 26/422 [00:00<00:03, 122.31it/s]
2021-07-20 23:44:24,790 - clearml.Task - INFO - Completed model upload to Phonemes/HuggingFace Swahili Dataset, seed 420.26e5db1d4e5a4032a7692bce5e69ebb7/models/training_args.bin
100% 422/422 [00:03<00:00, 118.55it/s]
***** eval metrics *****
  epoch                   = 40.0
  eval_loss               = 2.5067
  eval_runtime            = 0:00:03.56
  eval_samples            = 3371
  eval_samples_per_second = 945.746
  eval_steps_per_second   = 118.394
  perplexity              = 12.2649

...any advice?

Edit: bottom line solution was:
from clearml import Task
Task.current_task().get_logger().report_scalar(title='eval', series='perplexity', value=perplexity, iteration=metrics["epoch"])
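
For context, here is roughly where that call would sit relative to run_mlm.py's evaluation block. The math.exp(eval_loss) derivation and the OverflowError guard mirror the script itself; the placement and the int() cast on the epoch are my own assumptions, since report_scalar takes an integer iteration:

import math
from clearml import Task

# Inside run_mlm.py, after training; `trainer` is the transformers.Trainer instance.
metrics = trainer.evaluate()
try:
    perplexity = math.exp(metrics["eval_loss"])  # run_mlm.py derives perplexity from eval_loss
except OverflowError:
    perplexity = float("inf")

# The metric is otherwise only printed to the console, so report it explicitly:
Task.current_task().get_logger().report_scalar(
    title='eval', series='perplexity', value=perplexity, iteration=int(metrics["epoch"])
)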

  
  
Posted 2 years ago

Answers 13


ClearML automatically picks up these reported metrics from TB. Since you mentioned seeing the scalars, I assume Hugging Face reports to TB; could you verify? Is there a quick code sample to reproduce?
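
For reference, a minimal sketch of that auto-capture (the project/task names and values here are made up). Calling Task.init() before any TensorBoard writer is created is what hooks the reporting:

from clearml import Task
from torch.utils.tensorboard import SummaryWriter

task = Task.init(project_name='examples', task_name='tb scalar capture')  # patches TB writers

writer = SummaryWriter(log_dir='./tb_logs')
for step in range(10):
    # Anything written to TB should then appear under the task's Scalars tab.
    writer.add_scalar('eval/perplexity', 12.0 - 0.1 * step, global_step=step)
writer.close()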

  
  
Posted 2 years ago

Hi SmallDeer34
Can you see it in TB? And if so, where?

  
  
Posted 2 years ago

Reproduce the training:
# How to run

You need to pip install the requirements first. I think the following would do: transformers, datasets, clearml, tokenizers, torch.

CLEAR_DATA has train.txt and validation.txt; the .txt files just need to have text data on separate lines. For debugging, anything should do (see the sketch after these notes).

For training you need the tokenizer files as well: vocab.json, merges.txt, and tokenizer.json.

You also need a config.json; that should work.
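
If you just want placeholder data to debug with, a quick sketch (the path matches the command below; the sentences are obviously dummy content, and you still need the tokenizer and config files separately):

from pathlib import Path

data_dir = Path('./data/dataset_for_modeling')
data_dir.mkdir(parents=True, exist_ok=True)

# One example per line, as expected with --line_by_line.
(data_dir / 'train.txt').write_text('\n'.join(f'dummy training sentence {i}' for i in range(100)))
(data_dir / 'validation.txt').write_text('\n'.join(f'dummy validation sentence {i}' for i in range(20)))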

export CLEAR_DATA="./data/dataset_for_modeling"
python3 run_mlm.py \
    --line_by_line \
    --seed 420 \
    --config_name "$CLEAR_DATA" \
    --tokenizer_name "$CLEAR_DATA" \
    --train_file "$CLEAR_DATA/train.txt" \
    --validation_file "$CLEAR_DATA/validation.txt" \
    --max_seq_length 512 \
    --do_train \
    --do_eval \
    --evaluation_strategy steps \
    --eval_steps 500 \
    --save_strategy epoch \
    --num_train_epochs 3 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --output_dir ./output/mlm_training_output

Let me get you a dataset.

  
  
Posted 2 years ago

Yeah, that should work. Basically, --train_file needs the path to train.txt, --validation_file needs the path to validation.txt, etc. I just put them all in the same folder for convenience.
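
So the folder would end up looking something like this (file names as described in this thread):

./data/dataset_for_modeling/
    train.txt
    validation.txt
    config.json
    vocab.json
    merges.txt
    tokenizer.json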

  
  
Posted 2 years ago

This should work. It has the tokenizer files, the train.txt, the validation.txt, and a config.json.

  
  
Posted 2 years ago

Quick question:
CLEAR_DATA="./data/dataset_for_modeling"
Should I pass the folder of the extracted zip file (assuming train.txt is the training dataset)?

  
  
Posted 2 years ago

Thanks SmallDeer34 !
This is exactly what I needed.

  
  
Posted 2 years ago

Hopefully it works for you; getting run_mlm.py to work took me some trial and error the first time. There is a --help option for the command line, I believe. Some of the options aren't really intuitive.

  
  
Posted 2 years ago

Then I gave that folder a name.

  
  
Posted 2 years ago

TB = TensorBoard? No idea; I haven't tried to run it with TensorBoard specifically. I can confirm that I do have tensorboard installed in the environment.
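
One quick way to check whether the Trainer wrote any TensorBoard event files at all (a sketch; the output dir is the one from the command above):

from pathlib import Path

# If no event files exist, the Trainer never reported to TB,
# and ClearML's TB auto-capture would have nothing to pick up.
event_files = list(Path('./output/mlm_training_output').rglob('events.out.tfevents.*'))
print(f'found {len(event_files)} TensorBoard event file(s)')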

  
  
Posted 2 years ago

I know it's running these lines, which are defined in https://github.com/huggingface/transformers/blob/f6e254474cb4f90f8a168a599b9aaf3544c37890/src/transformers/trainer_pt_utils.py#L828

trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
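
My reading of those helpers, for what it's worth (not verified against the Trainer internals): log_metrics() only pretty-prints to stdout and save_metrics() writes a JSON file; neither goes through the Trainer's logging callbacks, so TensorBoard, and therefore ClearML, never sees perplexity. A call like trainer.log() would route it through the callbacks instead:

metrics['perplexity'] = perplexity        # computed earlier in run_mlm.py via math.exp(eval_loss)
trainer.log_metrics('eval', metrics)      # console pretty-print only
trainer.save_metrics('eval', metrics)     # writes eval_results.json to the output dir
trainer.log({'perplexity': perplexity})   # assumption: goes through the TB logging callback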

  
  
Posted 2 years ago

AgitatedDove14 yes, I see the scalars. Screenshot attached.

Code to reproduce: I'll try to come up with a sample you will be able to run. But the code we're using is basically just https://github.com/huggingface/transformers/blob/f6e254474cb4f90f8a168a599b9aaf3544c37890/examples/pytorch/language-modeling/run_mlm.py

  
  
Posted 2 years ago

Oh, I forgot to mention: you also need to pip install tensorboard.
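
If you want to confirm that the Trainer will actually attach its TensorBoard callback once tensorboard is installed, I believe transformers exposes a check for it:

from transformers.integrations import is_tensorboard_available

print(is_tensorboard_available())  # True means the Trainer can report to TB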

  
  
Posted 2 years ago