SmallDeer34
Moderator
21 Questions, 155 Answers
  Active since 10 January 2023
  Last activity 27 days ago

Reputation: 0
Badges (1): 132 × Eureka!
0 Votes
7 Answers
569 Views
Question about https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/task_module/task_task.html#clearml.task.Task.upload_artifact : Let's say I g...
2 years ago
0 Votes
0 Answers
603 Views
So, this is something I've noticed, this line always seems to crash my Colab Notebooks: Task.current_task().completed()
2 years ago
0 Votes
21 Answers
589 Views
2 years ago
0 Votes
7 Answers
599 Views
Is there any way to get just one dataset folder of a Dataset? e.g. only "train" or only "dev"?
2 years ago
0 Votes
30 Answers
530 Views
Hello! Getting credential errors when attempting to pip install transformers from git repo, on a GPU Queue. fatal: unable to write credential store: Device o...
2 years ago
0 Votes
3 Answers
627 Views
So, I'm trying to do a several-step process, but it needs to run on a GPU queue in ClearML. How would I do that? Specifically, here's what I'm trying to do, ...
2 years ago
0 Votes
3 Answers
587 Views
OK, we've got a GPU Queue setup on one of our local machines. I managed to run a script on it, which was intended to download a clearML dataset stored in s3....
2 years ago
0 Votes
13 Answers
570 Views
How, if at all, should we cite ClearML in a research paper? Would you like us to? How about a footnote/acknowledgement?
2 years ago
0 Votes
0 Answers
585 Views
Here's the original Colab notebook. It can import torch without error: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how...
2 years ago
0 Votes
30 Answers
669 Views
2 years ago
0 Votes
4 Answers
635 Views
OK, next question, I've got some training args that I'd like to manually upload and have them show up in the attached place, under Configuration. It is a Hug...
2 years ago
0 Votes
8 Answers
659 Views
So I'm in a Colab notebook, and after running my Trainer(), how do I upload my test metrics to ClearML? ClearML caught these metrics and uploaded them: train...
2 years ago
0 Votes
10 Answers
588 Views
Hello! I'm just starting out with ClearML, and I seem to be having some sort of conflict between clearml and torch , at least in Colab In this guide ( https:...
2 years ago
0 Votes
10 Answers
605 Views
So, I did a slew of pretrainings, then finetuned those pretrained models. Is there a way to go backwards from the finetuning Task ID to the pretraining Task ...
2 years ago
0 Votes
3 Answers
559 Views
Question: has anyone done anything with Ray or RLLib, and ClearML? Would ClearML be able to integrate with those out of the box? https://medium.com/distribut...
2 years ago
0 Votes
6 Answers
696 Views
Currently trying to figure out how to extend clearML's automagical reporting to JoeyNMT. https://github.com/joeynmt/joeynmt/blob/master/joey_demo.ipynb is a ...
2 years ago
0 Votes
18 Answers
678 Views
Is there any way to: within the UI, select and compare the scalars for more than 10 experiments? I'd like to do something like: select these 10 run in such a...
2 years ago
0 Votes
28 Answers
596 Views
2 years ago
0 Votes
18 Answers
535 Views
Second: is there a way to take internally tracked training runs and publish them publicly, e.g. for a research paper? "Appendix A: training runs can be found...
2 years ago
0 Votes
9 Answers
640 Views
2 years ago
0 Votes
13 Answers
609 Views
Hello, there's a particular metric (perplexity) I'd like to track, but clearML didn't seem to catch it. Specifically, this "Evaluation" section of run_mlm.py...
2 years ago
0 Votes · OK, next question: I've got some training args that I'd like to manually upload and have them show up in the attached place, under Configuration. It is a HuggingFace TrainingArguments object, which has a to_dict() and to_json() function

OK, I guess

```
training_args_dict = training_args.to_dict()
Task.current_task().set_parameters_as_dict(training_args_dict)
```

works, but how do I change the name from "General"?
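One possible answer, sketched without testing against a live server: recent clearml versions let Task.connect() take a name argument, which sets the section title shown under CONFIGURATION instead of "General". The helper and section name below are illustrative, not from the thread.

```python
def upload_training_args(task, training_args, section="Training Args"):
    """Upload a HuggingFace TrainingArguments-style object under a named
    CONFIGURATION section instead of the default "General".

    `task` is a clearml Task, e.g. Task.current_task()."""
    cfg = training_args.to_dict()
    # Task.connect() accepts a `name` argument in recent clearml versions;
    # it controls the section title shown in the web UI.
    task.connect(cfg, name=section)
    return cfg
```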

2 years ago
0 Votes · OK, next question: I've got some training args that I'd like to manually upload and have them show up in the attached place, under Configuration. It is a HuggingFace TrainingArguments object, which has a to_dict() and to_json() function

So for example:

```
{'output_dir': 'shiba_ner_trainer', 'overwrite_output_dir': False, 'do_train': True, 'do_eval': True, 'do_predict': True, 'evaluation_strategy': 'epoch', 'prediction_loss_only': False, 'per_device_train_batch_size': 16, 'per_device_eval_batch_size': 16, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None, 'learning_rate': 0.0004, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam...
```

2 years ago
0 Votes · Hello, there's a particular metric (perplexity) I'd like to track, but ClearML didn't seem to catch it. Specifically, this "Evaluation" section of run_mlm.py in the Transformers repo:

Reproduce the training (how to run):

You need to pip install the requirements first. I think the following would do: transformers, datasets, clearml, tokenizers, torch.

CLEAR_DATA has train.txt and validation.txt; the .txt files just need to have text data on separate lines. For debugging, anything should do.

For training you need tokenizer files as well: vocab.json, merges.txt, and tokenizer.json.

You also need a config.json; that should work.

export CLEAR_DATA="./data/dataset_for...

2 years ago
0 Votes · Hello, there's a particular metric (perplexity) I'd like to track, but ClearML didn't seem to catch it. Specifically, this "Evaluation" section of run_mlm.py in the Transformers repo:

TB = TensorBoard? No idea, I haven't tried to run it with TensorBoard specifically. I can confirm that I do have TensorBoard installed in the environment.

2 years ago
0 Votes · So I'm in a Colab notebook, and after running my Trainer(), how do I upload my test metrics to ClearML? ClearML caught these metrics and uploaded them:

AgitatedDove14 yes, I called init() and TensorBoard is installed. It successfully uploaded the metrics from trainer.train(), just not from the next cell where we do trainer.predict().

2 years ago
0 Votes · So I'm in a Colab notebook, and after running my Trainer(), how do I upload my test metrics to ClearML? ClearML caught these metrics and uploaded them:

This seems to work:

```
from clearml import Logger

# report_scalar(title, series, value, iteration)
for test_metric in posttrain_metrics:
    print(test_metric, posttrain_metrics[test_metric])
    Logger.current_logger().report_scalar("test", test_metric, posttrain_metrics[test_metric], 0)
```

2 years ago
0 Votes · Hello! I'm just starting out with ClearML, and I seem to be having some sort of conflict between

But then I took out all my additions except for `pip install clearml` and

```
from clearml import Task
task = Task.init(project_name="project name", task_name="Esperanto_Bert_2")
```

and now I'm not getting the error? But it still installs 1.02. So I'm just thoroughly confused at this point. I'm going to start with a fresh copy of the original Colab notebook from https://huggingface.co/blog/how-to-train

2 years ago
0 Votes · Hello! I'm just starting out with ClearML, and I seem to be having some sort of conflict between

Did a couple tests with Colab, moving the installs and imports up to the top. Results... seem to suggest that doing all the installs/imports before actually running the tokenization and such might fix the problem too?

It's a bit confusing. I made a couple of cells at the top, like this:

```
!pip install clearml
```

and

```
from clearml import Task
task = Task.init(project_name="project name", task_name="Esperanto_Bert_2")
```

and

```
# Check that PyTorch sees the GPU
import torch
torch.cuda.is_available()
```

...

2 years ago
0 Votes · Two questions today. First, is there some way to calculate the number of GPU-hours used for a project? Could I select all experiments and count up the number of GPU-hours/GPU-weeks? I realize I could do this manually by looking at the GPU utilization graph

CostlyOstrich36 nice, thanks for the link. I know that "Info" on the experiments dashboard includes gpu_type and started/completed times; I'll give it a go based on that.
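A rough sketch of that calculation from the started/completed timestamps (the project name and the Task.get_tasks usage are assumptions; for multi-GPU tasks you would multiply each duration by the GPU count):

```python
from datetime import datetime, timezone

def task_gpu_hours(started, completed):
    """Wall-clock hours between start and completion; clearml exposes
    these as datetimes on each task's data (started/completed)."""
    if started is None or completed is None:
        return 0.0  # skip tasks that never started or are still running
    return (completed - started).total_seconds() / 3600.0

# Hypothetical usage against a live server (project name is a placeholder):
# from clearml import Task
# tasks = Task.get_tasks(project_name="my-project")
# total = sum(task_gpu_hours(t.data.started, t.data.completed) for t in tasks)
```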

2 years ago
0 Votes · So, here's a question. Does ClearML automatically save everything necessary to continue training a PyTorch language model? Specifically, I've been looking at the checkpoint folders created when I'm training a HuggingFace RobertaForMaskedLM. I checked what

OK, neat! Any advice on how to edit the training loop to do that? Because the code I'm using doesn't offer easy access to the training loop, see here: https://github.com/huggingface/transformers/blob/040283170cd559b59b8eb37fe9fe8e99ff7edcbc/examples/pytorch/language-modeling/run_mlm.py#L469

trainer.train() just does the training loop automagically, and saves a checkpoint once in a while. When it saves a checkpoint, ClearML uploads all the other files. How can I hook into... whatever ...

2 years ago
0 Votes · So, here's a question. Does ClearML automatically save everything necessary to continue training a PyTorch language model? Specifically, I've been looking at the checkpoint folders created when I'm training a HuggingFace RobertaForMaskedLM. I checked what

My other question is: how does it decide what to upload automatically? It picked up almost everything, just not trainer_state.json. Which I'm actually not quite sure is necessary

2 years ago