SmallDeer34
Moderator
21 Questions, 155 Answers
  Active since 10 January 2023
  Last activity 3 months ago

Reputation: 0
Badges: 132 × Eureka!
0 Votes · 7 Answers · 734 Views
Is there any way to get just one dataset folder of a Dataset? e.g. only "train" or only "dev"?
2 years ago
0 Votes · 30 Answers · 798 Views
3 years ago
0 Votes · 10 Answers · 698 Views
Hello! I'm just starting out with ClearML, and I seem to be having some sort of conflict between clearml and torch , at least in Colab In this guide ( https:...
3 years ago
0 Votes · 6 Answers · 807 Views
Currently trying to figure out how to extend clearML's automagical reporting to JoeyNMT. https://github.com/joeynmt/joeynmt/blob/master/joey_demo.ipynb is a ...
3 years ago
0 Votes · 18 Answers · 824 Views
Is there any way to: within the UI, select and compare the scalars for more than 10 experiments? I'd like to do something like: select these 10 run in such a...
2 years ago
0 Votes · 21 Answers · 712 Views
2 years ago
0 Votes · 28 Answers · 725 Views
3 years ago
0 Votes · 0 Answers · 717 Views
So, this is something I've noticed, this line always seems to crash my Colab Notebooks: Task.current_task().completed()
2 years ago
0 Votes · 10 Answers · 717 Views
So, I did a slew of pretrainings, then finetuned those pretrained models. Is there a way to go backwards from the finetuning Task ID to the pretraining Task ...
2 years ago
0 Votes · 30 Answers · 663 Views
Hello! Getting credential errors when attempting to pip install transformers from git repo, on a GPU Queue. fatal: unable to write credential store: Device o...
3 years ago
0 Votes · 3 Answers · 665 Views
Question: has anyone done anything with Ray or RLLib, and ClearML? Would ClearML be able to integrate with those out of the box? https://medium.com/distribut...
2 years ago
0 Votes · 0 Answers · 712 Views
Here's the original Colab notebook. It can import torch without error: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how...
3 years ago
0 Votes · 18 Answers · 657 Views
Second: is there a way to take internally tracked training runs and publish them publicly, e.g. for a research paper? "Appendix A: training runs can be found...
2 years ago
0 Votes · 13 Answers · 676 Views
How, if at all, should we cite ClearML in a research paper? Would you like us to? How about a footnote/acknowledgement?
2 years ago
0 Votes · 7 Answers · 670 Views
Question about https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/task_module/task_task.html#clearml.task.Task.upload_artifact : Let's say I g...
2 years ago
0 Votes · 13 Answers · 730 Views
Hello, there's a particular metric (perplexity) I'd like to track, but clearML didn't seem to catch it. Specifically, this "Evaluation" section of run_mlm.py...
3 years ago
0 Votes · 8 Answers · 787 Views
So I'm in a Colab notebook, and after running my Trainer(), how do I upload my test metrics to ClearML? ClearML caught these metrics and uploaded them: train...
2 years ago
0 Votes · 9 Answers · 786 Views
2 years ago
0 Votes · 3 Answers · 779 Views
So, I'm trying to do a several-step process, but it needs to run on a GPU queue in ClearML. How would I do that? Specifically, here's what I'm trying to do, ...
3 years ago
0 Votes · 3 Answers · 732 Views
OK, we've got a GPU Queue setup on one of our local machines. I managed to run a script on it, which was intended to download a clearML dataset stored in s3....
3 years ago
0 Votes · 4 Answers · 772 Views
OK, next question, I've got some training args that I'd like to manually upload and have them show up in the attached place, under Configuration. It is a Hug...
2 years ago
How, if at all, should we cite ClearML in a research paper? Would you like us to? How about a footnote/acknowledgement?

Oh, and good job starting your reference with an author that goes early in the alphabetical ordering, lol:

2 years ago
How, if at all, should we cite ClearML in a research paper? Would you like us to? How about a footnote/acknowledgement?

Or we could do
    @misc{clearml,
      title  = {ClearML - Your entire MLOps stack in one open-source tool},
      year   = {2019},
      note   = {Software available from },
      url    = { },
      author = {Allegro AI},
    }

2 years ago
Two questions today. First, is there some way to calculate the number of GPU-hours used for a project? Could I select all experiments and count up the number of GPU-hours/GPU-weeks? I realize I could do this manually by looking at the GPU utilization graph…

I suppose the flow would be something like:
1. Select all experiments from project x with iterations greater than y
2. Pull the runtime for each one
3. Add them all up
I just don't know what API calls to make for 1 and 2.
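In case it helps, that flow might look something like this with the Python SDK (a hedged sketch: `Task.get_tasks`, `get_last_iteration`, and the `started`/`completed` fields should be verified against the ClearML API reference, and the project name is a placeholder):

```python
from datetime import datetime

def total_hours(spans):
    """Sum (started, completed) datetime pairs into hours."""
    return sum((end - start).total_seconds() for start, end in spans) / 3600.0

def project_gpu_hours(project_name, min_iterations=0):
    """Fetch a project's tasks and sum their wall-clock runtimes.
    Requires a configured ClearML server; not runnable offline."""
    from clearml import Task  # only needed for the live query
    spans = []
    for t in Task.get_tasks(project_name=project_name):
        if (t.get_last_iteration() or 0) < min_iterations:
            continue  # skip experiments below the iteration threshold
        data = t.data
        if data.started and data.completed:
            spans.append((data.started, data.completed))
    return total_hours(spans)
```

Note this counts wall-clock runtime per experiment; multi-GPU tasks would need multiplying by the GPU count per task.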

2 years ago
Currently trying to figure out how to extend ClearML's automagical reporting to JoeyNMT.

It seems to create a folder and put things into it, I was hoping to just observe the tensorboard folder

3 years ago
Hello! Does someone have a HuggingFace integration example?

Hello! integration in what sense? Training a model? Uploading a model to the hub? Something else?

2 years ago
This will close it

It's not a big deal because it happens after I'm done with everything, I can just reset the Colab runtime and start over

2 years ago
So I'm in a Colab notebook, and after running my Trainer(), how do I upload my test metrics to ClearML? ClearML caught these metrics and uploaded them:

This seems to work:

    from clearml import Logger

    for test_metric in posttrain_metrics:
        print(test_metric, posttrain_metrics[test_metric])
        # report_scalar(title, series, value, iteration)
        Logger.current_logger().report_scalar(
            "test", test_metric, posttrain_metrics[test_metric], 0)

2 years ago
OK, next question, I've got some training args that I'd like to manually upload and have them show up in the attached place, under Configuration. It is a HuggingFace TrainingArguments object, which has a to_dict() and to_json function.

OK, I guess

    training_args_dict = training_args.to_dict()
    Task.current_task().set_parameters_as_dict(training_args_dict)

works, but how do I change the name from "General"?
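For what it's worth, one possible workaround (a sketch, not a confirmed answer: the `"Section/param"` key convention for `set_parameters` is an assumption to check against the ClearML docs, and `"Args"` is just an example section name):

```python
def with_section(params, section):
    """Prefix flat parameter names with a section, e.g. 'Args/learning_rate'."""
    return {f"{section}/{k}": v for k, v in params.items()}

# Hypothetical usage against a live task:
#   from clearml import Task
#   Task.current_task().set_parameters(with_section(training_args.to_dict(), "Args"))
```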

2 years ago
So, I did a slew of pretrainings, then finetuned those pretrained models. Is there a way to go backwards from the finetuning Task ID to the pretraining Task ID? What I tried was:

Martin I found a different solution (hardcoding the parent tasks by hand), but I'm curious to hear what you discover!

2 years ago
Hello, I'm not getting training metrics tracked by ClearML when I execute a training script remotely, but I get them if I run locally. Is it because I have a Task.init() in the file? What happens when you remotely run a script which has an init() in it…

Long story, but in the other thread I couldn't install the particular version of transformers unless I removed it from "Installed Packages" and added it to setup script instead. So I took to just throwing in that list of packages.

3 years ago
Currently trying to figure out how to extend ClearML's automagical reporting to JoeyNMT.

Yup! That works.

    from joeynmt.training import train
    train("transformer_epo_eng_bpe4000.yaml")

And it's tracking stuff successfully. Nice

3 years ago
Hello, there's a particular metric (perplexity) I'd like to track, but ClearML didn't seem to catch it. Specifically, this "Evaluation" section of run_mlm.py in the transformers repo:

Reproduce the training:

You need to pip install requirements first. I think the following would do: transformers datasets clearml tokenizers torch

CLEAR_DATA has train.txt and validation.txt, the .txt files just need to have text data on separate lines. For debugging, anything should do.

For training you need tokenizer files as well: vocab.json, merges.txt, and tokenizer.json.

You also need a config.json; that should work.

export CLEAR_DATA="./data/dataset_for...

3 years ago