SmallDeer34
Moderator
21 Questions, 155 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges (1): 132 × Eureka!
0 Votes · 3 Answers · 2K Views
So, I'm trying to do a several-step process, but it needs to run on a GPU queue in ClearML. How would I do that? Specifically, here's what I'm trying to do, ...
4 years ago
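
A minimal sketch of one way to run a multi-step process on a ClearML GPU queue, using PipelineController; the project, task, and queue names below are placeholders, and the steps are assumed to already exist as ClearML tasks:

from clearml import PipelineController

# Controller that chains existing tasks; each step is cloned and enqueued for execution.
pipe = PipelineController(name="multi_step_pipeline", project="examples", version="0.1")

# Hypothetical step tasks; execution_queue sends each clone to the GPU queue.
pipe.add_step(
    name="preprocess",
    base_task_project="examples",
    base_task_name="preprocess_data",
    execution_queue="gpu_queue",
)
pipe.add_step(
    name="train",
    parents=["preprocess"],
    base_task_project="examples",
    base_task_name="train_model",
    execution_queue="gpu_queue",
)

# The lightweight controller itself can run on a CPU queue such as "services".
pipe.start(queue="services")
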
0 Votes · 3 Answers · 2K Views
Question: has anyone done anything with Ray or RLLib, and ClearML? Would ClearML be able to integrate with those out of the box? https://medium.com/distribut...
4 years ago
0 Votes · 18 Answers · 2K Views
Second: is there a way to take internally tracked training runs and publish them publicly, e.g. for a research paper? "Appendix A: training runs can be found...
3 years ago
0 Votes · 28 Answers · 2K Views
4 years ago
0 Votes · 13 Answers · 2K Views
How, if at all, should we cite ClearML in a research paper? Would you like us to? How about a footnote/acknowledgement?
3 years ago
0 Votes · 0 Answers · 2K Views
Here's the original Colab notebook. It can import torch without error: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how...
4 years ago
0 Votes · 4 Answers · 2K Views
OK, next question, I've got some training args that I'd like to manually upload and have them show up in the attached place, under Configuration. It is a Hug...
4 years ago
0 Votes · 7 Answers · 2K Views
Is there any way to get just one dataset folder of a Dataset? e.g. only "train" or only "dev"?
4 years ago
0 Votes · 13 Answers · 2K Views
Hello, there's a particular metric (perplexity) I'd like to track, but clearML didn't seem to catch it. Specifically, this "Evaluation" section of run_mlm.py...
4 years ago
0 Votes · 7 Answers · 2K Views
Question about https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/task_module/task_task.html#clearml.task.Task.upload_artifact : Let's say I g...
4 years ago
0 Votes · 3 Answers · 2K Views
OK, we've got a GPU Queue setup on one of our local machines. I managed to run a script on it, which was intended to download a clearML dataset stored in s3....
4 years ago
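
For reference, a minimal sketch of fetching a ClearML dataset inside a script that runs on the queue; the dataset project and name are placeholders, and the worker also needs credentials for the underlying S3 bucket (for example in its clearml.conf):

from clearml import Dataset

# Look up the dataset by project/name and download a cached, read-only local copy.
dataset = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
local_path = dataset.get_local_copy()
print("dataset available at:", local_path)
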
0 Votes · 10 Answers · 2K Views
Hello! I'm just starting out with ClearML, and I seem to be having some sort of conflict between clearml and torch, at least in Colab. In this guide ( https:...
4 years ago
0 Votes · 6 Answers · 2K Views
Currently trying to figure out how to extend clearML's automagical reporting to JoeyNMT. https://github.com/joeynmt/joeynmt/blob/master/joey_demo.ipynb is a ...
4 years ago
0 Votes · 9 Answers · 2K Views
4 years ago
0 Votes · 10 Answers · 2K Views
So, I did a slew of pretrainings, then finetuned those pretrained models. Is there a way to go backwards from the finetuning Task ID to the pretraining Task ...
3 years ago
0 Votes · 21 Answers · 2K Views
3 years ago
0 Votes · 18 Answers · 2K Views
Is there any way to: within the UI, select and compare the scalars for more than 10 experiments? I'd like to do something like: select these 10 runs in such a...
3 years ago
0 Votes · 30 Answers · 2K Views
4 years ago
0 Votes · 8 Answers · 2K Views
So I'm in a Colab notebook, and after running my Trainer(), how do I upload my test metrics to ClearML? ClearML caught these metrics and uploaded them: train...
4 years ago
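
A minimal sketch of reporting test metrics by hand after training, assuming a HuggingFace Trainer named trainer and a test split named test_dataset (both placeholders):

from clearml import Task

task = Task.current_task()
# evaluate() returns a plain dict of metrics; report each one as a scalar.
test_metrics = trainer.evaluate(eval_dataset=test_dataset, metric_key_prefix="test")
for name, value in test_metrics.items():
    task.get_logger().report_scalar(title="test", series=name, value=value, iteration=0)
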
0 Votes · 30 Answers · 2K Views
Hello! Getting credential errors when attempting to pip install transformers from git repo, on a GPU Queue. fatal: unable to write credential store: Device o...
4 years ago
0 Votes · 0 Answers · 2K Views
So, this is something I've noticed, this line always seems to crash my Colab Notebooks: Task.current_task().completed()
4 years ago
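
One possible workaround, assuming a recent clearml release where close() and mark_completed() are available (not verified against the Colab setup described here):

from clearml import Task

task = Task.current_task()
if task is not None:
    # close() flushes and detaches the task without ending the calling process;
    # mark_completed() could be used instead if the status should be set explicitly.
    task.close()
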
0 Hi, there is a small bug in the web UI when comparing two experiments' scalars: if the two tasks have the same name, then clicking on the “Maximize Graph” button on one scalar series to get the bigger view on that scalar series, then the color of both series

I've been trying to do things like "color these five experiments one color, color these other five a different color", but then once I maximize the thing the colors all change

3 years ago
0 Hello! Does someone have a Huggingface integration example?

Hello! Integration in what sense? Training a model? Uploading a model to the hub? Something else?

3 years ago
0 Question About

Well, I can just work around it now that I know, by creating a folder with no subfolders and uploading that. But... 🤔 perhaps allow the interface to take in a list or generator? As in,
files_to_upload = [f for f in output_dir.glob("*") if f.is_file()]
Task.current_task().upload_artifact("best_checkpoint", artifact_object=files_to_upload)
And then it could zip up the list and name it "best_checkpoint"?

4 years ago
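A sketch of the workaround described above: stage the top-level files into a flat folder and upload the folder, which ClearML packs into a single artifact (output_dir is a placeholder path):

import shutil
import tempfile
from pathlib import Path

from clearml import Task

output_dir = Path("output")  # hypothetical directory containing files and subfolders
staging = Path(tempfile.mkdtemp()) / "best_checkpoint"
staging.mkdir(parents=True)

# Copy only the top-level files, skipping subdirectories.
for f in output_dir.glob("*"):
    if f.is_file():
        shutil.copy2(f, staging / f.name)

# Uploading a folder path makes ClearML zip its contents into one artifact.
Task.current_task().upload_artifact("best_checkpoint", artifact_object=staging)
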
0 So, I did a slew of pretrainings, then finetuned those pretrained models. Is there a way to go backwards from the finetuning Task ID to the pretraining Task ID? What I tried was:

Martin, I found a different solution (hardcoding the parent tasks by hand), but I'm curious to hear what you discover!

3 years ago
0 Question About

This sort of behavior is what I was thinking about when I saw "wildcard or pathlib Path" listed as options

4 years ago
0 Hello, there's a particular metric (perplexity) I'd like to track, but ClearML didn't seem to catch it. Specifically, this "Evaluation" section of run_mlm.py in the transformers repo:

Yeah that should work. Basically in --train_file it needs the path to train.txt, --validation_file needs the path to validation.txt, etc. I just put them all in the same folder for convenience

4 years ago
0 So, I did a slew of pretrainings, then finetuned those pretrained models. Is there a way to go backwards from the finetuning Task ID to the pretraining Task ID? What I tried was:

So for example, I'm able to view in the UI that my finetuning task 7725f5bed94848039c68f2a3a573ded6 has an input model, and I can find the creating experiment for that. But how would I do this in code?

3 years ago
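A rough sketch of doing the same lookup in code, assuming the input model objects expose the ID of the task that created them via model.task (the task ID below is the one mentioned above):

from clearml import Task

finetune_task = Task.get_task(task_id="7725f5bed94848039c68f2a3a573ded6")

# Walk the input models back to the experiments that produced them.
for model in finetune_task.models["input"]:
    print(model.name, "was created by task", model.task)
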
0 Hello! I'm just starting out with ClearML, and I seem to be having some sort of conflict between

OK, so with the RC, the issue has gone away. I can now import torch without issue.

4 years ago
0 How, if at all, should we cite ClearML in a research paper? Would you like us to? How about a footnote/acknowledgement?

Oh, and good job starting your reference with an author that goes early in the alphabetical ordering, lol:

3 years ago
0 How, if at all, should we cite ClearML in a research paper? Would you like us to? How about a footnote/acknowledgement?

Or do you just want:
@misc{clearml,
  title  = {ClearML - Your entire MLOps stack in one open-source tool},
  year   = {2019},
  note   = {Software available from },
  url    = { },
  author = {ClearML},
}

3 years ago
0 Is there any way to: within the UI, select and compare the scalars for more than 10 experiments? I'd like to do something like:

As an alternate solution, if I could group runs and get stats across the group, that would be cool

3 years ago
0 Currently trying to figure out how to extend ClearML's automagical reporting to JoeyNMT.

Yup! That works.
from joeynmt.training import train
train("transformer_epo_eng_bpe4000.yaml")
And it's tracking stuff successfully. Nice

4 years ago
0 Is there any way to get just one dataset folder of a Dataset? E.g. only "train" or only "dev"?

It would certainly be nice to have. Lately I've heard of groups that do slices of datasets for distributed training, or who "stream" data.

4 years ago
0 OK, next question, I've got some training args that I'd like to manually upload and have them show up in the attached place, under Configuration. It is a Huggingface TrainingArguments object, which has a to_dict() and to_json function

So for example:
{'output_dir': 'shiba_ner_trainer', 'overwrite_output_dir': False, 'do_train': True, 'do_eval': True,
 'do_predict': True, 'evaluation_strategy': 'epoch', 'prediction_loss_only': False,
 'per_device_train_batch_size': 16, 'per_device_eval_batch_size': 16,
 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None,
 'gradient_accumulation_steps': 1, 'eval_accumulation_steps': None,
 'learning_rate': 0.0004, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam...

4 years ago
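A minimal sketch of one way to get those values under the Configuration tab, assuming training_args is the HuggingFace TrainingArguments instance mentioned above and that the installed clearml version supports the name argument of Task.connect():

from clearml import Task

task = Task.current_task()
# Connecting a plain dict surfaces it as an editable section in the task's configuration.
task.connect(training_args.to_dict(), name="TrainingArguments")
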
0 How, if at all, should we cite ClearML in a research paper? Would you like us to? How about a footnote/acknowledgement?

Or we could do
@misc{clearml,
  title  = {ClearML - Your entire MLOps stack in one open-source tool},
  year   = {2019},
  note   = {Software available from },
  url    = { },
  author = {Allegro AI},
}

3 years ago