Hang on, CostlyOstrich36 I just noticed that there's a "project compute time" on the dashboard? Do you know how that is calculated/what that is?
I might not be able to get to that but if you create an issue I'd be happy to link or post what I came up with, wdyt?
CostlyOstrich36 at the bottom of the screenshot it says "Compute Time: 440 days"
This sort of behavior is what I was thinking about when I saw "wildcard or pathlib Path" listed as options
And the reason is that I have a bunch of "runs" with the same settings, and I want to compare broadly across several settings. So if I select "a bunch" with setting A, I can see a general pattern when compared with setting B.
I've got 7-10 runs per setting, and about 7 or 8 settings
Yup! That works.
`from joeynmt.training import train
train("transformer_epo_eng_bpe4000.yaml")`
And it's tracking stuff successfully. Nice
No, not specifically 20, in fact more than 20
Ah, makes sense! Have you considered adding a "this is the old website! Click here to get to the new one!" banner, kinda like on docs for python2 functions? https://docs.python.org/2.7/library/string.html
This seems similar but not quite the thing I'm looking for: https://allegro.ai/clearml/docs/docs/tutorials/tutorial_explicit_reporting.html#step-1-setting-an-output-destination-for-model-checkpoints
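If I'm reading that step right, it basically boils down to passing an output destination to Task.init, roughly like this (the bucket URI and names here are just placeholders, not what I actually use):
`from clearml import Task

# Rough sketch: point model checkpoint uploads at an output destination.
# "s3://my-bucket/checkpoints" is a made-up URI.
task = Task.init(
    project_name="my_project",
    task_name="train_with_output_uri",
    output_uri="s3://my-bucket/checkpoints",
)`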
As in, I edit Installed Packages, delete everything there, and put that particular list of packages.
BTW, http://clear.ml has this at the bottom:
Sure, if you want to give up that first-place spot! 😉
Or examples of, like, "select all experiments in project with iterations > 0"?
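Something along these lines is what I had in mind (filtering client-side because I'm not sure of the exact server-side filter syntax; the project name is made up):
`from clearml import Task

# Grab everything in the project, then keep only runs that actually iterated.
tasks = Task.get_tasks(project_name="my_project")
ran_tasks = [t for t in tasks if (t.get_last_iteration() or 0) > 0]`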
Sounds doable, I will give it a try.
The task.execute_remotely thing is quite interesting, I didn't know about that!
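In case it helps anyone reading later, my understanding is it looks roughly like this (the queue name is just an example):
`from clearml import Task

# Sketch: init the task locally, then hand it off to an agent queue.
task = Task.init(project_name="my_project", task_name="remote_run")
task.execute_remotely(queue_name="default")  # "default" is a made-up queue name`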
generally I include the random seed in the name
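i.e. something like this, just to show the naming convention (names are made up):
`from clearml import Task

seed = 42
task = Task.init(project_name="my_project", task_name=f"run_seed{seed}")`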
Aggregating the sort of range of all the runs, maybe like a hurricane track?
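Roughly what I'm picturing, as a matplotlib sketch with placeholder data (a runs × steps array of the same metric):
`import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: 8 runs, 100 steps of the same metric.
runs = np.cumsum(np.random.rand(8, 100), axis=1)
steps = np.arange(runs.shape[1])

# Shaded min/max envelope across runs (the "hurricane track"), mean curve on top.
plt.fill_between(steps, runs.min(axis=0), runs.max(axis=0), alpha=0.3, label="range across runs")
plt.plot(steps, runs.mean(axis=0), label="mean")
plt.legend()
plt.show()`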
Well they do all have different names
Actually at this point, I'd say it's too late, you might want to just generate new credentials...
CostlyOstrich36 I get some weird results for "active duration".
For example, several of the experiments show that their active duration is more than 90 days, but I definitely didn't run them that long.
Martin I found a different solution (hardcoding the parent tasks by hand), but I'm curious to hear what you discover!
Well, I can just work around it now that I know, by creating a folder with no subfolders and uploading that. But... 🤔 perhaps allow the interface to take in a list or generator? As in,
`files_to_upload = [f for f in output_dir.glob("*") if f.is_file()]
Task.current_task().upload_artifact("best_checkpoint", artifact_object=files_to_upload)`
And then it could zip up the list and name it "best_checkpoint"?
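The workaround I mentioned looks roughly like this (the staging folder name is made up; as far as I know upload_artifact will zip up a folder path on its own):
`import shutil
from pathlib import Path
from clearml import Task

# Copy just the files into a flat staging folder and upload that instead.
output_dir = Path("output_dir")  # same output_dir as above
staging = Path("best_checkpoint_staging")
staging.mkdir(exist_ok=True)
for f in output_dir.glob("*"):
    if f.is_file():
        shutil.copy(f, staging / f.name)
Task.current_task().upload_artifact("best_checkpoint", artifact_object=staging)`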
This seems to work:
`from clearml import Logger

for test_metric in posttrain_metrics:
    print(test_metric, posttrain_metrics[test_metric])
    # report_scalar(title, series, value, iteration)
    Logger.current_logger().report_scalar("test", test_metric, posttrain_metrics[test_metric], 0)`
No, they're not in Tensorboard
essentially running this: https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py