IntriguedBat44
Moderator
3 Questions, 8 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0

Badges (1): 8 × Eureka!
0 Votes
10 Answers
954 Views
When I run experiments I set CUDA_VISIBLE_DEVICES to some integer to only make that device available to the main process (as is common). I can verify that th...
3 years ago
0 Votes
3 Answers
891 Views
Hi everyone, if I want to run a script that has Trains tracking statements in it but just this time I want to disable all logging, how would I go about that?
4 years ago
0 Votes
5 Answers
891 Views
Hey! Is there a way to ignore the spammy output of progressbars like progressbar2 and tqdm in the captured log in Trains?
4 years ago
Hi everyone, if I want to run a script that has Trains tracking statements in it but just this time I want to disable all logging, how would I go about that?

Thanks guys 🙏 I was looking for a way to do it that doesn’t require code changes, which I got. Much appreciated!
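The thread doesn’t quote the exact mechanism here, but for reference: current clearml releases document an offline mode (`Task.set_offline(True)` or the `CLEARML_OFFLINE_MODE` environment variable) — verify against your version, since this thread predates the rename from Trains. As a version-independent illustration, tracking can also be guarded behind an environment variable; `DISABLE_TRACKING` below is a made-up name, not an official flag:

```python
import os

# Illustrative sketch only: guard the tracking init call behind an
# environment variable so one run can skip all logging.
# DISABLE_TRACKING is a hypothetical name, not an official Trains/ClearML flag.

def maybe_init_task(init_fn, **kwargs):
    """Call the tracking init function unless DISABLE_TRACKING=1 is set."""
    if os.environ.get("DISABLE_TRACKING") == "1":
        return None  # skip all tracking/logging for this run
    return init_fn(**kwargs)
```

With clearml installed, you would pass `Task.init` (plus its usual `project_name`/`task_name` arguments) as `init_fn`.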

4 years ago
When I run experiments I set…

I can confirm that commenting out Task.init(…) fixes it.

You can reproduce it simply by taking the ClearML PyTorch MNIST example: https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/pytorch_mnist.py

To see it clearly, it’s easiest if you get the GPU allocated before calling task = Task.init(…), and to avoid crashing because you’re missing the task variable, you can embed just before and after Task.init(…) using IPython. You also n...

3 years ago
When I run experiments I set…

This is the output of sudo fuser -v /dev/nvidia* for GPUs 0, 1 and 2 when I run a single experiment on GPU 0, a different user is running on GPU 1, and no one is running on GPU 2 (the remaining 7 GPUs are omitted but look similar to GPU 2).

This only happens when Task.init is called; it never happens otherwise.

```
/dev/nvidia0:  jdh   2448 F.... python
/dev/nvidia1:  je     315 F...m python3
               jdh   2448 F.... python
/dev/nvidia2:  jdh ...
```

3 years ago
When I run experiments I set…

No problem!
Yes, I’m running manual mode and I only see one GPU tracked in the resource monitoring. I’m using trains 0.16.4.

Everything seems to work as it should, but if I run without Trains, my process is only visible on the one GPU I made visible with CUDA_VISIBLE_DEVICES. If I run with Trains, it’s “registered” on all the other devices as well when inspected with sudo fuser -v /dev/nvidia*.

3 years ago
When I run experiments I set…

I’ve verified that CUDA_VISIBLE_DEVICES doesn’t get changed during the Task.init call or anywhere else during the script.
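That kind of check can be sketched as a small helper that snapshots the variable around the call under suspicion; the helper name is made up for illustration, and the real Task.init would be passed in as the callable:

```python
import os

# Minimal sketch: snapshot an environment variable, run the callable
# under suspicion, and report whether the variable changed.

def env_unchanged_by(var, fn, *args, **kwargs):
    """Return (result, unchanged): `unchanged` is True when `var`
    holds the same value before and after calling fn."""
    before = os.environ.get(var)
    result = fn(*args, **kwargs)
    return result, os.environ.get(var) == before
```

With clearml installed you would call env_unchanged_by("CUDA_VISIBLE_DEVICES", Task.init, ...) with the usual Task.init arguments.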

3 years ago