ScaryJellyfish75
Moderator
3 Questions, 7 Answers
Active since 17 March 2023
Last activity one year ago

Reputation: 0
Badges (1): 3 × Eureka!
0 Votes 4 Answers 1K Views
Hey community! I have a question regarding the Optuna optimizer with ClearML. I'm using a config yaml file that I'm connecting via task.connect_configuration...
one year ago
0 Votes 1 Answer 1K Views
Hey there! I kindly ask for your swarm knowledge on ClearML pipelines. I'm trying to set up a simple pipeline with a Controller running on the service-queue a...
one year ago
0 Votes 2 Answers 1K Views
one year ago
0 Hi, If I've ClearML Agents Installed On Several Servers, Each With A Single GPU, How Can I Train A GPT2 Model That Would Require Multiple GPUs?

ClearML usually just moves the execution down to the nodes. I'm unsure what role ClearML is playing in your issue.

one year ago
0 Hi, If I've ClearML Agents Installed On Several Servers, Each With A Single GPU, How Can I Train A GPT2 Model That Would Require Multiple GPUs?

IMHO ClearML would just start the execution on multiple hosts. Keep in mind that the hosts need to be on the same LAN and have very high bandwidth.

What you are looking for is called "DistributedDataParallel". Maybe a DDP tutorial gives you a starting point.
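For illustration, a minimal PyTorch DDP sketch (not ClearML-specific; the linear model and random data are placeholders standing in for GPT2 and a real dataset). Launch it with torchrun, one process per GPU:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK and the rendezvous env vars for us
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 2).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):
        x = torch.randn(32, 10).cuda(local_rank)         # placeholder batch
        y = torch.randint(0, 2, (32,)).cuda(local_rank)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()  # DDP all-reduces gradients across processes here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

On each machine you would run something like torchrun --nproc_per_node=1 --nnodes=2 --node_rank=<0 or 1> --master_addr=<host> --master_port=29500 train_ddp.py (train_ddp.py is a placeholder filename).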

one year ago
0 Hi, If I've ClearML Agents Installed On Several Servers, Each With A Single GPU, How Can I Train A GPT2 Model That Would Require Multiple GPUs?

I would recommend you start getting familiar with the distributed training modes (for example DDP in PyTorch). There are some important concepts that are required to train across multiple GPUs and devices.


Before you start with a sophisticated model, I'd recommend trying this training setup with a baseline model and checking that data, gradients, weights, metrics, etc. are synced correctly. See the sketch below.
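To make that sync check concrete, here is a hedged sketch of one way to verify that all ranks hold identical weights after an optimizer step (assumes torch.distributed is already initialized, as in a DDP run; assert_weights_synced is a made-up helper name):

import torch
import torch.distributed as dist

def assert_weights_synced(model):
    # Broadcast rank 0's copy of each parameter and compare locally;
    # after a correct DDP step every rank should match rank 0 exactly.
    for name, param in model.named_parameters():
        reference = param.detach().clone()
        dist.broadcast(reference, src=0)
        if not torch.allclose(param.detach(), reference):
            raise RuntimeError(f"Parameter {name} diverged from rank 0")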

one year ago
0 Hi, I Think I Found A Problem With A Clean ClearML Install. I Create A New Python Env:

Another workaround that did the trick for me was to pin the version of urllib3 in your requirements.txt:
urllib3==1.26.15

one year ago
0 Hey Community! I Have A Question Regarding The Optuna Optimizer With ClearML. I'm Using A Config YAML File That I'm Connecting Via

Yes, that makes sense. I solved it by actively reading them via Task.parameters. Now that works; I just had to adjust the config parser a bit.
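For anyone landing here later, a minimal sketch of that approach (project and file names are placeholders; this assumes the standard Task.connect / Task.get_parameters API rather than the exact parser adjustment above):

import yaml
from clearml import Task

task = Task.init(project_name="demo", task_name="train")  # placeholder names

with open("config.yaml") as f:  # placeholder config file
    cfg = yaml.safe_load(f)

# Register the values as hyperparameters so the Optuna optimizer can override
# them, then actively read the effective values back instead of trusting the
# local YAML.
cfg = task.connect(cfg)
print(task.get_parameters())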

one year ago
0 Hey Community! I Have A Question Regarding The Optuna Optimizer With ClearML. I'm Using A Config YAML File That I'm Connecting Via

The optimizer part works out of the box, yes. But my training usually consumes a YAML file with parameters rather than reading them via argparse. This is the part I had to adjust.

one year ago