WickedGoat98 I gave you a slight Twitter push 🙂 If I were you, I would make sure that the app credentials you put in your screenshot are revoked 🙂
OddAlligator72 can you link to the wandb docs? Looks like you want a custom entry point, I'm thinking "maybe" but probably the answer is that we do it a little differently here.
Hi Dan, please take a look at this answer, the webapp interface mimics this. Does this click for you?
Which parser are you using? argparse should be logged automatically.
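To illustrate the argparse point above, here is a minimal sketch: when `Task.init()` runs before `parse_args()`, the SDK patches argparse and records every parsed argument automatically. The project/task names and arguments below are made up for the example, and the clearml call is commented out so the snippet runs on its own.

```python
import argparse

# from clearml import Task
# task = Task.init(project_name="demo", task_name="argparse-logging")

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)
parser.add_argument("--epochs", type=int, default=10)

# These parsed values are what the SDK would pick up and log automatically.
args = parser.parse_args(["--lr", "0.001"])
print(args.lr, args.epochs)  # → 0.001 10
```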
This looks like a genuine git fetch issue. Trains would have trouble figuring out the diff if git cannot find the base commit...
Do you have submodules in the repo? Did the DS push their commits?
Hi, it is under construction, but it is going to be there.
Removed the yay's DeliciousBluewhale87
https://www.youtube.com/watch?v=XpXLMKhnV5k
Hi there, Evangelist for ClearML here. What you are describing is a conventional provisioning solution such as SLURM, and that works with ClearML as well ☺ BTW, as a survivor of such provisioning schemes, I don't think they are always worth it
Hmm, anything that docker's -m (memory limit) flag will solve? https://docs.docker.com/config/containers/resource_constraints/
Or is it a segfault inside the container because ulimit isn't set to -s unlimited?
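A quick way to check the ulimit theory from inside the container, using Python's standard `resource` module (Unix only; `-s unlimited` corresponds to `RLIM_INFINITY`):

```python
import resource

# Read the current stack-size limit; a small soft limit here is a
# common cause of segfaults for deep-recursion / large-stack workloads.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("stack soft limit:",
      "unlimited" if soft == resource.RLIM_INFINITY else soft)
```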
All should work, again - is it much slower than without trains?
Sure thing. All you need is the credentials. Did you see my extreme example here? https://youtu.be/qz9x7fTQZZ8
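On the credentials point, a minimal sketch of the usual ways to hand the SDK its keys. The host and key values below are placeholders, and the in-code alternative is commented out so the snippet runs without clearml installed:

```python
import os

# Option 1: environment variables read by the SDK at startup.
os.environ["CLEARML_API_HOST"] = "https://api.clear.ml"
os.environ["CLEARML_API_ACCESS_KEY"] = "<your-access-key>"
os.environ["CLEARML_API_SECRET_KEY"] = "<your-secret-key>"

# Option 2: set them in code before Task.init:
# from clearml import Task
# Task.set_credentials(key="<your-access-key>", secret="<your-secret-key>")
```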
Yeah the file system on those VMs is really slow
Sure we do! Btw MiniatureCrocodile39 iirc I answered one of your threads with a recording to a webinar of mine
Hooray for the new channel! You are all invited
Should work on new tasks if you use this command in the script. If you'd rather keep the scripts as clean as possible, you can also configure it globally for all new tasks in trains.conf
I'm all for more technical tutorials for doing that... all of this fits the clearml methodology
I think the most up-to-date documentation for that is currently on the github repo, right SuccessfulKoala55 ?
https://github.com/allegroai/clearml-server-helm
I need to check something for you EnviousStarfish54 , I think one of our upcoming versions should have something to "write home about" in that regard
The short answer is "definitely yes", but to get maximum usage you will probably want to set up priority queues
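To make the priority-queue idea concrete, here is a toy illustration of the polling order: an agent given several queues checks them highest priority first, so high-priority tasks never wait behind a long batch queue. The queue names and tasks are made up; the real agent does this for you when you pass it multiple queues.

```python
from collections import deque

# Made-up queues in priority order, highest first.
queues = {"high": deque(["urgent-task"]), "low": deque(["batch-task"])}
priority = ["high", "low"]

def next_task():
    """Return the next task, draining higher-priority queues first."""
    for name in priority:
        if queues[name]:
            return queues[name].popleft()
    return None

print(next_task())  # → urgent-task
```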
BTW if anyone from the future is reading this, try the docs again 🙂
Hi ManiacalPuppy53 glad to see that you got over the mongo problem by advancing to PI4 🙂
ClearML Free or self-hosted?
Hi Torben, that's great to hear! Which of the new features seems the most helpful to your own use case? BTW, to your question the answer is yes, but I'm not exactly sure what's the pythonic way of doing that. AgitatedDove14 ?
EnviousStarfish54 that is the intention, it is cached. But you might need to manage your cache settings if you have many of those, since the cache size starts from a sane default. Hope this helps.
Hi Elron, I think the easiest way is to print the results of !nvidia-smi , or use the framework interface to get these and log them as a ClearML artifact. For example:
https://pytorch.org/docs/stable/cuda.html
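A minimal sketch of the first option: capture the `nvidia-smi` output and attach it to the task. The clearml calls are commented out (and guarded for machines without a GPU) so the snippet runs anywhere; the project/task/artifact names are placeholders.

```python
import shutil
import subprocess

# from clearml import Task
# task = Task.init(project_name="demo", task_name="gpu-info")

if shutil.which("nvidia-smi"):
    gpu_info = subprocess.run(["nvidia-smi"],
                              capture_output=True, text=True).stdout
else:
    gpu_info = "nvidia-smi not available on this machine"

# Report as console text, or upload as an artifact:
# task.get_logger().report_text(gpu_info)
# task.upload_artifact("gpu_info", gpu_info)
print(gpu_info[:60])
```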
Another tip - if you have uncommitted changes on top of a commit, you will have to push that commit before the agent can successfully apply the diff in remote mode 🙂
Sorry for being late to the party WearyLeopard29 , if you want to see get_mutable_copy() in the wild you can check the last cell of this notebook:
https://github.com/abiller/events/blob/webinars/videos/the_clear_show/S02/E05/dataset_edit_00.ipynb
Or skip to 3:30 in this video: