I've been waiting so eagerly for this, I made a playlist! https://open.spotify.com/playlist/4XBqPUgxHD5dbhcYqANzNo?si=G0E_s-OaQzefKIJ0wDkzHA
SubstantialElk6 if you sign up for free on http://clear.ml you'll get a private workspace. Some teams used it in hackday-jp recently and it was a great success
Hmm, is this something the docker -m (memory limit) flag will solve? https://docs.docker.com/config/containers/resource_constraints/
or is it a segfault inside the container because ulimit isn't set to -s unlimited?
if you want something that could work in either case, then maybe the second option is better
Not that relevant, since it can be seen from your task 🙂 but it would be interesting to find out if trains made anything much slower, and if so - how
All should work, again - is it much slower than without trains?
Hi SubstantialElk6, have a look at Task.execute_remotely, it's designed especially for that. For instance, in the recent webinar I used pytorch-cpu on my laptop with task.execute_remotely, and the agent automatically installed the GPU version. Example: https://github.com/abiller/events/blob/webinars/webinars/flower_detection_rnd/A1_dataset_input.py
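Roughly, the pattern looks like this - a minimal sketch, with placeholder project/queue names:
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="remote execution demo")

# everything above runs locally; this call stops the local process and
# enqueues the task so an agent picks it up with its own environment
task.execute_remotely(queue_name="default", exit_process=True)

# from here on, the code only runs on the agent
# (e.g. with the GPU build of pytorch that the agent installs)
import torch
print(torch.cuda.is_available())
```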
Hmm... for a quick and dirty integration that would probably do the trick - you could very well issue clearml-task commands on each kubeflow vertex (is that what they're called?)
What do you think AgitatedDove14 ?
It's built in 🙂 and it's for... "Services"
https://github.com/allegroai/trains-server#trains-agent-services--
As I wrote before, these are more geared towards unstructured data, and since this is a community channel I would feel more comfortable if you continue your conversation with the enterprise rep. If you wish to take this thread to a more private channel, I'm more than willing.
JealousParrot68 Some usability comments - since ClearML is opinionated, there are several pipeline workflow behaviors that make sense if you use Datasets and Artifacts interchangeably, e.g. the step caching AgitatedDove14 mentioned. Also, for Datasets: if you combine them with a dedicated subproject like I did on my show, you get the pattern where asking for the dataset of that subproject will always give you the most up-to-date dataset. Thus you can reuse your pipelines without havin...
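The retrieval side of that pattern looks roughly like this - a sketch with placeholder project/dataset names:
```python
from clearml import Dataset

# omitting the dataset id/version returns the most recent
# completed dataset under that subproject
ds = Dataset.get(
    dataset_project="my_project/datasets",  # the dedicated subproject
    dataset_name="training_data",
)
local_copy = ds.get_local_copy()  # read-only cached copy of the files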
submouldes == git submodules
Thanks @<1523701205467926528:profile|AgitatedDove14> , also I think you're missing a few pronouns there 🙂
MiniatureCrocodile39 let's get that fixed 💪 could you post the link here?
SubstantialBaldeagle49 not at the moment, but it is just a matter of implementing an APIClient call. You can open a feature request for a demo on GitHub - it will help make it happen sooner rather than later
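For reference, such a call would follow the usual APIClient pattern - the sketch below just lists tasks, since the exact endpoint for this feature doesn't exist yet:
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# e.g. listing the most recently updated tasks -
# a new call would look much the same
tasks = client.tasks.get_all(order_by=["-last_update"], page_size=10)
for t in tasks:
    print(t.id, t.name)
```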
If you can recreate the same problem with the original repo... 🤯 🤩
Hi SubstantialBaldeagle49 ,
Certainly, if you upload all the training images or even all the test images it will have a huge bandwidth/storage cost (I believe bandwidth does not matter, e.g. if you are using s3 from ec2). If you need to store all the detection results (for example, for QA or regression testing), you can always save the detections json as an artifact and view them later in your dev environment when you need them. The best option would be to only upload "control" images and "interesting" im...
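Saving the detections as an artifact is essentially a one-liner - a sketch with placeholder names and a made-up detections structure:
```python
from clearml import Task

task = Task.init(project_name="detection", task_name="qa run")

# any JSON-serializable object can be stored as an artifact
detections = {
    "img_001.jpg": [{"label": "car", "bbox": [10, 20, 50, 60], "score": 0.91}],
}
task.upload_artifact(name="detections", artifact_object=detections)
```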
There are an astounding number of such channels actually. It probably depends on your style. Would you like me to recommend some?
Of course we can always create a channel here as well... One more can't hurt 🙂
Hi SoggyFrog26 welcome to ClearML!
The way you found definitely works very well, especially since you can use it to change the input model from the UI in case you use the task as a template for orchestrated inference.
Note that you can wrap metadata around the model as well, such as the labels trained on and the network structure; you can also use a model package to ... well, package whatever you need with the model. If you want a concrete example I think we need a little more detail here on the fr...
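For the metadata part, a rough sketch (placeholder names, assuming the default output-model flow):
```python
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="model metadata demo")

# the label enumeration is attached to the task and
# inherited by the models it outputs
task.set_model_label_enumeration({"background": 0, "cat": 1, "dog": 2})

# a free-form design/config blob, e.g. the network structure
output_model = OutputModel(task=task, framework="pytorch")
output_model.update_design(config_text="resnet18, 3 classes, input 224x224")
```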
same name == same path, assuming no upload is taking place? *just making sure
The colors are controlled by the front-end, so not programmatically, but it is possible and (in my opinion) convenient to set them manually by clicking the appropriate position on the legend. Does this meet your expectations?
SubstantialElk6 this is a three-parter:
1. getting workers on your cluster - again, because of the rebrand I would go to the repo itself for the doc: https://github.com/allegroai/clearml-agent#kubernetes-integration-optional
2. integrating any code with clearml (2 lines of code, see the sketch below)
3. executing that from the web ui
If you need any help with the three, the community is here for you 🙂
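The two lines from step 2 are essentially (project/task names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="my project", task_name="my experiment")
```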
Well TenseOstrich47 you can certainly add any metadata to the model itself at training time (or as a post-training job if the model is not published). I'm actually doing a couple of videos on this on the ClearSHOW; tomorrow's episode should explain this approach and there will be examples next week. DM me here if you want to know more.
https://www.youtube.com/watch?v=r2BMMDzfyA0&list=PLMdIlCuMqSTkXBApOMqg2S5IeVfnkq2Mj
OddAlligator72 so if I get you correctly, it is equivalent to creating a file called driver.py containing all your entry points behind an argparser, and using it instead of train.py?
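i.e. something like this - the driver.py below is entirely hypothetical, just to illustrate the pattern:
```python
# driver.py -- hypothetical single entry point for all commands
import argparse

def train(args):
    print(f"training with lr={args.lr}")

def evaluate(args):
    print("evaluating")

def main():
    parser = argparse.ArgumentParser(description="single driver for all entry points")
    sub = parser.add_subparsers(dest="command", required=True)

    p_train = sub.add_parser("train")
    p_train.add_argument("--lr", type=float, default=1e-3)
    p_train.set_defaults(func=train)

    p_eval = sub.add_parser("evaluate")
    p_eval.set_defaults(func=evaluate)

    args = parser.parse_args()
    args.func(args)  # dispatch to the selected entry point

if __name__ == "__main__":
    main()
```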
Did you try archiving all the experiments and then deleting the project?
CloudyHamster42 it will only affect new tasks created with the config file... sorry
Sounds odd, I bet there is a way to make it work without an explicit logging statement. Is this with tf2 or pytorch? Which trains version are you using?
Btw, the reason they initialize with barely discernible colors lies in the hash function that encodes the string content to a color. I.e., this is actually a feature
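Conceptually it works something like this - a sketch only, the front-end's actual hash function may differ:
```python
import hashlib

def str_to_color(name: str) -> str:
    """Deterministically map a string to an RGB hex color."""
    digest = hashlib.md5(name.encode("utf-8")).digest()
    r, g, b = digest[0], digest[1], digest[2]
    return f"#{r:02x}{g:02x}{b:02x}"

# the same series name always yields the same color
print(str_to_color("loss"))
print(str_to_color("accuracy"))
```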