HighOtter69 so if you manually change the color of one of them, the others change for you as well?!
Hi TenseOstrich47, sorry for the long wait. Here is a video + code of how to put any sort of metadata inside your ClearML model artifact. We will also be improving this, so if you have feature requests we would love to hear about them
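In case it saves someone a click, here is a minimal sketch of the idea (not the exact code from the video - the project/file names and the metadata dict are made up):

```python
# Minimal sketch: attach arbitrary metadata to a ClearML model by connecting
# a configuration dict to the OutputModel (names and values are illustrative).
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="model metadata demo")

output_model = OutputModel(task=task, name="my-model")
output_model.update_design(config_dict={
    "trained_on": "dataset-v2",
    "classes": ["cat", "dog"],
    "notes": "threshold tuned on the validation split",
})
# register the actual weights file with the model entry (assumed to exist locally)
output_model.update_weights(weights_filename="model.pt")
```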
https://www.youtube.com/watch?v=WIZ88SmT58M&list=PLMdIlCuMqSTkXBApOMqg2S5IeVfnkq2Mj&index=12
Thanks for your interest in the enterprise offering! I would much rather we keep this Slack workspace for the open-source solution we all know and love. You can email me at ariel@clear.ml for more info. For a short answer: the data lineage is about an order of magnitude cooler, and hyperdatasets can be thought of as "beyond feature stores for unstructured data". Does this help?
Also, while we are at it, EnviousStarfish54, can I just make sure - you meant this page, right?
https://allegro.ai/enterprise/
There are several ways of doing what you need, but none of them is as 'magical' as we pride ourselves on being. For that, we would need user input like yours in order to find the commonalities.
SubstantialBaldeagle49
Hopefully you can reuse the same code you used to render the images until now, just not inside a training loop. I would recommend against integrating with trains, but you can query the trains-server from any app - just make sure you serve it with the appropriate trains.conf and manage the security. You can even manage the visualization server from within trains using trains-agent. Open source is so much fun!
SubstantialBaldeagle49 not at the moment, but it is just a matter of implementing an APIClient call. You can open a feature request for a demo on GitHub - it will help make it happen sooner rather than later
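For context, such a call would follow roughly this pattern (a sketch assuming the trains package layout - the exact endpoint depends on what the feature ends up exposing, and the project id is a placeholder):

```python
# Sketch: querying the trains-server programmatically via APIClient,
# which wraps the server's REST API.
from trains.backend_api.session.client import APIClient

client = APIClient()  # picks up credentials from trains.conf

# e.g. list the most recently updated tasks in a project (placeholder project id)
tasks = client.tasks.get_all(
    project=["<project_id>"],
    order_by=["-last_update"],
    page_size=10,
)
for t in tasks:
    print(t.id, t.name, t.status)
```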
Hi SubstantialBaldeagle49 ,
Certainly, if you upload all the training images, or even all the test images, it will have a huge bandwidth/storage cost (I believe bandwidth does not matter, e.g. if you are using S3 from EC2). If you need to store all the detection results (for example, for QA or regression testing), you can always save the detections JSON as an artifact and view them later in your dev environment when you need to. The best option would be to only upload "control" images and "interesting" images...
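A rough sketch of that artifact round-trip (project, task, and field names here are just placeholders):

```python
# Sketch: save detection results as a JSON-able artifact during the run,
# then pull them back later from a dev machine.
from trains import Task

# --- inside the inference/testing job ---
task = Task.init(project_name="detection", task_name="nightly-run")
detections = {
    "img_001.jpg": [{"label": "car", "bbox": [10, 20, 50, 80], "score": 0.91}],
}
task.upload_artifact(name="detections", artifact_object=detections)

# --- later, in your dev environment ---
finished = Task.get_task(project_name="detection", task_name="nightly-run")
results = finished.artifacts["detections"].get()  # downloads and deserializes
```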
Hi there, Evangelist for ClearML here. What you are describing is a conventional provisioning solution such as SLURM, and that works with ClearML as well. BTW, as a survivor of such provisioning schemes, I don't think they are always worth it
CloudyHamster42 it will only affect new tasks created with the config file... sorry
#goodfirstissue AgitatedDove14
OddAlligator72 so if I get you correctly, it is equivalent to creating a file called driver.py with all your entry points with an argparser and using it instead of train.py?
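i.e. something along these lines (entry-point names and arguments are made up, just to illustrate the structure):

```python
# driver.py - illustrative sketch of a single entry-point script that
# dispatches to the different routines via argparse.
import argparse

def train(args):
    print(f"training with lr={args.lr}")

def evaluate(args):
    print(f"evaluating checkpoint {args.checkpoint}")

def main():
    parser = argparse.ArgumentParser(description="single driver entry point")
    sub = parser.add_subparsers(dest="command", required=True)

    p_train = sub.add_parser("train")
    p_train.add_argument("--lr", type=float, default=1e-3)
    p_train.set_defaults(func=train)

    p_eval = sub.add_parser("evaluate")
    p_eval.add_argument("--checkpoint", required=True)
    p_eval.set_defaults(func=evaluate)

    args = parser.parse_args()
    args.func(args)

if __name__ == "__main__":
    main()
```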
Are you doing imshow or savefig? Is this the matplotlib OOP interface or the original pyplot subplot? Any warning messages relevant to this?
if you want something that could work in either case, then maybe the second option is better
The log storage can be configured if you spin up your own clearml-server, but it won't have repository structure - and it shouldn't, btw. If you need a secondary backup of everything, it is possible to set something up as well.
Well, in general there is no one answer - I can talk about it for days. In ClearML the question is really a non-issue, since if you build a pipeline from notebooks on your dev machine in R&D, it is automatically converted to python scripts inside containers. Where shall we begin? Maybe you describe your typical workload and intended deployment with latency constraints?
Rather unfortunate that such a vendor would risk such an inaccurate comparison...
Thanks @<1523701205467926528:profile|AgitatedDove14>, also I think you're missing a few pronouns there
Cool. I found Logger.tensorboard_single_series_per_graph()
Hi Dan, please take a look at this answer, the webapp interface mimics this. Does this click for you?
Should work on new tasks if you use this command in the script. If you'd rather keep the scripts as clean as possible, you can also configure it globally for all new tasks in trains.conf
Another tip - if you have uncommitted changes on top of a commit, you will have to push that commit before the agent can successfully apply the diff in remote mode
That will be sdk.metrics.tensorboard_single_series_per_graph
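For the record, the two places this can be set, as far as I understand it (the exact parameter name in the Logger call is an assumption, so please double-check against the SDK):

```python
# Option 1 - globally, in trains.conf (assumed layout):
#   sdk {
#     metrics {
#       tensorboard_single_series_per_graph: true
#     }
#   }
#
# Option 2 - per script, using the Logger method mentioned above
# (parameter name assumed):
from trains import Logger

Logger.tensorboard_single_series_per_graph(single_series=True)
```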
First of all I'd like to thank you for pointing out that our messaging is confusing. We'll fix that.
To the point: Nice interface and optimized db access for feature stores is part of our paid, enterprise offering.
Managing data/features as part of your pipelines and getting version-controlled features == offline feature store
The latter is doable with the current ClearML open source tools, and I intend to show it very soon. But right now you won't have a different pane for DataOps, it'll...
All should work, again - is it much slower than without trains?
Hi, I think this came up when we discussed the joblib integration right? We have a model registry, ranging from auto spec to manual reporting. E.g. https://allegro.ai/clearml/docs/docs/examples/frameworks/pytorch/manual_model_upload.html
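For the manual-reporting end of that range, the linked example boils down to something like this (a sketch - the file name and framework string are placeholders, and the linked page is the authoritative version):

```python
# Sketch: manually registering an existing model file with the current task,
# roughly what the linked manual_model_upload example demonstrates.
from trains import Task, OutputModel

task = Task.init(project_name="examples", task_name="manual model upload")

output_model = OutputModel(task=task, framework="ScikitLearn")
output_model.update_weights(weights_filename="model.pkl")  # assumed to exist locally
```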