Also, while we are at it, EnviousStarfish54, can I just make sure - you meant this page, right?
https://allegro.ai/enterprise/
We need SEO for our docs 🙂
Nbdev is "neat", but it's ultimately another framework that you have to enforce.
Re: maturity models - you will find no love for them here 🙂 mainly because they don't drive research to production.
Your described setup can easily be outshined by a ClearML deployment, but SageMaker instances are cheaper. If you have a limited number of model architectures, you can get the added benefit of tracking your S3 models with ClearML with very little code change. As for deployment - that's anoth...
Rather unfortunate that such a vendor would risk such an inaccurate comparison...
Another tip - if you have uncommitted changes on top of a commit, you will have to push that commit before the agent can successfully apply the diff in remote mode 🙂
As I wrote before, these are more geared towards unstructured data. As this is a community channel, I would feel more comfortable if you continued your conversation with the enterprise rep. If you wish to take this thread to a more private channel, I'm more than willing.
What about cloning and setting "last commit in branch"?
SubstantialElk6 if you sign up for free on http://clear.ml you'll get a private workspace. Some teams used it in hackday-jp recently - it was a great success.
There are several ways of doing what you need, but none of them is as 'magical' as we pride ourselves on being. For that, we would need user input like yours in order to find the commonalities.
First of all I'd like to thank you for pointing out that our messaging is confusing. We'll fix that.
To the point: Nice interface and optimized db access for feature stores is part of our paid, enterprise offering.
Managing data/features as part of your pipelines and getting version-controlled features == offline feature store
The latter is doable with the current ClearML open source tools, and I intend to show it very soon. But right now you won't have a different pane for DataOps, it'll...
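To make "version-controlled features" a bit more concrete, here is a tiny stdlib-only sketch (the data and the content-addressing scheme are hypothetical, not ClearML API): hashing a serialized feature table gives every feature set a deterministic version id, which is the core bookkeeping idea behind an offline feature store.

```python
import hashlib
import json

# Hypothetical feature table; in practice this would come from your pipeline step
features = {"user_id": [1, 2, 3], "avg_session_len": [12.5, 3.1, 7.8]}

# Serialize deterministically, then content-address it: identical features
# always yield the identical version id, so lineage is reproducible.
blob = json.dumps(features, sort_keys=True).encode()
version = hashlib.sha256(blob).hexdigest()[:12]
print(version)
```

Re-running the snippet on the same features always prints the same id, while any change to a value produces a new one - that is the "version-controlled" part.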
I would say that this is the opposite of the ClearML vision... Repos are for code; the ClearML server is for logs, stats and metadata. It can also be used for artifacts if you don't have dedicated artifact storage (depending on deployment, etc.).
Do you mind explaining your viewpoint?
WickedGoat98 I gave you a slight Twitter push 🙂 If I were you, I would make sure that the app credentials you put in your screenshot are revoked 🙂 🙂
Sounds odd; I bet there is a way to make it work without an explicit logging statement. Is this with TF2 or PyTorch? Which Trains version are you using?
CloudyHamster42 it will only affect new tasks created with the config file... sorry
Cool. I found Logger.tensorboard_single_series_per_graph()
That will be sdk.metrics.tensorboard_single_series_per_graph
Should work on new tasks if you use this command in the script. If you'd rather keep the scripts as clean as possible, you can also configure it globally for all new tasks in trains.conf.
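For reference, the global setting would sit in the config file roughly like this (a sketch; the nested section layout is assumed from the `sdk.metrics.tensorboard_single_series_per_graph` key above):

```
# ~/trains.conf (sketch)
sdk {
  metrics {
    tensorboard_single_series_per_graph: true
  }
}
```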
https://github.com/allegroai/trains/issues/193 for future reference (I will update later)
Hi, I think this came up when we discussed the joblib integration, right? We have a model registry, ranging from auto-spec to manual reporting. E.g. https://allegro.ai/clearml/docs/docs/examples/frameworks/pytorch/manual_model_upload.html
Hi SubstantialBaldeagle49 ,
Certainly, if you upload all the training images or even all the test images, it will have a huge bandwidth/storage cost (I believe bandwidth does not matter, e.g. if you are using S3 from EC2). If you need to store all the detection results (for example, for QA or regression testing), you can always save the detections JSON as an artifact and view them later in your dev environment when you need them. The best option would be to only upload "control" images and "interesting" im...
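As a sketch of the "save the detections JSON as an artifact" route (the detection data here is made up, and `task` would be your live Trains/ClearML task):

```python
import json
import os
import tempfile

# Hypothetical detection results; in practice these come from your model
detections = {
    "img_001.jpg": [{"label": "car", "score": 0.91, "box": [10, 20, 110, 220]}],
    "img_002.jpg": [{"label": "dog", "score": 0.78, "box": [5, 5, 60, 80]}],
}

# Dump the detections to a JSON file that can be attached to the task
path = os.path.join(tempfile.mkdtemp(), "detections.json")
with open(path, "w") as f:
    json.dump(detections, f, indent=2)

# With a live task you would then upload it, e.g.:
# task.upload_artifact("detections", artifact_object=path)
```

Storing the JSON instead of the images keeps the artifact tiny, and you can always re-render the boxes locally on images you already have.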
I've been waiting so eagerly for this, I made a playlist! https://open.spotify.com/playlist/4XBqPUgxHD5dbhcYqANzNo?si=G0E_s-OaQzefKIJ0wDkzHA
I will dig around to see how all of this could be accomplished.
Right now I see it done in two ways:
- a function that you must remember to call each time, which would do the upkeep
- a script that you can run once in a while to do cleanup
Which one would you prefer I pursue?
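A minimal sketch of the second option, assuming runs live in per-experiment directories under one root (the `cleanup` helper and the directory layout are hypothetical, not ClearML API):

```python
import os
import shutil
import tempfile
import time

def cleanup(root, max_age_days=30):
    """Remove run directories under root that are older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
            shutil.rmtree(path)
            removed.append(name)
    return removed

# Quick demo on a throwaway directory
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "old_run"))
os.mkdir(os.path.join(root, "new_run"))
stale = time.time() - 40 * 86400
os.utime(os.path.join(root, "old_run"), (stale, stale))
print(cleanup(root))  # -> ['old_run']
```

Drop something like this into cron and the upkeep happens without anyone having to remember to call a function.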
I think there is a way, I'll have to check. BTW when you compare two tasks you do get separate graphs, right?
Hi TrickySheep9, ClearML Evangelist here; this question is the one I live for 🙂 Are you specifically asking "how do people usually do it with ClearML", or really the "general" answer?
Hi TenseOstrich47, sorry for the long wait. Here is a video + code showing how to put any sort of metadata inside your ClearML model artifact 🙂 We will also be improving this, so if you have feature requests we would love to hear them.
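Independent of the video, one common pattern for self-describing model artifacts can be sketched with the stdlib alone - bundle the weights with a metadata JSON (the file names and metadata fields here are made up):

```python
import json
import os
import tempfile
import zipfile

# Hypothetical metadata you want to travel with the model
meta = {"framework": "pytorch", "input_shape": [3, 224, 224], "labels": ["cat", "dog"]}

# Bundle a metadata JSON next to the weights inside one archive
bundle = os.path.join(tempfile.mkdtemp(), "model_bundle.zip")
with zipfile.ZipFile(bundle, "w") as z:
    z.writestr("metadata.json", json.dumps(meta))
    z.writestr("weights.bin", b"\x00" * 16)  # placeholder for real weights

print(sorted(zipfile.ZipFile(bundle).namelist()))  # -> ['metadata.json', 'weights.bin']
```

The resulting archive can then be registered as the model file on the task (e.g. via `OutputModel.update_weights`), though the exact call depends on your setup; the video covers the ClearML-native way.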
https://www.youtube.com/watch?v=WIZ88SmT58M&list=PLMdIlCuMqSTkXBApOMqg2S5IeVfnkq2Mj&index=12
Hi ManiacalPuppy53, glad to see that you got over the Mongo problem by advancing to a Pi 4 🙂