
What about cloning and setting "last commit in branch"?
These are excellent questions. While we are working towards including more of our users' stack within the ClearML solution, it will still be some time until we unveil "the ClearML approach" to these. From what I've seen within our community, deployment can be anything from a simple launch of a docker image built with 'clearml-agent build' to automated training pipelines.
Re triggering - this is why we have clearml-task 🙂
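For reference, a typical clearml-task invocation looks roughly like this - the project, repo, and queue names below are placeholders, not anything from this thread:

```shell
# Launch an existing script as a remotely-executed task
# (all names/URLs here are illustrative placeholders)
clearml-task \
  --project "my-project" \
  --name "retrain-nightly" \
  --repo https://github.com/user/repo.git \
  --script train.py \
  --queue default
```

The task is created on the server and picked up by whichever agent is listening on that queue, which is what makes it handy for re-triggering runs.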
same name == same path, assuming no upload is taking place? *just making sure
Hi, which Trains doc version are you looking at? Is it the latest?
Welcome! The machines are the ones you install and run the trains-agent daemon on, and creating the queues can be done via the trains-agent cli or the webapp UI
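To make that concrete, here is roughly what running the daemon looks like on a worker machine - queue name and GPU index are just examples:

```shell
# Turn this machine into a worker pulling jobs from the "default" queue
trains-agent daemon --queue default

# Example: pin the worker to GPU 0 and execute tasks inside docker
trains-agent daemon --queue default --gpus 0 --docker
```

Queues that don't exist yet can also be created from the webapp UI before pointing a daemon at them.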
That's interesting, how would you select experiments to be viewed by the dashboard?
There are several ways of doing what you need, but none of them are as 'magical' as the solutions we pride ourselves on. For that, we would need user input like yours in order to find the commonalities.
This is definitely getting into an example soon! Props for building something cool
Hi Dan, please take a look at this answer, the webapp interface mimics this. Does this click for you?
Not really relevant, since it can be seen from your task 🙂 but it would be interesting to find out whether trains made something much slower, and if so - how
Hi! It looks like all the processes are calling torch.save, so it's probably reflecting what Lightning did behind the curtain. Definitely not a feature, though. Do you mind reporting this to our GitHub repo? Also, are you getting duplicate experiments as well?
https://github.com/allegroai/trains/issues/193 for future reference (I will update later)
The script runs and tries to register 4 models; each one is found at exactly the same path, but the size/timestamp differs. It will then update the old 4 models with the new details and erase all the other fields
isn't it better if it just updates the timestamp on the old models?
so what I am describing is exactly this - once you try to create an output model from the same task, if the name already exists - do not create a new model, just update the timestamp on the old one
wait, I thought this is without upload
what a turn of events 🙂 so let's summarize again:
upkeep script - for each task, find out if there are several models created by it with the same name. If so, either make some log so that devops can erase the files, or DESTRUCTIVELY delete all the models from the trains-server that are in DRAFT mode, except the last one
if you want something that could work in either case, then maybe the second option is better
then your devops can delete the data and then delete the models pointing to that data
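A rough sketch of that upkeep logic in plain Python - the model records here are stand-in dicts, and the actual lookup/delete calls against the trains-server API would replace them:

```python
from collections import defaultdict

def find_stale_draft_models(models):
    """Given model records (dicts with 'task', 'name', 'status', 'created'),
    return the DRAFT models that are safe to clean up: for every
    (task, name) group, everything except the most recently created model.
    """
    groups = defaultdict(list)
    for m in models:
        groups[(m["task"], m["name"])].append(m)

    stale = []
    for group in groups.values():
        group.sort(key=lambda m: m["created"])
        # keep the newest entry; flag the older ones that are still DRAFT
        stale.extend(m for m in group[:-1] if m["status"] == "DRAFT")
    return stale
```

A devops script could then either just log the result (option one above) or go on to delete those models and the files they point to (option two).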
ClearML Free or self-hosted?
Hi TrickySheep9 , ClearML Evangelist here, this question is the one I live for 🙂 are you specifically asking "how do people usually do it with ClearML" or really the "general" answer?
MiniatureCrocodile39 let's get that fixed 💪 could you post the link here?
Btw the reason that they initialize with barely discernible color lies within the hash function that encodes the str content to a color. I.e., this is actually a feature
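The general technique is easy to sketch - this is not the actual webapp code, just a minimal example of hashing a string into a deterministic color:

```python
import hashlib

def name_to_color(name: str) -> str:
    """Map a string deterministically to an RGB hex color by hashing it.
    The same name always yields the same color, but nothing guarantees the
    result is vivid - which is why some series start out barely discernible.
    """
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    return "#" + digest[:6]
```

Because the mapping is pure, colors stay stable across page reloads and machines, at the cost of occasionally landing on a faint shade.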
It depends on what you mean by deployment, and what kind of inference you plan to do (i.e. real-time vs. batched, etc.)
But overall currently serving itself is not handled by the open source offering, mainly because there are so many variables and frameworks to consider.
Can you share some more details about the capabilities you are looking for? Some essentials like staging and model versioning are handled very well...
Rather unfortunate that such a vendor would risk such an inaccurate comparison...