ZanyPig66 , the 2 agents can run from the same Ubuntu account and use the same clearml.conf. If you want each to have its own configuration file, just add --config-file PATH_TO_CONF_FILE and it will use that config file instead. Makes sense?
GiganticTurtle0 Got it, makes a lot of sense!
JitteryCoyote63 ReassuredTiger98
Could you please try with the latest agent 1.5.2rc0 and let us know if it solved the issue?
ReassuredTiger98 Nice digging and Ouch...that isn't fun. Let me see how quickly I can get eyes on this 🙂
GiganticTurtle0 That is correct. ATM, you can store some things on the model (I think you can hack it by editing the network configuration and storing whatever you want there).
And yes, we are going to revisit our assumptions about the model object and add more to it. Our goal is for it to hold just enough info to be actionable (i.e., how accurate is it? How fast? How much power does it draw? How big is it? and other such information), but not to be as comprehensive as a task. Something like a lightweight task 🙂 This is one thing we are considering, though.
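If it helps, here's a minimal sketch of the "network configuration" hack mentioned above, stashing extra info in the model's design/config blob via OutputModel (all names and values here are illustrative, not a recommendation):
```python
from clearml import Task, OutputModel

# Illustrative only: stash extra metadata in the model's configuration blob
task = Task.init(project_name="examples", task_name="model metadata hack")
model = OutputModel(task=task, name="my_model")  # hypothetical model name
model.update_design(config_dict={
    "accuracy": 0.93,        # how accurate is it?
    "latency_ms": 12,        # how fast is it?
    "notes": "trained on v2 of the data",
})
```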
GiganticTurtle0 What about modifying the cleanup service to put all experiments with a specific tag into a subfolder? Then you'll have a subfolder for published experiments (or production models or whatever criteria you want to have 🙂 ). That would declutter your workspace automatically and still retain everything.
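A rough sketch of that idea (project and tag names are made up; check the Task.get_tasks / move_to_project docs for the exact arguments):
```python
from clearml import Task

# Move every experiment carrying a given tag into a dedicated sub-project
for task in Task.get_tasks(project_name="my_project", tags=["published"]):
    task.move_to_project(new_project_name="my_project/published")
```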
BTW, I suggest for new questions, just ask in the clearml-community. I'm really happy to help but I almost missed this message 😄
ExcitedFish86 You came to ClearML because it's free, you stayed because of the magic 🎊 🎉
Hey GrotesqueDog77 , so it seems like references only work on "function_kwargs" and not on other function-step parameters.
I'm trying to figure out if there's some workaround we can offer 🙂
Hi OutrageousSheep60 , The plan is to release a version that solves this later this week \ early next week.
EnviousStarfish54 VivaciousPenguin66 As for the random seed, we have a way to save it, so this should be possible and reproducible.
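If I remember correctly, it's along these lines (a sketch; project/task names are placeholders and it has to be set before Task.init):
```python
from clearml import Task

# Fix the seed ClearML applies to the run's random number generators,
# so repeated executions are reproducible (call before Task.init)
Task.set_random_seed(1337)
task = Task.init(project_name="examples", task_name="reproducible run")
```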
As for execution progress I totally agree. We do have our pipelining solution but I see it's very common to use us only for experiment tracking and use other tools for pipelining as well.
Not trying to convert anyone, but may I ask why you chose to use another tool and not the built-in pipelining feature in ClearML? Anything missing? Or did you just build the in...
EnviousStarfish54 BTW, as for absolute reproducibility, you are obviously right. If you use S3 to store the data and you change the data in S3, then we can't catch it.
Our design compresses (zips) the files and stores them as a version somewhere. If that is modified, then you are trying hard to break stuff 🙂 (although you can). This is not the most space-efficient approach when it comes to images \ videos; for these, you can save links instead, but I think that's only in the enterprise version but then,...
And as for clearml-data, I would love to have more examples but I'm not 100% sure what to focus on, as using clearml-data is a bit... simple? In my completely biased eyes. I assume you're looking for workflow examples, and would love to get some inspiration 🙂
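For instance, the basic workflow I'd show is something like this (dataset names and paths are placeholders):
```python
from clearml import Dataset

# Create a new dataset version, add files, and upload (files get zipped)
ds = Dataset.create(dataset_name="my_dataset", dataset_project="datasets")
ds.add_files(path="data/raw")
ds.upload()
ds.finalize()

# Later, a consumer fetches an immutable local copy of that version
local = Dataset.get(dataset_name="my_dataset", dataset_project="datasets")
print(local.get_local_copy())
```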
Hi anton, so the self-hosted ClearML provides all the features that you get from the hosted version, so you're not missing out on anything. You can either deploy it with docker-compose or on a K8s cluster with Helm charts.
Does that answer your question?
KindBlackbird59 I think you are looking for something like a "git repository" (which, IIRC, is how dvc sees "projects" or models),
that gives you a clear lineage (This model came first, then I got this model with this code and this data).
The way ClearML works is slightly different: each "repo" is shown as a project, which is flat. The way we envision users marking things like "model V2" is by adding tags.
The reason behind this design is that git, while it has clear lineage, is harder to wo...
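For completeness, tagging an experiment as "model V2" is a one-liner (project and tag names here are illustrative):
```python
from clearml import Task

# Mark the current experiment as the "V2" model with a tag
task = Task.init(project_name="person_detector", task_name="train")
task.add_tags(["model-v2"])
```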
You can use pre \ post step callbacks.
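Something along these lines (a sketch; the callback signatures follow PipelineController.add_step, everything else here is made up):
```python
from clearml import PipelineController

def pre_cb(pipeline, node, params):
    # Runs just before the step is launched; return False to skip the step
    print(f"about to launch {node.name} with {params}")
    return True

def post_cb(pipeline, node):
    # Runs right after the step completes
    print(f"{node.name} finished")

pipe = PipelineController(name="demo", project="examples", version="0.1")
pipe.add_step(
    name="step1",
    base_task_project="examples",
    base_task_name="my base task",
    pre_execute_callback=pre_cb,
    post_execute_callback=post_cb,
)
```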
Did you try with function_kwargs?
Yeah, with pleasure 🙂
Try this, I tested it and it works:
`docker=pipe._parse_step_ref("${pipeline.url}")`
It's hack-ish but it should work. I'll try and get a fix in one of the upcoming SDK releases that supports parsing references for parameters other than kwargs
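In context, the workaround would look roughly like this (step names, kwargs, and the docker image are illustrative; _parse_step_ref is a private API, so it may change):
```python
from clearml import PipelineController

def train_func(epochs):
    pass  # placeholder step body

pipe = PipelineController(name="demo", project="examples", version="0.1")
pipe.add_parameter(name="url", default="nvidia/cuda:11.8.0-base-ubuntu22.04")

# Resolve the pipeline-parameter reference manually (private API, may change),
# then pass the resolved value to a parameter that doesn't support references
docker_image = pipe._parse_step_ref("${pipeline.url}")
pipe.add_function_step(
    name="train",
    function=train_func,
    function_kwargs={"epochs": 10},
    docker=docker_image,
)
```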
Oh!!! Sorry 🙂
So...basically it's none of them.
All of these are hosted tiers. The self-hosted one is our open source version, which you can find at https://github.com/allegroai/clearml-server
It has an explanation on how to install it and some of the options available for you.
Looking at our pricing page, I can see how it's not trivial to get from there to the github page...I'll try to improve that! 😄
Hi EnviousStarfish54 If you don't want to send info to the server, I suggest setting an environment variable; this way, as long as the machine has this env var set, it won't send anything to the server
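If I recall correctly, the switch is offline mode, e.g. (the project/task names are placeholders):
```python
from clearml import Task

# Equivalent to exporting CLEARML_OFFLINE_MODE=1 on the machine:
# everything is recorded locally and nothing is sent to the server
Task.set_offline(offline_mode=True)
task = Task.init(project_name="examples", task_name="local only")
```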
VivaciousPenguin66 This is very true! We are trying to explain the benefits of this method. Some people like it and some people like the flexibility. We do have our philosophy in mind when we create "best practices" and, obviously, features for ClearML, but ultimately people should do what makes them the most productive!
If we are getting philosophical, I think it's the state of the industry and as it progresses, these standard methods would become more prominent.
also, to add to what you wrote,...
I think the best model name is person_detector_lr0.001_batchsz32_accuracy0.63.pkl 😄
EnviousStarfish54 VivaciousPenguin66 Another question, if we're in a sharing mood 😉 What about a video \ audio session with one of our experts, where you present a problem you're having (say, large artifact sizes) and they try to help you, maybe even with some example code \ a code skeleton? Would something like that be of interest? Would you spend some time in such a monthly session?