EnviousStarfish54 BTW, as for absolute reproducibility, you are obviously right. If you use S3 to store the data and you change the data in S3, then we can't catch it.
Our design compresses (zips) the files and stores them as a version somewhere. If that is modified then you are trying hard to break stuff 🙂 (although you can). This is not the most efficient space-wise when it comes to images \ videos; for those you can save links instead, but I think it's only in the enterprise version but then,...
Once you integrate ClearML it'll automatically report resource utilization (GPU \ CPU \ Memory \ Network \ Disk IO)
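As a minimal sketch (the project and task names are placeholders, and this needs a configured ClearML server), a single Task.init call is enough for the automatic resource reporting to kick in:

```python
from clearml import Task

# Initializing a task starts the built-in resource monitor, which
# periodically reports GPU / CPU / memory / network / disk IO to the
# server in the background -- no extra reporting code needed.
task = Task.init(project_name="examples", task_name="resource-monitoring-demo")

# ... your training code runs here; utilization is logged automatically ...
```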
Hi SmugTurtle78 , sorry for answering in slow-mo 😉 I'm not 100% sure I got the question... you want a global security group and network for the entire autoscaler instead of per-instance-type ones?
The ClearML team appreciates bitching anywhere you feel like it (especially the memes section).
In the absence of a dedicated UI \ UX channel I suggest just writing here. I can promise you the people whose responsibility it is to fix \ improve the UI are roaming here and will see the request 😄
When you create a new dataset you can define one or more parents. By default a dataset has no parents.
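A minimal sketch of that (the dataset names and project here are made up, and this assumes a configured ClearML server):

```python
from clearml import Dataset

# A dataset with no parents (the default)
base = Dataset.create(dataset_name="raw-images", dataset_project="demo")

# A child dataset that builds on the base version;
# parent_datasets accepts one or more dataset IDs (or Dataset objects)
child = Dataset.create(
    dataset_name="augmented-images",
    dataset_project="demo",
    parent_datasets=[base.id],
)
```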
You mean adding some list on top of all experiments, with tags and their IDs?
OutrageousSheep60 The python package is in testing. Hopefully it will be out Sunday \ Monday :)
MelancholyElk85 , yes, nesting pipelines is possible. As for flattening it afterwards, maybe AgitatedDove14 knows? I'm pretty sure it can't be done though
JitteryCoyote63 ReassuredTiger98
Could you please try with the latest agent 1.5.2rc0 and let us know if it solved the issue?
EcstaticBaldeagle77 , actually these scalars and configurations are not saved locally to a file, but they can be retrieved and saved manually. If you want the metrics you can call task.get_reported_scalars() , and if you want a configuration object, call task.get_configuration_object() with the configuration section name as it appears in the web application
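A quick sketch of what that could look like (the task ID, section name, and output file names are placeholders):

```python
import json

from clearml import Task

# Fetch an existing task by ID (placeholder ID)
task = Task.get_task(task_id="<task-id>")

# All reported scalars, keyed by graph title and series
scalars = task.get_reported_scalars()
with open("scalars.json", "w") as f:
    json.dump(scalars, f, indent=2)

# A configuration object, by the section name shown in the web UI
config_text = task.get_configuration_object(name="General")
with open("config.txt", "w") as f:
    f.write(config_text or "")
```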
Hi Itay! I know JitteryCoyote63 played with it a bit, I'm not sure what his ultimate conclusion was 🙂
We are now working on adding such a feature to ClearML-Pro (which is soon to be released), I suggest staying tuned 😄
Hi BattySeahorse19 !
We have made a comparison, https://clear.ml/blog/stacking-up-against-the-competition/ but as this industry moves at lightning speed, it is probably already outdated 🙂
I am not closely following MLflow so some of the features I'll discuss below might be outdated, but the gist of it is this:
ClearML has an orchestration part, data management, serving, pipelines and hyperparameter optimization while MLflow doesn't. ClearML offers a hosted SaaS while MLflow needs to be se...
ReassuredTiger98 that's great to hear 🙂
Hi Doron, as a matter of fact, yup 🙂 The next version will include a similar feature. The plan is to have it released by mid-December, so stay tuned 😄
instead of system_tags use:
ReassuredTiger98 Nice digging, and ouch... that isn't fun. Let me see how quickly I can get eyes on this 🙂
ReassuredTiger98 , PyTorch installations are a sore point 🙂 Can you maybe try to specify a specific build and see if it works?
Hey GrotesqueDog77
A few things. First, you can call _logger.flush() , which should solve the issue you're seeing (we are working on adding auto-flushing when tasks end 🙂 )
Second, I ran this code and it works for me without a sleep, does it also work for you?
```python
from clearml import PipelineController


def process_data(inputs):
    import pandas as pd
    from clearml import PipelineController

    data = {'Name': ['Tom', 'nick', 'krish', 'jack'],
            'Age': [20, 21, 19, 18]}
    _logger...
```
Hi Jevgeni! September is always a slow month in Israel as it's holiday season 🙂 So progress is slower than usual and we didn't have an update!
Next week will be the next community talk and publishing of the next version of the roadmap, a separate message will follow
Hmm, I'm not 100% sure I follow. You have multiple models doing predictions. Is there a single data source that feeds all of them and they run in parallel, or is one's output another's input so they run serially?
Hi TenseOstrich47 Yup 🙂 You can check our scheduler module:
https://github.com/allegroai/clearml/tree/master/examples/scheduler
It supports time-based events as well as triggers on external events
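For the time-based case, a minimal sketch using the TaskScheduler from clearml.automation (the task ID and queue names are placeholders, and this needs a configured ClearML server):

```python
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()

# Re-launch a clone of an existing task every day at 07:30,
# enqueuing it on the "default" queue (placeholder values)
scheduler.add_task(
    schedule_task_id="<task-id>",
    queue="default",
    hour=7,
    minute=30,
    day=1,
)

# Runs the scheduling loop in this process (blocking)
scheduler.start()
```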
That's how I see the scalar comparison; no idea which is the "good" one and which is the "bad" one
Thanks! 😄 As I've mentioned above, these features were chosen because of user feedback, so keep it up and thanks again!
Hi Anton, the self-hosted ClearML provides all the features that you get from the hosted version, so you're not missing out on anything. You can either deploy it with docker-compose or on a K8s cluster with Helm charts.
Hi JumpyPig73 ,
So when you upgrade to the SaaS Pro tier, you get a 20% increase in your artifacts storage, metric events and API calls, so you won't be charged for them until you reach the new, increased quota.
In addition to that, you pay 15 USD per user, starting from the first user.
Does this make sense?
PyTorch wheels are always a bit of a problem, and AFAIK this error means there isn't a wheel matching the CUDA version specified \ installed on the machine. You can try pinning PyTorch to an exact version, which usually solves the issue
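For example (the version and CUDA tag here are just illustrative), pinning an exact CUDA build in the task's requirements file:

```
# requirements.txt -- pin an exact PyTorch build (illustrative versions)
--extra-index-url https://download.pytorch.org/whl/cu113
torch==1.12.1+cu113
```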
Hi MysteriousSeahorse54 How are you saving the models? torch.save() ? If you're not specifying output_uri=True it makes sense that you can't download them, as they are local files 🙂
And when you pass output_uri=True , does no model appear in the UI at all?
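For reference, a minimal sketch of enabling the model upload (project \ task names are placeholders, and this assumes a configured ClearML server plus torch installed):

```python
import torch
from clearml import Task

# output_uri=True uploads saved models to the server's default file storage;
# you can also pass an S3 / GCS bucket URI instead of True
task = Task.init(
    project_name="examples",
    task_name="model-upload-demo",
    output_uri=True,
)

model = torch.nn.Linear(4, 2)
# torch.save() is intercepted automatically, so the checkpoint is
# registered and uploaded as an output model of the task
torch.save(model.state_dict(), "model.pt")
```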