
yes, we saved those in the hyperparameters
Ok thanks! "Show hidden projects" actually allows me to see all of the missing datasets, but they are all in the same project, so why do I need to enable it?
(I tested this: switched it off and the datasets disappear, switched it on and they appear)
ok, a great change, but why are they empty for me? 🙂
ok nice, so I updated to 1.6, and am now getting the following error when creating a dataset project in the CLI using `clearml-data create`:

```
$ clearml-data create --project "PROJECT" --name "NAME"
clearml-data - Dataset Management & Versioning CLI
Creating a new dataset:
Error: 'str' object has no attribute '__radd__'
```
ok great, I think I'll stick with 1.5 for now and wait for the official release, no rush, thanks!
ok so:
1. You recommend just saving the dataset ID as part of the task configuration?
2. I think I was a bit unclear: my question is how I should report them from the code. They are not caught automatically because they are custom parameters I calculate myself, not through any framework, so I wonder if I should report them as artifacts, or maybe as scalars. My issue with scalars is that I only have one value of each type, and the API seems oriented toward a series of results of the same type.
ahh just use one of the community ones?
Whatever is simpler. We are researching medical data, so I can't use your hosted server; I need it inside our VPC in AWS. I was thinking either an EC2 instance (with or without Docker) or AWS ECS.
and yes, I meant the AMI. Your docs recommend using the old one until you post a new one, but the old one is no longer available.
Hi, I don't think so, because that would mean I have to send the credentials themselves from my users to the remote machine on every run, which is insecure. If you had some temporary token I could send, or a way to send just the username and keep the sensitive credentials separate from it, that would be ideal.
@<1523701070390366208:profile|CostlyOstrich36> ?
Is there any way I can catch that token so I can use it directly? I looked through the documentation and found no way to access it.
Ok, it works, but I don't see the labels in the model output. Is there a way for me to use OutputModel to update those labels?
When creating new datasets by combining and modifying existing datasets programmatically in Python, if I add the previous datasets as "parents" I can see a nice flow graph in ClearML of how each child dataset was generated. However, this means my datasets contain the content of their parents, which is quite a lot. Is there a way for me to have that flow graph without using the parent datasets option?
so I can create an image with the model. What's the recommended way of doing that?
I would like to recreate an experiment after saving its configurations. To do that, I need to load those configurations in another notebook. Right now the only way I've managed to do it is by saving the configurations as an artifact and loading that artifact, but that is less convenient than loading a configuration.
Thanks, found it. But yeah, it would be a lot more convenient if you could add it to the regular right-click menu, the same way it is with models and tasks.