ContemplativeGoat37, Hi 🙂
You can do the following configuration in your ~/clearml.conf:
`sdk.development.default_output_uri: "s3://my_bucket/"`
Hi @<1523701083040387072:profile|UnevenDolphin73>, looping in @<1523701435869433856:profile|SmugDolphin23> & @<1523701087100473344:profile|SuccessfulKoala55> for visibility 🙂
AbruptWorm50, you can send it to me. Also, can you please answer the following two questions: When were they registered? Were you able to view them before?
Also, you mention plots but in the screenshot you show debug samples. Can I assume you're talking about debug samples?
The metadata would relate to the entire dataset.
For your use case I think what's relevant is HyperDatasets
WackyRabbit7 I don't believe there is currently a 'children' section for a task. You could try keeping track of the child tasks yourself so you can access them later.
One option is add_pipeline_tags(True); this should mark all the child tasks with a tag of the parent task. See the sketch below.
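A minimal sketch, assuming a PipelineController-based pipeline (the pipeline, project, and step names below are made up):
` from clearml import PipelineController

# add_pipeline_tags=True tags every child task with the parent pipeline's tag,
# which makes the children easy to find in the UI later
pipe = PipelineController(
    name="my-pipeline",    # made-up name
    project="examples",    # made-up project
    version="1.0.0",
    add_pipeline_tags=True,
)
pipe.add_step(
    name="step_one",                # hypothetical step
    base_task_project="examples",
    base_task_name="base task",
)
pipe.start() `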
That's weird. Did you do docker-compose down and up properly?
Yes & Yes.
`task.upload_artifact('test_artifact', artifact_object='foobar')`
You can save a string; however, please note that in the end it will be saved as a file and not a pythonic object. If you want to keep your object, you can pickle it 🙂
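A minimal sketch of round-tripping a Python object as an artifact (the project, task, and artifact names are made up):
` from clearml import Task

task = Task.init(project_name="examples", task_name="artifact demo")  # made-up names

my_obj = {"foo": "bar", "numbers": [1, 2, 3]}

# ClearML serializes supported Python objects to a file behind the scenes;
# wait_on_upload=True blocks until the upload completes
task.upload_artifact("my_object", artifact_object=my_obj, wait_on_upload=True)

# Later, from any script, fetch the task and deserialize the artifact back:
source_task = Task.get_task(task_id=task.id)
restored = source_task.artifacts["my_object"].get()  # returns the Python object `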
That sounds like a good idea! Can you please open a GitHub issue to track this?
For example, artifacts or debug samples
Also, I think there may be a bug with the CPU mode: I tried to run tests with an instance without a GPU, marked the option "Run in CPU mode (no gpus)", and I saw in the experiment logs that it's trying to run the docker with the "--gpus all" option, and it failed right after the execution.
Which instance type did you use?
You can mix and match various buckets in your ~/clearml.conf
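A minimal sketch of what that can look like in ~/clearml.conf (bucket names and credentials below are placeholders):
` sdk {
    aws {
        s3 {
            # default credentials, used for any bucket not listed below
            key: "DEFAULT_ACCESS_KEY"
            secret: "DEFAULT_SECRET"
            credentials: [
                {
                    # per-bucket credentials override the defaults
                    bucket: "bucket-a"
                    key: "ACCESS_KEY_A"
                    secret: "SECRET_A"
                },
                {
                    bucket: "bucket-b"
                    key: "ACCESS_KEY_B"
                    secret: "SECRET_B"
                },
            ]
        }
    }
} `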
` Status: Downloaded newer image for nvidia/cuda:10.2-runtime-ubuntu18.04
1657737108941 dynamic_aws:cpu_services:n1-standard-1:4834718519308496943 DEBUG docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
time="2022-07-13T18:31:45Z" level=error msg="error waiting for container: context canceled" `
As can be seen here 🙂
Do you mean if they are shared between steps or if each step creates a duplicate?
Hi @<1736919317200506880:profile|NastyStarfish19> , the default behaviour of the agent is to install everything in the 'installed packages' section of the execution tab. You can also specify packages manually using Task.set_packages (see the sketch below).
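A minimal sketch of setting packages manually (project/task names and package versions are illustrative):
` from clearml import Task

task = Task.init(project_name="examples", task_name="manual packages")  # made-up names

# Replace the auto-detected "installed packages" with an explicit list;
# a path to a requirements.txt file should also be accepted
task.set_packages(["torch==2.1.0", "pandas>=1.5"]) `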
What's the docker image that you're using?
Maybe even make a PR out of it if you want 🙂
How are you launching the agents?
RoughTiger69 So basically (if I follow your example), the question is whether ClearML "knows" that "Task B" is a clone of "Task A"?
And if the loaded Dataset Y is somehow registered on Task X?
Is that correct?
From the screenshots provided, you ticked 'CPU' mode, and I think the machine you're using, n1-standard-1, is a CPU-only machine, if I'm not mistaken.
I think that's what's there. In the Scale & Enterprise version, ClearML usually works together with customers to provide a glue layer for K8s or even SLURM.
Hi @<1581454875005292544:profile|SuccessfulOtter28> , by 'using the most metrics' do you mean how much metric storage space it takes?
Archiving doesn't remove anything, but once archived, you can delete experiments to free up space
What is the combination of --storage and configuration that worked in the end?
Hi @<1799974757064511488:profile|ResponsivePeacock56> , in that case I think you would need to actually migrate the files from the files server to S3 and then also change the links logged in MongoDB that are associated with the artifacts. See the rough sketch below.
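Purely as a hypothetical sketch of the second part; the database/collection names and document layout below are assumptions and vary between ClearML server versions, so back up MongoDB before attempting anything like this:
` import re
from pymongo import MongoClient

OLD_PREFIX = "http://files.example.com:8081/"  # made-up files-server URL
NEW_PREFIX = "s3://my-bucket/clearml/"         # made-up S3 destination

client = MongoClient("mongodb://localhost:27017")
tasks = client["backend"]["task"]  # assumed database/collection names

# Rewrite artifact URIs that point at the old files server
for doc in tasks.find({"execution.artifacts.uri": {"$regex": "^" + re.escape(OLD_PREFIX)}}):
    artifacts = doc["execution"]["artifacts"]  # assumed to be a list of dicts
    for art in artifacts:
        uri = art.get("uri", "")
        if uri.startswith(OLD_PREFIX):
            art["uri"] = NEW_PREFIX + uri[len(OLD_PREFIX):]
    tasks.update_one({"_id": doc["_id"]}, {"$set": {"execution.artifacts": artifacts}}) `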
Do you mean see the datasets in the UI?
@<1787653555927126016:profile|SoggyDuck67> , can you try setting the binary to 3.11 instead of 3.10?
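Assuming this refers to the agent's Python interpreter, a sketch of what that could look like in the agent section of clearml.conf (the path is an assumption; adjust for your system):
` agent {
    # point the agent at the Python interpreter it should use to build task environments
    python_binary: "/usr/bin/python3.11"  # assumed path; adjust to your system
} `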