CostlyOstrich36 This is for a step in the pipeline
PricklyRaven28 at the beginning of the log, the clearml agent should print the configuration. Do you have api.files_server set to the S3 bucket?
PricklyRaven28 did you set the iam role support in the conf?
https://github.com/allegroai/clearml/blob/0397f2b41e41325db2a191070e01b218251bc8b2/docs/clearml.conf#L86
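For reference, enabling that linked setting looks something like this in clearml.conf (a minimal sketch):
```
sdk {
    aws {
        s3 {
            # use the boto3 credentials chain (IAM role / env vars)
            # instead of an explicit key/secret; this is off by default
            use_credentials_chain: true
        }
    }
}
```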
AgitatedDove14 So it looks like it started to do something, but now it’s missing parts of the configuration
Missing key and secret for S3 storage access
(i’m using boto credential chain, which is off by default…)
why isn’t the config being passed to the inner step properly?
and the agent is outputting sdk.development.default_output_uri = (empty), although it’s set to something different in both the original config and the agent extra config
Try with sdk.development.default_output_uri as well
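i.e. something like this (a sketch; the bucket path is a placeholder):
```
sdk {
    development {
        # artifacts and models will be uploaded here by default
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```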
tried your suggestion, it still went to the file server…
Is the entire pipeline running on the autoscaler?
i had a misconception that the conf comes from the machine triggering the pipeline
Sorry, this one :)
using api.files_server, not default_output?
but it makes sense, because the agent in that case is local
that does happen when you create a normal local task, that's why i was confused
The parts that are not passed in both cases are the configurations from the conf file. Only the environment is passed (e.g. git, python packages, etc.). For example, if you have storage credentials in your conf file, they are not passed to a remote agent; instead, the credentials from the remote agent are used when it runs the task.
make sense?
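To make that concrete, a sketch with placeholder values: credentials like these live only in the local conf file and never travel with the task:
```
sdk {
    aws {
        s3 {
            # these stay on the machine that owns this conf file;
            # a remote agent uses the key/secret from its own conf
            key: "PLACEHOLDER_KEY"
            secret: "PLACEHOLDER_SECRET"
        }
    }
}
```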
Yes... I think that this might be a bit much automagic even for clearml 😄
In the UI, check under the Execution tab in the experiment view, then scroll to the bottom. You will have a field called "OUTPUT". What is in there? Select an experiment that is giving you trouble.
Yes it worked 🙂
I loaded my entire clearml.conf in the “extra conf” part of the autoscaler, that worked
when i did this with a normal task it worked wonderfully, with pipeline it didn’t
If possible, I would like to prevent the fileserver altogether and write everything to S3 (without needing every user to change their config)
There is no current way to "globally" change the default files server (I think this is part of the enterprise version, alongside vault etc.).
What you can do is use an OS environment variable to override the conf file: CLEARML_FILES_HOST="<your S3 bucket URI>"
PricklyRaven28 wdyt?
that’s what i started with, doesn’t work in pipelines
regarding what AgitatedDove14 suggested, i’ll try tomorrow and update