using api.files_server? not default_output_uri?
tried your suggestion, it still goes to the file server…
when I did this with a normal task it worked wonderfully, but with a pipeline it didn’t
Try with sdk.development.default_output_uri as well
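For reference, a minimal sketch of how both settings look in clearml.conf (the bucket URL is a placeholder):
```
api {
    # where uploads go when no explicit output_uri is set
    files_server: "s3://my-bucket/clearml"
}
sdk {
    development {
        # default destination for model snapshots and artifacts
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```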
that’s what I started with, it doesn’t work in pipelines
Is the entire pipeline running on the autoscaler?
and the agent is outputting sdk.development.default_output_uri =
although it’s different in both the original config and the agent extra config
In the UI, check under the Execution tab in the experiment view, then scroll to the bottom - you will find a field called "OUTPUT". What is in there? Select an experiment that is giving you trouble.
PricklyRaven28 at the beginning of the log the clearml-agent should print the configuration - do you have api.files_server set to the S3 bucket?
If possible, I would like to bypass the fileserver altogether and write everything to S3 (without needing every user to change their config)
There is no current way to "globally" change the default files server (I think this is part of the enterprise version, alongside vault etc.).
What you can do is use an OS environment variable to override the conf file: CLEARML_FILES_HOST="<your files server / S3 bucket URL>"
PricklyRaven28 wdyt?
CostlyOstrich36 This is for a step in the pipeline
regarding what AgitatedDove14 suggested, I’ll try tomorrow and update
AgitatedDove14 So it looks like it started to do something, but now it’s missing parts of the configuration
Missing key and secret for S3 storage access
(I’m using the boto credential chain, which is off by default…)
why isn’t the config being passed to the inner step properly?
PricklyRaven28 did you set the IAM role support in the conf?
https://github.com/allegroai/clearml/blob/0397f2b41e41325db2a191070e01b218251bc8b2/docs/clearml.conf#L86
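If I'm reading that section of the conf right, the switch would be something along these lines:
```
sdk {
    aws {
        s3 {
            # use the boto3 credential chain (env vars / IAM role)
            # instead of an explicit key/secret; defaults to false
            use_credentials_chain: true
        }
    }
}
```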
Yes it worked 🙂
I loaded my entire clearml.conf into the “extra conf” part of the autoscaler, and that worked
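roughly the relevant part of what went in there, for anyone finding this later (a sketch; the bucket URL is a placeholder):
```
api {
    files_server: "s3://my-bucket/clearml"
}
sdk {
    development {
        default_output_uri: "s3://my-bucket/clearml"
    }
    aws {
        s3 {
            # needed since I rely on the boto credential chain
            use_credentials_chain: true
        }
    }
}
```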
I had a misconception that the conf comes from the machine triggering the pipeline
Yes... I think that this might be a bit much automagic even for clearml 😄
> I had a misconception that the conf comes from the machine triggering the pipeline
Sorry, this one :)
that does happen when you create a normal local task, that’s why I was confused
but it makes sense, because the agent in that case is local
> that does happen when you create a normal local task, that's why I was confused
The parts that are not passed in both cases are the configurations from the conf file. Only the environment is passed (e.g. git, python packages, etc.). For example, if you have storage credentials in your conf file, they are not passed to a remote agent; instead, the credentials from the remote agent are used when it runs the task.
make sense?