We also wanted this; we preferred to create a Docker image with everything we need, and let the pipeline steps use that Docker image
That way you don't rely on ClearML capturing the local env, and you can control what exists in the env
not sure about this, we really like being in control of reproducibility and not depending on the invoking machine… maybe that's not what you intend
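For anyone reading later, the Docker-image approach described above can be sketched as a clearml.conf fragment on the agent machine (this is a sketch; the image name is a placeholder for whatever image you build and push with your dependencies):

```
# clearml.conf on the agent machine -- minimal sketch.
# "my-registry/my-train-env:latest" is a placeholder image containing
# every dependency the pipeline steps need.
agent {
    default_docker {
        # tasks executed by this agent run inside this container,
        # so the environment no longer depends on the invoking machine
        image: "my-registry/my-train-env:latest"
    }
}
```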
@<1523701118159294464:profile|ExasperatedCrab78>
Hey 🙂
Any updates on this? We need to use a new version of transformers because of another bug they have in an old version, so we can't use the old transformers version anymore.
Hey 🙂 Thanks for the update!
what I'm missing is the point where you report to ClearML between the cast and casting back 🤔
@<1523701118159294464:profile|ExasperatedCrab78> Sorry, only saw this now,
Thanks for checking it!
Glad to see you found the issue; hope you find a way to fix the second one. For now we will continue using the previous version.
Would be glad if you can post when everything is fixed so we can upgrade our version.
@<1523701118159294464:profile|ExasperatedCrab78>
Ok, bummer to hear that it won't be included automatically in the package.
I am now experiencing a bug with the patch, not sure it's to blame... but I'm unable to save models in the pipeline. Checking if it's related.
looks like it's working 🙂 tnx
I'm working with the patch, and installing transformers from GitHub
when I did this with a normal task it worked wonderfully; with a pipeline it didn't
tried your suggestion, still going to the file server…
and the agent is outputting an empty sdk.development.default_output_uri =
although it's set differently in both the original config and the agent's extra config
Artifacts; nothing is reaching S3
regarding what AgitatedDove14 suggested, I'll try tomorrow and update
CostlyOstrich36 This is for a step in the pipeline
and this is for a normal task
using api.files_server? not default_output_uri?
that's what I started with; it doesn't work in pipelines
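For reference, a sketch of the two clearml.conf settings being compared here (hedged; the bucket path is a placeholder):

```
# clearml.conf -- minimal sketch; "s3://my-bucket/clearml" is a placeholder
api {
    # default destination for uploaded files / debug samples
    files_server: "s3://my-bucket/clearml"
}
sdk {
    development {
        # default destination for models and artifacts of new tasks
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```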
And I am logging some explicitly
AgitatedDove14 So it looks like it started to do something, but now it's missing parts of the configuration
Missing key and secret for S3 storage access
(I'm using the boto credential chain, which is off by default…)
why isn't the config being passed to the inner step properly?
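In case it helps anyone hitting the same "Missing key and secret" error: to my understanding, the boto credential chain mentioned above is enabled with this clearml.conf switch (sketch only):

```
# clearml.conf -- sketch; use_credentials_chain defaults to false,
# so without it the SDK expects an explicit key/secret pair
sdk {
    aws {
        s3 {
            # let boto3 resolve credentials on its own
            # (env vars, shared credentials file, IAM role, ...)
            use_credentials_chain: true
        }
    }
}
```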
Yes, it worked 🙂
I loaded my entire clearml.conf into the "extra conf" part of the autoscaler; that worked
I had a misconception that the conf comes from the machine triggering the pipeline
I have, but I believe I found the issue
but it makes sense, because the agent in that case is local
that does happen when you create a normal local task; that's why I was confused
Yes, tnx for clarifying 🙂
TimelyMouse69
Thanks for the reply. This is only regarding automatic logging, where I want to disable logging altogether (avoiding the task being added to the UI)
Yes, but it's more complex because I'm using a pipeline… where I don't explicitly call Task.init()
also, I don't need to change it during execution; I want it for a specific run
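One hedged possibility for the "specific run, no Task.init() call" case (I'm not certain it covers every task a pipeline spawns): ClearML's offline mode can be toggled per run with an environment variable, so nothing is reported to the server and no task appears in the UI:

```shell
# Sketch: enable ClearML offline mode for a single run only.
# With CLEARML_OFFLINE_MODE set, the SDK records everything locally
# instead of reporting to the server.
export CLEARML_OFFLINE_MODE=1
echo "CLEARML_OFFLINE_MODE=$CLEARML_OFFLINE_MODE"
# ...then launch the pipeline from this same shell
```

Since it's just an env var, it applies to that one invocation and doesn't require changing any code.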