
BTW, the code above is from the ClearML GitHub, so it's the latest
I have, but I believe I found the issue
and this is for a normal task
Regarding what AgitatedDove14 suggested, I'll try tomorrow and update
And I am logging some explicitly
AgitatedDove14 So it looks like it started to do something, but now it’s missing parts of the configuration
Missing key and secret for S3 storage access
(i’m using boto credential chain, which is off by default…)
Why isn't the config being passed to the inner step properly?
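For context, this is roughly the relevant section of clearml.conf (a sketch, assuming a recent SDK; the key/secret values are placeholders):

```
sdk {
  aws {
    s3 {
      # explicit credentials (placeholders, not real values)
      key: "<access-key>"
      secret: "<secret-key>"
      # or rely on the standard boto credential chain instead
      # (env vars, ~/.aws/credentials, instance profile);
      # note this is off by default
      use_credentials_chain: true
    }
  }
}
```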
@<1523701118159294464:profile|ExasperatedCrab78>
Hey 🙂
Any updates on this? We need to use a new version of transformers because of another bug they have in an old version, so we can't use the old transformers version anymore.
@<1523701118159294464:profile|ExasperatedCrab78>
Hey again 🙂
I believe the transformers patch hasn't been released yet, right? We're running into a problem where we need new features from transformers but can't use them because of this
Yes tnx for clarifying 😁
This is the next step failing to find the output of the last step
ValueError: Could not retrieve a local copy of artifact return_object, failed downloading
hi, yes we tried with the same result
SmugDolphin23 SuccessfulKoala55 ^
@<1523701435869433856:profile|SmugDolphin23> @<1523701087100473344:profile|SuccessfulKoala55> Yes, the second issue still persists and is currently breaking our pipeline
I'm working with the patch, and installing transformers from github
Tried your suggestion, it still goes to the file server…
I tried to write a reproducible script, but then I get errors that my ClearML task is already initialized (which also doesn't happen on 1.7.2)
You can follow updates on the issue I opened:
https://github.com/fastai/fastai/issues/3543
but I think the better solution would probably be to create a custom ClearML callback for fastai, with the best practices you think are needed…
Or try to fix the TensorBoardCallback, because right now we can't use multi-GPU because of it 😪
Using api.files_server? Not default_output?
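For anyone following along, the two settings being contrasted live in different sections of clearml.conf (a sketch; the URLs are placeholders):

```
api {
  # default destination for uploaded artifacts / debug samples
  files_server: "https://files.example.com"  # placeholder
}
sdk {
  development {
    # overrides where task outputs (models/artifacts) are uploaded,
    # e.g. an S3 bucket instead of the files server
    default_output_uri: "s3://my-bucket/clearml"  # placeholder
  }
}
```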
Saw it was merged 🙂 One down, one to go
TimelyMouse69
Thanks for the reply. This is only regarding automatic logging, where I want to disable logging altogether (avoiding the task being added to the UI)
but it makes sense, because the agent in that case is local
My use case is developing the code; I don't want to spam the UI
BTW, I would expect this to happen automatically when running "local" and "debug"
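(For reference, one way to keep local debug runs out of the UI, assuming a recent clearml SDK: offline mode records the task to a local folder instead of reporting it to the server. The script name is a placeholder.)

```
# run with offline mode enabled so nothing is sent to the server/UI
CLEARML_OFFLINE_MODE=1 python train.py
```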
and the agent is outputting sdk.development.default_output_uri =
although it's set differently in both the original config and the agent's extra config
but anyway, this still won't work because fastai's TensorBoard doesn't work with multi-GPU 😞
Hi, yes it's running with autoscaler so it's for sure in docker mode
Are you saying it should've worked? I got a "'docker' attribute doesn't exist" error. Maybe it's the version of the ClearML server?
don’t have one ATM