The tensorflow and keras versions are 2.11.0 in both cases. Not noticing any mismatch.
I tried:
- Running it without calling Task.init, without the agent - this works
- Without calling Task.init, with the agent - doesn't work
- Calling Task.init, with the agent - doesn't work (sketch below)
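For reference, this is roughly what the last case looks like in my entry script (the project and task names here are made up):
```python
# Sketch of the "calling Task.init" case; project/task names are hypothetical
from clearml import Task

task = Task.init(project_name="examples", task_name="keras_mnist")

import keras  # this is the import that fails when the script runs under the agent
```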
AgitatedDove14 It is the same version. In fact, I am using the same image from tensorflow on docker hub to run the code a) directly, and b) with clearml. It runs directly but leads to the above error with clearml.
The issue has been resolved. Details in the same github issue https://github.com/allegroai/clearml/issues/635#issuecomment-1324870817
CostlyOstrich36 FancyTurkey50 in case this was still unresolved at your end.
To give some background - We signed up for SaaS (free tier) about 2 weeks ago. Snice then we have have been running agents on 3 on-premise systems. We have tried about 5 practice projects to familiarize ourselves with the platform, and that exhausted the 1M api usage limit. So, before I sign up for the pro version and add all of my team members, I wanted to figure out how to monitor and control API usage. What do you think could be the biggest contributor to API usage with such little use of ...
Interestingly, the example provided on clearml github works in the target agent (a docker container). It imports keras through tensorflow. Importing keras directly works locally and in the target container. However, it fails as a clearml-task.
My guess is that clearml is reimporting keras somewhere, leading to circular dependencies.
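To spell out the two import styles I mean (just a sketch of the difference, not the actual example code):
```python
# Style used by the clearml github example - works locally and under the agent
from tensorflow import keras

# Direct import used in my script - works locally and inside the container,
# but fails when the script is executed as a clearml-task
import keras
```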
Also, the traceback line `File "train_tf/keras_mnist.py", line 8, in <module> import keras` is not at line 8 in the entry script train_tf/keras_mnist.py. I wonder why this is wrong in the logs.
Hello. Sorry for bringing this thread up again. I am facing the same issue on clearml-agent version 1.4.1 and clearml version 1.8.0. Can you please point me to a github issue, FancyTurkey50, or any resolution, CostlyOstrich36?
Is clearml importing keras or any of its modules separately? I am not able to reproduce this error outside clearml.
I have posted an update on a relevant issue - https://github.com/allegroai/clearml/issues/635
Thank you so much! AgitatedDove14 It's pretty clear now.
I am not using --force-current-version, so I suppose it would be pulling the latest clearml-agent version inside the container. From the logs I can see it is installing clearml-agent version 1.4.1 in the container too.
The clearml-agent version is 1.4.1 and the clearml version is 1.8.0.
I am using the following command to run the agent:
clearml-agent daemon --detached --queue US3090 USany default --docker
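For context, this is how I read the flags in that command (US3090, USany and default are our own queue names):
```bash
# --detached : run the agent as a background daemon
# --queue US3090 USany default : serve these three queues (in the order given, as I understand it)
# --docker : run each task inside a docker container, using the default image
clearml-agent daemon --detached --queue US3090 USany default --docker
```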
Thanks AgitatedDove14. Does the sdk.development.worker.report_period_sec configuration determine the reporting period?
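If so, I assume it sits in clearml.conf roughly like this (the value is just an illustration, not what I have set):
```
# clearml.conf (sketch) - assuming the standard SDK config layout
sdk {
    development {
        worker {
            report_period_sec: 2  # how often the worker sends its periodic reports, in seconds
        }
    }
}
```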
AgitatedDove14 You are right. I confused myself by making a minor error in passing flags.
Ideally I want to use a conda environment.yml instead of a requirements file.
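Something along these lines is what I'd like the agent to consume (contents are hypothetical, just to show the shape):
```yaml
# Hypothetical environment.yml - the package pins are illustrative only
name: train_tf
dependencies:
  - python=3.10
  - tensorflow=2.11.0
  - pip
```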
Amazing. Thanks! Is there a similar setting available for conda mode?