
i'm still trying to understand why it was needed in our case. i have the NVIDIA GPU operator installed with mostly the default values on our on-prem cluster. i found there is an option in the operator, CONTAINERD_SET_AS_DEFAULT, which, when enabled, makes that runtime the default for all pods. we didn't enable that option; maybe if we had enabled it, it would have worked.
i can pass any crazy value i want... it doesn't matter. however, if i use --output_uri=s3://blabla then at least i get an error that it cannot use that bucket
for comparison: this is when i use --output-uri
so it seems that it takes output_uri from the clearml command line but not from the Task.init inside the script
the model has this information ... the /tmp paths look like local URIs, suggesting that it doesn't even try to upload them
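for reference, a rough way to check this from the SDK (the task id is a placeholder; this is just a sketch of how i'm inspecting it, not output from the run):

```python
from clearml import Task

# placeholder id -- the training task in question
task = Task.get_task(task_id="<task-id>")

# where the task thinks model snapshots should go
print("output_uri:", task.output_uri)

# URLs of the models registered on the task; local /tmp paths here
# would mean the weights were never uploaded to the fileserver
for model in task.models["output"]:
    print(model.name, "->", model.url)
```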
this is now in my python script:
i set reuse_last_task_id to false to force creation of a new task in all cases
it was to test whether reuse_last_task_id had any effect (i have the impression it doesn't)
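not my exact script, but the relevant Task.init call is along these lines (the project/task names and the URI mirror my clearml-task command and are placeholders here):

```python
from clearml import Task

# illustrative sketch only -- names and URI mirror my clearml-task invocation
task = Task.init(
    project_name="examples",
    task_name="hla",
    output_uri="http://clearml-fileserver:8081",  # where model snapshots should be uploaded
    reuse_last_task_id=False,  # force a new task instead of reusing the previous one
)
```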
well it doesn't fail. but whatever i set gets ignored
and when i try to use --output-uri i can't pass true, because obviously i can't pass a boolean on the command line, only strings
well, it made a difference (the generated Task.init() code is not added anymore) but it still didn't take my output_uri
and also, the tutorials that do something with Task.init always talk about running locally, not on the agent
this is the output of the training. it doesn't try to upload (note that this is my second try, so it already found a model with that name, but it didn't work on my first try either)
and ... clearml-task takes --project and --name arguments that are mandatory, so these are never taken from Task.init
this seems to be confirmed by the documentation: "If you have not changed the default runtime on your GPU nodes, you must explicitly request the NVIDIA runtime by setting runtimeClassName: nvidia in the Pod spec"
(same for environment variable)
don't know... but i see, for instance, that when using clearml-task i can put any (even nonsensical) values in Task.init
also, the inability to see workers/queues gives the same error but on different fields, so i guess i must be missing something bigger than just a misconfigured index?
it's as if the line is not there
this is the script shown by the ClearML UI, so the Task.init call looks right
hello, i'm still not able to save ClearML models. They are generated and registered okay, but they are not on the fileserver. i now have Task.init(output_uri=True) and i also pass --skip-task-init on the clearml-task command line so that it doesn't overwrite the Task.init call
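for reference, this is roughly what the script does now (the model-saving part is a placeholder sketch, not my actual training code):

```python
from clearml import Task

# output_uri=True should make ClearML upload model snapshots to the default files server
task = Task.init(
    project_name="examples",
    task_name="hla",
    output_uri=True,
    reuse_last_task_id=False,
)

# ... training happens here; saving the weights is what should trigger the upload,
# e.g. with Keras: model.save("model.keras")
```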
this is my cmdline: clearml-task --name hla --requirements requirements.txt --project examples --output-uri http://clearml-fileserver:8081 --queue aws-instances --script keras_tensorboard.py
AgitatedDove14 your trick seems to work (i had to change the URL to reflect the fact that i run on k8s)
it seems that whatever i pass to Task.init is ignored
hi @CooperativeKitten94, did i convince you with my argument? do you think having runtimeClass configurable is worth it?