anyway, my ultimate goal is to create templates for other tasks... Is that possible in any other way through the CLI?
after you create the pipeline object itself, can you get Task.current_task()?
AgitatedDove14 no I can't... Just checked this. This is a huge problem for us; it used to work before, it just stopped working, and I can't figure out why.
It's a problem for us because we made it a methodology: running some tasks under a pipeline task and saving summary info to the pipeline task - but now that Task.current_task()
doesn't work on the pipeline object we have a serious problem
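To make the failure concrete, here is a minimal sketch of what we do (project/task names are made up, and I'm assuming the current PipelineController signature):
```python
from clearml import Task
from clearml.automation import PipelineController

# hypothetical pipeline setup, mirroring our methodology
pipe = PipelineController(name="my-pipeline", project="my-project", version="1.0")

# we rely on this call to get the pipeline's own task and attach summary info;
# it used to return the pipeline task, now it returns None for us
pipeline_task = Task.current_task()
print(pipeline_task)  # expected: the pipeline's Task object, actual: None
```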
checking and will let you know
I'd prefer we debug on my machine (tell me what you want to check) rather than create a snippet
Continuing on this discussion... What is the relationship between configuring files_server, everything else we just talked about, and default_output_uri?
Is there a way to do so without touching the config, directly through the Task object?
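Something like this is what I'm after, assuming output_uri on Task.init takes precedence over default_output_uri from the config file (project/task names are placeholders):
```python
from clearml import Task

# per-task output destination, no clearml.conf change needed (if this works)
task = Task.init(
    project_name="my-project",
    task_name="my-task",
    output_uri="s3://my-bucket/artifacts",
)
```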
AgitatedDove14 I really don't know how this is possible... I tried upgrading the server, tried whatever I could.
As for small toy code to reproduce it, I just don't have the time for that, but I will paste the callback I am using below. This is the overall logic, so you can replicate it and use my callback:
From the pipeline task, launch some sub-tasks, and put the .collect_description_tables
method from my callback class (attached below) in their post_execute_callback. Run t...
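Roughly, the wiring looks like this (a sketch with a stand-in callback; the real collect_description_tables lives in my callback class, and the step/project names here are invented):
```python
from clearml.automation import PipelineController

def collect_description_tables(pipeline, node):
    # stand-in for my real callback: runs after the step's task finishes,
    # where we read the sub-task's results and report them to the pipeline task
    print("finished step:", node.name)

pipe = PipelineController(name="my-pipeline", project="my-project", version="1.0")
pipe.add_step(
    name="choose_best",
    base_task_project="my-project",
    base_task_name="choose best template",
    post_execute_callback=collect_description_tables,
)
pipe.start()
```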
Any news on this? This is kind of creepy; it's something so basic that I can't trust my prediction pipeline, because sometimes it fails randomly for no reason
I just think that if I use "report_table" I might as well be able to download it as CSV or something
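For example (a sketch; my assumption is that uploading the same dataframe as an artifact makes it downloadable in CSV form alongside the reported table):
```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="my-project", task_name="table-demo")  # placeholder names
df = pd.DataFrame({"model": ["a", "b"], "score": [0.91, 0.88]})

# renders the table in the UI
task.get_logger().report_table(title="summary", series="scores",
                               iteration=0, table_plot=df)
# uploading the same dataframe as an artifact would make it downloadable
task.upload_artifact(name="summary_table", artifact_object=df)
```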
AgitatedDove14
So I couldn't kill the services agent myself (permission denied, I'm not sudo). What I did was docker-compose down, comment out only the GOOGLE_APPLICATION_CREDENTIALS environment variable from the clearml services agent service, and bring the docker-compose up again. I enqueued the Cleanup Service and now it works. Really weird; it looks like GOOGLE_APPLICATION_CREDENTIALS causes an error when set, even though I'm 100% sure it is not used for storag...
When I ran clearml-task --name ... --project ... --script ...
it failed saying no requirements were found
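For reference, this is roughly the invocation; I'm guessing that passing a requirements file explicitly (via --requirements, if I have the flag right) would avoid the error:
```
clearml-task --project my-project --name my-task \
    --script train.py \
    --requirements requirements.txt \
    --queue default
```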
Thanks Martin, code runs as expected
What do you mean by submodules?
She did not push; I told her she does not have to push before executing, since trains figures out the diffs.
When she pushes, it works
It's kind of random, it works sometimes and sometimes it doesn't
and in the UI configuration I didn't understand where permission management comes into play
moreover, in each pipeline I have 10 different settings of task A -> task B (and then task C), and in each run 1-2 of them fail randomly
So regarding 1, I'm not really sure what the difference is
When running in docker mode, what is different from the regular mode? Nowhere in the instructions is nvidia-docker listed as a prerequisite, so how exactly will tasks get executed on GPU?
I feel I don't understand enough of the mechanism to (1) understand the difference between docker mode and not, and (2) know the use case for each
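Concretely, this is the kind of invocation I mean; my assumption is that the host still needs the NVIDIA container runtime for GPU tasks to work (the image name is just an example):
```
clearml-agent daemon --queue default --docker nvidia/cuda:11.0-base --gpus 0
```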
-_- why isn't there a link to the source in the docs?
ClearML results page:
```
Launching step: 2019-09-03_2021-01-25_choose_best
Parameters:
{***}
Configurations:
None
Overrides:
None
Launching step: 2019-10-23_2021-01-15_choose_best
Parameters:
{********}
Configurations:
None
Overrides:
None
Launching step: 2019-05-26_2020-12-26_choose_best
Parameters:
{******}
Configurations:
None
Overrides:
None
Launching step: 2019-07-15_2021-01-05_choose_best
Parameters:
{************}
Configurations:
None
Overrides:
None
Launching step...
```
does the services mode have a separate configuration for base image?
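i.e. something like this, assuming the --docker argument on the services-mode daemon is what sets its default base image:
```
clearml-agent daemon --services-mode --queue services --docker ubuntu:18.04 --cpu-only
```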
Okay, so let me get this straight:
The autoscaling is basically an ever-running task (let's say on the services queue). Now, the actual auto-scaling and which queues exist have nothing to do with that, and are configured in the autoscaler task itself?
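Just to check my understanding, a hypothetical sketch of what I imagine the autoscaler task's configuration holds (field names are my guesses from the example script, not verified):
```python
# resources the autoscaler may spin up
resource_configurations = {
    "gpu_machine": {"instance_type": "g4dn.xlarge", "availability_zone": "us-east-1a"},
}
# queue name -> list of (resource name, max number of instances)
queues = {
    "gpu_queue": [("gpu_machine", 2)],
}
```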
There are many other packages in my environment which are not listed
So prior to doing any work on the trains autoscaler service, should I first create an auto scaling group in AWS?
The trains docs at no point mention what I should do in the AWS interface... so I'm not sure at what point I should encounter this wizard
I'm going to play with it a bit and see if I can figure out how to make it work