2021-10-11 10:07:19 ClearML results page: `
2021-10-11 10:07:20
Traceback (most recent call last):
  File "tasks/hpo_n_best_evaluation.py", line 256, in <module>
    main(args, task)
  File "tasks/hpo_n_best_evaluation.py", line 164, in main
    trained_models = get_models_from_task(task=hpo_task)
  File "tasks/hpo_n_best_evaluation.py", line 72, in get_models_from_task
    with open(pickle_path, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/elior/.clearml/c...
I mean, usually it would read `if cached_file: return cached_file`
Bottom line: I want to edit the Cleanup Service code so it only deletes tasks under a specific project - how do I do that?
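(Roughly what I have in mind, as a minimal sketch using the public clearml SDK rather than the Cleanup Service's own APIClient loop; the project name, age threshold, and the use of task.data.last_update are my assumptions, not the actual service code:)
```python
from datetime import datetime, timedelta
from clearml import Task

# Placeholders - adjust to the actual project and retention policy
PROJECT_NAME = "my_project"
MAX_AGE_DAYS = 30

def cleanup_project_tasks(project_name: str, max_age_days: int) -> None:
    """Delete old completed/failed tasks, but only under one project."""
    threshold = datetime.utcnow() - timedelta(days=max_age_days)
    tasks = Task.get_tasks(
        project_name=project_name,  # restricts the query to this project only
        task_filter={"status": ["completed", "failed", "closed"]},
    )
    for task in tasks:
        # Assumption: task.data.last_update holds the server-side last-update time
        last_update = task.data.last_update
        if last_update and last_update.replace(tzinfo=None) < threshold:
            print(f"Deleting task {task.id} ({task.name})")
            task.delete()

if __name__ == "__main__":
    cleanup_project_tasks(PROJECT_NAME, MAX_AGE_DAYS)
```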
that's the only line starting with 192.168
BTW, is the `if not cached_file: return cached_file` legit, or a bug?
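(To make the question concrete, a tiny self-contained illustration of the two variants; the `fetch` helper is made up and is not ClearML's actual code:)
```python
from typing import Optional

def fetch(cached_file: Optional[str]) -> Optional[str]:
    """Made-up helper, only to show the difference between the two lines."""
    # What I'd expect ("usually it would read"):
    #     if cached_file:
    #         return cached_file     # cache hit -> return the local path
    #     ... otherwise download a fresh copy ...
    #
    # What the code actually says:
    if not cached_file:
        return cached_file           # cache miss -> hands None / "" straight back to the caller
    return cached_file               # cache hit  -> returns the local path

print(fetch("/tmp/model.pkl"))  # /tmp/model.pkl
print(fetch(None))              # None - the caller has to handle the miss itself
```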
How did it come to this? I didn't configure anything; I'm using the Trains AMI with the suggested instance type
Well this will have to wait a bit... my clearml-server is causing problems
No, I don't have `trains` anywhere in my code
or is it the same place in the config file as for configuring the docker-mode agent base image?
How can I change the version of the Cleanup Service?
I guess the AMI auto-updated
AgitatedDove14
So I couldn't kill the service agent myself (permission denied, I'm not sudo). What I did was run docker-compose down, comment out only the GOOGLE_APPLICATION_CREDENTIALS environment variable from the clearml services agent service, and bring the docker-compose up again. I enqueued the Cleanup Service and now it works. Really weird - looks like setting GOOGLE_APPLICATION_CREDENTIALS causes an error even though I'm 100% sure it is not used for storag...
The Trains docs at no point mention what I should do in the AWS interface... so I'm not sure at what point I should encounter this wizard
I'm going to play with it a bit and see if I can figure out how to make it work
ClearML results page: `
Launching step: 2019-09-03_2021-01-25_choose_best
Parameters:
{***}
Configurations:
None
Overrides:
None
Launching step: 2019-10-23_2021-01-15_choose_best
Parameters:
{********}
Configurations:
None
Overrides:
None
Launching step: 2019-05-26_2020-12-26_choose_best
Parameters:
{******}
Configurations:
None
Overrides:
None
Launching step: 2019-07-15_2021-01-05_choose_best
Parameters:
{************}
Configurations:
None
Overrides:
None
Launching step...
Continuing on this line of thought... Is it possible to call task.execute_remotely on a CPU-only machine (a data scientist's laptop, for example) and have the agent that fetches this task run it using a GPU? I'm asking because it is mentioned that it replicates the running environment of the task creator... which is exactly what I'm not trying to do 😄
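(Roughly what I mean, as a sketch; the project and queue names are placeholders, and I'm assuming an agent with a GPU is listening on that queue:)
```python
from clearml import Task

# Captures the code and environment on the CPU-only laptop...
task = Task.init(project_name="my_project", task_name="gpu_training")

# ...then enqueues the task and exits the local process, so the actual run
# happens on whichever agent pulls from this (placeholder) queue, e.g. one
# started with: clearml-agent daemon --queue gpu_queue --gpus 0
task.execute_remotely(queue_name="gpu_queue", exit_process=True)

# Nothing below ever runs on the laptop; it only runs on the agent's machine.
import torch  # assumption: the remote environment has torch with CUDA
print("CUDA available:", torch.cuda.is_available())
```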
One sec I'll paste the relevant pieces of code
I don't fully get it - it says it has to be enqueued
could it be 192.168.1.255?
I also ran it without $(pwd) in the "Create ClearML task templates" section; I added it because of CostlyOstrich36's comments, but it didn't help