Thanks @<1523701070390366208:profile|CostlyOstrich36>, but doesn't the agent create/cache an environment from the requirements.txt when running? I'm reproducing an old project that used to work that way, and my clearml.conf is also set up to work that way.
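For reference, agent-side virtual-environment caching is controlled in clearml.conf on the agent machine; a minimal sketch (values shown are illustrative defaults, adjust to your setup):

```
# clearml.conf (agent side) — sketch, values are placeholders
agent {
    # cache fully installed environments so runs with an identical
    # requirements.txt reuse them instead of reinstalling
    venvs_cache: {
        max_entries: 10
        free_space_threshold_gb: 2.0
        path: ~/.clearml/venvs-cache
    }
}
```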
Also looked at it, but the only supported registered-artifact object type is a pandas.DataFrame, not strings.
I think I'll keep the ':' at the start of the string; that way it won't upload the folder.
Solved it by using clearml.Task.current_task().id, but thank you.
yep, just a string which is a path, but without uploading the folder
not sure what that means to be honest @<1523701070390366208:profile|CostlyOstrich36>
Oh so in that case I'll need to change every agent's pip config file.
yes, sometimes I suffer from small network issues. Is there a way to make ClearML use a bigger timeout when installing packages?
And if not, is there a way to point it to a local package for installation, or to a local virtual environment?
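For what it's worth, pip itself honors a default timeout and retry count via its config file (or the PIP_DEFAULT_TIMEOUT environment variable); a sketch of what could be set on each agent machine (values are examples):

```
# ~/.config/pip/pip.conf (Linux) or %APPDATA%\pip\pip.ini (Windows)
[global]
# seconds to wait for a network response before giving up
timeout = 120
# extra retries help with flaky networks
retries = 10
```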
@<1523701087100473344:profile|SuccessfulKoala55> and @<1523701070390366208:profile|CostlyOstrich36>, in the end I found the problem: I was running the pipeline locally, and when running the pipeline locally it doesn't copy the whole directory, only the script that is running.
Hi @<1523701070390366208:profile|CostlyOstrich36>, here is a better explanation of my situation: in my IDE the working directory is where my code starts, and I'm importing my custom augmentations from common_utils. Locally the code works with the import I added in my previous message; however, when I run from a ClearML agent, the import from point A to point B doesn't work, even though both are in the same git repo, and I don't want to copy the files into project_1 so as not to have unne...
I reviewed this example, and sadly there isn't anything about how to upload a path as a string only.
@<1523701087100473344:profile|SuccessfulKoala55> After going into the step's full details, I reset the step and enqueued it.
Thanks, I'll look into that, but in the end we decided to add a private repo with the PyTorch libraries that we need.
I've added the extra_index_url to point to our https server, and we changed the requirements.txt to look for that https; however, I'm getting the warning I've attached, and it's still trying to download the packages from somewhere other than my index.
How do I enable clearml-agent to look for private repos?
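As a sketch, the agent's package manager can be pointed at an extra PyPI-compatible index in clearml.conf (the URL below is a placeholder for your private index):

```
# clearml.conf (agent side) — URL is a placeholder
agent {
    package_manager: {
        # extra indexes searched in addition to the default PyPI
        extra_index_url: ["https://my.private.pypi/simple"]
    }
}
```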
when I tried doing it with the decorators, it threw me an error that it cannot run Task.init inside a running task (the pipeline's task)
@<1523701070390366208:profile|CostlyOstrich36> my repo is like this, and both files are located in the same dir, so it's weird that it cannot find train:
.
├── pytorch
├── tensorflow
│   ├── Project A
│   │   └── src
│   ├── Project B
│   │   ├── data
│   │   ├── model
│   │   ├── reports
│   │   └── utils
│   └── hand_validator_boxes
│       ├── src
│       ├── train.py (the module I need)
│       └── clearml_pipeline.py (where the pipeline is initialized)
└── utils
sadly the teammate that had the problem re-ran the experiments, so I don't have the task IDs, but I do have the CPU and GPU usage of the agent that ran the experiment:
Btw, in pipelines, is there a way to get the pipeline's main task ID? For example, <step_name>.id gets me the stage's ID, but I need the main pipeline task that's running all the tasks.
Yes, here is the log file.
Thank you @<1523701070390366208:profile|CostlyOstrich36> and @<1523701205467926528:profile|AgitatedDove14>. After that bit of information, can you tell me where I can find the differences between the community server and a self-hosted server?
Are there any additional downsides to migrating to a self-hosted server?
The flow is: Training.py (which creates and runs a training task) -> conversion_task.py (converts the outputs of the models into a format of our choosing) -> testing.py (testing the model after conversion).
I tried using both the decorators and the functions, but they both threw me errors that I cannot do Task.init inside a running task.
@<1523701087100473344:profile|SuccessfulKoala55> What I'm trying to do is connect 3 different tasks into 1 pipeline while still being able to run each task individually when needed, without changing the tasks' code. For example, I have a training.py file which runs Task.init at the start and creates a task on the server for training a new model, but I also want to create a pipeline that will run that training.py and other tasks together. Is that clearer now?
Hi @<1523701070390366208:profile|CostlyOstrich36>, I am using the community server; what happens if I change to a self-hosted server?
I'm using Tensorboard to report everything, nothing special besides that.
Hi @<1523701070390366208:profile|CostlyOstrich36>, it is part of the repository. Do pipelines run differently than normal tasks? What I mean is, when I run a task it has a working directory; do pipelines also have that, or is their working directory the root of the repo?
Ok cool, I'll try that, Thanks
Wow, thanks a lot @<1523701070390366208:profile|CostlyOstrich36> for pointing me in the right direction. I also see that I can use sdk.development.worker.log_stdout if I really need to cut down my API calls before I host my own server.
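For reference, that setting lives under the sdk section of clearml.conf on the machine running the code; a minimal sketch:

```
# clearml.conf (client side)
sdk {
    development {
        worker {
            # when false, console output is not streamed to the server,
            # which reduces API calls
            log_stdout: false
        }
    }
}
```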
BTW, what does suppress_update_message do? I mean, which kinds of messages does it suppress?
No, until now we used the default server that is hosted by ClearML, and we want to transfer to a self-hosted one.
Thanks John, I read the one about the pip timeout. The problem is that I assume ClearML runs the following command:
"pip install -r requirements.txt", and I want to know if I can make ClearML add the timeout flag.
@<1523701070390366208:profile|CostlyOstrich36> After discussing with my TL, we think the plan we are subscribed to might not be right for us. Can you point me to a person we can have a meeting with, who can advise us on the best plan for my team?
I just upgraded to clearml-agent==1.5.1 and I still get this error.
@<1523701087100473344:profile|SuccessfulKoala55> did I do something wrong?