
I understand what you mean; I am just describing a different case. Let's assume I already have my Docker image (all dependencies and data solved). Right now, when I run my task, it automatically looks for a requirements.txt file in the repository. My question is -> is there any way to avoid this? (The simplest solution for now would be to rename requirements.txt to a different filename.) I tried the things you sent already. The thing is that the requirements.txt in these repos cannot be installed that easily...
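For anyone finding this later, a sketch of what I ended up trying, assuming the prebuilt image already contains every dependency (project/task names are placeholders):

from clearml import Task

# Sketch: calling this BEFORE Task.init() makes ClearML record a freeze of
# the active (prebuilt) environment instead of parsing the repository's
# requirements.txt, so the agent reinstalls that freeze rather than the file.
Task.force_requirements_env_freeze(force=True)
task = Task.init(project_name="examples", task_name="prebuilt-docker-env")

Alternatively, if I read the agent docs right, setting CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 on the agent should skip Python environment installation entirely and use the image's interpreter as-is (that is my understanding of its behavior, not something I have verified here).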
I just set up my server following this URL: https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_linux_mac/
So you are probably right -> nc -vz localhost 8080
Output when run locally (not in Docker): Connection to localhost (127.0.0.1) 8080 port [tcp/http-alt] succeeded!
Output inside the Docker bash: localhost [127.0.0.1] 8080 (http-alt) : Connection refused
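For the record, the cause seems to be that inside the container, localhost resolves to the container itself, not the host. A quick Python check I used (sketch; host.docker.internal is an assumption that holds on Docker Desktop, on Linux use the host's LAN IP or run with --network host):

import socket

# "localhost" inside a Docker container is the container, not the host,
# so the server ports bound on the host are refused. Try the host address.
host, port = "host.docker.internal", 8080  # assumption: adjust to your setup
try:
    with socket.create_connection((host, port), timeout=3):
        print(f"Connection to {host}:{port} succeeded")
except OSError as exc:
    print(f"Connection to {host}:{port} failed: {exc}")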
Thanks! That is exactly what I meant 🙂
Have a nice day!
When I ran it locally it was python script.py, and for the remote run you are right.
OPENBLAS="$(brew --prefix openblas)" pip install pandas
Yes, the API server is on the same machine -> running in a container
web_server: http://localhost:8080
api_server: http://localhost:8008
files_server: http://localhost:8081
Hi @<1523701070390366208:profile|CostlyOstrich36>, the worker cloned the repo correctly; however, in the nested script, if you use task.init it won't work / won't overwrite anything.
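What I later understood (my reading of the behavior, so take it as a sketch): when a clearml-agent executes the script, Task.init() returns the task being executed instead of creating a new one, so local defaults do not overwrite what is already set on the server:

from clearml import Task

# Under an agent, Task.init() reuses the currently executing task;
# only a plain local run creates a fresh one.
task = Task.init(project_name="examples", task_name="nested-script")
if Task.running_locally():
    # This branch only runs for a local `python script.py` execution.
    task.set_parameter("General/mode", "local-debug")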
Hi ExasperatedCrab78,
am I getting it right that the alias = the dataset ID, which can be found in the ClearML dashboard?
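Answering my own question for posterity: as I understand it, the alias is not the dataset ID; it is a label you choose, and ClearML records the resolved dataset ID under that alias in the task's configuration. A sketch with made-up names:

from clearml import Dataset

# "alias" is a caller-chosen label; the resolved dataset ID is logged
# under it in the task's Datasets hyperparameter section.
ds = Dataset.get(
    dataset_project="urbansounds8k",  # assumption: example project name
    dataset_name="preprocessed",      # assumption: example dataset name
    alias="raw_audio",                # any label you like
)
local_path = ds.get_local_copy()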
I also tried the changed URL (GitHub) you sent. I can successfully run all the scripts, but I do not see any results from log_dataset_statistics. Maybe I am wrong, but in https://github.com/allegroai/clearml-blogs/blob/master/urbansounds8k/preprocessing.py this function is not even called. Could you help me with that, please? I just want to replicate -> clearml dashboa...
Anyway, no histogram / table is reported in the dashboard, although in the tutorial video at https://www.youtube.com/watch?v=quSGXvuK1IM it is.
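In case it helps, a sketch of how such statistics can be reported manually through the Logger (the data and names here are made up; this is not the repo's own log_dataset_statistics):

from clearml import Task
import numpy as np
import pandas as pd

task = Task.init(project_name="urbansounds8k", task_name="dataset-statistics")
logger = task.get_logger()

# Histogram and table show up under the task's Plots section in the dashboard.
clip_lengths = np.random.uniform(0.5, 4.0, size=100)  # placeholder data
logger.report_histogram(title="Clip length", series="seconds",
                        values=clip_lengths, iteration=0)
logger.report_table(title="Class balance", series="counts", iteration=0,
                    table_plot=pd.DataFrame({"class": ["dog_bark", "siren"],
                                             "count": [40, 60]}))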
Oh, thanks! 🙂 I understand now. Please let me know about the other problem.
The original experiment has PyTorch 1.10.0 and CUDA 11.3 ['1.10.0+cu113']. Everything was run on my local computer. In the virtual env I have these versions (however, the system itself has slightly newer ones).
You can edit the requirements section directly <- where? If I create a requirements.txt, it seems to be ignored.
Still not solved. I don't know if these dependencies are cached somewhere, but when I change requirements.txt, or add it manually in code, it still has problems with torch and keeps looking for 'torch==1.10.0+cu113'.
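What finally made sense of this for me (my understanding, sketched): the agent installs whatever was recorded in the task's "Installed packages" section on the original run, not the repo's requirements.txt, which is why editing the file changed nothing. Overriding before Task.init() replaces the recorded entry:

from clearml import Task

# Sketch: must be called BEFORE Task.init(); the version spec is an example.
Task.add_requirements("torch", ">=1.12")
task = Task.init(project_name="examples", task_name="torch-override")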
I can see the container in docker ps, but it seems like it never gets to code execution. I have no idea where it comes from, but it seems like somewhere it gets "pip" + "pip".
Hello ExasperatedCrab78! I tried to add the PR. I hope it is sufficient and clear. Have a nice day and thanks for the help! :)
For the future, you can put in requirements.txt: detectron @
The agent simply tries to install the requirements from requirements.txt; however, I don't want to do that because I have my Docker image ready.
For the requirements, how do you mean it, please? Is it enough to add a requirements.txt with the package descriptions into the root directory, or do you have to specify somewhere that you want to use this file? Thanks
I can try. Any guide online, or is it totally easy?
command I run: clearml-agent daemon --queue default --foreground
response I get: clearml_agent: ERROR: create.<locals>.Validator.__init__() got an unexpected keyword argument 'types'
Reinstalling really solved the problem, thank you. Have a very nice day.
To describe the use case: let's say we have some app which can export a specific training script. I would like to create this as a specific "draft" task and execute it later.
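A sketch of how I imagine this could look (the repo URL, script name, and queue are placeholders, not the real app):

from clearml import Task

# Register the exported script as a draft task without running it...
draft = Task.create(
    project_name="someapp",
    task_name="exported-training-script",
    repo="https://github.com/example/exports.git",  # placeholder
    script="train.py",                              # placeholder
)
# ...and execute it later on a worker listening to the queue.
Task.enqueue(draft, queue_name="default")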