
how do I access the clearml.conf custom variables then?
or - how do I configure and access env variables that way?
the use case is simple:
I want to fetch data from an SQL table, inside a task.
so I want to execute a query and then do some operations on it, from within a task. to do that I have to connect to the DB,
and I don't want the connection details to be logged
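One stdlib-only way to keep connection details out of the task log is to read them from environment variables at runtime instead of hard-coding them; a minimal sketch, where the variable names (`DB_USER`, `DB_PASSWORD`, `DB_HOST`) are purely illustrative:

```python
import os

def get_db_credentials():
    """Read DB connection details from the environment so they never
    appear in the task's committed source or console output.
    Variable names are hypothetical -- use whatever your setup defines."""
    user = os.environ["DB_USER"]          # required: raise KeyError if missing
    password = os.environ["DB_PASSWORD"]  # required: never print this value
    host = os.environ.get("DB_HOST", "localhost")  # optional, with a default
    return user, password, host
```

The worker (or its docker container, via `-e` flags) can inject these variables, so the credentials never land in the repository or the task's logged configuration.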
ok, yeah, makes sense. thanks John!
the base image is python:3.9-slim
basically now I understand that I do need to define a PYTHONPATH inside my Docker image
my only problem is that the path depends on the ClearML worker
for example, I see that the current path is:
File "/root/.clearml/venvs-builds/3.10/task_repository/palmers.git
is there a way for me to know that dynamically?
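Since the checkout path under `~/.clearml/venvs-builds` varies per worker, one stdlib-only way to locate the repository root at runtime is to walk up from the running file until a marker directory (e.g. `.git`) is found; this is a sketch, not a ClearML API:

```python
import sys
from pathlib import Path

def find_repo_root(start: Path, marker: str = ".git") -> Path:
    """Walk up from `start` until a directory containing `marker` is found."""
    for parent in [start] + list(start.parents):
        if (parent / marker).exists():
            return parent
    raise FileNotFoundError(f"no {marker} found above {start}")

# In the entry-point script, the dynamically found root can then be
# prepended so intra-repo imports resolve regardless of the worker's path:
# root = find_repo_root(Path(__file__).resolve().parent)
# sys.path.insert(0, str(root))
```

This sidesteps hard-coding the worker-specific path entirely, at the cost of a small bit of bootstrap code at the top of the entry script.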
@<1523701087100473344:profile|SuccessfulKoala55> hey Jake, in case you've missed my answer
am I making sense?
@<1523701205467926528:profile|AgitatedDove14>
thanks for the help 🙂
@<1523701205467926528:profile|AgitatedDove14>
ok so now i upload with the following line:
op_model.update_weights(weights_filename=model_path, upload_uri=upload_uri)
and while doing it locally, it seems to upload
when I let it run remotely I get the original Failed uploading error.
although, one time when I ran it remotely it did upload, and at other times it didn't. weird behavior
can you help?
Hey @<1523701087100473344:profile|SuccessfulKoala55> , thanks for the quick response
I'm not sure I understand, but that might just be my lack of knowledge. to clarify:
I am running the task remotely on a worker with the --docker flag.
how can I add the root folder to PYTHONPATH?
as far as I understand, ClearML is creating a new venv inside this docker, with its own python executable (which I don't have access to in advance)
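One possible agent-side option is to pass an extra environment variable into the container via the worker's `clearml.conf`; a sketch, assuming your clearml-agent version supports `extra_docker_arguments` (the path value here is illustrative, since the actual venv path is only known at runtime):

```
# worker-side clearml.conf fragment (hedged: verify key names against
# your clearml-agent version's config reference)
agent {
    extra_docker_arguments: ["-e", "PYTHONPATH=/root/project"]
}
```

Note this only helps when the target path is stable across runs; if it isn't, resolving the root dynamically in the entry script is the more robust route.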
hey Martin, thanks for the reply.
I'm doing the call in the main function
why doesn't it try to use SSH by default? the clearml.conf doesn't contain a username and password
Hey Joey, I configured SSH locally on my PC as well, and now it works.
are you guys planning a future feature that would let me specify the connection type, independent of what I'm running locally?
i have the same error
it seems like:
files.clear.ml
is down???
is it possible to tell it not to install my local libraries all at once, instead of manually specifying ignore_requirements?
this is not an error per se, rather an INFO log
then it works
i opened a new, clean venv just now
ok so I accidentally (probably by luck) noticed the max_connection: 2 setting in the azure.storage config.
removed that, and now everything works
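For reference, a sketch of where that setting sits in `clearml.conf`; the nesting is shown as it appears in the default config template (verify against your SDK version), and the value matches the one mentioned above:

```
# clearml.conf fragment (sdk section) -- leaving max_connections
# commented out falls back to the SDK default
sdk {
    azure.storage {
        # max_connections: 2
        containers: [
            # { account_name: "...", account_key: "...", container_name: "..." }
        ]
    }
}
```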
well, also, now, my runs are showing problems as well:
yes,
so basically I should create a services queue, and preferably let it contain its own workers
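Spinning up a dedicated agent for that queue is typically done with the clearml-agent CLI; a sketch of the invocation (flags per the clearml-agent docs, verify against your installed version):

```
clearml-agent daemon --queue services --services-mode --docker
```

`--services-mode` lets one agent run multiple lightweight service tasks (like pipeline controllers) concurrently, which is why a separate queue with its own workers is the usual setup.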
basically, only test.py needs the packages, but for some reason pipeline_test installs them as well
why do those libraries need to run on a PipelineController task? this task requires no libraries at all
the unsuccessful one:
Using cached meteostat-1.6.5-py3-none-any.whl (31 kB)
Requirement already satisfied: neuralprophet==0.5.3 in /usr/local/lib/python3.9/site-packages (from -r /tmp/cached-reqs58y_jg9f.txt (line 9)) (0.5.3)
Requirement already satisfied: numpy==1.23.5 in /usr/local/lib/python3.9/site-packages (from -r /tmp/cached-reqs58y_jg9f.txt (line 10)) (1.23.5)
Requirement already satisfied: pandas==1.5.3 in /usr/local/lib/python3.9/site-packages (from -r /tmp/cached-reqs58y_jg9f.txt (lin...