
Hey @<1523701087100473344:profile|SuccessfulKoala55> , thanks for the quick response
I'm not sure I understand, but that might just be my lack of knowledge. To clarify:
I am running the task remotely on a worker with the --docker flag.
How can I add the root folder to PYTHONPATH?
As far as I understand, ClearML creates a new venv inside this Docker container, with its own Python executable (which I don't have access to in advance).
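In case it helps anyone reading later, one option I'm considering (a sketch, an assumption on my side; the mount path and image name are placeholders) is to pass PYTHONPATH into the container as a Docker environment variable via Task.set_base_docker:
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="remote-run")

# Ask the agent to start the container with PYTHONPATH pointing at the repo root.
# "/app" and the image name are placeholders -- adjust to where the code actually lives.
task.set_base_docker(
    docker_image="python:3.10",
    docker_arguments="-e PYTHONPATH=/app",
)

task.execute_remotely(queue_name="default")
```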
@<1523701205467926528:profile|AgitatedDove14>
OK, so now I upload with the following line:
op_model.update_weights(weights_filename=model_path, upload_uri=upload_uri) #, upload_uri=upload_uri, iteration=0)
When running it locally, it seems to upload.
When I let it run remotely, I get the original "Failed uploading" error.
Although, one time when I ran it remotely it did upload, and at other times it didn't. Weird behavior.
Can you help?
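For reference, the full call looks roughly like this (a sketch, assuming an OutputModel attached to the current task; the weights path and upload destination are placeholders, not the real values):
```python
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="upload-weights")

model_path = "weights.pt"  # placeholder: local path to the trained weights

# Register the weights as an output model and push the file to remote storage.
# The destination below is an example bucket, not the one we actually use.
op_model = OutputModel(task=task, name="my-model")
op_model.update_weights(
    weights_filename=model_path,
    upload_uri="s3://my-bucket/models/",
)
```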
@<1523701205467926528:profile|AgitatedDove14>
I have the same error.
It seems like files.clear.ml is down???
Hey Martin, thanks for the reply.
I'm doing the call in the main function.
Hey John, I thought this was the end of it, but apparently the dataset was uploaded in the end.
Only sometimes, though. The pipeline runs using local machines.
This is not an error per se, rather an INFO log.
OK so, I don't know why it helped, but setting base_task_id instead of base_task_name in the pipe.add_step function seems to overcome this.
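For anyone hitting the same thing, this is roughly what the working step looks like (a sketch; the project, names, queue, and task ID are placeholders, not our real ones):
```python
from clearml.automation import PipelineController

pipe = PipelineController(name="pipeline_test", project="examples", version="1.0")

# Referencing the template task by ID instead of by name is what worked around the issue.
# The ID below is a placeholder -- copy the real one from the task's INFO tab in the UI.
pipe.add_step(
    name="test_step",
    base_task_id="abcdef1234567890abcdef1234567890",
    execution_queue="default",
)

pipe.start(queue="services")
```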
OK, yeah, makes sense. Thanks, John!
Yes,
so basically I should create a services queue, and preferably let it contain its own workers.
How do I access the clearml.conf custom variables then?
Or, how do I configure and access env variables that way?
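For the clearml.conf side, my understanding (unverified, so treat this as an assumption) is that there is a top-level environment section the agent can apply to the task's process, roughly like this (values and the agent flag are placeholders/assumptions):
```
# clearml.conf -- assumed layout, not verified
environment {
    DB_HOST: "db.internal.example.com"   # placeholder values
    DB_USER: "reader"
}

agent {
    # assumed flag that makes the agent export the section above as env vars for the task
    apply_environment: true
}
```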
The use case is simple:
I want to fetch data from an SQL table, inside a task.
So I want to execute a query and then do some operations on it, from within a task. To do that I have to connect to the DB,
and I don't want the connection details to be logged.
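What I have in mind is roughly this (a sketch; it reads the credentials from environment variables set on the worker so they never show up in the task's parameters or console log; the variable names, connection string, and table are just illustrative):
```python
import os

import pandas as pd
from sqlalchemy import create_engine
from clearml import Task

task = Task.init(project_name="examples", task_name="fetch-sql-data")

# Credentials come from the worker's environment, not from code or task parameters,
# so nothing secret ends up in the console log or the configuration tab.
db_user = os.environ["DB_USER"]
db_pass = os.environ["DB_PASSWORD"]
db_host = os.environ["DB_HOST"]

engine = create_engine(f"postgresql://{db_user}:{db_pass}@{db_host}/mydb")
df = pd.read_sql("SELECT * FROM my_table", engine)
```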
Do you want the entire log files? (It is a pipeline, and I can't seem to find the "Task" itself to download the logs.)
I can send you our pipeline file and task.
Basically, only test.py needs the packages, but for some reason pipeline_test installs them as well.
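For context, what I was expecting is something along these lines (a sketch, assuming a function step; if the step is created from a base task with add_step, I understand the packages come from that task instead; the function, names, and package list are placeholders):
```python
from clearml.automation import PipelineController

def run_test(n: int = 5) -> int:
    # placeholder for the real logic that lives in test.py
    return n * 2

pipe = PipelineController(name="pipeline_test", project="examples", version="1.0")

# Expectation: only this step needs the extra packages, so they are listed per step
# rather than being installed by the pipeline controller itself.
pipe.add_function_step(
    name="test",
    function=run_test,
    function_kwargs=dict(n=5),
    packages=["pandas", "scikit-learn"],
)

pipe.start(queue="services")
```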
@<1523701070390366208:profile|CostlyOstrich36>
Hey John, let us know if you need any more information.
WebApp: 3.16.3-949 • Server: 3.16.1-974 • API: 2.24
@<1523701087100473344:profile|SuccessfulKoala55> Hey Jake, in case you've missed my answer.
Am I making sense?
It is installed as a pip package,
but I am not using it in the code.
Ignore it; I didn't get to read everything you said so far. I'll try again tomorrow and update this comment.
Oh, so then we're back to the old problem: when I am using weights_filename, it gives me the error: Failed uploading: cannot schedule new futures after interpreter shutdown
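If it helps others, the workaround I'm going to try (a sketch, an assumption on my side rather than a confirmed fix) is to make sure all pending uploads finish before the script exits, by flushing and closing the task explicitly:
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="upload-weights")

# ... training and op_model.update_weights(...) happen here ...

# Wait for the background upload threads to finish before the interpreter shuts down,
# otherwise the executor may already be closed ("cannot schedule new futures ...").
task.flush(wait_for_uploads=True)
task.close()
```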
I don't know why the server crashed (it is not self-hosted).