only sometimes, the pipeline runs using local machines
i can send you our pipeline file and task
ok, yeah, makes sense. thanks John!
ok, thanks jake
what will be the fastest fix for it?
how do i access the clearml.conf custom variables then?
or - how do i configure and access env variables that way?
the use case is simple:
i wanna fetch data from an sql table, inside a task.
so i want to execute a query, and then do some operations on it, from within a task. to do that i have to connect to the db,
and i don't want the connection details to be logged
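One common way to keep connection details out of the logged script and console is to read them from environment variables at runtime instead of hard-coding them. A minimal sketch, assuming nothing clearml-specific (the variable names `DB_HOST`, `DB_USER`, `DB_PASSWORD` and the `get_db_credentials` helper are hypothetical, chosen just for illustration):

```python
import os

def get_db_credentials():
    """Read DB connection details from the environment so they never
    appear in the script body or the recorded console output."""
    return {
        "host": os.environ["DB_HOST"],
        "user": os.environ["DB_USER"],
        "password": os.environ["DB_PASSWORD"],
    }

# inside the task you would then open the connection with these values,
# e.g. psycopg2.connect(**get_db_credentials()), and run the query as usual
```

The env vars themselves can be injected into the worker's docker container, so they exist on the remote machine without ever being part of the task definition.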
is it possible to tell it not to install my local libraries all at once, instead of manually saying ignore_requirements?
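If the goal is for the agent to skip building a venv and installing the detected requirements entirely, one option is to tell it to reuse the python environment already baked into the docker image. A sketch, assuming the standard clearml-agent environment variables (worth double-checking against your agent version):

```shell
# assumption: CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL is supported by your
# clearml-agent version -- verify with the agent docs before relying on it.
# Skip venv creation / package installation and use the image's own python:
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
clearml-agent daemon --queue default --docker python:3.9-slim
```

With this set, whatever is pre-installed in the image is what the task runs against, so the image has to contain every package the task needs.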
i have the same error
it seems like:
files.clear.ml
is down???
Hey @<1523701087100473344:profile|SuccessfulKoala55> , thanks for the quick response
I'm not sure I understand, but that might just be my lack of knowledge, to clarify:
i am running the task remotely on a worker with the --docker flag.
how can i add the root folder to PYTHONPATH?
as far as I understand, clearml is creating a new venv inside this docker, with its own python executable, (which i don't have access to in advance)
this is not an error per se, rather an INFO log
hey, Martin
this script actually does work
@<1523701205467926528:profile|AgitatedDove14> hey martin, i deleted the task.mark_completed() line
but still i get the same error,
could it possibly be something else?
basically now i understand that I do need to define a PYTHONPATH inside my docker image
my only problem is that the path depends on the clearml worker
for example, i see that the current path is:
File "/root/.clearml/venvs-builds/3.10/task_repository/palmers.git
is there a way for me to know that dynamically?
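Rather than hard-coding the worker-specific venv path, the repo root can be derived at runtime from the running script's own location, which works no matter where the agent clones the code. A sketch using only the stdlib (the helper name `add_repo_root_to_path` is hypothetical):

```python
import sys
from pathlib import Path

def add_repo_root_to_path(script_file: str, levels_up: int = 1) -> str:
    """Insert the repository root (levels_up directories above the
    running script) at the front of sys.path, so sibling packages
    import cleanly regardless of the agent's checkout location."""
    root = str(Path(script_file).resolve().parents[levels_up - 1])
    if root not in sys.path:
        sys.path.insert(0, root)
    return root

# typical usage at the top of the task entry point:
# add_repo_root_to_path(__file__)
```

Because the path is computed from `__file__`, the same script works locally and on any remote worker without baking a PYTHONPATH into the image.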
Hey John, i thought this was the end of it, but apparently the dataset was uploaded in the end
i don't know why the server crashed (it is not self hosted)
and then it works, doesn't try to install any other packages
then it works
i opened a new, clean venv just now
(still doesn't work)
WebApp: 3.16.3-949 • Server: 3.16.1-974 • API: 2.24
but why does it matter if i ran it on a remote agent?
(im running it on docker)
yes,
so basically I should create a services queue, and preferably let it contain its own workers
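Spinning up a dedicated agent for that queue might look something like this (the queue name "services" is just the usual convention; flags are worth verifying against `clearml-agent daemon --help` for your version):

```shell
# assumption: standard clearml-agent CLI flags -- verify for your version.
# Run an agent in services mode against its own queue, so long-running
# service tasks don't occupy the regular training workers:
clearml-agent daemon --services-mode --queue services --docker --detached
```

In services mode one agent can host multiple lightweight tasks concurrently, which is why it gets its own queue separate from the training workers.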
well, also, now, my runs are showing problems as well:
the base image is python:3.9-slim
@<1523701205467926528:profile|AgitatedDove14>
ok so now i upload with the following line:
op_model.update_weights(weights_filename=model_path, upload_uri=upload_uri)
and while doing it locally, it seems to upload
when i let it run remotely i get the original Failed uploading error.
although, one time when i ran it remotely it did upload. at other times it didn't. weird behavior
can you help?
