it is installed as a pip package
but i am not using it in the code
this is from the successful run, which was aborted for some reason (but at least the environment is set up correctly)
the end of it is :
- urllib3==1.26.15
- virtualenv==20.23.0
- wcwidth==0.2.6
- Werkzeug==2.3.2
- widgetsnbextension==4.0.7
- xgboost==1.7.5
- yarl==1.9.2
Environment setup completed successfully
Starting Task Execution:
2023-04-29 21:41:02
Process terminated by user
yes,
so basically I should create a services queue, and preferably let it contain its own workers
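(for reference, i think the agent serving that queue can be launched in services mode, something like: clearml-agent daemon --queue services --services-mode --docker, treat the exact flags as a sketch rather than the definitive command)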
i don't know why the server crashed (it is not self hosted)
from which we run the task
ok martin, so what i am having trouble with now is understanding how to save the model to our azure blob storage, what i did was to specify:
upload_uri = f'
'
output_model.update_weights(register_uri=model_path, upload_uri=upload_uri, iteration=0)
but it doesn't seem to save the pkl file (which is the model_path) to the storage
(still doesn't work)
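for reference, a minimal sketch of the direction i'd expect to work, assuming output_model is a clearml OutputModel attached to the task; the project/task names, model path and azure URI below are placeholders, not my real setup:

from clearml import Task, OutputModel

task = Task.init(project_name='my_project', task_name='train_prophet')  # placeholder names
output_model = OutputModel(task=task)

model_path = 'prophet_model.pkl'  # placeholder local file

# weights_filename uploads the local file itself;
# register_uri only registers an already-uploaded remote URI without uploading anything
output_model.update_weights(
    weights_filename=model_path,
    upload_uri='azure://<storage-account>/<container>/models',  # placeholder destination
    iteration=0,
)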
hey, martin
this script actually does work
so i think it's debian (and python 3.9)
another question: if i save heavy artifacts, should my services worker's RAM be at least as high? (or is it enough for the default queue workers to have that)
ignore it, I didn't try and read everything you said so far, I'll try again tomorrow and update this comment
oh, so then we're back to the old problem: when i am using
weights_filename, it gives me the error: Failed uploading: cannot schedule new futures after interpreter shutdown
ok, yeah, makes sense. thanks John!
ok so, idk why it helped, but setting base_task_id
instead of base_task_name in the pipe.add_step
function seems to overcome this
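for reference, a minimal sketch of that add_step call; the pipeline/step names and the task id below are placeholders, not the real ones:

from clearml.automation import PipelineController

pipe = PipelineController(name='pipeline_test', project='examples', version='1.0')  # placeholder names
pipe.add_step(
    name='run_test',
    base_task_id='abc123',  # reference the template task by id instead of by name
)
pipe.start()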
basically, only test.py needs the packages, but for some reason pipeline_test installs them as well
plus, is there an option to set agent configuration options? for example we are using:
force_git_root_python_path: true
can we do it there as well?
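for context, this is roughly how that option sits in the agent's clearml.conf today (a sketch showing only the relevant key):

agent {
    # make the agent add the git repository root to the python path
    force_git_root_python_path: true
}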
i updated to 1.10
i am uploading the model inside the main() function, using this code:
import pickle

# output_model is assumed to be a clearml OutputModel attached to the current task;
# serialize the model locally, then upload the file itself
model_path = model_name + '.pkl'
with open(model_path, "wb") as f:
    pickle.dump(prophet_model, f)
output_model.update_weights(weights_filename=model_path, iteration=0)
Hey joey, i configured ssh locally for my pc as well, and now it works.
are you guys planning a feature in the future that will allow me to specify the connection type, regardless of what i am running locally?
no, i just commented it out and it worked fine
basically now i understand that I do need to define a PYTHONPATH inside my docker image
my only problem is that the path depends on the clearml worker
for example, i see that the current path is:
File "/root/.clearml/venvs-builds/3.10/task_repository/palmers.git
is there a way for me to know that dynamically?
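what i'm considering as a workaround, just a sketch: derive the repo root from the running script's own location instead of hard-coding the agent's checkout path (adjust the number of dirname calls if the entry script lives in a subfolder of the repo):

import os
import sys

# the agent checks the repo out under a per-worker path, so compute the repo root
# relative to this file at runtime instead of hard-coding it
repo_root = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, repo_root)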
@<1523701205467926528:profile|AgitatedDove14>
the base image is python:3.9-slim
why do those libraries need to run on a PipelineController task? this task requires no libraries at all
i have the same error
it seems like files.clear.ml is down???
Hey @<1523701087100473344:profile|SuccessfulKoala55> , thanks for the quick response
I'm not sure I understand, but that might just be my lack of knowledge, to clarify:
i am running the task remotely on a worker with the --docker flag.
how can i add the root folder to PYTHONPATH?
as far as I understand, clearml is creating a new venv inside this docker, with its own python executable (which i don't have access to in advance)
