basically, only test.py needs the packages, but for some reason pipeline_test installs them as well
ok so, idk why it helped, but setting base_task_id
instead of base_task_name in the pipe.add_step
function seems to overcome this
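for reference, the workaround looks roughly like this (a sketch using ClearML's PipelineController; the project, step name, and task ID are placeholders, and it needs a configured ClearML server to actually run):

```
from clearml.automation import PipelineController

pipe = PipelineController(name="pipeline_test", project="examples", version="1.0")
pipe.add_step(
    name="test_step",
    base_task_id="<template-task-id>",   # exact task reference; avoids the name lookup
    # base_task_name="test",             # name-based lookup hit the env issue above
)
pipe.start()
```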
the run is successful, but it gets aborted for some reason (at least the environment is set up correctly)
the end of it is:
- urllib3==1.26.15
- virtualenv==20.23.0
- wcwidth==0.2.6
- Werkzeug==2.3.2
- widgetsnbextension==4.0.7
- xgboost==1.7.5
- yarl==1.9.2
Environment setup completed successfully
Starting Task Execution:
2023-04-29 21:41:02
Process terminated by user
i have the same error
it seems like:
files.clear.ml
is down???
do you want the entire log files? (it is a pipeline, and i can't seem to find the "Task" itself, to download the logs)
Hey joey, i configured ssh locally for my pc as well, and now it works.
are you guys planning a feature in the future that will allow me to specify the connection type, regardless of what i am running locally?
why doesn't it try to use ssh as default? the clearml.conf doesn't contain user name and password
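fwiw, there is an agent-side setting that rewrites git URLs to SSH before cloning; a hedged clearml.conf fragment (setting name from the ClearML agent docs, exact section layout may differ):

```
agent {
    # rewrite http/https git urls to ssh before cloning
    force_git_ssh_protocol: true
}
```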
well, also, my runs are now showing problems as well:
ok, thanks jake
what will be the fastest fix for it?
(still doesn't work)
only sometimes, the pipeline runs using local machines
it is installed as a pip package
but i am not using it in the code
@<1523701070390366208:profile|CostlyOstrich36>
hey john, let us know if you need any more information
i need to read and write. i do have access from the genesis autoscaler when i turn off all firewall rules, but this is not recommended by microsoft.
I need to add specific firewall rules for the genesis machines, to allow them to authenticate against my azure blob storage
plus, is there an option to configure the agent configuration? for example we are using:
force_git_root_python_path: true
can we do it there as well?
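for context, this is how that setting sits in a local clearml.conf (whether the same block can be passed to the autoscaler agents is the open question here):

```
agent {
    # run the script with the git repo root on the python path
    force_git_root_python_path: true
}
```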
@<1523701070390366208:profile|CostlyOstrich36>
By the way, how do i set up a shell script?
i don't see an option to do it from the UI
i don't know why the server crashed (it is not self hosted)
@<1523701205467926528:profile|AgitatedDove14> hey martin, i deleted the task.mark_completed() line
but i still get the same error,
could it possibly be something else?
(i'm running it in docker)
that's the one, I'll add a comment (I didn't check the number of connections it opens, so idk the right number)
@<1523701205467926528:profile|AgitatedDove14>
ok so now i upload with the following line:
op_model.update_weights(weights_filename=model_path, upload_uri=upload_uri) #, upload_uri=upload_uri, iteration=0)
and while doing it locally, it seems to upload
when i let it run remotely i get the original Failed uploading error.
although, one time when i ran it remotely it did upload, and at other times it didn't. weird behavior
can you help?
i updated to 1.10
i am uploading the model inside the main() function, using this code:
model_path = model_name + '.pkl'
with open(model_path, "wb") as f:
    pickle.dump(prophet_model, f)
output_model.update_weights(weights_filename=model_path, iteration=0)
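a self-contained stand-in for the save step above (a plain dict replaces the Prophet model, and the ClearML upload call is left out):

```python
import os
import pickle
import tempfile

# stand-in for the trained Prophet model; any picklable object works
prophet_model = {"changepoints": [1, 2, 3], "seasonality": "weekly"}

model_name = "prophet_demo"
model_path = os.path.join(tempfile.mkdtemp(), model_name + ".pkl")

# serialize the model to disk, as in the snippet above
with open(model_path, "wb") as f:
    pickle.dump(prophet_model, f)

# read it back to confirm the file round-trips
with open(model_path, "rb") as f:
    restored = pickle.load(f)
```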
i'm trying to figure it out
i'll play with it a bit and let you know
i'll send you the file in private
thanks for the help 🙂