I'm trying to figure it out
I'll play with it a bit and let you know
cannot schedule new futures after interpreter shutdown
This implies the process is shutting down.
Where are you uploading the model? What clearml version are you using? Can you check with the latest version (1.10)?
Hmm, what's the OS and Python version?
Is this simple example working for you?
I updated to 1.10
I am uploading the model inside the main() function, using this code:
model_path = model_name + '.pkl'
with open(model_path, "wb") as f:
    pickle.dump(prophet_model, f)
output_model.update_weights(weights_filename=model_path, iteration=0)
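(For context, the snippet doesn't show how output_model was created; a minimal sketch of the usual setup with the standard clearml API, with placeholder project/task names:)

from clearml import Task, OutputModel

# assumed setup - not shown in the snippet above; names are placeholders
task = Task.init(project_name="my_project", task_name="train_prophet")
output_model = OutputModel(task=task, framework="Prophet")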
No, I just commented it out and it worked fine
Yeah, we should add a comment saying "optional" because it looks as if you need to have it there if you are using Azure.
hey, Martin
this script actually does work
@<1523701205467926528:profile|AgitatedDove14> hey Martin, I deleted the task.mark_completed() line
but I still get the same error,
could it possibly be something else?
@<1523701205467926528:profile|AgitatedDove14>
ok so now I upload with the following line:
op_model.update_weights(weights_filename=model_path, upload_uri=upload_uri) #, upload_uri=upload_uri, iteration=0)
and while doing it locally, it seems to upload
when I let it run remotely I get the original Failed uploading error.
although, one time when I ran it remotely it did upload it, and at other times it didn't. Weird behavior.
can you help?
ok so I accidentally (probably by luck) noticed the max_connection: 2 in the azure.storage config.
NICE!!!! 🎊
But wait where is that set?
Should we change the default or add a comment?
ok so I accidentally (probably by luck) noticed the max_connection: 2 in the azure.storage config.
canceled that, and so now everything works
that's the one, I'll add a comment (I didn't check the number of connections it opens, so idk the right number)
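(For reference, a rough sketch of where that setting lives in clearml.conf; the exact nesting may differ between versions, and the account values are placeholders:)

sdk {
    azure.storage {
        # max_connection: 2   # the low default mentioned above; commenting it out (or raising it) resolved the stuck upload
        containers: [
            {
                account_name: "myaccount"    # placeholder
                account_key: "mykey"         # placeholder
                # container_name: "clearml"  # optional
            }
        ]
    }
}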
hey Martin, thanks for the reply.
I'm doing the call in the main function
I'm trying to figure if this is reproducible...
oh that makes sense, basically here is what you should do:
Task.init(... output_uri='...')
output_model.update_weights(weights_filename=model_path)
It will automatically create a unique target folder / file under the output_uri to store your model
(btw: passing register_uri basically says: "I already uploaded the model there, just store the link" - i.e. it does Not upload the model)
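(Putting that together, a minimal sketch of the recommended flow; the Azure URI and project/task names are placeholders, and model_name / prophet_model are taken from the snippet earlier in the thread:)

import pickle
from clearml import Task, OutputModel

# placeholder destination - replace with your actual Azure blob storage URI
task = Task.init(project_name="my_project", task_name="train_prophet",
                 output_uri="azure://<account>.blob.core.windows.net/<container>")
output_model = OutputModel(task=task)

model_path = model_name + '.pkl'   # model_name / prophet_model as in the snippet above
with open(model_path, "wb") as f:
    pickle.dump(prophet_model, f)

# weights_filename uploads the local file to the task's output_uri;
# register_uri would only record an already-uploaded link without uploading
output_model.update_weights(weights_filename=model_path, iteration=0)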
ignore it, I didn't try and read everything you said so far, I'll try again tomorrow and update this comment
oh, so then we're back to the old problem: when I am using weights_filename, it gives me the error Failed uploading: cannot schedule new futures after interpreter shutdown
@<1523701205467926528:profile|AgitatedDove14>
no, I just commented it out and it worked fine
ok Martin, so what I am having trouble with now is understanding how to save the model to our Azure blob storage; what I did was to specify:
upload_uri = f'...'
output_model.update_weights(register_uri=model_path, upload_uri=upload_uri, iteration=0)
but it doesn't seem to save the pkl file (which is the model_path) to the storage
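(To make the distinction explicit, a sketch assuming the standard update_weights API: register_uri expects a file that is already in the blob storage and only records the link, while weights_filename takes a local path and actually uploads it; the URI below is a placeholder:)

# register only - the file is assumed to already exist at this remote URI, nothing is uploaded
output_model.update_weights(register_uri="azure://<account>.blob.core.windows.net/<container>/models/prophet.pkl")

# upload - takes the local file and pushes it to upload_uri (or the task's output_uri)
output_model.update_weights(weights_filename=model_path, upload_uri=upload_uri, iteration=0)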
Hi @<1546303269423288320:profile|MinuteStork43>
Failed uploading: cannot schedule new futures after interpreter shutdown
This is odd where / when exactly are you trying to upload it?
task.mark_completed()
You have that at the bottom of the script; never call it on your own Task, it will kill the actual process.
So what is going on is: you are marking your own process for termination, then it terminates itself, leaving the interpreter, and this is the reason for the errors you are seeing.
The idea of mark_* is to mark an external Task, forcefully.
By just completing your process with exit code 0 (i.e. no error), the Task will be marked as completed anyhow; no need to call any function.
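(So the fix is simply to drop that call; a minimal sketch of the end of the script, with placeholder names:)

from clearml import Task

def main():
    task = Task.init(project_name="my_project", task_name="train_prophet")  # placeholder names
    # ... training and output_model.update_weights(...) as above ...
    # do NOT call task.mark_completed() here - exiting main() with code 0
    # is enough for the Task to be marked as completed

if __name__ == "__main__":
    main()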