I have the same error
It seems like files.clear.ml is down???
How do I access the clearml.conf custom variables then?
Or - how do I configure and access environment variables that way?
The use case is simple:
I want to fetch data from an SQL table, inside a task.
So I want to execute a query and then do some operations on it, from within the task. To do that I have to connect to the DB,
and I don't want the connection details to be logged.
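Something like this is what I have in mind (a minimal sketch, assuming the connection details are exported as environment variables on the worker; the variable names, connection string and table are placeholders, and it assumes SQLAlchemy and pandas are available), so nothing secret ends up in the task parameters or the logs:

import os
import pandas as pd
from sqlalchemy import create_engine
from clearml import Task

task = Task.init(project_name='my project', task_name='sql fetch')  # placeholder names

# connection details come from env vars set on the worker, not from task parameters,
# so they are never reported to the ClearML server (variable names are placeholders)
engine = create_engine(
    f"postgresql://{os.environ['DB_USER']}:{os.environ['DB_PASSWORD']}"
    f"@{os.environ['DB_HOST']}/{os.environ['DB_NAME']}"
)

# execute the query and do the operations on the result inside the task
df = pd.read_sql('SELECT * FROM my_table', engine)  # placeholder query
print(df.describe())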
create a queue named services (and subscribe a worker to it)
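(For context, this is roughly how I understand that step: a hedged sketch using the APIClient, though I'm not certain of the exact client calls; the worker itself is then subscribed by running a clearml-agent daemon that listens on that queue.)

from clearml.backend_api.session.client import APIClient

client = APIClient()
# create the "services" queue if it does not already exist
# (assumes queues.get_all accepts a name filter and queues.create takes a name)
if not client.queues.get_all(name='services'):
    client.queues.create(name='services')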
@<1523701070390366208:profile|CostlyOstrich36>
Hey John, let us know if you need any more information.
Well, also, now my runs are showing problems as well:
This is not an error per se, rather an INFO log.
Hey John, I thought this was the end of it, but apparently the dataset was uploaded in the end.
I don't know why the server crashed (it is not self-hosted).
the base image is python:3.9-slim
OK, so I accidentally (probably with luck) noticed the max_connection: 2 in the azure.storage config.
I removed that setting, and now everything works.
So I think Debian (and Python 3.9).
I'll send you the file in private.
(I'm running it on Docker.)
Hey Martin, thanks for the reply.
I'm doing the call in the main function.
Thanks for the help 🙂
OK Martin, so what I am having trouble with now is understanding how to save the model to our Azure blob storage. What I did was to specify:
upload_uri = f'
'
output_model.update_weights(register_uri=model_path, upload_uri=upload_uri, iteration=0)
but it doesn't seem to save the pkl file (which is the model_path) to the storage
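For reference, a minimal sketch of what I understand should upload the local .pkl (project, task and the Azure container/path are placeholders): weights_filename points at the local file to upload and upload_uri at the destination storage, whereas register_uri only registers an already-remote URI without uploading anything.

from clearml import Task, OutputModel

task = Task.init(project_name='my project', task_name='upload model')  # placeholder names
output_model = OutputModel(task=task)

output_model.update_weights(
    weights_filename='my_model.pkl',  # local file to upload (placeholder)
    upload_uri='azure://myaccount.blob.core.windows.net/mycontainer/models',  # placeholder destination
    iteration=0,
)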
No, I just commented it out and it worked fine.
I updated to 1.10.
I am uploading the model inside the main() function, using this code:
model_path = model_name + '.pkl'
# serialize the trained Prophet model to a local pickle file
with open(model_path, "wb") as f:
    pickle.dump(prophet_model, f)
# register the local file as the task's output model weights
output_model.update_weights(weights_filename=model_path, iteration=0)
I'm trying to figure out
I'll play with it a bit and let you know.
@<1523701205467926528:profile|AgitatedDove14>
@<1523701205467926528:profile|AgitatedDove14> Hey Martin, I deleted the task.mark_completed() line,
but I still get the same error.
Could it possibly be something else?
Another question: if I save heavy artifacts, should my services worker's RAM be at least as high? (Or is it enough for the default queue workers to have that?)
OK, yeah, makes sense. Thanks John!
We use the ClearML hosted server, so I don't know the version.