I am getting the following error when I try the above:
` /opt/conda/envs/torch_38/lib/python3.8/site-packages/clearml/backend_api/session/client/client.py in new_func(self, *args, **kwargs)
374 @wrap
375 def new_func(self, *args, **kwargs):
--> 376 return Response(self.session.send(request_cls(*args, **kwargs)))
377
378 new_func.__name__ = new_func.__qualname__ = action
TypeError: __init__() missing 1 required positional argument: 'items' `
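For context, this kind of error can be reproduced with a minimal stand-in (hypothetical classes, not clearml's actual code): a wrapper forwards its arguments straight into a request class, so if the request class's `__init__` requires an `items` argument that the caller never supplies, the call fails with exactly this `TypeError`.

```python
# Minimal stand-in (hypothetical, not clearml's real classes) showing why
# the wrapped call raises: the request class requires a positional `items`.
class GetStatsRequest:
    def __init__(self, items):
        self.items = items


def make_endpoint(request_cls):
    # mirrors the pattern in client.py: forward *args/**kwargs into request_cls
    def new_func(self, *args, **kwargs):
        return request_cls(*args, **kwargs)
    return new_func


class Client:
    get_stats = make_endpoint(GetStatsRequest)


try:
    Client().get_stats()  # no `items` supplied
except TypeError as e:
    # e.g. "... missing 1 required positional argument: 'items'"
    print(e)
```

In other words, the wrapper itself accepts any arguments; the error surfaces only when the underlying request class checks its required parameters.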
I added it to my code both before and after Task.init; neither seemed to change anything
AgitatedDove14 , when I test using the yaml python package:
I see the following:
`import yaml
yaml.dump({'\.': ('a', '\.')})
[Out]: '\\.: !!python/tuple\n- a\n- \\.\n'`
YAML treats strings inside tuples and outside them the same; however, this is not the behavior you get in clearml's task.connect
AgitatedDove14 sounds great, I'm going to give it a go
AgitatedDove14 thanks, at peak usage we have 6-8 GB of free RAM
it is still relevant if you have any ideas
this way I can avoid the heavy computation I describe above for each individual trial
AgitatedDove14 it seems to work in terms of updating the file, which is great! The notebook HTML preview doesn't seem to work though.. I guess you are aware of it, because the displayed text says something like click the link to view the file
thanks AgitatedDove14 , I will be happy to test it, however I didn't understand it fully.
I can see how it works in the single-machine case; however, if I want multiple machines syncing with the optimizer (pulling the sampled hyperparameters and reporting results), I can't see how it would work
is there any chance you could help me with the specific POST call, for debugging? I was trying to implement it using the requests package but I got errors.. didn't work for me.. I believe it's something trivial
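For reference, here is a hedged sketch of building such a POST against a ClearML API server with requests; the host, endpoint name, payload, and key pair below are placeholders to adapt. Preparing the request (rather than sending it) makes it easy to inspect the URL and headers first.

```python
import requests

# Placeholder host, endpoint, payload and credentials -- substitute your own.
api_server = "https://api.clear.ml"
prepared = requests.Request(
    "POST",
    f"{api_server}/tasks.get_all",
    json={"page_size": 10},
    auth=("ACCESS_KEY", "SECRET_KEY"),  # API key pair, sent as HTTP Basic auth
).prepare()

print(prepared.url)
print(prepared.headers["Content-Type"])  # the json kwarg sets application/json
# requests.Session().send(prepared) would actually issue the call
```

A common pitfall is posting the payload with `data=` instead of `json=`, which leaves out the `application/json` Content-Type header and can make the server reject the body.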
sounds great, is it part of the release? or latest repo?
sounds like an ok option, any recommended reference for how to start?
It does, I am familiar with it; I've used it many times
AgitatedDove14, maybe it's worth updating the main Readme.md on GitHub.. if someone tries to follow the instructions there, it breaks
or should it be assigned somewhere
AgitatedDove14 , definitely so, this is very generic and very useful
In many cases the objective is just one of multiple metrics of interest, so for me almost always I would want to combine it with the rest of the scalar metrics
we just found it out ourselves, https://github.com/jupyter/nbconvert/issues/754
if I remove the import and use the path I get this:
`raise MissingConfigException(
hydra.errors.MissingConfigException: Cannot find primary config 'config'. Check that it's in your config search path.`
this is what I get with curl
Hi AgitatedDove14 , that's what I'm doing exactly.. from the error I am thinking it has more to do with the combination of offline with a hierarchical config or maybe just multiple configs registered in the hydra config store? I don't know..
from the docs:
`:param items: List of metric keys and requested statistics`
kind of on the same topic, it would be very useful if some kind of verbosity could be enabled.. some kind of progress bar for get_top_experiments()
client has the following attributes: `['auth', 'events', 'models', 'projects', 'queues', 'session', 'tasks', 'workers']`
I am trying to mimic an agent pulling a task, and while it runs, syncing a custom configuration dict I have according to the task configuration (overriding the defaults)
or actually the local HTML; I believe it should work for a mounted S3
no programmatic python options? could be nice..