Hi @<1603198134261911552:profile|ColossalReindeer77>
Hello! Does anyone know how to do HPO when your parameters are in a Hydra config?
Basically, Hydra parameters are overridden with "Hydra/param"
(this is equivalent to Hydra's "override" option in the CLI)
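For example, a minimal sketch (the base task id, the Hydra parameter path and the ranges below are placeholders) of pointing a HyperParameterOptimizer at a Hydra parameter:
```
from clearml.automation import HyperParameterOptimizer, UniformParameterRange

# "Hydra/<param>" refers to a value inside the Hydra configuration,
# just like overriding it on the command line would
optimizer = HyperParameterOptimizer(
    base_task_id='<base_task_id>',
    hyper_parameters=[
        UniformParameterRange('Hydra/optimizer.lr', min_value=1e-4, max_value=1e-1),
    ],
    objective_metric_title='validation',
    objective_metric_series='loss',
    objective_metric_sign='min',
    execution_queue='default',
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```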
WackyRabbit7 If you have an idea for an interface to shut it down, please feel free to suggest!
JitteryCoyote63 you mean (notice no brackets): task.update_requirements(".")
Either pass a text or a list of lines:
The safest would be '\n'.join(all_req_lines)
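Something like this (the project/task names and the requirements.txt path are just placeholders for illustration):
```
from clearml import Task

task = Task.init(project_name='examples', task_name='custom requirements')
with open('requirements.txt') as f:
    all_req_lines = f.read().splitlines()

# update_requirements() accepts a single text blob or a list of lines;
# joining the lines explicitly is the safest option
task.update_requirements('\n'.join(all_req_lines))
```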
so I end up having to clone the other ones manually in my code
Hi ConvolutedChicken69
Yes, the problem is that there is no standard for multi-repo environments.
The best solution I can come up with is using git-submodules or packaging the auxiliary repo as wheels. wdyt?
Hi @<1576381444509405184:profile|ManiacalLizard2>
If you make sure all server access is via a host name (i.e. instead of IP:port, use host_address:port), you should be able to replace it with cloud host on the same port
How so? Installing a local package should work, what am I missing?
but maybe hyperparam aborts in those cases?
From the hyperparam perspective it will be trying to optimize the global minimum, basically "ignoring" the last value reported. Does that make sense?
WackyRabbit7 this is funny, it is not ClearML providing this offering.
Some generic company grabbed the open source and put it there, which they should not.
Hmm, so what I'm thinking is "extending" the capabilities of the "configuration" section (as it seems this is the right context): allowing you to upload a bunch of files (with the same mechanism as artifacts) as zip files, and in the configuration "editable" section keeping the URL of the zip together with the target folder. wdyt?
and when you remove the "." line does it work?
If we have the time maybe we could PR a fix?!
Hi GloriousPenguin2, sorry this is a bit confusing. Let me expand:
When converting into a plotly object (the default), you cannot really control the dimensions of the plot in the UI programmatically; you can, however, drag the separator and expand width / height.
If you pass the argument report_image=True to report_matplotlib_figure, it will create a static image from the matplotlib figure (as rendered locally) and use that as the figure, this way you get exactly WYSIWYG, but the...
Weird?! I see this in the code:
https://github.com/allegroai/clearml/blob/382d361bfff04cb663d6d695edd7d834abb92787/clearml/automation/controller.py#L2871
first try the current setup using pip, and if it fails, use poetry if poetry.lock exists
I guess the order here is not clear to me (the agent does the opposite): why would you start with pip if you are using poetry?
Hi BattyLion34
No problem asking here.
Check your ~/clearml.conf or ~/trains.conf:
There is a section named api; under it you will find the definition of your trains-server.
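It should look roughly like this (host names and keys are placeholders for your own server):
```
api {
    web_server: http://<your-server>:8080
    api_server: http://<your-server>:8008
    files_server: http://<your-server>:8081
    credentials {
        access_key: "<access_key>"
        secret_key: "<secret_key>"
    }
}
```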
how can I start up the clearml agent using the clearml-agent image instead of SDK?
Not sure I follow, what do you mean instead of the SDK? And what is the "clearml-agent image"?
This is the thread checking the state of the running pods (and updating the Task status, so you have visibility into the state of the pod inside the cluster before it starts running)
5 seconds is the sleep between two consecutive pulls when there are no jobs to process; why would you increase it to a higher pull frequency?
Yes, that makes sense. But did you see the callback being executed? It seems it was supposed to, and then the next call would have been 2:30 hours later. Am I missing something?
Hi DefeatedCrab47
You mean by trains-agent, or accumulated over all experiments?
try these values:
```
import os
from clearml import Task

# set these before calling Task.init()
os.environ.update({
    'CLEARML_VCS_COMMIT_ID': '<commit_id>',
    'CLEARML_VCS_BRANCH': 'origin/master',
    'CLEARML_VCS_DIFF': '',
    'CLEARML_VCS_STATUS': '',
    'CLEARML_VCS_ROOT': '.',
    'CLEARML_VCS_REPO_URL': '<repo_url>',  # your repository URL
})
task = Task.init(...)
```
Could you post what you see under "installed packages" in the UI?
I suppose the same would need to be done for any client PC running clearml, such that you are submitting dataset upload jobs?
Correct
That is, the dataset is perhaps local to my laptop, or on a development VM that is not in the clearml system, but from there I want to submit a copy of a dataset; then I would need to configure the storage section in the same way as well?
Correct
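For example, if the datasets go to S3, that machine's clearml.conf would need something along these lines (the keys and region below are placeholders):
```
sdk {
    aws {
        s3 {
            # credentials used for any s3:// upload / download from this machine
            key: "<aws_access_key>"
            secret: "<aws_secret_key>"
            region: "<region>"
        }
    }
}
```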
No, Task.create is for creating an external Task, not logging your own process.
That said, you can probably override the git repo with env vars.
PompousParrot44 That should be very easy to do, basically a service-mode script that clones a base task and puts it into a queue:
This should more or less do what you need :)
```
import time
from trains import Task

task = Task.init('devops', 'daily train', task_type='controller')
# stop the local execution of this code and put it into the services queue,
# so a remote machine keeps it running
task.execute_remotely('services')
while True:
    a_task = Task.clone(source_task='aaabb111')
    Task.enqueue(a_task, queue_name='default')  # target queue name is just an example
    time.sleep(60 * 60 * 24)  # once a day
```
the error for uploading is weird
wait, are you still getting this error?
when I run it on my laptop...
Then yes, you need to set the default_output_uri
in your laptop's clearml.conf (just like you set it on the k8s glue).
Make sense?
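i.e. something along these lines in the laptop's ~/clearml.conf (the bucket path is a placeholder):
```
sdk {
    development {
        # default location for uploading models / artifacts
        default_output_uri: "s3://<my-bucket>/clearml"
    }
}
```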