Sometimes (in under 10% of cases) we use two loggers with different task_names (in ClearML terms) to record the same metrics, but for different models that implement different logic. In those cases we created two TensorBoard writers, one per task, and wrote to them in parallel.
And I wanted to know if it is possible here as well.
Of course, it now occurs to me that maybe we should write everything in one place but under different names; the thing is, different metrics are used there. I'm not very well versed in ClearML...
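To sketch what I mean by "one place, different names" (this is only a sketch; the project, task, and model names are placeholders, assuming the standard ClearML SDK):

```python
# Sketch only: one ClearML task, metrics from two models kept apart by title.
# Project/task/model names are placeholders, not from a real setup.

def metric_title(model_name: str, metric: str) -> str:
    """Namespace a metric title per model so both models fit in one task."""
    return f"{model_name}/{metric}"

def report_both_models():
    # requires a configured ClearML server; defined but not called here
    from clearml import Task
    task = Task.init(project_name="demo", task_name="combined-metrics")
    logger = task.get_logger()
    for step in range(3):
        logger.report_scalar(title=metric_title("model_a", "loss"),
                             series="train", value=1.0 / (step + 1), iteration=step)
        logger.report_scalar(title=metric_title("model_b", "loss"),
                             series="train", value=0.5 / (step + 1), iteration=step)
```

In the web UI the two models would then show up as separate plots ("model_a/loss", "model_b/loss") inside a single task.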
That's all it takes? 😯 🪄 😀
without assigning a logger variable?
Do you happen to know if there are any plans to implement this with a logger variable, so that it would be possible to write to different tables if needed?
Returning to this question: please tell me when it will be possible to raise this limit. When you run 20-30 experiments to sweep a single parameter, comparing them in batches of 10 is not very convenient.
Would it be possible to add a checkbox in the profile settings that sets the maximum comparison limit?
This feature is becoming more and more relevant.
please tell me, is an approximate date already known for when this feature will ship in a release?)
specify container from UI
the libraries in the Ubuntu repository haven't made it to pip / PyPI yet
please tell me, is it possible to somehow use custom packages that are not publicly available?
for example, by somehow having the agent execute the task inside a specific docker container?)
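A minimal sketch of pinning a task to a specific docker image, assuming the standard ClearML SDK (the image name and extra argument are placeholders):

```python
# Sketch: ask the agent to run this task inside a specific docker image.
# Assumes a ClearML server and an agent started in docker mode.

def docker_cmd(image: str, extra_args: str = "") -> str:
    """Build the docker command string ClearML expects: image plus optional args."""
    return f"{image} {extra_args}".strip()

def pin_task_to_image():
    # requires a configured ClearML server; defined but not called here
    from clearml import Task
    task = Task.init(project_name="demo", task_name="docker-pinned")
    # the agent will launch this task inside the given container
    task.set_base_docker(docker_cmd("python:3.9-slim", "--ipc=host"))
```

On the agent side, container mode is enabled with something like `clearml-agent daemon --queue default --docker`.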
Thank you very much for your help and for such a convenient product!)
I haven't figured out the agents yet, but it already looks amazing!)
great, point 2 sounds like the right thing!)
Does this only work for the completed status?
and does not take into account failed and ABORTED experiments?
this was an experiment that turned out to be useful, but we stopped it because it converged earlier than we expected
we run in containers without a venv, in the main section, and then delete it or reuse it for similar experiments

That sounds very similar to what we need, I'll try it, thanks a lot! Can this be configured in the UI by simply adding a docker image to the launch options?
yeah, thanks, I see)
and how do I set that up in code, using the PL logger?
PS: please tell me, is it possible to make Slack notifications go to a private channel if the bot has been added there? (error below)
Now messages go only to public channels
ValueError: Error: Could not locate channel name 'gg_clearml'
Something interesting and possibly the same)
Please tell me, is there an example of what these notifications look like?
Maybe in the near future. As an idea for new users: it would be convenient to have some kind of visual example so they understand what it will look like.
yes, that looks like it, thanks!
I'll try adding these and see how it helps
did not help(
yes, those items exist. Writing to a normal public channel works, but writing to a private channel that the bot has been added to does not.
I don't think so. Next week I'll try changing the example code; maybe it will work with the channel ID
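A sketch of what I mean by using the channel ID instead of the name (assuming the standard slack_sdk client; the token and channel values are placeholders):

```python
import re

def looks_like_channel_id(value: str) -> bool:
    """Heuristic: Slack channel IDs look like 'C0123ABCDEF', names like 'gg_clearml'."""
    return bool(re.fullmatch(r"[CG][A-Z0-9]{8,}", value))

def notify(channel: str, text: str) -> None:
    # requires a real bot token; defined but not called here
    from slack_sdk import WebClient
    client = WebClient(token="xoxb-placeholder")
    # with an ID the bot can post to a private channel it is a member of,
    # since it doesn't need to resolve the name via the public channel list
    client.chat_postMessage(channel=channel, text=text)
```

That would also explain the "Could not locate channel name 'gg_clearml'" error: a name lookup only sees public channels, while posting by ID skips the lookup entirely.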
yes, I think so too
I tried to update the main libraries to newer ones - it did not help
now I'll try running with an older version of the code, maybe the issue is there
This only happens when I try to log images.
When I disable image logging, this error does not occur.
Perhaps someone has already encountered this and knows how to solve it?
2022-03-29 15:15:52,031 - clearml.metrics - WARNING - Failed uploading to http://10.151.32.18:8091
(HTTPConnectionPool(host='10.151.32.18', port=8091): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd4ec8f2310>: Failed to establish a new connection: [Errno 111] Connection refused')))
2022-03-29 15:15:52,034 - clearml.metrics - ERROR - Not uploading 1/4 events because the data upload failed
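For context, port 8091 is ClearML's default fileserver port: debug images are uploaded there, while scalars go through the API server, which would explain why only image logging trips the error. The relevant clearml.conf section, sketched with the host from the log above and the default ports (these may differ in your deployment):

```
api {
    web_server: "http://10.151.32.18:8080"
    api_server: "http://10.151.32.18:8008"
    files_server: "http://10.151.32.18:8091"
}
```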