I'm getting this warning:
`/usr/local/lib/python3.7/dist-packages/urllib3/util/retry.py:86: DeprecationWarning: Using 'Retry.BACKOFF_MAX' is deprecated and will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead`
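If the warning comes from a third-party call you can't change, one stopgap (my sketch, not from this thread) is to filter out just that message with the standard `warnings` module:

```python
import warnings

# Suppress only the urllib3 Retry.BACKOFF_MAX deprecation message,
# leaving all other DeprecationWarnings visible.
warnings.filterwarnings(
    "ignore",
    message=r".*Retry\.BACKOFF_MAX.*",
    category=DeprecationWarning,
)
```

The real fix is upgrading whatever library still passes `Retry.BACKOFF_MAX`, since the attribute is slated for removal in urllib3 v2.0.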
I tried updating the main libraries to newer versions; it did not help.
now I'll try running with an older version of the code from this thread
```
# Python 3.7.5 (default, Dec 9 2021, 17:04:37) [GCC 8.4.0]
clearml == 1.3.2
numpy == 1.21.5
```
I found the problem: I had port 8091 specified, but the file server was brought up on 8081.
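For reference, the file server address lives in `clearml.conf`, and the port there has to match the one the fileserver is actually listening on (8081 by default in a standard deployment). A sketch of the relevant fragment, assuming a local setup:

```
api {
    # must match the port the ClearML fileserver actually listens on
    files_server: "http://localhost:8081"
}
```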
yes, that looks like it, thanks!
I'll try adding these and see how it helps
For simplicity, you could add a flag for this and render only the graphs that are currently in view, if that is easier in terms of implementation or CPU cost.
I didn't understand how exactly :(
In any case, after looking through many examples, I found how it is implemented:
`'status_changed': ['>{}'.format(datetime.utcfromtimestamp(previous_timestamp)), ],`
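In context, that line builds one entry of a task filter. A self-contained sketch (the helper name and usage are my assumption; only the filter format comes from the line above):

```python
from datetime import datetime

def status_changed_filter(previous_timestamp):
    """Build a filter entry matching tasks whose status changed
    after the given Unix timestamp (format taken from the snippet above)."""
    return {
        'status_changed': ['>{}'.format(datetime.utcfromtimestamp(previous_timestamp))],
    }

print(status_changed_filter(0))
# {'status_changed': ['>1970-01-01 00:00:00']}
```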
Please tell me, is it possible to somehow use custom packages that are not publicly available?
For example, could I somehow run an agent task inside a specific Docker container? :)
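For what it's worth, a common way to do this (my sketch, not confirmed in this thread) is to give the agent a default Docker image in `clearml.conf`:

```
agent {
    default_docker {
        # image the agent uses when a task does not specify its own
        image: "nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu20.04"
    }
}
```

Alternatively, the agent daemon can be started in docker mode with its `--docker` option, so each task runs inside a container.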
Is that the thing that was called a bucket?
great, point 2 sounds like the right thing!)
I'm interested in more details)))
please tell me how I can help you?)
ah, I get it: I use (PyTorch) Lightning, and that's where it all comes from.
is the bucket name all ok?
- specify container from UI
- libraries in the Ubuntu repository have not yet reached their pip / PyPI repository
It's still pretty raw for now. As an idea for new users: it would be convenient to have some kind of visual example, so they can understand how it will look.
cool!
Sometimes (<10%) we use two loggers with different task_names (in ClearML terms) to display the same metrics, but for different models that implement different logic. In such cases we created two TensorBoard writers per task and wrote to them in parallel.
And I wanted to know if that is possible here as well.
Of course, now I realize that maybe we should write everything in one place but under different names; different metrics are used there, though. I'm not very well versed in ClearML...
Thank you very much for your help and for such a convenient product!)
I haven't figured out the agents yet, but it already looks amazing!)
I found the solution: I had not specified the address of the service :(
here is a working code example:
```python
bucket = "s3-infra.loc/"
path = "s3-artifacts-test/proj_path"
task = Task.init(
    project_name=project_name,
    task_name=task_name,
    reuse_last_task_id=False,
    output_uri=f"s3://{bucket}{path}",
)
```
This is an experiment that was useful, but we stopped it because convergence happened earlier than we expected.
And why might the displayed time be zero?
even though the experiments had been running for several days
yeah, thanks, I see)
and how do I write that in the code, using the PL logger?
UPD: this does not seem to work:
```python
task = Task.init(
    project_name=project_name,
    task_name=task_name,
    reuse_last_task_id=False,
    output_uri='',
)
```
We run in containers without a venv, in the main section, and then delete it or reuse it for similar experiments.
That sounds very similar, I'll try to use it, thanks a lot! Can this be configured in the UI by simply adding a docker image to the launch options?
Perhaps this will help
yes, I think so too
If I change the port from 8091 to 8090, then the page opens (pic. 1).