did not help :(
Looks good, hopefully in the near future. As an idea for new users: it would be convenient to have some kind of visual example, so they understand what it will look like.
Sorry for taking up your attention, and thanks for your time!
all ok!
I don't think so. Next week I'll try changing the example code; maybe it will work with the channel ID.
I tried updating the main libraries to newer versions, but it did not help.
Now I'll try running it with an older version of the code; maybe that will help.
@<1523701087100473344:profile|SuccessfulKoala55>
I'm talking about something like Optuna.
Wow, that's interesting, please let me know. Are there screenshots or a demo video somewhere showing how the search parameters are set up?
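Not from this thread, but to illustrate what "search parameters" could look like in code, here is a minimal, library-free random-search sketch; the search space, names, and objective are all hypothetical:

```python
import random

# hypothetical search space: each hyperparameter with its candidate values
SEARCH_SPACE = {
    "lr": [1e-4, 1e-3, 1e-2],
    "batch_size": [16, 32, 64],
}

def sample_config(rng):
    # one trial = one randomly sampled combination from the space
    return {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}

def dummy_objective(cfg):
    # stand-in for a real validation metric (lower is better here)
    return cfg["lr"] * cfg["batch_size"]

rng = random.Random(0)
trials = [sample_config(rng) for _ in range(10)]
best = min(trials, key=dummy_objective)
```

Tools like Optuna replace the random sampling above with smarter strategies (TPE sampling, pruning), but the shape of the loop is the same: define a space, sample trials, score them, keep the best.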
I use the local free version of ClearML.
CostlyOstrich36
We have a render server and a file server on one machine; unfortunately, I'm not yet familiar enough with ClearML to set it all up separately.
Something interesting, and possibly the same thing :)
Please tell me, is there an example of what these notifications look like?
Does this only work for the completed status?
Does it not take failed and aborted experiments into account?
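For context, a tiny sketch of the status filtering being asked about; the statuses and function here are hypothetical, not ClearML's actual notification API:

```python
# hypothetical terminal states a monitor might want to alert on
NOTIFY_ON = {"completed", "failed", "aborted"}

def should_notify(status: str) -> bool:
    # alert on any terminal state, not only on "completed"
    return status.lower() in NOTIFY_ON
```

With a filter like this, `should_notify("FAILED")` and `should_notify("aborted")` both return True, while in-progress states like "running" do not trigger an alert.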
For reference
I have now set up ClearML locally on a nearby machine, and logging is configured as in the tutorial: all metrics come through.
On the machine where I run Docker, I just copied the clearml.conf file. Maybe something else needs to be done (copy the /opt/clearml folder into the Docker image?)?
When the old server is up, all the images on the new server also open from the old server, if you click Open on the link address.
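For what it's worth, a training container usually only needs a clearml.conf that points at the server; the server's /opt/clearml data folder is server-side state and does not need to go into the image. A minimal sketch of the relevant section, assuming the default ports of a local ClearML server (replace <server-host> with the actual address):

```
# clearml.conf on the machine / in the container that runs the training code
api {
    web_server: http://<server-host>:8080
    api_server: http://<server-host>:8008
    files_server: http://<server-host>:8081
}
```

If the server's address or ports differ from the defaults, the URLs above must match whatever the server's own configuration uses.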
docker run --rm -v /srv:/root/srv -v /srv/data/apatshin_docker/airflow/dags/voyager:/voyager voyager:cpu python3.7 /voyager/src/pipeline/train_task_dqn_demo.py
Ah, I get it. I use (PyTorch) Lightning, and that's where it all comes from.
CostlyOstrich36
usability of the pytorch_lightning logger
We log the average reward of each action of the RL agent.
If the agent did not take a given action in the current episode, its average reward will be nan, not 0, for obvious reasons. And we would like it to be visualized the same way as in TensorBoard, for better informativeness.
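A minimal, library-free sketch of the averaging described above (the function and data are hypothetical): an action with no samples in an episode yields NaN, which plotting tools like TensorBoard render as a gap rather than a misleading drop to 0.

```python
import math

def per_action_average(rewards_by_action):
    # actions not taken this episode have no samples; their average is
    # NaN (a gap in the curve), not 0 (which would look like a bad reward)
    return {
        action: (sum(r) / len(r)) if r else math.nan
        for action, r in rewards_by_action.items()
    }

episode = {"left": [1.0, 0.5], "right": []}  # "right" was never taken
avgs = per_action_average(episode)
# avgs["left"] == 0.75, avgs["right"] is NaN
```

The request in the thread is essentially that the charting side skip NaN points (leaving a gap) instead of failing or coercing them to a value.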
yes, I think so too
We just started logging metrics in ClearML.
For simplicity, you could add a flag for this and render it only for the graphs currently in view, if that is easier in terms of implementation or CPU cost.
UPD: Something like this doesn't work:
task = Task.init(
project_name=project_name,
task_name=task_name,
reuse_last_task_id=False,
output_uri='
',
)
and it raises an error:
/usr/local/lib/python3.7/dist-packages/urllib3/util/retry.py:86: DeprecationWarning: Using 'Retry.BACKOFF_MAX' is deprecated and will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead DeprecationWarning,
TimelyPenguin76
As far as I remember, the latest one is 1.1.4.
What exactly do you mean by the task logger?
yeah, thanks, I see)
And how do I write that in code, using the PL logger?
File Store Host configured to: http://localhost:8090
If I set File Store Host configured to: then
```
(base) user@s130:~$ python3.7
Python 3.7.9 (default, Aug 31 2020, 12:42:55)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from clearml import Task
>>> task = Task.init(project_name="my project", task_name="my task")
ClearML Task: overwriting (reusing) task id=bf47e430826d43998c0f54c73addc12b
2021-11-03 19:15:13,491 - clear...
```
Great, point 2 sounds like the right thing! :)

