GorgeousSeagull44
Cool!
Please tell me how to run it, I have never run JS before.
Yes, thank you! Is there an example of what it will look like in Slack?
AgitatedDove14
cool
in theory, is a smooth rollout possible on 1.17.1-2?
I replaced the exposed ports with ??9? so there would be no conflicts. If you set them in the new form, everything is OK.
ClearML does not log images when using TensorBoardLogger in Lightning
AgitatedDove14
Returning to this question, please tell me when it will be possible to expand this limit. When you run 20-30 runs to sweep a single parameter, it is inconvenient to compare them in batches of 10 experiments.
Would it be possible to add a checkbox in the profile settings that sets the maximum limit for comparison?
This feature is becoming more and more relevant.
AgitatedDove14
I want to be able to compare scalars across more than 10 experiments; beyond that there is no strong need yet
AgitatedDove14 ^
me too
AgitatedDove14
please tell me, is an approximate date already known for when this feature will ship in a release?
AgitatedDove14 SweetBadger76
PS: please tell me, is it possible to make Slack notifications go to a private channel if the bot has been added there? (Error below.)
Currently messages only go to public channels:
ValueError: Error: Could not locate channel name 'gg_clearml'
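The error above is consistent with a name-based channel lookup that only sees public channels: if the listing omits private channels, the name is never found even when the bot is a member. A minimal sketch of that failure mode (hypothetical helper and sample data, not ClearML's actual implementation; for the Slack API itself, `conversations.list` defaults to public channels unless `types` includes `private_channel`):

```python
# Sketch of a name-based channel lookup, illustrating why a private
# channel can be missed: if the channel listing only contains public
# channels, the lookup fails even though the bot is in the channel.
# (Hypothetical helper -- not ClearML's actual code.)

def find_channel_id(channels, name):
    """Return the id of the channel with the given name, else raise."""
    for ch in channels:
        if ch["name"] == name:
            return ch["id"]
    raise ValueError(f"Error: Could not locate channel name '{name}'")

# A listing that only includes public channels (as with a default
# conversations.list call):
public_only = [{"name": "general", "id": "C001"}]

# The same listing when private channels are requested as well
# (e.g. types="public_channel,private_channel"):
with_private = public_only + [{"name": "gg_clearml", "id": "G002"}]

find_channel_id(with_private, "gg_clearml")  # -> "G002"
# find_channel_id(public_only, "gg_clearml") raises the ValueError above
```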
` # Python 3.7.5 (default, Dec 9 2021, 17:04:37) [GCC 8.4.0]
clearml == 1.3.2
numpy == 1.21.5 `
task.get_logger()? No, I don't use that get_logger.
i run:
` # hydra/omegaconf-based entry point
task = Task.init(
    project_name=f"RL_experiments/{cfg.train.env_train.target.split('.')[-1]}/{'/'.join(cfg.train.trainmodule.target.split('.')[-2:])}",
    task_name="demo",
    reuse_last_task_id=False)
# log the resolved config as task parameters
task.connect(dict(OmegaConf.to_container(cfg, resolve=True)))
logger = get_logger("train_ql", log_level=cfg.base.log_level)
logger.info(f"cfg:\n{OmegaConf.to_yaml(cfg)}")
tmp_values = train_dqn_task(cfg.train, cfg.base)
task.mark_completed() `
` (base) user@s130:~$ clearml-init
ClearML SDK setup process
Please create new clearml credentials through the profile page in your clearml-server web app (e.g. )
Or create a free account at
In the profile page, press "Create new credentials", then press "Copy to clipboard".
Paste copied configuration here:
api {
web_server: ` `
api_server: ` `
credentials {
"access_key" = "6VUTS73D48DMPVI1NMPS"
"secret_key" = "4fGtZVirW0ztSLbm6JPLESM...
AgitatedDove14 yes, that's right, it changed
AgitatedDove14
If I had to choose between logging and not logging, I would choose logging.
If choosing between logging it as 0 or as NaN, I would choose NaN.
If choosing between skipping it or logging it as NaN, it's harder to say; logging seems better than skipping, but it needs thought.
For the most part we are used to TensorBoard, where NaN is logged in a special way, and that behavior feels natural.
` class LitMNIST(LightningModule):
    ...
    self.log('test/test_nan', np.nan, prog_bar=False, logger=True, on_step=True, on_epoch=False)
    ... `
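The preference described above (prefer NaN over 0 or over dropping the point) can be sketched as a small filter over a scalar series. This is an illustration only, with a hypothetical `scalars_to_report` helper and policy names; it is not ClearML's or Lightning's actual logging code:

```python
import math

def scalars_to_report(values, policy="log_nan"):
    """Sketch of three possible NaN policies for scalar logging.
    'skip'    -> drop NaN points entirely
    'as_zero' -> replace NaN with 0.0
    'log_nan' -> pass NaN through (TensorBoard-style behavior)
    (Hypothetical helper for illustration.)
    """
    out = []
    for v in values:
        if math.isnan(v):
            if policy == "skip":
                continue
            if policy == "as_zero":
                v = 0.0
        out.append(v)
    return out

series = [0.5, float("nan"), 0.7]
scalars_to_report(series, "skip")     # -> [0.5, 0.7]
scalars_to_report(series, "as_zero")  # -> [0.5, 0.0, 0.7]
# "log_nan" keeps the NaN point, as TensorBoard does
```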
I found the problem: I had port 8091 specified, while the file server was brought up on 8081.
In clearml-init I set the file system port to 8090,
but the compose file has it set to 8091.
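The mismatch described above comes down to two places that must agree: the host port published for the fileserver in docker-compose.yml and the File Store host entered during clearml-init (the `files_server` entry in clearml.conf). A sketch of the alignment, using 8081 (the default ClearML fileserver port) as an illustrative value:

```yaml
# docker-compose.yml (excerpt, assumed layout) -- the host port
# published here must match the File Store host entered during
# clearml-init, e.g. files_server: http://localhost:8081
services:
  fileserver:
    ports:
      - "8081:8081"   # host:container
```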
` (base) user@s130:~$ clearml-init
ClearML SDK setup process
Please create new clearml credentials through the profile page in your clearml-server web app (e.g. )
Or create a free account at
In the profile page, press "Create new credentials", then press "Copy to clipboard".
Paste copied configuration here:
api {
web_server: ` `
api_server: ` `
credentials {
"access_key...
docker-compose -f /opt/clearml/docker-compose.yml down
docker-compose -f /opt/clearml/docker-compose.yml pull
docker-compose -f /opt/clearml/docker-compose.yml up -d
- File Store Host configured to: http://localhost:8091
` (base) user@s130:~$ clearml-init
ClearML SDK setup process
Please create new clearml credentials through the profile page in your clearml-server web app (e.g. )
Or create a free account at
In the profile page, press "Create new credentials", then press "Copy to clipboard".
Paste copied configuration here:
api {
web_server: ` `
api_server: ` `
credentials {
"access_key" = "OV676692R7V...
Yes, those settings exist. Writing to a normal public channel works, but it is not possible to write to a private channel that the bot has been added to.
specify container from UI
the libraries in the Ubuntu repository have not yet caught up with their pip / PyPI versions
CostlyOstrich36
Will wait!
It's not nice that this logging is misleading.
