
SuccessfulKoala55 Ok. Here is my code. Thanks.
```python
import matplotlib.pyplot as plt
import seaborn as sns

def plot_feature_scatter(df1, df2, features):
    # Scatter-plot each feature of df1 against the same feature of df2 on a 5x4 grid
    sns.set_style("whitegrid")
    plt.subplots(5, 4, figsize=(20, 20))
    for i, feature in enumerate(features, start=1):
        plt.subplot(5, 4, i)
        plt.scatter(df1[feature], df2[feature], marker="+", color='#2B3A67', alpha=0.2)
        plt.xlabel(feature, fontsize=9)
    plt.show()

plot_feature_scatter(train_df.sample(50000), test_df.samp...
```
CostlyOstrich36 It's because the ports are used by other services.
CostlyOstrich36 Thanks.
I'm not sure how I can do it if I don't use the ClearML agent. In https://clear.ml/docs/latest/docs/references/sdk/scheduler/#class-automationtaskscheduler , I can't find how to stop it programmatically. Could I stop it from the UI if I don't use the ClearML agent? I see. In my understanding, the log would show all the messages, but that's not so clear to me, especially if I have tens or hundreds of scheduled tasks; it's not convenient for me to check them one by one.
agent.enable_task_env = false
agent.hide_docker_command_env_vars.enabled = true
agent.docker_internal_mounts.sdk_cache = /clearml_agent_cache
agent.docker_internal_mounts.apt_cache = /var/cache/apt/archives
agent.docker_internal_mounts.ssh_folder = /root/.ssh
agent.docker_internal_mounts.pip_cache = /root/.cache/pip
agent.docker_internal_mounts.poetry_cache = /root/.cache/pypoetry
agent.docker_internal_mounts.vcs_cache = /root/.clearml/vcs-cache
agent.docker_internal_mounts.venv_build = /root...
CostlyOstrich36 Thanks.
I installed ClearML Agent to run it. However, I encountered another issue.
It shows the following error message:
clearml_agent: ERROR: [Errno 2] No such file or directory: '/root/.clearml/venvs-builds/3.8/task_repository/PyTorch.git/ctbc/image_classification_CIFAR10.py'
I had executed https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/image/image_classification_CIFAR10.ipynb before executing https://github.com/allegroai/clearml/blo...
SuccessfulKoala55 I don't understand what you mean.
TimelyMouse69
Yeah, there is no further explanation about the closed status, so I'm wondering when a task can become closed. As for my second question, my intention is that I shouldn't need to update the original task or create a new task for another training run. I expected that I could run another training after task.close() without encountering any issues, but I was wrong.
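For context, a minimal sketch of the pattern being discussed, assuming two separate training runs (the project and task names are hypothetical):
```python
from clearml import Task

# First training run
task = Task.init(project_name="demo", task_name="run-1")  # hypothetical names
# ... training code ...
task.close()  # stops logging for this task

# A fresh task for the next training run, instead of reusing the closed one
task2 = Task.init(project_name="demo", task_name="run-2")
# ... training code ...
task2.close()
```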
TimelyMouse69 Hello, could you help check my above questions? Thanks.
CostlyOstrich36 Ok. Thanks.
SweetBadger76 Thanks! Waiting for your example.
AbruptCow41 Hi, thanks.
I followed this tutorial - https://clear.ml/docs/latest/docs/guides/frameworks/pytorch/notebooks/image/hyperparameter_search/ - but I didn't see it tell me to add any repository.
Also, what I executed as the base experiment is https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/image/image_classification_CIFAR10.ipynb , not image_classification_CIFAR10.py. Does the https://clear.ml/docs/latest/docs/references/sdk/hpo_optimization_hyp...
TimelyMouse69 Thanks.
About question #2,
I don't want to reuse a task. I want to temporarily pause or permanently stop this ClearML task so that ClearML won't record my subsequent experiment (training job).
Hi SuccessfulKoala55 Do you mean I should give you the code that plots the image (but not the data), the image itself, or the experiment where I encountered this performance issue?
SuccessfulKoala55 The image is something like this.
I have two questions:
Is it possible to manually tell ClearML not to record this plot? Doesn't ClearML have a performance issue with this kind of plot?
SuccessfulKoala55 No, I don't know how to turn the auto-logging off and on. Could you tell me? Thanks.
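For reference, one way I know of to turn off matplotlib auto-logging is the auto_connect_frameworks argument of Task.init; a minimal sketch (project and task names are hypothetical):
```python
from clearml import Task

# Disable automatic capture of matplotlib figures for this task only;
# other framework integrations keep their default behaviour.
task = Task.init(
    project_name="demo",                # hypothetical
    task_name="no-matplotlib-logging",  # hypothetical
    auto_connect_frameworks={"matplotlib": False},
)
```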
TimelyMouse69 Hello, could you help check the above messages? Thanks.
Hi SweetBadger76
For example, I want to compare accuracy (the metric I'm interested in) among different experiments. This metric isn't automatically recorded by ClearML, so I want to record it manually.
I've found a workaround to achieve it (as mentioned in the original message), but I'm still wondering if there is any suggestion other than using logger.report_scalar?
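For reference, the logger.report_scalar workaround mentioned above would look roughly like this (a sketch; the project/task names, metric values, and iteration numbers are placeholders):
```python
from clearml import Task

task = Task.init(project_name="demo", task_name="manual-accuracy")  # hypothetical names
logger = task.get_logger()

# Report accuracy once per epoch so it appears under Results -> Scalars
for epoch, acc in enumerate([0.71, 0.78, 0.83]):  # placeholder values
    logger.report_scalar(title="accuracy", series="validation", value=acc, iteration=epoch)

task.close()
```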
I see! Thanks AnxiousSeal95 and SweetBadger76!
I have another related question about the items in + metric: I can only select the items in Results -> Scalars.
TimelyMouse69
Ok, it's strange. After executing mark_completed(), the Jupyter kernel dies. You can see the following image: the three cells (3~5) run at once, then the kernel dies. I use task.close(), but the status is still completed, not closed.
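For reference, a minimal sketch of checking the status after closing (assuming a task created with Task.init; get_status() returns the task's current status string, and the names are hypothetical):
```python
from clearml import Task

task = Task.init(project_name="demo", task_name="status-check")  # hypothetical names
task_id = task.id
# ... training code ...
task.close()

# Re-fetch the task and inspect its status (e.g. "completed" vs "closed")
print(Task.get_task(task_id=task_id).get_status())
```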
CostlyOstrich36 Thanks! I'll try it.
TimelyMouse69 About the closed status, I'll wait for your response. Thanks!!
SweetBadger76 Thanks.