Hi HandsomeGiraffe70
There is a way, through the API. You can use it this way:
1. retrieve the task the model belongs to
2. retrieve the model you want (from a list of input and output models)
3. create the metadata
4. inject it into the model
Here is an example :
` from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import models
from clearml.backend_api.services.v2_13.models import MetadataItem
task = Task.get_task(project_name=project_name, task_name=...
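Here is a rough, more complete sketch of that flow - the project/task names and metadata values are placeholders, and the add_or_update_metadata call is my assumption based on the v2.13 API:
` # minimal sketch, assuming the models.add_or_update_metadata endpoint (v2.13+)
from clearml import Task
from clearml.backend_api.session.client import APIClient
from clearml.backend_api.services.v2_13.models import MetadataItem

task = Task.get_task(project_name="my_project", task_name="my_task")  # placeholders
model = task.models["output"][-1]  # or task.models["input"][...] for an input model

metadata = [MetadataItem(key="framework", type="str", value="pytorch")]  # example metadata
APIClient().models.add_or_update_metadata(model=model.id, metadata=metadata) `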
Hey UnevenDolphin73
Is there any particular reason not to create the dataset ? I mean, you need to use it in different tasks, so it could make sense to create it, so it exists on its own, and then to use it at will in any task by simply retrieving its id (using Dataset.get)
Makes sense ?
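Something like this (the project/dataset names and paths are just placeholders):
` from clearml import Dataset

# create the dataset once, so it exists on its own
ds = Dataset.create(dataset_project="my_project", dataset_name="my_dataset")
ds.add_files("data/")
ds.upload()
ds.finalize()

# later, in any task, simply retrieve it by id
ds = Dataset.get(dataset_id=ds.id)
local_path = ds.get_local_copy() `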
Hi CrabbyKoala94
I am working on your issue, I will update you asap. Thanks
Hi,
We are going to try to reproduce this issue and will update you asap
it is for the sake of the example. It allows firing the agents in the background, and thus having several agents fired from the same terminal
hey OutrageousSheep60
what about the process ? there must be one clearml-agent process that runs somewhere, and that is why it can continue reporting to the server
Hi MotionlessCoral18
Have these threads been useful to solve your issue ? Do you still need some support ? 🙂
You need to use the API for exporting experiments to csv/excel. I am preparing an example for you
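In the meantime, here is a rough sketch of the idea - the project name is a placeholder, and flattening the last scalar values into columns is just one possible layout:
` import pandas as pd
from clearml import Task

# grab all the tasks of a project (project name is a placeholder)
tasks = Task.get_tasks(project_name="my_project")

rows = []
for t in tasks:
    row = {"id": t.id, "name": t.name, "status": t.get_status()}
    # flatten the last reported scalar values into columns
    for title, series in t.get_last_scalar_metrics().items():
        for series_name, values in series.items():
            row[f"{title}/{series_name}"] = values.get("last")
    rows.append(row)

pd.DataFrame(rows).to_csv("experiments.csv", index=False) `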
you can try something like this - which reproduces the gui behavior
` import math
from datetime import datetime

from clearml.backend_api.session.client import APIClient

client = APIClient()
q = client.queues.get_all(name='queue-2')[0]
n = math.floor(datetime.now().timestamp())
res = client.queues.get_queue_metrics(from_date=n - 1, to_date=n, interval=1, queue_ids=[q.id]) `
Be careful though of null values in the results. It happens when there are fewer values in the res than intervals between start...
Hey
There is a cache limit that you can change by modifying the conf file.
You simply add this to clearml.conf :
storage {
cache {
default_cache_manager_size: 100
}
}
(100 is the default value)
Depending on what you need to achieve, there are more advanced cache management tools.
report_scalar permits manually reporting a scalar series. This is the dedicated function. There could be other ways to report a scalar, for example through TensorBoard - in that case you would report to TensorBoard, and ClearML will automatically capture the values
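For example, a minimal manual report looks like this (project/task names and values are placeholders):
` from clearml import Task

task = Task.init(project_name="examples", task_name="scalar reporting")
logger = task.get_logger()

for i in range(10):
    # title is the graph, series is the curve inside it
    logger.report_scalar(title="loss", series="train", value=1.0 / (i + 1), iteration=i) `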
as long as you don't specify any URI when you init a task (with the output_uri parameter of Task.init), ClearML will use the config file value registered in sdk.development.default_output_uri
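For instance (the bucket below is just a placeholder):
` from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="output uri demo",
    output_uri="s3://my-bucket/models",  # placeholder; omit it to fall back to sdk.development.default_output_uri
) `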
hi SteepDeer88
did you manage to get rid of your issue ?
can you tell me what your clearml and clearml server versions are please ?
You can initiate your task as usual. When a dataset is used in it - for example, it could start by retrieving it using Dataset.get - then the dataset will be registered in the Info section (check it in the UI) 😊
hey
You have 2 options to retrieve a dataset : by its id, or by its project AND name - those two work together, you need to pass both of them !
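Something like this (the id and names are placeholders):
` from clearml import Dataset

# option 1: by id
ds = Dataset.get(dataset_id="<dataset_id>")

# option 2: by project AND name, together
ds = Dataset.get(dataset_project="my_project", dataset_name="my_dataset") `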
yes it is 🙂 do you manage to upgrade ?
We also brought a lot of new dataset features in version 1.6.2 !
Hi Igor
we are working on your issue and will update you asap
Hi NonsensicalWoodpecker96
you can use the SDK 🙂
task = Task.init(project_name=project_name, task_name=task_name)
task.set_comment('Hi there')
for the scalars :
` import pandas as pd
import plotly.graph_objects as go
import plotly.io as pio
from clearml.backend_api.session.client import APIClient

client = APIClient()
# 'task' is the Task object you already retrieved
scalars = client.events.scalar_metrics_iter_histogram(task=task.id).to_dict()['metrics']
for graph in scalars.keys():
    for i, metric in enumerate(scalars[graph].keys()):
        df = pd.DataFrame(scalars[graph][metric]).iloc[:, 1:]
        fig = go.Scatter(
            scalars[graph][metric],
            mode='lines',
            name=metric,
            showlegend=True
        )
        pio.write_image(fig=go.Fi...
hey WhoppingMole85 good morning !
try to pip it: pip install clearml -U
and then check with: pip show clearml
hi GentleSwallow91
Concerning the warning message, there is an entry in the FAQ. Here is the link :
https://clear.ml/docs/latest/docs/faq/#resource_monitoring
We are working on reproducing your issue
hi NervousFrog58
Can you share some more details with us please ?
Do you mean that when you have an experiment failing, you would like to have a snippet that resets and relaunches it, the way you do through the UI ?
Your ClearML packages version, and your logs, would be very useful too 🙂
hey WickedElephant66 TenderCoyote78
I'm working on a solution, just hold on, I'll update you asap
yep, I am working on it - I have something that I suspect is not working as expected. Nothing sure though
for the step that reports the model :
` from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['res'],
                             parents=['step_one'],
                             cache=False,
                             monitor_models=['mymodel'])
def step_two():
    import torch
    from clearml import Task
    import torch.nn as nn

    class nn_model(nn.Module):
        def __init__(self):
            ...
yes, it could be worth it, I will submit it, thanks. This is the same for Task.get_task() : either id, or project_name/task_name
🙂
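i.e. something like this (the id and names are placeholders):
` from clearml import Task

# either by id ...
task = Task.get_task(task_id="<task_id>")

# ... or by project AND name, together
task = Task.get_task(project_name="my_project", task_name="my_task") `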