I managed to import a custom package the same way you did: I added the current dir path to my system path.
I have a 2-step pipeline:
1) Run a function from a custom package. This function returns a Dataloader (built from torchvision.MNIST).
2) This step receives the dataloader built in the first step as a parameter; it shows random samples from it.
There has been no error returning the dataloader at the end of step 1 and importing it at step 2. Here is my code:
` from clearml import Pi...
hey WickedElephant66 TenderCoyote78
I'm working on a solution, just hold on, I'll update you asap
TenderCoyote78
The status should normally be updated automatically. Do all the steps finish successfully? And the pipeline as well?
No, it is supposed to have its status updated automatically. We may have a bug. Can you share some example code with me, so that I can try to figure out what is happening here?
In the meantime, it is also possible to create a figure that contains two or more histograms, and then report it to the logger using report_plotly.
You can have a look there :
https://plotly.com/python/histograms/#overlaid-histogram
https://plotly.com/python/histograms/#stacked-histograms
` import numpy as np
import plotly.graph_objects as go

log = task.get_logger()  # task comes from Task.init / Task.get_task
x0 = np.random.randn(1500)
x1 = np.random.randn(1500) - 1
fig = go.Figure()
fig.add_trace(go.Histogram(y=x0))
fig.add_trace(go.Histogram(y=x1))
fig.update_layout(barmode='overlay')
log.report_plotly(title="histograms", series="overlaid", iteration=0, figure=fig) `
SubstantialElk6
Can you provide us a screenshot with a better resolution, so we can check the ratio between total workers and active workers?
hey SmugSnake6
Can you give us some more details on your configuration please? (clearml, agent, server versions)
Also, if you have some example code to share it could help us reproduce the issue and thus help you a lot faster 🙂 (script, command line for firing your agent)
Hi HandsomeGiraffe70
There is a way: the API. You can use it this way:
1) retrieve the task the model belongs to
2) retrieve the model you want (from a list of input and output models)
3) create the metadata
4) inject it into the model
Here is an example :
` from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import models
from clearml.backend_api.services.v2_13.models import MetadataItem
task = Task.get_task(project_name=project_name, task_name=...
Hi PanickyMoth78
There is indeed a versioning mechanism available for the open source version 🎉
The datasets keep track of their "genealogy" so you can easily access the version that you need through its ID
In order to create a child dataset, you simply have to use the parameter "parent_datasets" when you create your dataset : have a look at
https://clear.ml/docs/latest/docs/clearml_data/clearml_data_sdk#datasetcreate
You can also alternatively squash datasets together to create a c...
You also have to set your agent to use your partner's credentials.
Simply do:
clearml-agent init
and paste your partner's credentials
Hey Yossi
Do you want to erase it from the UI?
You first have to erase the dataset/project content: select it and archive it. The archive is almost a recycle bin! Then you can easily erase the empty dataset/project.
Hi,
It would be great if you could also send your clearml package version 🙂
Hey
I'll play a bit with what you sent, because reproducing the issues help a lot to solve them. I keep you updated 😊
Hi SmugSnake6
I might have found you a solution 🎉 I answered on the GH thread https://github.com/allegroai/clearml-agent/issues/111
hi FiercePenguin76
Can you also send your clearml package versions?
I would like to sum your issue up, so that you can check I got it right:
1) you have a task with a model, which you use to run some inference on a dataset
2) you clone the task, and would like to run inference on the same dataset, but with another model
The problem is that you have a cloned task with the first model....
How have you registered the second model? Also, can you share your logs?
hi SoggyBeetle95
I reproduced the issue. Could you confirm this is the one you are hitting?
Here is what I did:
1) I declared some secret env vars in the agent section of clearml.conf
2) I used extra_keys to have them hidden in the console
They are indeed hidden in the console, but in the Execution -> Container section they appear in clear text.
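For context, the clearml.conf fragment I mean looks roughly like this (a sketch from memory of the hide_docker_command_env_vars section, with a made-up variable name; check your own conf for the exact layout):

```
agent {
    hide_docker_command_env_vars {
        enabled: true
        # keys listed here are supposed to be masked in the console output
        extra_keys: ["MY_SECRET_ENV_VAR"]
    }
}
```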
hey SoggyBeetle95
You're right, that's an error on our part 🙂
Could you please open an issue in https://github.com/allegroai/clearml-server/issues so we can track it?
We'll update there once a fix for that issue is released! 😄
Hey
There is a cache limit that you can change by modifying the conf file.
You simply add this to clearml.conf :
storage {
    cache {
        default_cache_manager_size: 100
    }
}
(100 is the default value)
Depending on what you need to achieve, there are more advanced cache management tools.
I suggest you use a docker image that has the same python version as your local one, in order to avoid such requirements errors.
Oops, please pardon me, I got confused: this answer is not related to your issue. My fault 🙏
🙂
Here is a bit of code that seems to do the job. Have a look:
` wrapper = Task.get_task(project_name="***", task_name="***")
req_obj = events.DownloadTaskLogRequest(wrapper.id)
res = wrapper.session.send_request(
    service=req_obj._service,
    action=req_obj._action,
    version=req_obj._version,
    json=req_obj.to_dict(),
    method=req_obj._method,
    async_enable=False,
    headers=None,
)
print(res.json()) `
Yep, sorry, I have not pasted the import line. You should add something like this:
from clearml.backend_api.services import events
🙏
What do you mean? The average time that tasks wait before being executed by an agent? That is to say, the average difference between enqueue time and start time?
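If that is the definition, and you already have the enqueue/start timestamps of your tasks, the average is just a mean of differences. A minimal stand-alone sketch (the `average_wait_seconds` helper and the sample timestamps are hypothetical, not part of the ClearML API):

```python
# Hypothetical sketch: average queue wait from (enqueued, started)
# timestamp pairs you retrieved beforehand. Sample values are made up.
from datetime import datetime, timedelta

def average_wait_seconds(pairs):
    """Mean difference between start time and enqueue time, in seconds."""
    waits = [(started - enqueued).total_seconds() for enqueued, started in pairs]
    return sum(waits) / len(waits)

t0 = datetime(2022, 7, 1, 12, 0, 0)
pairs = [
    (t0, t0 + timedelta(seconds=30)),  # waited 30 s
    (t0, t0 + timedelta(seconds=90)),  # waited 90 s
]
print(average_wait_seconds(pairs))  # → 60.0
```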
You can try something like this, which reproduces the GUI behavior:
` import math
import datetime
from clearml.backend_api.session.client import APIClient

client = APIClient()
q = client.queues.get_all(name='queue-2')[0]
n = math.floor(datetime.datetime.now().timestamp())
res = client.queues.get_queue_metrics(from_date=n-1, to_date=n, interval=1, queue_ids=[q.id]) `Be careful though of the null values in the results. They happen when there are fewer values in the res than intervals between start...
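One way to cope with those nulls before plotting or averaging is to filter them out while keeping dates and values aligned. A small stand-alone sketch (the sample lists are made up, not real API output):

```python
# The metrics come back as parallel lists of dates and values;
# intervals without data may hold None. Filtering as pairs keeps
# the two lists aligned. Sample data below is invented.
dates = [1658000000, 1658000060, 1658000120]
values = [5.0, None, 7.0]

pairs = [(d, v) for d, v in zip(dates, values) if v is not None]
print(pairs)  # → [(1658000000, 5.0), (1658000120, 7.0)]
```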
Can you please try to replace client.queues.get_all with client.queues.get_default?
It is a specific function for retrieving the default queue 🙂
Hi RobustRat47
Is your issue solved ? 🙂