I think this is only supported in the Enterprise version
Hi @<1524560082761682944:profile|MammothParrot39> , how do you usually fetch metadata from a dataset?
These should be killed by the ClearML server by default after a few hours. How long was it stuck?
It appears the ValueError is happening because there is no queue called "services"
Hi GrittyKangaroo27!
ClearML currently does not support deleting S3 objects through the UI. I believe it will be added in coming versions 🙂
I understand. In that case you could implement some code to check whether the same parameters were used before, and then 'switch' to a different set of parameters that hasn't been checked yet. It's a bit 'hacky', though, so I would suggest waiting for a fix from Optuna
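For illustration, a minimal sketch of that 'hacky' check in plain Python. The parameter names and the candidate grid here are made up for the example; in practice you would pull these from your own HPO setup:

```python
from itertools import product

def next_untried(candidates, tried):
    """Return the first parameter combination not seen before, or None.

    candidates: iterable of parameter dicts to consider.
    tried: set of frozensets recording combinations already run (mutated in place).
    """
    for params in candidates:
        key = frozenset(params.items())
        if key not in tried:
            tried.add(key)
            return params
    return None

# Hypothetical search space for the example
grid = [dict(lr=lr, batch=b) for lr, b in product([0.1, 0.01], [32, 64])]

tried = {frozenset(dict(lr=0.1, batch=32).items())}  # pretend this one ran already
print(next_untried(grid, tried))  # skips the already-tried combination
```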
I see. When you're working with CatBoost, what type of object is being passed?
Hi @<1774245260931633152:profile|GloriousGoldfish63> , you can configure it in the volumes section of the fileserver in the docker-compose file.
get_parameter returns the value of a single parameter, as documented:
https://clear.ml/docs/latest/docs/references/sdk/task#get_parameter
Maybe try get_parameters instead, which returns all parameters as a dict:
https://clear.ml/docs/latest/docs/references/sdk/task#get_parameters
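To make the difference concrete, a small sketch, where `task` is any clearml Task instance and the section/parameter name "General/lr" is hypothetical:

```python
# get_parameter fetches one value by "Section/name"; get_parameters
# returns every parameter in a flat dict. The name "General/lr" below
# is just an example, not a real parameter from your task.
def show_params(task):
    one = task.get_parameter("General/lr")  # a single value, or None if missing
    all_params = task.get_parameters()      # flat dict of every parameter
    return one, all_params
```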
Hi @<1570220858075516928:profile|SlipperySheep79> , nested pipelines aren't supported currently. What is the use case that you need it for?
Can you verify in the INFO section of an individual step which queue it is enqueued into? Can you see them on the Queues page?
This is part of the Scale/Enterprise versions only
Hi @<1533619725983027200:profile|BattyHedgehong22> , can you please elaborate on this? Can you add a snippet that reproduces this?
What do you mean by dashboard of the admin?
Hi @<1603198163143888896:profile|LonelyKangaroo55> , you can see the commit ID in the execution tab of the experiment.
This is what I just tested now for a task with a commit in the webUI:
from clearml import Task
task = Task.get_task(task_id="<TASK_ID>")
print(task.data.script.version_num)
This returned the commit ID I see in the webUI.
Are you sure there is a commit ID in the UI? Are you sure you're fetching the correct task?
Hi IrritableJellyfish76 , it looks like you need to create the services queue in the system. You can do it directly through the UI by going to Workers & Queues -> Queues -> New Queue
I am not very familiar with KubeFlow, but as far as I know it is mainly for orchestration, whereas ClearML offers a full end-to-end solution 🙂
SwankySeaurchin41 , I don't think pipelines were mentioned in the video. Are you looking for something specific?
I'm afraid there is no such capability at the moment. However, I'd suggest opening a GitHub feature request for this 🙂
Hi @<1768084624061239296:profile|QuaintWoodpecker78> , you have an error when you try to unzip? Are you downloading directly through the webUI? Where was the artifact stored?
How do you currently save artifacts now?
Hi @<1722061354531033088:profile|TroubledCamel37> , what do you see in the apiserver logs?
Hi @<1760474471606521856:profile|UptightMoth89> , what if you just run the pipeline without run_locally() and then enqueue it (assuming you have no uncommitted changes)?
What is your use case, though? I think the point of local/remote is that you can debug locally
Can you add a full log of an experiment?
How are you saving your models? Something like torch.save(model, "<MODEL_NAME>")?