CostlyOstrich36
Moderator
0 Questions, 3713 Answers
Active since 10 January 2023
Last activity one year ago
Reputation: 0
Hi there, our team started using ClearML a few months ago and we've recently deployed an AWS EKS K8s cluster with the hopes of deploying a clearml-agent. I've been able to install the agent on the cluster using:

@AlertReindeer55, I think what @SuccessfulKoala55 means is that you can set the docker image on the experiment level itself as well. If you go into the "EXECUTION" tab of the experiment, in the container section you might see an image there.
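
For example, something like this should also work from code (a rough sketch; the project, task, and image names are just placeholders):
```
from clearml import Task

# Sketch only: project/task names and the image are placeholders
task = Task.init(project_name="examples", task_name="my-experiment")

# Set the container image the agent should use when running this experiment
task.set_base_docker("nvidia/cuda:11.8.0-runtime-ubuntu22.04")
```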

one month ago
Hi community, I'm installing multiple clearml-agents on some GPU workstations, so I have a question: how can I use the dataset cache as when I run only one clearml-agent? I don't want to download the full dataset every time a task runs, because now since I

Hi @UnevenDeer21, an NFS is one good option. You can also point all agents on the same machine to the same cache folder. Or, just like you suggested, point all workers to the same cache on a mounted NFS.
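
For example, something along these lines in each agent's clearml.conf (a sketch; the mount path is just a placeholder):
```
sdk {
    storage {
        cache {
            # Point every agent/worker at the same (e.g. NFS-mounted) cache folder
            # so the dataset is only downloaded once
            default_base_dir: "/mnt/shared/clearml-cache"
        }
    }
}
```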

2 months ago
Hi, I have a question about migrating from TensorFlow 2.14 to 2.16. Up until now I have been using

Hi @ShakyKangaroo32, what version of ClearML are you using?

8 months ago
Hi

Hi @PricklyRaven28, you mean that a single machine will have multiple workers on it, each "serving" a slice of the GPU?

2 months ago
Hello, I am using the ClearML integration with Ultralytics. I have very simple code

@HurtStarfish47, you also have the auto_connect_frameworks parameter of Task.init to disable the automatic logging, and then you can manually log using the Model module to name, register, and upload the model.
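
Roughly like this (a sketch; the framework key, file names, and model name are placeholders, adjust them to your setup):
```
from clearml import Task, OutputModel

# Sketch: disable the relevant automatic framework logging
# (the "pytorch" key is an assumption; adjust to your setup)
task = Task.init(
    project_name="examples",
    task_name="yolo-training",
    auto_connect_frameworks={"pytorch": False},
)

# ... training code that produces e.g. best.pt ...

# Manually register the model with the name you want and upload the weights
output_model = OutputModel(task=task, name="my-yolo-model")
output_model.update_weights(weights_filename="best.pt")
```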

22 days ago
Hey team, I can see the last clearml-server release was in August; when is the new release going public? I'm going to upgrade our env and prefer to update after the upcoming new release

Hi @PleasantOwl46, the version is released, thus public. Not sure what you mean, can you please elaborate?

21 days ago
I am using the ClearML free SaaS. I have a task "MyTask" of type "data_processing" in project "MyProject" which uploads a dataset at the end of its execution. For some reason, after uploading the dataset, my task appears in the UI not under "MyProject", but under

Hi @FancyOtter74, I think this is caused by creating a dataset inside the same task. Therefore there is a connection between the task and the dataset, and they are moved to a special folder for datasets. Is there a specific reason why you're creating both a Task & Dataset in the same code?
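
Just to illustrate what I mean by "both in the same code" (a sketch; project, task, dataset names, and paths are placeholders):
```
from clearml import Task, Dataset

# Sketch of the pattern in question: a data-processing task that also
# creates and uploads a Dataset in the same script
task = Task.init(project_name="MyProject", task_name="MyTask",
                 task_type=Task.TaskTypes.data_processing)

# ... processing code that writes files into ./output_data ...

dataset = Dataset.create(dataset_name="my_dataset", dataset_project="MyProject")
dataset.add_files(path="./output_data")
dataset.upload()
dataset.finalize()
```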

one year ago
Hi all, how can I move experiments from one workspace to another?

Hi @StormySeaturtle98, I'm afraid that's not possible. You could rerun the code on the other workspace though 🙂

8 months ago
Hi, I wrote a pipeline of two steps.

Hi, if you pass an input model, at the end of the training you will have your output model. Why do you want to fetch the input model from the previous step?

one year ago
Hi, I wrote a pipeline of two steps.

I think what you're looking for is
None
to create the model and connect using
None
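
The two links above didn't survive the export. One way the "create and connect" part can look in code is roughly this (a sketch; the task names and model ID are placeholders):
```
from clearml import Task, InputModel

task = Task.init(project_name="examples", task_name="step-two")  # placeholders

# Reference an existing model by ID (placeholder) and connect it to this task
model = InputModel(model_id="model_id_from_previous_step")
task.connect(model)

# Download a local copy of the weights file if needed
local_weights = model.get_weights()
```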

one year ago
How do I extract, from a completed task, the time it took to run, given I have the ID of the task?

You can fetch the task object via the SDK and inspect task.data or do dir(task) to see what else is inside.

You can also fetch it via the API using tasks.get_by_id
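
For example (a sketch; the task ID is a placeholder, and the field names are what you'd find when inspecting task.data):
```
from clearml import Task

task = Task.get_task(task_id="your_task_id_here")  # placeholder ID

# task.data holds the raw task record; started/completed are datetimes
started = task.data.started
completed = task.data.completed
print("runtime:", completed - started)
```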

one year ago
I am struggling a bit to understand the use case of a pipeline: let's say you have step1 -> step2 -> step3. What is the point of using the pipeline feature versus having a single task that does those steps one after another?

@ManiacalLizard2, the rules for caching steps are as follows: first, you need to enable it. Then, assuming there is no change in input from the previous run AND there is no code change, the output from the previous pipeline run is reused. Code from imports shouldn't change either, since requirements are logged from previous runs and used in subsequent runs.
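
Enabling it looks roughly like this with the PipelineController (a sketch; names are placeholders):
```
from clearml import PipelineController

def step_one():
    # ... step logic ...
    pass

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")

# cache_executed_step=True: if the step's code and inputs haven't changed since a
# previous run, the previous output is reused instead of re-executing the step
pipe.add_function_step(name="step_one", function=step_one, cache_executed_step=True)

pipe.start_locally(run_pipeline_steps_locally=True)
```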

one year ago
Another question related to

Hi @ManiacalLizard2, I think the correct format is PACKAGE @ git+ None
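
The link above didn't survive the export, but the general pip "direct reference" format looks like this in requirements.txt (package and repository are made up, just to show the shape):
```
# hypothetical package and repo, shown only to illustrate the format
my_package @ git+https://github.com/example-org/my_package.git@main
```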

one year ago
Hi, with a given task ID, how do I get all the information of the "INFO" tab in the Python SDK? I struggle to find that in the docs

Hi @ManiacalLizard2, I would suggest playing with the Task object in Python. You can do dir(<TASK_OBJECT>) in Python to see all of its parameters/attributes.
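
For example (a sketch; the task ID is a placeholder):
```
from clearml import Task

task = Task.get_task(task_id="your_task_id_here")  # placeholder ID

# List everything available on the object
print(dir(task))

# The full task record (what the UI shows) as a dict
info = task.export_task()
print(info.keys())
```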

one year ago
ClearML failed to detect custom packages

Please read the documentation, there is an example there

one year ago
ClearML failed to detect custom packages

I think this is what you need:
None

one year ago
Is there a way to tell the agent to use a specific pre-installed venv? Like the one already installed on the developer PC, when the agent is running inside that same PC?

Hi @ManiacalLizard2, I think this is the env var you're looking for:
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
None
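
Usage is roughly (a sketch; the venv path and queue name are placeholders):
```
# Point the agent at an existing interpreter instead of building a new venv
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/path/to/existing/venv/bin/python
clearml-agent daemon --queue default
```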

one year ago
Hi there! :) I have an issue regarding the

So your HPO job is affected by the azure_storage_blob package? How are you running HPO? Can you provide logs & configurations for two such different runs?

one year ago
Hi all, I'm getting set up with the GCP autoscaler and I'm wondering what image people typically use for running Docker jobs. The image that I was using

Hi @AmusedCat74, this is the default image I use:
projects/ml-images/global/images/c6-deeplearning-tf2-ent-2-3-cu110-v20201105-ubuntu-1804
I guess the image really depends on your needs

one year ago