ExasperatedCrab78
Moderator
2 Questions, 221 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges (1): 2 × Eureka!
0 Votes 0 Answers 1K Views
A little something else: Using ClearML, an OAK-1 AI camera and a raspberry pi to create a pushup counter that locks my PC every hour and only unlocks again w...
2 years ago
0 Votes 5 Answers 1K Views
We're working on ClearML serving right now and are very interested in what you all are searching for in a serving engine, so we can make the best serving eng...
2 years ago
0 Hi Team, I’M Trying To Generate Gcp Autoscaler, And Received The Following Error:

Are you running a self-hosted/enterprise server or on app.clear.ml? Can you confirm that the field in the screenshot is empty for you?

Or are you using the SDK to create an autoscaler script?

one year ago
0 Hey, Is There An Easy Way To Retrieve The Code Used To Run An Experiment? Without Recreating The Whole Environment Etc. The Problem: I Have Ran A

If you didn't use git, then ClearML saves your .py script completely in the uncommitted changes section, like you say. You should be able to just copy-paste it to get the code. In what format are your uncommitted changes logged? Can you post a screenshot or paste the contents of the uncommitted changes section?
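
If it helps, here's a minimal sketch of reading that stored script back programmatically with the SDK (the task ID is a placeholder, and it assumes the uncommitted-changes diff is exposed under task.data.script.diff):

```python
from clearml import Task

# Placeholder task ID; replace with the ID of your experiment.
task = Task.get_task(task_id="your_task_id_here")

# Assumption: for non-git runs, the full .py script is stored as the
# "uncommitted changes" diff on the task's script section.
script_diff = task.data.script.diff
print(script_diff)

# Optionally write it back out to a file.
with open("recovered_script.py", "w") as f:
    f.write(script_diff or "")
```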

3 years ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’T Work):

No worries! Just so I understand fully though: you were already successfully using the patch from my branch. Now that it has been merged into the transformers main branch, you installed it from there, and that's when you started having issues with models not being saved? Then installing transformers 4.21.3 fixes it (which should have the old ClearML integration, even before the patch)?

one year ago
0 Hey, Is There An Easy Way To Retrieve The Code Used To Run An Experiment? Without Recreating The Whole Environment Etc. The Problem: I Have Ran A

You can apply git diffs by copying the diff to a file and then running git apply <file_containing_diff>

But check this thread first and make sure to do a dry run, to see what it will do before you overwrite anything:
https://stackoverflow.com/questions/2249852/how-to-apply-a-patch-generated-with-git-format-patch
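
For reference, a small sketch of that flow from Python (the file name and diff content are placeholders), doing the dry run with git apply --check before applying for real:

```python
import subprocess

# Placeholder: paste the diff/uncommitted changes here, or read it from the task.
diff_text = "...your diff here...\n"

with open("changes.diff", "w") as f:
    f.write(diff_text)

# Dry run first: --check only verifies whether the patch would apply cleanly.
subprocess.run(["git", "apply", "--check", "changes.diff"], check=True)

# If the check passes, apply it for real.
subprocess.run(["git", "apply", "changes.diff"], check=True)
```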

3 years ago
0 Can We Use The Simple Docker-Compose.Yml File For Clearml Serving On A Huggingface Model (Not Processed To Tensorrt)?

That wasn't my intention! Not a dumb question, just a logical one 😄

one year ago
0 Tasks Can Be Put In Draft State - If We Will Execute:

That's what happens in the background when you click "New Run". A pipeline is simply a task in the background. You can find that task by querying, and you can clone it too! It is placed in a "hidden" folder called .pipelines, as a subfolder of your main project. Check out the settings; you can enable "Show hidden folders".
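
As a rough sketch of what that looks like with the SDK (project, pipeline and queue names below are placeholders; the .pipelines subfolder path follows the convention described above):

```python
from clearml import Task

# The pipeline controller lives as a task in the hidden ".pipelines" subfolder
# of your main project (placeholder names below).
pipeline_tasks = Task.get_tasks(project_name="MyProject/.pipelines/MyPipeline")

# Clone one of the controller tasks and enqueue the clone; this is roughly
# what clicking "New Run" does in the UI.
template = pipeline_tasks[0]
new_run = Task.clone(source_task=template, name="manual pipeline run")
Task.enqueue(new_run, queue_name="default")
```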

one year ago
0 Hello Everyone ! I Tried To Reproduce Your Tutorial :

Also, please note that since the video was uploaded, the dataset UI has changed. A dataset will now be found under the Datasets tab on the left instead of in the experiment manager 🙂

2 years ago
0 Hello Everyone ! I Tried To Reproduce Your Tutorial :

Thank you so much ExasperatedCrocodile76 , I'll check it tomorrow 🙂

2 years ago
0 Hello, Trying To Figure Out How To Run A Machine In Docker Mode (Ecr Private Repo) Using Clearml. For Some Reason I Cannot Get This To Work With :

I see. Are you able to manually boot a VM on GCP, SSH into it, and run the docker login command from there? Just to rule out networking or permissions as possible issues.

one year ago
0 Hello Everyone! I Am Trying To Run A Pipeline From The Web Ui. As You Can See On The Screenshot, It Is Possible To Specify The

Maybe you can add https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller/#set_default_execution_queue to your PipelineController, and have the actual value linked to a pipeline parameter? Then, when you create a new run, you can manually enter a queue name and the pipeline controller script will use that parameter to set the default execution queue.
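
Something along these lines might work as a starting point (names are illustrative; reading the parameter back via get_parameters() is my assumption about the controller script, not a confirmed API, so adjust as needed):

```python
from clearml.automation.controller import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0")

# Hypothetical pipeline parameter holding the queue name; it becomes editable
# in the UI when you click "New Run".
pipe.add_parameter(name="default_queue", default="default",
                   description="Queue to run the pipeline steps on")

# ASSUMPTION: get_parameters() as a way to read the value back inside the
# controller script is not verified; replace with however your script
# accesses its pipeline parameters.
queue_name = pipe.get_parameters().get("default_queue", "default")
pipe.set_default_execution_queue(queue_name)
```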

one year ago
0 Hello Everyone ! I Tried To Reproduce Your Tutorial :

The point of the alias is better visibility in the Experiment Manager. Check the screenshots above for what it looks like in the UI. Essentially, setting an alias makes sure the task that gets the dataset automatically logs the ID it retrieves with Dataset.get(). The reason being that if you later look back at your experiment, you can also see which dataset was fetched with .get() back then.
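
For reference, a minimal sketch of what that looks like (project, dataset and alias names are placeholders):

```python
from clearml import Dataset

# The alias is what gets logged on the consuming task, so you can later see
# exactly which dataset ID this run retrieved.
dataset = Dataset.get(
    dataset_project="MyProject",   # placeholder
    dataset_name="my_dataset",     # placeholder
    alias="training_data",
)
local_path = dataset.get_local_copy()
print(dataset.id, local_path)
```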

ExuberantBat52 When you still get the log messages, where did you specify the alias?...

2 years ago
0 [Pipeline] Am I Right In Saying A Pipeline Controller Can’T Include A Data-Dependent For-Loop? The Issue Is Not Spinning Up The Tasks, It’S Collecting The Results At The End. I Was Trying To Append The Outputs Of Each Iteration Of The For-Loop And Pass Th

Not exactly sure what is going wrong without an exact error or reproducible example.

However, passing around the dataset object is not ideal, because passing info from one step to another in a pipeline requires ClearML to pickle said object and I'm not exactly sure a Dataset obj is picklable.

Next to that, running get_local_copy() in the first step does not guarantee that you can access that data from the other step. Both might be executed in different docker containers or even on different...
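
A rough sketch of the alternative pattern, under the assumption that you pass only the dataset ID (a plain string) between steps and let each step fetch its own local copy (names and the decorator wiring are illustrative):

```python
from clearml.automation.controller import PipelineDecorator


@PipelineDecorator.component(return_values=["dataset_id"])
def produce_dataset():
    # Imports inside the step so it can run standalone in its own container.
    from clearml import Dataset

    ds = Dataset.create(dataset_project="MyProject", dataset_name="step_data")
    ds.add_files("data/")
    ds.upload()
    ds.finalize()
    # Return only the ID string, not the Dataset object.
    return ds.id


@PipelineDecorator.component(return_values=["n_files"])
def consume_dataset(dataset_id: str):
    import os

    from clearml import Dataset

    # Each step fetches its own local copy, so it works across machines/containers.
    local_path = Dataset.get(dataset_id=dataset_id).get_local_copy()
    return len(os.listdir(local_path))


@PipelineDecorator.pipeline(name="dataset-id-passing", project="MyProject", version="1.0")
def run_pipeline():
    dataset_id = produce_dataset()
    consume_dataset(dataset_id=dataset_id)


if __name__ == "__main__":
    PipelineDecorator.run_locally()
    run_pipeline()
```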

one year ago
0 Hi Community, How Can I Prevent Clearml Creating A New Experiment, Each Time I Interrupt And Restart Training On The Same Task? I'M Training Yolov8 And Clearml Docker Usage Is Up To 30Gb. I Can'T See A Yaml Config Parameter For This.

Hey ConvolutedLeopard95, unfortunately this is not built into the YOLOv8 tracker. Would you mind opening an issue on the YOLOv8 GitHub page and tagging me? (I'm thepycoder on GitHub)

I can then follow up on its progress, because it makes sense to expose this parameter through the YAML.

That said, to help you right now, please change [this line](https://github.com/ultralytics/ultralytics/blob/fe61018975182f4d7645681b4ecc09266939dbfb/ultralytics/yolo/uti...

one year ago
0 Hi, I'M Using Hyperparameteroptimizer Alongside Optimizeroptuna And I Am Unsure How To Implement Pruning On Tasks That Are Not Producing Good Results. Is There A Way To Implement This On These Modules?

Yeah, I do the same thing all the time. You can limit the number of tasks that are kept in HPO with the save_top_k_tasks_only parameter, and you can create subprojects by simply using a slash in the name 🙂 https://clear.ml/docs/latest/docs/fundamentals/projects#creating-subprojects
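
For reference, a hedged sketch of where that parameter plugs in (the base task ID, metric names and parameter range are illustrative; the slash in the project name is what creates the subproject):

```python
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformIntegerParameterRange
from clearml.automation.optuna import OptimizerOptuna

# A slash in the project name creates a subproject ("HPO/run-group-1" is illustrative).
task = Task.init(project_name="HPO/run-group-1", task_name="optimizer")

optimizer = HyperParameterOptimizer(
    base_task_id="your_template_task_id",   # placeholder: the task cloned per trial
    hyper_parameters=[
        UniformIntegerParameterRange("General/epochs", min_value=5, max_value=50, step_size=5),
    ],
    objective_metric_title="validation",    # illustrative metric names
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    optimizer_class=OptimizerOptuna,
    execution_queue="default",
    save_top_k_tasks_only=5,                # keep only the 5 best child tasks
)

optimizer.start()
optimizer.wait()
optimizer.stop()
```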

2 years ago
0 Hi Everyone! How Can I Filter Archived Tasks In

Hi ThoughtfulGrasshopper59!

You're right, we should probably add a convenient allow_archived option to get_tasks() as well.
That said, for now this can be a workaround:

```python
from clearml import Task

print([task.name for task in Task.get_tasks(
    project_name="TAO Toolkit ClearML Demo",
    task_filter=dict(system_tags=['archived'])
)])
```

Specifically, task_filter=dict(system_tags=['archived']) should be what you need.

2 years ago
0 I Am Looking For The Dataset Used In Sarcasm Detection Demo

Great to hear! Then it comes down to waiting for the next Hugging Face release!

one year ago
0 I Am Looking For The Dataset Used In Sarcasm Detection Demo

Ah I see 😄 I have submitted a ClearML patch to Huggingface transformers: None

It is merged, but not in a release yet. Would you mind checking if it works if you install transformers from github? (aka the latest master version)

one year ago
0 Hey Everyone, Is It Possible To Use The

Yes you can! The filter syntax can be quite confusing, but for me it helps to print task.__dict__ on an existing task object to see what options are available. You can get values in a nested dict by joining the keys into a single dot-separated string.

Example code:

```python
from clearml import Task

task = Task.get_task(task_id="17cbcce8976c467d995ab65a6f852c7e")
print(task.__dict__)

list_of_tasks = Task.query_tasks(task_filter={
    "all": dict(fields=['hyperparams.General.epochs.value'], p...
```

one year ago
0 Hi All, Im Executing A Task Remotely Via A Queue. I Don'T Want It To Cache The Env Or Install Anything Before The Run, Just To Run The Task On The Agent Machine (I Set Up The Agent'S Env Previously, The Env Cache Causes Versions Problems In My Case). I Tr

Can you try setting the env variables to 1 instead of True? In general, those should indeed be the correct variables to set. For me it works when I start the agent with the following command:

CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1 clearml-agent daemon --queue "demo-queue"
one year ago
0 Hello

And maybe also: what kind of Docker install do you have, given that you moved off of snap?

one year ago
0 Started Using The Integrated Gcp Autoscaler To Avoid Some Problems We Had. For Some Reason The Instances Doesn'T Have A Gpu Although Specifically Defined In The Ui. How Come? (Not Using Any Docker Container For The Agents)

Hi EmbarrassedSpider34, would you mind showing us a screenshot of your machine configuration? Can you check for any output logs that ClearML might have given you? Depending on the region, maybe there were no GPUs available, so could you also check whether you can manually spin up a GPU VM?

2 years ago
0 Hey All, Is Anyone Able To Access The Clear Ml Website?

Isitdown seems to be reporting it as up. Any issues with other websites?

2 years ago
0 Tasks Can Be Put In Draft State - If We Will Execute:

RoundMosquito25 it is true that the TaskScheduler requires a task_id, but that does not mean you have to run the pipeline every time 🙂

When setting up, you indeed need to run the pipeline once to get it into the system. But from that point on, you should be able to just use the TaskScheduler on the pipeline ID. The scheduler should automatically clone the pipeline and enqueue it. It will basically use the one existing pipeline as a "template" for subsequent runs.
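
A minimal sketch, assuming the TaskScheduler API accepts a schedule_task_id plus a queue and a cron-like schedule (the pipeline task ID, queue and schedule below are placeholders):

```python
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()

# Placeholder: the ID of the pipeline (controller task) you ran once as a template.
scheduler.add_task(
    schedule_task_id="your_pipeline_task_id",
    queue="default",      # the cloned pipeline run gets enqueued here
    hour=0, minute=30,    # illustrative schedule: daily at 00:30
)

# Blocks and keeps scheduling; it can also be launched on a services queue.
scheduler.start()
```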

2 years ago
0 Hello Everyone ! As You Can Observe In Attached Snipped, In My Code I Freeze The Env, And The Agent Install Every Cached Dependency With The Same Version. Is There Any Way That The Agent On My Side (My Computer) Will Straightly Use The Virtual Environment

Hi ExasperatedCrocodile76,

You can try running the agent with these environment variables set to 1:

CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1 CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1

There are more env vars here: https://clear.ml/docs/latest/docs/clearml_agent/clearml_agent_env_var

Does that work for you?

2 years ago
0 It Would Be Nice To Group Experiments Within Projects Use Cases:

The built-in HPO uses tags to group experiment runs together, and it actually uses the original optimizer task ID as a tag so you can quickly go back and see where they came from. You can find an example in the ClearML Examples project.

2 years ago
0 Hello Channel, I Have A Question Regarding Clearml Serving In Production. I Have Different Environments, And Different Models Each Of Them Linked To A Use Case. I Would Like To Spin Up One Kubernetes Cluster (From Triton Gpu Docker Compose) Taking Into

To be honest, I'm not completely sure, as I've never tried hundreds of endpoints myself. In theory, yes, it should be possible: Triton, FastAPI and Intel OneAPI (the ClearML building blocks) all claim they can handle that kind of load, but again, I haven't tested it myself.

To answer the second question, yes! You can basically use the "type" of model to decide where it should be run. You always have the custom model option if you want to run it yourself too 🙂

one year ago