ExasperatedCrab78
Moderator
2 Questions, 221 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0
Badges: 2 × Eureka!

0 Votes 5 Answers 2K Views
We're working on ClearML serving right now and are very interested in what you all are searching for in a serving engine, so we can make the best serving eng...
3 years ago
0 Votes 0 Answers 2K Views
A little something else: Using ClearML, an OAK-1 AI camera and a raspberry pi to create a pushup counter that locks my PC every hour and only unlocks again w...
3 years ago
0 Hi!, Where Can I Read More About What Each Datastore (Mongo/Es/Redis) Is Used For ?

Hi Adib!
I saw this question about the datastores before and it was answered then with this:
Redis is used for caching, so it's fairly 'lightly' used; you don't need many resources for it. Mongo is for artifacts, system info and some metadata. Elastic is for events and logs; this one might require more resources depending on your usage. Hope it already helps a bit!

3 years ago
0 Hi! Can Someone Explain In Details To Me For What The Fileserver, Redis, Mongodb And Elasticsearch Are Used For?

If I'm not mistaken:

Fileserver - model files and artifacts
MongoDB - all experiment objects are saved there
Elastic - console logs, debug samples and scalars are all saved there
Redis - caching related to the agents, I think
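
To make that mapping a bit more concrete, here is a rough sketch (project/task names are made up, and the comments simply restate the breakdown above) of which SDK calls end up in which store:

    from clearml import Task

    task = Task.init(project_name='demo', task_name='datastore-roles')  # hypothetical names

    # artifact payloads are uploaded to the fileserver (or your own object storage)
    task.upload_artifact('stats', artifact_object={'rows': 100})

    # scalar and console events end up in Elasticsearch
    task.get_logger().report_scalar('loss', 'train', value=0.1, iteration=0)

    # the experiment object itself (parameters, status, metadata) lives in MongoDB;
    # Redis is only used internally by the server for caching
    task.connect({'lr': 0.001})

    task.close()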

2 years ago
0 Hey, If I Uploading A New Dataset With Parent, But The Child Dataset==Parent Dataset It Will Work? Thanks

Hello!
What is the use case here, why would you want to do that? If they're the same dataset, you don't really need lineage, no?
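
For context, a rough sketch of how parent/child lineage is usually meant to be used (dataset/project names and the folder are made up): the child only registers what changed relative to the parent instead of re-uploading everything.

    from clearml import Dataset

    # the existing parent version
    parent = Dataset.get(dataset_project='demo', dataset_name='my_dataset')  # hypothetical names

    # a new child version on top of it
    child = Dataset.create(
        dataset_project='demo',
        dataset_name='my_dataset',
        parent_datasets=[parent.id],
    )

    # only add the new/changed files; unchanged files are inherited from the parent
    child.add_files('new_data/')  # hypothetical local folder
    child.upload()
    child.finalize()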

3 years ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

Doing this might actually help with the previous issue as well, because when there are multiple docker containers running they might interfere with each other 🙂

2 years ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

Yes, with Docker auto-starting containers is definitely a thing 🙂 We set the containers to restart automatically (a reboot will do that too), so that when a container crashes it immediately restarts, let's say in a production environment.

So the best thing to do there is to use docker ps to get all running containers and then kill them using docker kill <container_id>. ChatGPT tells me this command should force-remove all containers, running or not:
docker rm -f $(docker ps -aq)
And I...

2 years ago
0 Question On Using Clearml-Data To Manage Contents Of Datasets. I’M Having An Issue Deleting A Directory Within A Dataset Uploaded. Here Are A Few Ways I’Ve Tried, Create New Dataset With Parent, Remove --Files <Path To Folder>. That Doesn’T Work, Only

For the record, this is a minimal reproducible example:

Local folder structure:
├── remove_folder
│   ├── batch_0
│   │   ├── file_0_0.txt
│   │   ├── file_0_1.txt
│   │   ├── file_0_2.txt
│   │   ├── file_0_3.txt
│   │   ├── file_0_4.txt
│   │   ├── file_0_5.txt
│   │   ├── file_0_6.txt
│   │   ├── file_0_7.txt
│   │   ├── file_0_8.txt
│   │   └── file_0_9.txt
│   └── batch_1
│       ├── file_1_0.txt
│       ├── file_1_1.txt
│       ├── file_1_2.txt
│       ├── file_1_3.txt
│       ├── fi...

3 years ago
0 Hi Everybody, I’M Getting Errors With Automatic Model Logging On Pytorch (Running On A Dockered Agent).

Did you by any chance save the checkpoint without any file extension? Or with a weird name containing slashes or dots? The error seems to suggest the content type was not properly parsed.
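
For reference, a minimal sketch (assuming PyTorch; the model and names are made up) of saving a checkpoint with a plain name and a proper extension, which the automatic model logging should be able to pick up:

    import torch
    import torch.nn as nn
    from clearml import Task

    task = Task.init(project_name='demo', task_name='checkpoint-naming')  # hypothetical names
    model = nn.Linear(4, 2)  # stand-in model

    # a simple filename with an extension, no slashes or extra dots,
    # so ClearML's automatic framework binding can parse it correctly
    torch.save(model.state_dict(), 'model_checkpoint.pt')

    task.close()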

3 years ago
0 Hey All, Is Anyone Able To Access The Clear Ml Website?

Isitdown seems to be reporting it as up. Any issues with other websites?

3 years ago
0 Hi Guys, I'M Currently Work With Clearml-Serving For Deployment Of My Model, But I Have Few Questions And Errors: 1. In The Preprocess Class, I Need To Get Some Value That I Got From Training Process For Example, In My Time Series Anomaly Detection I Save

Hi William!

1. So if I understand correctly, you want to get an artifact from another task into your preprocessing.

You can do this using the Task.get_task() call. So imagine your anomaly detection task is called anomaly_detection, produces an artifact called my_anomaly_artifact and is located in the my_project project, then you can do:
from clearml import Task

anomaly_task = Task.get_task(project_name='my_project', task_name='anomaly_detection')
treshold = anomaly_ta...
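
To round that snippet off, a minimal sketch of the full pattern (same placeholder names as above):

    from clearml import Task

    # fetch the training task that produced the artifact
    anomaly_task = Task.get_task(project_name='my_project', task_name='anomaly_detection')

    # artifacts is a dict-like mapping of name -> Artifact; get() retrieves the stored object
    threshold = anomaly_task.artifacts['my_anomaly_artifact'].get()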

3 years ago
0 Hi Guys, I'M Currently Work With Clearml-Serving For Deployment Of My Model, But I Have Few Questions And Errors: 1. In The Preprocess Class, I Need To Get Some Value That I Got From Training Process For Example, In My Time Series Anomaly Detection I Save

1. Can you give a little more explanation about your use case? It seems I don't fully understand yet. So you have multiple endpoints, but always the same preprocessing script to go with them? And you need to gather a different threshold for each of the models?

2. Not completely sure of this, but I think an AMD APU simply won't work. ClearML Serving uses Triton as the inference engine for GPU-based models, and that is written by NVIDIA, specifically for NVIDIA hardware. I don't think Triton will ...

3 years ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’T Work):

It's been accepted in master, but was not released yet indeed!

As for the other issue, it seems like we won't be adding support for non-string dict keys anytime soon. I'm thinking of adding a specific example/tutorial on how to work with Huggingface + ClearML so people can do it themselves.

For now (using the patch) the only thing you need to be careful about is not to connect a dict or object with ints as keys. If you do need to (e.g. usually Huggingface models need the id2label dict some...
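
As an illustration of that workaround, a hedged sketch (project/task names and the label map are made up): convert the int keys to strings before connecting, so ClearML never sees int dict keys.

    from clearml import Task

    task = Task.init(project_name='demo', task_name='hf-id2label')  # hypothetical names

    id2label = {0: 'negative', 1: 'positive'}

    # connect a copy with string keys instead of the original int-keyed dict
    task.connect({str(k): v for k, v in id2label.items()}, name='id2label')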

2 years ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

I can see 2 kinds of errors:
Error: Failed to initialize NVML and Unable to allocate pinned system memory, pinned memory pool will not be available: CUDA driver version is insufficient for CUDA runtime version
These 2 lines make me think something went wrong with the GPU itself. Chances are you won't be able to run nvidia-smi; this looks like a non-ClearML issue 🙂 It might be that Triton hogs the GPU memory if not properly closed down (double ctrl-c). It says the driver ver...

2 years ago
0 Hello Everyone. When Pressing The "Copy Embed Code" Button In Scalar Plots, I Don'T Get To Choose The Embedding Type Like In The Video, It Seems That I Get Only Code For Clearml Reports. How To Get The Code For Embedding Plots Into External Tools?

Most likely you are running a self-hosted server. External embeds are not available for self-hosted servers due to tricky network routing and security concerns (they require access from the public internet). The free hosted server at app.clear.ml does have it.

2 years ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

Hi NuttyCamel41 !

Your suspicion is correct: there should be no need to specify the config.pbtxt manually; normally this file is generated automatically from the information you provide on the command line.

It might be somehow silently failing to parse your CLI input to correctly build the config.pbtxt. One difference I see immediately is that you opted for the "[1, 64]" notation instead of the 1 64 notation from the example. Might be worth a try to change the input for...

2 years ago
0 Hey! Did Anyone Try Hpo On Yolov5 Model According To The Following Tutorial:

Hey CheekyFox58, like Martin said, it should at least work locally. If not, can you give us some more details on what exactly the weird behaviour is?

3 years ago
0 Hi Community, How Can I Prevent Clearml Creating A New Experiment, Each Time I Interrupt And Restart Training On The Same Task? I'M Training Yolov8 And Clearml Docker Usage Is Up To 30Gb. I Can'T See A Yaml Config Parameter For This.

Hey @<1539780305588588544:profile|ConvolutedLeopard95> , unfortunately this is not built into the YOLOv8 tracker. Would you mind opening an issue on the YOLOv8 GitHub page and tagging me? (I'm thepycoder on GitHub)

I can then follow up on the progress, because it makes sense to expose this parameter through the yaml.

That said, to help you right now, please change [this line](https://github.com/ultralytics/ultralytics/blob/fe61018975182f4d7645681b4ecc09266939dbfb/ultralytics/yolo/uti...
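
For reference, one ClearML-side way to keep logging into the same task is the continue_last_task argument of Task.init; a rough sketch (project/task names are made up, and this is not YOLOv8-specific):

    from clearml import Task

    # resume logging into the existing task instead of creating a new experiment
    task = Task.init(
        project_name='yolo-demo',   # hypothetical
        task_name='train-yolov8',   # hypothetical
        continue_last_task=True,    # append to the most recent matching task
    )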

2 years ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

Thank you so much, sorry for the inconvenience and thank you for your patience! I've pushed it internally and we're looking for a patch 🙂

2 years ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’T Work):

Hi @<1523701949617147904:profile|PricklyRaven28> just letting you know I still have this on my TODO, I'll update you as soon as I have something!

2 years ago
0 Can We Use The Simple Docker-Compose.Yml File For Clearml Serving On A Huggingface Model (Not Processed To Tensorrt)?

Sorry, I jumped the gun before I fully understood your question 🙂 So with the simple docker compose file, do you mean you don't want to use the docker-compose-triton.yaml file and so want to run the Huggingface model on CPU instead of Triton?

Or do you want to know if the general docker compose version is able to handle a Huggingface model?

2 years ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

@<1547028116780617728:profile|TimelyRabbit96>
Pipelines have little to do with serving, so let's not focus on that for now.

Instead, if you need an ensemble_scheduling block, you can use the CLI's --aux-config option to add any extra stuff that needs to be in the config.pbtxt.

For example here, under the Setup section step 2, we use the --aux-config flag to add a dynamic batching block: None

2 years ago
0 I Am Looking For The Dataset Used In Sarcasm Detection Demo

Ah I see 😄 I have submitted a ClearML patch to Huggingface transformers: None

It is merged, but not in a release yet. Would you mind checking if it works if you install transformers from github? (aka the latest master version)

2 years ago
0 <image>

Can you please post the result of running df -h in this chat? Chances are quite high your actual machine does indeed have no more space left 🙂

2 years ago
0 Hi All, Im Executing A Task Remotely Via A Queue. I Don'T Want It To Cache The Env Or Install Anything Before The Run, Just To Run The Task On The Agent Machine (I Set Up The Agent'S Env Previously, The Env Cache Causes Versions Problems In My Case). I Tr

Can you try setting the env variables to 1 instead of True? In general, those should indeed be the correct variables to set. For me it works when I start the agent with the following command:

CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1 clearml-agent daemon --queue "demo-queue"

2 years ago
0 Hello, I Am Training Some Models With Yolov8 And Want To Upload The Metrics To The Clearml Webpage In. However, Sometimes It Works And Sometimes It Does Not Work. Clearml Is Able To Read Everything From The Console And Stuff Like That, But Is Not Able To

I'm still struggling to reproduce the issue. Trying on my own PC locally as well as on google colab yields nothing.

The fact that you do get tensorboard logs, but none of them are captured by ClearML, means there might be something wrong with our tensorboard bindings, but it's hard to pinpoint exactly what if I can't get it to fail like yours 😅 Let me try and install exactly your environment using your packages above. Which Python version are you using?
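
If it helps narrow things down, a minimal, self-contained sketch (made-up project/task names) that exercises the TensorBoard bindings directly; if these scalars also fail to show up in the ClearML UI, the problem is in the bindings rather than in YOLOv8:

    from clearml import Task
    from torch.utils.tensorboard import SummaryWriter

    task = Task.init(project_name='debug', task_name='tb-binding-check')  # hypothetical names

    writer = SummaryWriter()
    for step in range(10):
        # these scalars should be auto-captured by ClearML's TensorBoard bindings
        writer.add_scalar('debug/metric', step * 0.1, step)
    writer.close()

    task.close()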

2 years ago
0 Hi All, What Is The Appropriate Way To Mount A Volume When Running The Docker Container For A Task? I'M Executing A Task From The Experiment Manager And Adding In

Nice! Well found and thanks for posting the solution!

May I ask out of curiosity, why mount X11? Are you planning to use a GUI app on the k8s cluster?

2 years ago
0 Hi Team, I’M Trying To Generate Gcp Autoscaler, And Received The Following Error:

Is it not filled in by default?

projects/debian-cloud/global/images/debian-10-buster-v20210721

2 years ago