ExasperatedCrab78
Moderator
2 Questions, 220 Answers
  Active since 10 January 2023
  Last activity 8 months ago

Reputation: 0

Badges (1): 2 × Eureka!
0 Votes
5 Answers
268 Views
We're working on ClearML serving right now and are very interested in what you all are searching for in a serving engine, so we can make the best serving eng...
one year ago
0 Votes
0 Answers
226 Views
A little something else: Using ClearML, an OAK-1 AI camera and a raspberry pi to create a pushup counter that locks my PC every hour and only unlocks again w...
one year ago
0 I Am Looking For The Dataset Used In Sarcasm Detection Demo

I still have my tasks I ran remotely and they don't show any uncommitted changes. @<1540142651142049792:profile|BurlyHorse22> are you sure the remote machine is running transformers from the latest github branch, instead of from the package?

If it all looks fine, can you please install transformers from this repo (branch main) and rerun? It might be that not all my fixes came through

6 months ago
0 I Am Looking For The Dataset Used In Sarcasm Detection Demo

Great to hear! Then it comes down to waiting for the next Hugging Face release!

6 months ago
0 Hi All, I'M Having Some Issues With Syncing Modified Files Using

Well I'll be had, you're 100% right, I can recreate the issue. I'm logging it as a bug now and we'll fix it asap! Thanks for sharing!!

9 months ago
0 Hi! Can Someone Explain In Details To Me For What The Fileserver, Redis, Mongodb And Elasticsearch Are Used For?

If I'm not mistaken:

Fileserver - Model files and artifacts
MongoDB - all experiment objects are saved there.
Elastic - Console logs, debug samples and scalars are all saved there.
Redis - caching regarding agents I think

7 months ago
0 Hi Team, I'M Trying To Generate Gcp Autoscaler, And Received The Following Error:

It looks like you need to add the compute.imageUser role to your credentials: None

Did you by any chance set up the autoscaler to use a custom image? It's trying to use 'projects/image-processing/global/images/image-for-clearml' which is a path I don't recognise. Is this your own, custom image? If so, we can add this role to the documentation as required when using a custom image 🙂

7 months ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don'T Work):

It should, but please check first. This is some code I quickly made for myself. I did make tests for it, but it would be nice to hear from someone else that it worked (as evidenced by the error above 😅 )

7 months ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don'T Work):

Damn it, you're right 😅

        # Allow ClearML access to the training args and allow it to override the arguments for remote execution
        # (assumes `from clearml import Task`; cast_keys_to_string / cast_keys_back are helpers defined elsewhere in this workaround)
        args_class = type(training_args)
        args, changed_keys = cast_keys_to_string(training_args.to_dict())
        Task.current_task().connect(args)
        training_args = args_class(**cast_keys_back(args, changed_keys)[0])
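
For context, cast_keys_to_string and cast_keys_back are helper functions that were not shared in this thread. A rough, hypothetical sketch of what they might do (recursively cast dict keys to strings so Task.connect() accepts them, then restore the original key types afterwards) could look like this; it is an illustration only, not the author's actual implementation:

# Hypothetical helpers, for illustration only - not the author's actual implementation.
def cast_keys_to_string(d, changed_keys=None):
    # Recursively cast all dict keys to str, remembering the original key for each change
    changed_keys = {} if changed_keys is None else changed_keys
    new_d = {}
    for key, value in d.items():
        new_key = key if isinstance(key, str) else str(key)
        if new_key != key:
            changed_keys[new_key] = key
        if isinstance(value, dict):
            value, changed_keys = cast_keys_to_string(value, changed_keys)
        new_d[new_key] = value
    return new_d, changed_keys

def cast_keys_back(d, changed_keys):
    # Restore the original (non-string) keys recorded by cast_keys_to_string
    new_d = {}
    for key, value in d.items():
        if isinstance(value, dict):
            value, changed_keys = cast_keys_back(value, changed_keys)
        new_d[changed_keys.get(key, key)] = value
    return new_d, changed_keys
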
7 months ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

Doing this might actually help with the previous issue as well, because when there are multiple docker containers running they might interfere with each other 🙂

7 months ago
0 Hi Team, I'M Trying To Generate Gcp Autoscaler, And Received The Following Error:

Indeed, that should be the case. By default a Debian image is used, but it's good that you ran with a custom image, because now we know it isn't clear that more permissions are needed in that case.

7 months ago
0 Hi Team, I'M Trying To Generate Gcp Autoscaler, And Received The Following Error:

Great! Please let me know if it works when adding this permission, we'll update the docs in a jiffy!

7 months ago
0 Hi Team, I'M Trying To Generate Gcp Autoscaler, And Received The Following Error:

Are you running a self-hosted/enterprise server or on app.clear.ml? Can you confirm that the field in the screenshot is empty for you?

Or are you using the SDK to create an autoscaler script?

7 months ago
0 Hi Team, I'M Trying To Generate Gcp Autoscaler, And Received The Following Error:

It is not filled in by default?

projects/debian-cloud/global/images/debian-10-buster-v20210721

7 months ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

I can see 2 kinds of errors:
Error: Failed to initialize NVML and Unable to allocate pinned system memory, pinned memory pool will not be available: CUDA driver version is insufficient for CUDA runtime version
These 2 lines make me think something went wrong with the GPU itself. Chances are you won't be able to run nvidia-smi; this looks like a non-clearml issue 🙂 It might be that triton hogs the GPU memory if not properly closed down (double ctrl-c). It says the driver ver...

7 months ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

What might also help is to look inside the triton docker container while it's running. You can check the example; there should be a pbtxt file in there. Just to double-check that it is also in your own folder

7 months ago
0 Hi Team, I'M Trying To Generate Gcp Autoscaler, And Received The Following Error:

This looks to me like a permission issue on the GCP side. Do your GCP credentials have the compute.images.useReadOnly permission set? It looks like the worker needs that permission to be able to pull the images correctly 🙂

7 months ago
0 Hi Team, I'M Trying To Generate Gcp Autoscaler, And Received The Following Error:

I'm using image and machine image interchangeably here. It is quite weird that it is still giving the same error; the error clearly asked for "Required 'compute.images.useReadOnly' permission for 'projects/image-processing/global/images/image-for-clearml'" 🤔

Also, now I see your credentials even have the role of compute admin, which I would expect to be sufficient.
I see 2 ways forward:

  • Try running the autoscaler with the default machine image and see if it launches correctly
  • Dou...
7 months ago
0 Hi All, What Is The Appropriate Way To Mount A Volume When Running The Docker Container For A Task? I'M Executing A Task From The Experiment Manager And Adding In

Nice! Well found and thanks for posting the solution!

May I ask out of curiosity, why mount X11? Are you planning to use a GUI app on the k8s cluster?

7 months ago
0 Hi, I Am Trying The Triggerscheduler To Catch When A User Add Specific Tag To A Task. I Used The Below Code But The Schedule_Function Is Not Called When Adding Tags To Task (It Seems The Task.Last_Update Is Not Modified After Adding Tag)

Could you try and create a new task with the tag already added? Instead of adding a tag on an existing task. It should work then. If it does, this might be a bug? Or if not, a good feature to exist 🙂
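
For reference, a minimal sketch of that suggestion: create the task with the tag already set at init time, so the trigger sees it from the start. The TriggerScheduler argument names below follow the clearml.automation docs but should be double-checked against your SDK version:

# Sketch only - verify TriggerScheduler argument names against your clearml version.
from clearml import Task
from clearml.automation import TriggerScheduler

def on_tagged_task(task_id):
    # Called by the scheduler for every task matching the trigger
    print(f"Trigger fired for task {task_id}")

scheduler = TriggerScheduler(pooling_frequency_minutes=1)
scheduler.add_task_trigger(
    name="tag trigger",
    trigger_on_tags=["retrain"],       # fire when a task carries this tag
    schedule_function=on_tagged_task,
)
scheduler.start()  # or scheduler.start_remotely(queue="services")

# Elsewhere: create the task with the tag already attached, instead of tagging it later
task = Task.init(project_name="examples", task_name="tagged task", tags=["retrain"])
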

one year ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don'T Work):

Hey @<1523701949617147904:profile|PricklyRaven28> I'm checking! Have you updated anything else and on which exact commit of transformers are you now?

6 months ago
0 Hello! Is There Any Way To Access The The

I'm not quite sure what you mean here? From the docs it seems like you should be able to simply send an HTTP request to the localhost url to get the metrics. Is this not working for you? Otherwise, all the metrics end up in Prometheus, so you can also query that instead or use something like Grafana to visualize it
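
For example, a small sketch of pulling a metric out of Prometheus directly; the host/port and the metric name here are assumptions, so adjust them to whatever your docker-compose setup actually exposes:

# Assumes Prometheus is reachable on localhost:9090 (adjust to your docker-compose setup)
import requests

PROMETHEUS_URL = "http://localhost:9090"

# Query the Prometheus HTTP API; replace "up" with the serving metric you care about
response = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": "up"})
response.raise_for_status()

for result in response.json()["data"]["result"]:
    print(result["metric"], result["value"])
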

4 months ago
0 Hey, Trying To Figure Out How To Create An

FierceHamster54 I saw you saying the YOLOv5 project and name are hardcoded in there. Fixed that for ya 😉 https://github.com/ultralytics/yolov5/pull/10100

10 months ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don'T Work):

No worries! Just so I understand fully though: you were already using the patch with success from my branch. Now that it has been merged into the transformers main branch you installed it from there and that's when you started having issues with not saving models? Then installing transformers 4.21.3 fixes it (which should have the old clearml integration even before the patch?)

6 months ago
0 Hello Channel, I Have A Question Regarding Clearml Serving In Production. I Have Different Environments, And Different Models Each Of Them Linked To A Use Case. I Would Like To Spin Up One Kubernetes Cluster (From Triton Gpu Docker Compose) Taking Into

To be honest, I'm not completely sure as I've never tried hundreds of endpoints myself. In theory, yes, it should be possible: Triton, FastAPI and Intel OneAPI (ClearML building blocks) all claim they can handle that kind of load, but again, I've not tested it myself.

To answer the second question, yes! You can basically use the "type" of model to decide where it should be run. You always have the custom model option if you want to run it yourself too 🙂

5 months ago
0 Hi Everyone! I Faced The Problem With Clearml-Serving. I'Ve Deployed Onnx Model From Huggingface In Clearml-Serving, But

Hi! You should add extra packages in your docker-compose through your env file; they'll get installed when building the serving container. In this case you're missing the transformers package.

You'll also get the same explanation here.

4 months ago
0 Hi Team, When Clearml-Agent Is Used To Run The Code, It Will Setup The Environment, How Does It Take The Python Package Version?

Hi @<1533257278776414208:profile|SuperiorCockroach75>

I must say I don't really know where this comes from. As far as I understand, the agent should install the packages exactly as they are saved on the task itself. Can you go to the original experiment of the pipeline step in question (you can do this by selecting the step and clicking on "Full Details" in the info panel)? There, under the execution tab, you should see which version the task detected.

The task itself will try to autodetect t...

7 months ago