ExasperatedCrab78
Moderator
2 Questions, 221 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges: 1 (2 × Eureka!)
0 Votes 0 Answers 1K Views
A little something else: Using ClearML, an OAK-1 AI camera and a raspberry pi to create a pushup counter that locks my PC every hour and only unlocks again w...
2 years ago
0 Votes 5 Answers 1K Views
We're working on ClearML serving right now and are very interested in what you all are searching for in a serving engine, so we can make the best serving eng...
2 years ago
0 Tasks Can Be Put In Draft State - If We Will Execute:

RoundMosquito25 it is true that the TaskScheduler requires a task_id, but that does not mean you have to run the pipeline every time 🙂

When setting up, you indeed need to run the pipeline once to get it into the system. But from that point on, you should be able to just use the task_scheduler on the pipeline ID. The scheduler should automatically clone the pipeline and enqueue it; it will basically use the one existing pipeline as a "template" for subsequent runs.
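A minimal sketch of what I mean (the pipeline ID, queue name and schedule below are placeholders, adapt them to your setup):

    from clearml.automation import TaskScheduler

    # Use the one existing pipeline run as the "template" task for the scheduler.
    scheduler = TaskScheduler()
    scheduler.add_task(
        schedule_task_id="<pipeline_controller_task_id>",  # ID of the pipeline you ran once
        queue="services",        # queue the cloned pipeline controller will be enqueued on
        hour=6, minute=0,        # e.g. run every day at 06:00
        recurring=True,
    )
    scheduler.start_remotely(queue="services")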

2 years ago
0 It Would Be Nice To Group Experiments Within Projects Use Cases:

Could you use tags for that? In that case you can easily filter on which group you're interested in, or do you have a more impactful UI change in mind to implement groups? 🙂
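To make that concrete, roughly something like this (the project name and tag are placeholders, and it assumes your ClearML version lets you filter Task.get_tasks by tags):

    from clearml import Task

    # Tag experiments with their group when they are created...
    task = Task.init(project_name="<your_project>", task_name="experiment_1")
    task.add_tags(["group:baseline"])

    # ...and later pull only the tasks that belong to that group.
    baseline_tasks = Task.get_tasks(project_name="<your_project>", tags=["group:baseline"])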

2 years ago
0 Hi, I Am Trying The TriggerScheduler To Catch When A User Add Specific Tag To A Task. I Used The Below Code But The schedule_function Is Not Called When Adding Tags To Task (It Seems The task.last_update Is Not Modified After Adding Tag)

Can you elaborate a bit more? I don't quite understand yet. So it works when you update an existing task by adding a tag to it, but it doesn't work when adding a tag for the first time?

2 years ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’t Work):

It's been accepted in master, but indeed it hasn't been released yet!

As for the other issue, it seems like we won't be adding support for non-string dict keys anytime soon. I'm thinking of adding a specific example/tutorial on how to work with Huggingface + ClearML so people can do it themselves.

For now (using the patch) the only thing you need to be careful about is to not connect a dict or object with ints as keys. If you do need to (e.g. usually huggingface models need the id2label dict some...

one year ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’t Work):

Damn it, you're right 😅

        from clearml import Task

        # Allow ClearML access to the training args and allow it to override the arguments for remote execution.
        # Non-string dict keys are cast to strings before connecting and cast back afterwards
        # (cast_keys_to_string / cast_keys_back are the helpers from the shared patch).
        args_class = type(training_args)
        args, changed_keys = cast_keys_to_string(training_args.to_dict())
        Task.current_task().connect(args)
        training_args = args_class(**cast_keys_back(args, changed_keys)[0])
one year ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’t Work):

Just for reference, the main issue is that ClearML does not allow non-string types as dict keys for its configuration. Usually the label mapping (id2label) does have ints as keys, which is why we need to cast them to strings first, pass them to ClearML, and then cast them back.
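For reference, the helpers in the patch do roughly the following; this is only a sketch (not the exact patch code) and it assumes the original non-string keys are ints, as in id2label:

    def cast_keys_to_string(d, changed_keys=None):
        # Recursively cast non-string dict keys to str, remembering which keys were changed.
        if changed_keys is None:
            changed_keys = set()
        new_d = {}
        for key, value in d.items():
            if not isinstance(key, str):
                changed_keys.add(str(key))
                key = str(key)
            if isinstance(value, dict):
                value, changed_keys = cast_keys_to_string(value, changed_keys)
            new_d[key] = value
        return new_d, changed_keys

    def cast_keys_back(d, changed_keys):
        # Cast the previously stringified keys back to int once ClearML has connected the dict.
        new_d = {}
        for key, value in d.items():
            if key in changed_keys:
                key = int(key)
            if isinstance(value, dict):
                value, _ = cast_keys_back(value, changed_keys)
            new_d[key] = value
        return new_d, changed_keys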

one year ago
0 Hey Everyone, I Have Been Trying To Get The Pytorch Lightning Cli To Work With Remote Task Execution, But It Just Won't Work. I Took The

HomelyShells16 Thanks for the detailed write-up and minimal example. I'm running it now too

2 years ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’t Work):

@<1523701949617147904:profile|PricklyRaven28> Please use this patch instead of the one previously shared. It excludes the dict hack :)

one year ago
0 We Have A Use Case Where An Experiment Consists Of Multiple Docker Containers. For Example, One Container Works On Cpu Machine, Preprocesses Images And Puts Them Into Queue. The Second One (Main One) Resides On Gpu Machine, Reads Tensors And Targets From

After re-reading your question, it might be difficult to have cross-process communication though. So if you want the preprocessing to happen at the same time as the training, with the training pulling data from the preprocessing on the fly, that might be more difficult. Is this your use case?

2 years ago
0 Can We Use The Simple Docker-Compose.Yml File For Clearml Serving On A Huggingface Model (Not Processed To Tensorrt)?

Sorry, I jumped the gun before I fully understood your question 🙂 So by "simple docker compose file", do you mean you don't want to use the docker-compose-triton.yaml file, and so want to run the huggingface model on CPU instead of on Triton?

Or do you want to know if the general docker compose version is able to handle a huggingface model?

one year ago
0 Can We Use The Simple Docker-Compose.Yml File For Clearml Serving On A Huggingface Model (Not Processed To Tensorrt)?

As I understand it, vertical scaling means giving each container more resources to work with. This should always be possible in a k8s context, because you decide which types of machines go in your pool and you define the resource requirements for each container yourself 🙂 So if you want to set the container to use 10,000 CPUs, feel free! Unless you mean something else with this, in which case please counter!

one year ago
0 Started Using The Integrated Gcp Autoscaler To Avoid Some Problems We Had. For Some Reason The Instances Doesn't Have A Gpu Although Specifically Defined In The Ui. How Come? (Not Using Any Docker Container For The Agents)

Hi EmbarrassedSpider34 , would you mind showing us a screenshot of your machine configuration? Can you check for any output logs that ClearML might have given you? Depending on the region, maybe there were no GPUs available, so could you maybe also check if you can manually spin up a GPU vm?

2 years ago
0 Started Using The Integrated Gcp Autoscaler To Avoid Some Problems We Had. For Some Reason The Instances Doesn't Have A Gpu Although Specifically Defined In The Ui. How Come? (Not Using Any Docker Container For The Agents)

Thanks! I've asked the autoscaler devs about this and it might be a bug; you're the second one to report it. He's checking and we'll get back to you!

2 years ago
0 Hey, We Are Using Clearml 1.9.0 With Transformers 4.25.1… And We Started Getting Errors That Do Not Reproduce In Earlier Versions (Only Works In 1.7.2 All 1.8.X Don’t Work):

Hi @<1523701949617147904:profile|PricklyRaven28> sorry that this is happening. I tried to run your minimal example, but get an IndexError: Invalid key: 5872 is out of bounds for size 0 error. That said, I get the same error without the code running in a pipeline; there seems to be no difference between simply running the code and the pipeline (for me). Do you have an updated example, maybe also including getting a local copy of an artifact, so I can check?
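Something like this is what I mean by getting a local copy of an artifact (the task ID and artifact name are placeholders):

    from clearml import Task

    # Fetch the task that produced the artifact and download a local copy of it.
    producing_task = Task.get_task(task_id="<task_id>")
    local_path = producing_task.artifacts["<artifact_name>"].get_local_copy()
    print(local_path)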

one year ago
0 Hi Guys! I Am New To Clearml And I Was Trying Out This Simple Code And It Took 4Min To Run. Is This Normal?

Hi @<1541592213111181312:profile|PleasantCoral12> thanks for sending me the details. Out of curiosity, could it be that your codebase / environment (apart from the clearml code, e.g. the whole git repo) is quite large? ClearML does a scan of your repo and packages every time a task is initialized, maybe that could be it. In the meantime I'm asking our devs if they can see any weird lag with your account on our end 🙂

one year ago
0 I Am Trying To Run The Urbandsounds8K Example, But When I Run "Preprocessing" I Get The Error In The Line

VivaciousBadger56 hope you had a great time while away :)

That looks correct indeed. Do you mind checking for me if the dataset actually contains the correct metadata?

Go to the datasets section, select the one you need and on the right click on more information. It should send you to the experiment manager view. Then, under artifacts, do you see a key in the list named metadata? Can you post a screenshot?
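If it's easier, you can also check it programmatically; a rough sketch through the dataset's underlying task (project and dataset names are placeholders, and ._task is internal API):

    from clearml import Dataset

    ds = Dataset.get(dataset_project="<your_project>", dataset_name="<your_dataset>")
    print(list(ds._task.artifacts.keys()))           # should contain "metadata"
    metadata = ds._task.artifacts["metadata"].get()  # loads the metadata artifact itself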

2 years ago
0 Can I Upload A Text With The Dataset? The Goal Is To Explain The Ds

You should be able to! But we just saw that it isn't supported as part of the dataset interface, only on the task interface. So for now you can get the dataset's underlying task and add a comment this way:

    your_dataset._task.comment = "your text here"

We have flagged this as a bug and we'll add this soon!
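For completeness, the full flow would look roughly like this (project and dataset names are placeholders; ._task is the internal workaround mentioned above):

    from clearml import Dataset

    ds = Dataset.get(dataset_project="<your_project>", dataset_name="<your_dataset>")
    ds._task.comment = "Free-text description of the dataset goes here"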

2 years ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

Thank you so much, and sorry for the inconvenience; thank you for your patience! I've pushed it internally and we're looking into a patch 🙂

one year ago
0 Hi There, Another Triton-Related Question: Are We Able To Deploy

Hi there!

Technically there should be nothing stopping you from deploying a python backend model. I just checked the source code and ClearML basically just downloads the model artifact and renames it based on the inferred type of model.


As far as I'm aware (could def be wrong here!), the Triton Python backend essentially requires a folder...

one year ago
0 I Am Trying To Run The Urbandsounds8K Example, But When I Run "Preprocessing" I Get The Error In The Line

VivaciousBadger56 Thanks for your patience, I was away for a week 🙂 Can you check that you properly changed the project name in the line above the one you posted?

In the example, by default, the project name is "ClearML Examples/Urbansounds". But it should give you an error when first running the get_data.py script that you can't actually modify that project (by design). You need to change it to one of your own choice. You might have done that in get_data.py but forgot to do s...

2 years ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

Yes, with Docker, auto-starting containers is definitely a thing 🙂 We set the containers to restart automatically (a reboot will trigger that too), so when a container crashes it immediately restarts, which is what you want in, say, a production environment.

So the best thing to do there is to use docker ps to get all running containers and then kill them using docker kill <container_id>. ChatGPT tells me this command should force-remove all containers (running or stopped):

    docker rm -f $(docker ps -aq)

And I...

one year ago