ObedientToad56
Moderator
10 Questions, 17 Answers
Active since 10 January 2023
Last activity 3 months ago

Reputation: 0
Badges: 1 (15 × Eureka!)
0 Votes · 1 Answer · 629 Views
Hi everyone, how do I change the default value for agent.default_docker.image that is being used?
one year ago
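The default container image the agent uses in docker mode can typically be set in the agent's clearml.conf; a minimal sketch, assuming a CUDA runtime image (the image name and extra arguments here are only examples):

```ini
# clearml.conf (agent section) -- sketch; adjust to your environment
agent {
    default_docker {
        # image used when a task does not specify its own container
        image: "nvidia/cuda:11.8.0-runtime-ubuntu22.04"
        # optional extra docker run arguments
        arguments: ["--ipc=host"]
    }
}
```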
0 Votes · 3 Answers · 622 Views
Hi folks, how does ClearML figure out the virtual environment that it needs to use for the task? Is there any way to override the default virtual environment...
one year ago
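The agent's Python environment behavior can usually be steered from clearml.conf as well; a hedged sketch (the interpreter path and values below are examples, not defaults):

```ini
# clearml.conf (agent section) -- sketch
agent {
    # explicit interpreter the agent should use when building the task venv
    python_binary: "/usr/bin/python3.10"
    package_manager {
        # which manager builds the environment: pip / conda / poetry
        type: pip
    }
}
```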
0 Votes · 7 Answers · 140 Views
2 years ago
0 Votes · 7 Answers · 620 Views
Hi folks, I am trying to set up ClearML locally, installing it via Helm (helm install clearml allegroai/clearml). When I do that, I see that a few of the pods a...
one year ago
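When some pods stay pending or initializing after a Helm install, standard kubectl triage usually shows why; a sketch (the namespace and placeholder names are assumptions for illustration):

```shell
# list pod status in the release namespace (assuming namespace "clearml")
kubectl get pods -n clearml
# show scheduling/init events for a stuck pod
kubectl describe pod <pod-name> -n clearml
# tail the logs of an init container waiting on a dependency
kubectl logs <pod-name> -n clearml -c <init-container-name> --follow
```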
0 Votes · 1 Answer · 632 Views
Hi, for clearml-serving, is there any contract on how the output from the model should be formatted? I have created a custom engine for spacy which will ret...
2 years ago
0 Votes · 3 Answers · 583 Views
Hi, currently the ClearML tasks submitted to a queue get executed sequentially in FIFO mode. Is there any way to make the agent run some of these concurrent...
one year ago
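A ClearML agent daemon pulls one task at a time from its queue, so concurrency is usually achieved by running several agent instances, optionally pinned to different GPUs; a sketch (queue name and GPU indices are examples):

```shell
# two agents on the same queue => two tasks can run concurrently
clearml-agent daemon --queue default --detached
clearml-agent daemon --queue default --detached

# or pin one agent per GPU
clearml-agent daemon --queue default --gpus 0 --detached
clearml-agent daemon --queue default --gpus 1 --detached
```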
0 Votes · 1 Answer · 639 Views
Hi folks, I have a question on the Pipeline Configuration Object. I can access the configuration object through the API client (client.tasks.get_by_id(task_id).c...
2 years ago
0 Votes · 3 Answers · 668 Views
Hi folks, I have a question on clearml-serving. In the tutorials I see spinning up the inference container as the last step. My question is on how the d...
2 years ago
0 Votes · 6 Answers · 695 Views
Hi everyone, I have a GPU training job task that went to failed status because of CUDA out of memory. However, when I look at the worker view...
one year ago
0 Votes · 2 Answers · 650 Views
Hey, trying to do some cleanup because of "No space left". For the ClearML worker nodes, under the .clearml directory I see drwxr-xr-x 3 ubuntu ubuntu 20K Apr...
one year ago
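The agent's working directories under ~/.clearml are the usual place to reclaim space, since caches and task venvs can be rebuilt on demand; a sketch (directory names follow the default clearml-agent layout, verify on your node before deleting):

```shell
# see what is taking space
du -sh ~/.clearml/*

# cached task virtualenvs, git clones, and pip downloads are rebuildable
rm -rf ~/.clearml/venvs-builds
rm -rf ~/.clearml/vcs-cache
rm -rf ~/.clearml/pip-download-cache
```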
0 Hi Everyone, I Have A Training Job Task Which Was Using Gpu That Went To

Sure, thanks SuccessfulKoala55. Not sure if it is a one-off event; I will try to reproduce it.

one year ago
0 Hi Folks, How Does Clearml Figure Out The Virtual Environment That It Needs To Use For The Task. Is There Any Way To Override The Default Virtual Environment That Is Picked ?

Thanks @<1567321739677929472:profile|StoutGorilla30> and @<1523701070390366208:profile|CostlyOstrich36>, my question was from the perspective of the agent. I am guessing agent.binary is what I would have to set.

one year ago
0 Hi

@<1523701087100473344:profile|SuccessfulKoala55>, it's on a single machine.

one year ago
0 Hi Folks, I Have A Question On The

Hmm, I was speaking from a production point of view; I thought there would be some hooks for deployment where the integration with k8s was also taken care of automatically.

AFAIK, I have to create a deployment of this container and add an ingress on top of it. In the architecture diagram on GitHub, this seems to be something that is already baked in, which is what caused the confusion. Curious to know your thoughts on this.

2 years ago
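If the inference container has to be wrapped manually, a minimal sketch of a Deployment plus Service looks like this (the resource names, image tag, and port are assumptions; an Ingress would then route to the Service):

```yaml
# sketch: running the clearml-serving inference container on k8s
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clearml-serving-inference
spec:
  replicas: 1
  selector:
    matchLabels: {app: clearml-serving-inference}
  template:
    metadata:
      labels: {app: clearml-serving-inference}
    spec:
      containers:
        - name: inference
          image: allegroai/clearml-serving-inference:latest
          ports: [{containerPort: 8080}]
---
apiVersion: v1
kind: Service
metadata:
  name: clearml-serving-inference
spec:
  selector: {app: clearml-serving-inference}
  ports: [{port: 8080}]
```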
0 Hi Everyone, I Have A Training Job Task Which Was Using Gpu That Went To

Yeah, GPU utilization was 100%. I cleaned it up using nvidia-smi and killing the process, but I was expecting the cleanup to happen automatically since the process failed.

one year ago
0 Hi Team. Why Am Getting This Error K8S Helm

This is because you need to add the helm repo first.

one year ago
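I.e. something along these lines (the repository URL has moved over time, so treat it as an assumption and check the chart's README):

```shell
# register the chart repository, then install the release
helm repo add allegroai https://allegroai.github.io/clearml-helm-charts
helm repo update
helm install clearml allegroai/clearml
```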
0 Hi, For

Got this working after using a preprocess step, similar to the sklearn example, to convert the input explicitly to a list.

2 years ago
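A sketch of what such a preprocess step can look like, following the shape of the clearml-serving preprocess template (the request key "text" and response key "predictions" are assumptions about the payload, not part of any contract):

```python
# preprocess.py -- sketch of a clearml-serving style preprocess module
from typing import Any, Callable, Optional


class Preprocess(object):
    def __init__(self) -> None:
        # serving instantiates this once per endpoint
        pass

    def preprocess(self, body: dict, state: dict,
                   collect_custom_statistics_fn: Optional[Callable] = None) -> Any:
        # coerce the incoming value into a list so the engine
        # always receives a uniform batch shape
        value = body.get("text", [])
        return value if isinstance(value, list) else [value]

    def postprocess(self, data: Any, state: dict,
                    collect_custom_statistics_fn: Optional[Callable] = None) -> dict:
        # wrap the raw engine output into a JSON-serializable response
        return {"predictions": list(data)}
```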
0 Hi, Folks I Am Trying To Setup

@<1523701087100473344:profile|SuccessfulKoala55>, trying to use an external mongo. In the values.yaml I see these two fields:

mongodbConnectionStringAuth: ""
mongodbConnectionStringBackend: ""

Can you please help with what should go in these?

one year ago
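These two fields take full MongoDB connection strings, one per database the server uses; a sketch (the host, credentials, and database names below are assumptions, check the chart documentation for the exact database names your version expects):

```yaml
# values.yaml -- external MongoDB (sketch)
mongodbConnectionStringAuth: "mongodb://clearml:<password>@mongo.example.com:27017/auth"
mongodbConnectionStringBackend: "mongodb://clearml:<password>@mongo.example.com:27017/backend"
```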
0 Hi, Folks I Am Trying To Setup

@<1523701087100473344:profile|SuccessfulKoala55>, from the init-containers I could see that it is waiting for MongoDB to start.

one year ago
0 Hi, Folks I Am Trying To Setup

@<1523701087100473344:profile|SuccessfulKoala55> , using this

allegroai 

It's been stuck in initialization for a long time.

one year ago
0 Hi, Folks I Am Trying To Setup

cc: @<1523701827080556544:profile|JuicyFox94>

one year ago
0 <no title>

Thanks @<1523701205467926528:profile|AgitatedDove14>. For now I have forked clearml-serving locally and added an engine for spacy; it is working fine. Yeah, I think some documentation and a good example would make it more visible. An example for something like spacy would be useful for the community.

2 years ago
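For a custom engine, the preprocess module can own model loading as well; a hedged sketch of a spaCy flavor (the method names follow the shape of the clearml-serving custom-engine template, and the entity-extraction logic and response keys are illustrative assumptions):

```python
# sketch: custom-engine style preprocess module that loads a spaCy model
from typing import Any, Callable, Optional


class Preprocess(object):
    def __init__(self) -> None:
        self.model = None

    def load(self, local_file_name: str) -> Any:
        # called once with the locally downloaded model path/name
        import spacy  # imported lazily so the module loads without spaCy installed
        self.model = spacy.load(local_file_name)
        return self.model

    def preprocess(self, body: dict, state: dict,
                   collect_custom_statistics_fn: Optional[Callable] = None) -> Any:
        texts = body.get("text", [])
        return texts if isinstance(texts, list) else [texts]

    def process(self, data: Any, state: dict,
                collect_custom_statistics_fn: Optional[Callable] = None) -> Any:
        # run the pipeline and keep only what the response needs
        return [[(ent.text, ent.label_) for ent in self.model(t).ents] for t in data]

    def postprocess(self, data: Any, state: dict,
                    collect_custom_statistics_fn: Optional[Callable] = None) -> dict:
        return {"entities": data}
```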
0 <no title>

Hmm, thanks @<1523701087100473344:profile|SuccessfulKoala55>. What would be the right way that you would recommend for adding support for other models/frameworks like spacy?

Would you recommend adding other models by sending a PR in line with the lightgbm example here,

or use the custom option and move the logic for loading the model to preprocess or `proce...

2 years ago
0 <no title>

@<1523701087100473344:profile|SuccessfulKoala55>, I saw in the examples one case of the engine being passed as custom.


My requirement is the need to support other frameworks, let's say spacy. So I was thinking maybe I could create a pipeline that does the model load and inference and pass that pipeline. I am still figuring out the ecosystem; would something like that make sense?

2 years ago