Hi, I tried the k8s-glue on my k8s setup and needed some clarifications on some of the arguments.
--queue: Does this only refer to default and service? How can I create a new queue for it to sync with the ClearML server?
--ports-mode: I'm not sure what ports mode does. The doc says "add a label to the pod which can be used as service". Which pod is it referring to in the first place? Same question for all the args pertaining to --ports-mode (e.g. base-pod-num, gateway-address, etc.).
--overrides-yaml: What is the ...
The doc also mentioned preconfigured services with selectors in the form of
"ai.allegro.agent.serial=pod-<number>" and a targetPort of 10022.
Would you have any examples of how to do this?
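For reference, based only on the selector and targetPort quoted from the docs above, a preconfigured service might look something like the sketch below. Everything except the selector label and port 10022 (the name, namespace, and service type) is an illustrative assumption, not taken from the docs:

```yaml
# Illustrative sketch only: one Service per pod slot, matching the label
# the k8s glue is said to add in ports-mode. The selector key/value and
# targetPort come from the doc quote above; all other fields are guesses.
apiVersion: v1
kind: Service
metadata:
  name: clearml-agent-pod-1   # hypothetical name
spec:
  type: NodePort              # or LoadBalancer, depending on the cluster
  selector:
    ai.allegro.agent.serial: pod-1
  ports:
    - port: 10022
      targetPort: 10022
      protocol: TCP
```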
The first line is to make sure kubectl is connected to k8s.
Ok sure. Thanks.
Thanks AgitatedDove14 , will take a look.
We are using the k8s glue to spawn the job. Would you be able to advise, in detail, on what goes on when the above code executes?
yeah, someone should call them out.
For example, it would be useful to integrate https://github.com/whylabs/whylogs#features into ClearML as part of data and model monitoring. WhyLogs would have its own static page, preferably displayed as a new custom tab (besides logs, scalars and plots).
I see. Is there a more elaborate code example that describes the above interactions?
Hi SuccessfulKoala55 , just wondering how I can follow up on this.
Hi, any idea if I can achieve this? I just need a list of usernames.
Do you mean by this that you want to be able to seamlessly deploy models that were tracked using ClearML experiment manager with ClearML serving?
Ideally that's best. Imagine that I used spaCy (among other frameworks) and I just need to add the one or two lines of ClearML code in my Python scripts to get experiment tracking. Then when it comes to deployment, I don't have to worry about spaCy having a model format that Triton doesn't recognise.
Do you want clearml serving ...
The server is running only the ClearML components. Could you advise on the ELB part and how we should optimise it?
ah... thanks!
Does the enterprise version support this natively?
Yeah that'll cover the first two points, but I don't see how it'll end up as a dataset catalogue as advertised.
I think a related question is: ClearML relies heavily on Triton (a good thing), but Triton only supports a few frameworks out of the box. So this 'engine' needs to make sure it can work with Triton and use all its wonderful features such as request batching, GPU reuse, etc.
Thanks, could you share the URL to this full API documentation?
Hi, currently the ClearML SDK only supports Python. If I want to run my ML in other languages, can I use an SDK in that language? Or are there other means, such as Web API calls, that do the same as the SDK?
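For the Web API route, the ClearML server exposes a JSON-over-HTTP REST API that the SDK itself uses, so any language with an HTTP client can call it. A minimal stdlib-only Python sketch is below; the endpoint names (`auth.login`, `tasks.get_all`), the basic-auth login flow, and the response envelope are my assumptions from the public ClearML API reference, so verify them against your server version:

```python
import base64
import json
import urllib.request

API_SERVER = "https://api.clear.ml"  # or your self-hosted API server URL


def login_request(access_key, secret_key):
    """Build the auth.login request: HTTP basic auth with the API credentials.

    Assumption: the response body carries a token under data.token.
    """
    creds = base64.b64encode(f"{access_key}:{secret_key}".encode()).decode()
    return urllib.request.Request(
        f"{API_SERVER}/auth.login",
        method="POST",
        headers={"Authorization": f"Basic {creds}"},
    )


def api_request(endpoint, payload, token):
    """Build a JSON POST to any endpoint, e.g. 'tasks.get_all'."""
    return urllib.request.Request(
        f"{API_SERVER}/{endpoint}",
        data=json.dumps(payload).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )


# Usage (requires valid credentials and network access to the server):
# with urllib.request.urlopen(login_request(KEY, SECRET)) as r:
#     token = json.load(r)["data"]["token"]
# with urllib.request.urlopen(
#     api_request("tasks.get_all", {"status": ["completed"]}, token)
# ) as r:
#     print(json.load(r)["data"]["tasks"])
```

Since the requests are plain HTTP, the same two-step pattern (login for a token, then Bearer-authenticated JSON POSTs) ports directly to any other language.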
Hi, I will have to get back to you again. I need to check every client's repo to verify your hypothesis.
Any idea where i can find the relevant API calls for this?
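On the usernames part: assuming the `users.get_all` endpoint returns the usual `{"data": {"users": [...]}}` envelope with a `name` field per user (an assumption based on the public API reference, not confirmed here), pulling the list out of the response is a one-liner:

```python
def extract_usernames(users_get_all_response):
    """Return a sorted list of user names from a users.get_all response body.

    Assumption: response shape is {"data": {"users": [{"name": ...}, ...]}}.
    """
    return sorted(u.get("name", "") for u in users_get_all_response["data"]["users"])


# Usage with a hypothetical response body:
# extract_usernames({"data": {"users": [{"name": "bob"}, {"name": "alice"}]}})
```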
Transform feature engineering and data processing code into recurring data ingestion workflows. Start building data stores, develop, automate, and schedule complex data processing jobs.
No issues. I know it's hard to track open threads with Slack. I wish there was a plugin for this too. 🙂
Create immutable and differentiable versions on-prem or in the cloud with our data agnostic solution.
Hi. Anything that can point to activity by user.
Share data across R&D teams with searchable data catalogs available on any environment.
Yes! I definitely think this is important, and hopefully we will see something there (or at least in the docs)
Hi AgitatedDove14 , any updates in the docs to demonstrate this yet?