AgitatedDove14
Moderator
49 Questions, 8124 Answers
  Active since 10 January 2023
  Last activity one year ago

0 Any Ideas Why This Is Happening? It Was Fine Yesterday

TenseOstrich47 this looks like Elasticsearch is out of space...

4 years ago
0 Hi, I Have A Small Question Regarding K8S Clearml-Serving Behavior. I Have In My Cluster One Gpu Of 16Gb Ram, And Another One Of 24 Gb Ram. I Have A Llm Model Fitting The 24Gb But Not The 16Gb Gpu. When I Call The Endpoint, How Will I Know To Which Gpu I

Hi SuccessfulRaven86
Every clearml-serving session (you can have multiple different "sessions") is assumed to be homogeneous; this means it will serve the same models on as many nodes as possible, supporting multiple models per pod.
In your example I think the easiest is to create two serving sessions: one with a node selector for the 24GB node and another for the 16GB node, wdyt?

one year ago
0 Hi! Trying To Run The Following Very Basic Code. The First Few Parts Works As They Should:

Hi FunnyTurkey96
what's the clearml server you are using?

4 years ago
0 Base_Template_Keras_Simply.Py

As I suspected, from your log:
agent.package_manager.system_site_packages = false
which is exactly the problem of the missing tensorflow (basically the agent creates a new venv inside the docker, but without this flag on, it does not inherit the docker image's preinstalled packages).
This flag should have been true.
Could it be that the clearml.conf you are providing for the glue includes this value?
(basically you should only have the sections that are either credentials or missing from the default, there...

4 years ago
0 I Am Trying To Use

And the server itself? Is it http or https?

4 years ago
0 Hey Guys, Hope You'Re Having A Good Week

Yep 🙂
Basically:
from clearml import Task
from time import sleep

task = Task.get_task(task_id='aaaa')
while task.status not in ('completed', 'stopped'):
    # do something ?
    sleep(15)

(Notice task.status / task.get_status() will refresh the Task status on every call)

4 years ago
0 Hi All I Am Would Like To Somehow Prevent Clearml Caching From Caching A Task That Hasn'T Uploaded Artifacts (Using Cache_Executed_Step In

But I am considering just failing the task.

This will of course work; just raise an exception in the Task itself, and protect the call in the pipeline logic function with try/except.
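
A minimal sketch of that raise-and-protect pattern, assuming the decorator-based pipeline interface (the component name, project and argument below are made up for illustration):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(cache=True)
def produce_artifacts(source: str):
    # hypothetical component: placeholder check for whether anything useful was produced
    artifacts_uploaded = False
    if not artifacts_uploaded:
        # fail on purpose so this run is not reused from the cache later on
        raise ValueError("no artifacts uploaded, failing the component")

@PipelineDecorator.pipeline(name="caching example", project="examples", version="1.0")
def my_pipeline():
    try:
        produce_artifacts("/tmp/data")
    except Exception:
        # protect the pipeline logic from the failing component
        pass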

Regarding the second option, try to nullify the hash on the Component Task:

# running the Task component here
# if we do not want someone to use us
Task.current_task()._set_runtime_properties({"pipeline_job_hash": None})
one year ago
0 Hi All, I'M Trying To Deploy Trains On Rancher (Nice Kubernetes Cluster Orchestration Project) Where I'M Quite New To Rancher And Kubernetes. I Have Been Able To Install Trains Using Helm

but I still need the load balancer ...

No, you are good to go. As long as someone registers the pods' IPs automatically on a DNS service (local/public), you can use the registered address instead of the IP itself (obviously with the port suffix).

Thanks for your support

With pleasure!

4 years ago
0 I Am Completely Stuck With The Serving. I Did The Custom Example. I See The Endpoint In

Hi ConvolutedSealion94
Yes this seems like the correct curl
How did you spin up the clearml-serving containers? Was it with docker-compose or with the helm chart? (I remember there are some pitfalls with the helm chart, and I would actually start with the local docker-compose to debug it.)

2 years ago
0 Hello There! I Was Trying To Update The Url For Debug Samples After Migration Of The Server To A New Domain And Was Following The Steps From Here:

Hi NonsensicalSparrow35

But the provided command is missing the url target for the curl so it is not complete.

Not sure I followed. Did you specify "NEW_ADDRESS"?
Or is it that in both cases the URL is localhost?

one year ago
0 Base_Template_Keras_Simply.Py

Assuming from previous threads this is run on K8s, I think a configuration is missing; use system packages:
https://github.com/allegroai/clearml-agent/blob/cb6bdece39751eaef975287609b8bab603f116e5/docs/clearml.conf#L57

4 years ago
0 Base_Template_Keras_Simply.Py

DeliciousBluewhale87 could you send the new log?

4 years ago
0 Unrelated Problem (Or Is It?) The Clearml'S Built In Cleanup Service Fails

Can't figure out what made it get to this point

I "think" this has something to do with loading the configuration and setting up the "StorageManager".
(in other words, setting up google.storage)... Or maybe it is the lack of the google storage package?!
Let me check

3 years ago
0 Can Someone Confirm That

instead of the one that I want or the one of the env which it is started from.

The default is the python that is used to run the agent.
agent.ignore_requested_python_version = true
agent.python_binary = /my/selected/python3.8

4 years ago
0 While We Rerunning Using Agent All Dependencies Ill Be Installed Once It Get Completed Will The Dependencies Will Be Removed Or Not

Hi CumbersomeBee33
what do you mean by "will the dependencies will be removed or not"?
The next time the agent spins up a new Task it will create a new venv and delete the previous one.

2 years ago
0 Hi Anyone

Hi AstonishingWorm64
I think you are correct, there is no external interface to change the docker.
Could you open a GitHub issue so we do not forget to add an interface for that?
As a temp hack, you can manually clone the "triton serving engine" Task and edit the container image (under the Execution tab).
wdyt?
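
The same temporary workaround can be sketched through the SDK instead of the UI; the task id and container image below are placeholders, not values from this thread:

from clearml import Task

# clone the existing "triton serving engine" task and override its container image
serving_task = Task.get_task(task_id="<triton-serving-engine-task-id>")
cloned = Task.clone(source_task=serving_task, name="triton serving engine - custom image")
cloned.set_base_docker("nvcr.io/nvidia/tritonserver:22.08-py3")  # hypothetical image
Task.enqueue(cloned, queue_name="default")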

4 years ago
0 Hey, I Have Many Python Files. In The First Python File I Use The Following Line. Parameters = Task.Connect(Input) Now I Change The Hyperparameters On The Graphical Interface. But Now I Need The Hyperparameters In Every Python File. How Do I Have Access T

Hi ProudChicken98
task.connect(input) preserves the types based on the "input" dict types; on the flip side, get_parameters returns the string representation (as stored on the clearml-server).
Is there a specific reason for using get_parameters over connect?
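
A small sketch of the difference; the parameter names and values here are made up for illustration:

from clearml import Task

task = Task.init(project_name="examples", task_name="params demo")

params = {"batch_size": 32, "learning_rate": 0.001, "use_augmentation": True}
params = task.connect(params)                    # values keep their Python types
print(type(params["batch_size"]))                # <class 'int'>

as_strings = task.get_parameters()               # everything comes back as strings
print(type(as_strings["General/batch_size"]))    # <class 'str'>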

4 years ago
0 Reducing Docker Container Spin-Up Time With Clearml Agent

Woot woot!
Awesome, this RC is stable so feel free to use it; the official release is probably due out next week :)

3 years ago
0 Ok, I Faced Quite Funny Issue. Sorry For Spamming In This Chat, But I Am Just Ramping Up With Clearml And Its A Bit Turbulent.. Issue (As I Understand It) Is Following: My Package That I Use For Model Trainings Has The Same Name As Some Package In Pip (I

Worker just installs by name from pip, and it installs a package that is not mine!

Oh dear ...
Did you configure additional pip repositories in the Agent's clearml.conf? https://github.com/allegroai/clearml-agent/blob/178af0dee84e22becb9eec8f81f343b9f2022630/docs/clearml.conf#L77 It might be that (1) is not enough, as pip will first try to find the package in the public pip repository, and only then in the private one. To avoid that, in your code you can point directly to an https link of your package Ta...

3 years ago
0 Hi Everyone! I Am Using Clearml-Serving When I Am Trying To Add New Endpoint Like This

Thanks SweetShells3! Let me see if I can reproduce the issue

2 years ago
0 Hi, I Try To Run Locally

DefiantHippopotamus88 seems like you are missing the ports 🙂

CLEARML_WEB_HOST="
"
CLEARML_API_HOST="
"
CLEARML_FILES_HOST="
"
3 years ago
0 Hello! I’M Currently Using Clearml-Server As An Artifact Manager And Clearml-Serving For Model Inference, With Each Running On Separate Hosts Using Docker Compose. I’Ve Successfully Deployed A Real-Time Inference Model In Clearml-Serving, Configured Withi

Hi JealousArcticwolf24
Awesome deployment 🤩
Yes, if you need another scalable model serving setup you can just run another instance of the clearml-serving-inference container:
https://github.com/allegroai/clearml-serving/blob/7ba356efc97a6ae2159283d198d981b3c1ab85e6/docker/docker-compose.yml#L77
So you end up with two of them, one per models environ...

one year ago
0 Hi, Happy Friday To Everyone, Is There Anyone Who Can Ref Me To How You Would Work With Ref/Loading A Dataset With Netapp (Astra Trident) Integration From The Ide.

Hi OutrageousReindeer5
Is NetApp S3 protocol enabled or are you referring to NFS mounts?

one year ago
0 Hello, I Am Trying To Retrieve A Simple Dict Artifact Uploaded In A Previous Task With

JitteryCoyote63 with pleasure 🙂
BTW: the Ignite TrainsLogger will be fixed soon (I think the fix is already on a branch by SuccessfulKoala55) to address the bug ElegantKangaroo44 found; it should be in an RC next week.

5 years ago
0 Hi All! I Have Methods Inside Notebooks That I Made Available To Clis Using Nbdev

Hi MassiveBat21

However, no useful template is created for down stream executions - the source code template is all messed up,

Interesting, could you provide the code that is "created", or even better some way to reproduce it? It sounds like sort of a bug, or maybe a missing feature.

My question is - what is a best practice in this case to be able to run exported scripts (python code not made availa...

2 years ago