AgitatedDove14
Moderator
49 Questions, 8112 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 25 × Eureka! (4 years ago)
0 Hi, In My Setup I Run Multiple Experiments In Parallel From The Same Script. I Understand That There Can Only Be One Execution

Now, will these 10 experiments have different names? How will I know these are part of the 'mnist1' HPO case?

Yes (they will have the specific HP name/value combination).
FYI, names are not unique, so in theory you could have multiple experiments with the same name.

If you look under the Configuration tab, you will find all the configuration arguments for the experiment. You can also add specific arguments to the experiment table (click the cogwheel at the top right corner, and select...
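
For illustration, a minimal sketch of launching such an HPO run from code, assuming the clearml.automation API; the base task ID, parameter name, metric names, and queue are placeholders:

from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformIntegerParameterRange

# the optimizer spawns clones of the base task, one per HP name/value combination
task = Task.init(project_name='examples', task_name='mnist1 HPO', task_type=Task.TaskTypes.optimizer)
optimizer = HyperParameterOptimizer(
    base_task_id='BASE_TASK_ID',  # placeholder: the template experiment to clone
    hyper_parameters=[
        UniformIntegerParameterRange('General/batch_size', min_value=32, max_value=128, step_size=32),
    ],
    objective_metric_title='accuracy',    # placeholder metric
    objective_metric_series='validation',
    objective_metric_sign='max',
    execution_queue='default',            # placeholder queue
    max_number_of_concurrent_tasks=2,
)
optimizer.start()
optimizer.wait()
optimizer.stop()

Each spawned experiment then carries its own HP combination, which is how you can tell it belongs to the 'mnist1' HPO run.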

4 years ago
0 I Uncommented The Line

So, net-net, does this mean it's behaving as expected,

It is as expected.
If no "Installed Packages" are listed, then it cannot pull a cached venv (because requirements.txt is not a full environment, and it was never analyzed).
It does, however, create a venv cache based on it (after installing it).
A clone of this Task (i.e. right-click on the experiment in the UI, clone it, enqueue it) will use the cached copy, because the full packages are listed in the "Installed Packages" section of the Task.
Make sens...
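
As a sketch of the same clone-and-enqueue flow from code (the task ID and queue name are placeholders):

from clearml import Task

# the clone carries over the "Installed Packages" section,
# so the agent can reuse the cached venv
original = Task.get_task(task_id='ORIGINAL_TASK_ID')  # placeholder ID
cloned = Task.clone(source_task=original, name='cloned experiment')
Task.enqueue(task=cloned, queue_name='default')  # placeholder queue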

2 years ago
0 Regarding The “Classic” Datasets (Not Hyper Datasets): Is There An Option To Do Something Equivalent To Dvc’S “

You can run md5 on the file as stored in the remote storage (NFS or S3).

S3 is implementation specific (i.e. MinIO, Weka, Wasabi, etc. might not support it), and I'm actually not sure regarding NFS (I mean you can run it, but it actually means you are reading the data; that said, NFS by definition, I'm assuming, is relatively fast access).
wdyt?
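
A minimal sketch of the checksum idea for NFS-style file access (for plain, non-multipart S3 uploads the object's ETag happens to equal its MD5, but that is backend specific, as noted above):

import hashlib

def md5_of_file(path, chunk_size=8 * 1024 * 1024):
    # stream the file in chunks so large datasets do not have to fit in memory
    digest = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()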

3 years ago
0 Hi Guys, I Configured A Trains Server And A Trains Agent. I Have Some Code I Want To Run In The Trains Agent, However The Code Is In A Local Branch On My Client (I Cant Push It On Remote Yet Because Of Internal Practices) Is There A Way To Do So? Currentl

SmugOx94 Yes, we just introduced it 🙂 in 0.16.3.
Discussion was here (I'll make sure to update the issue now that the version is out):
https://github.com/allegroai/trains/issues/222
In your trains.conf add the following line:
sdk.development.store_code_diff_from_remote = true
It will store the diff from the remote HEAD instead of the local one.
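
In HOCON form, the same setting inside trains.conf would look like this (a sketch; the flat dotted form above works as well):

sdk {
  development {
    # store the git diff against the remote HEAD instead of the local one
    store_code_diff_from_remote: true
  }
}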

4 years ago
0 Is It Possible To Schedule Pipelines On Events Like Dataset Update?

Basically the idea is that you create the pipeline once (say, in debug), then once you see it is running, you have a Task of your pipeline in the system (with any custom logic you added). With a Task in the system you can always clone/modify it and launch it externally (i.e. from code or the UI). Make sense?
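
To tie this back to the original question, a hedged sketch of wiring a dataset-update event to that pipeline Task, assuming the clearml.automation TriggerScheduler API (parameter names as I recall them; the ID, queue, and project are placeholders):

from clearml.automation import TriggerScheduler

# poll the backend and enqueue the pipeline Task whenever a dataset
# in the watched project is updated
trigger = TriggerScheduler(pooling_frequency_minutes=5)
trigger.add_dataset_trigger(
    schedule_task_id='PIPELINE_TASK_ID',    # placeholder: the pipeline's Task
    schedule_queue='default',               # placeholder queue
    trigger_project='datasets/my_project',  # placeholder project to watch
)
trigger.start()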

3 years ago
0 I Am Trying To Use

Yes, it's the JWT issue.

4 years ago
0 Hi, Plotting A Debug Sample With A

Thanks VirtuousFish83!
This is great

4 years ago
0 Could You Please Explain A Bit More How Trains Adapt The Torch Version Depending On The Installed Cuda Version? Here Is My Setup:

JitteryCoyote63 I think this only holds for the conda distribution.
(Actually quite interesting, I wonder what happens if you already installed cudatoolkit...)

4 years ago
0 Is Clearml Able To Intercept (Automatically) Metrics Gathered Via

When you have a bit of experience, please suggest a path forward; it would be great to integrate.

2 years ago
0 2. I Have A Local Postgresql And Datafetcher Class, Whats The Best Way To Reuse Same Datafetcher In Local Runs With Pipeline. Is It Possible?

Hmm, I would recommend passing it as an artifact, or returning its value from the decorated pipeline function. Wdyt?
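
A minimal sketch of the artifact route (task IDs and names are placeholders; dataframe stands in for whatever the DataFetcher returns):

from clearml import Task

# producing step: store the fetched data as an artifact on the current Task
task = Task.current_task()
task.upload_artifact(name='datafetcher_output', artifact_object=dataframe)

# consuming step: pull the artifact back by the producer's task ID
producer = Task.get_task(task_id='PRODUCER_TASK_ID')  # placeholder ID
dataframe = producer.artifacts['datafetcher_output'].get()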

one year ago
0 Executed From Within A Pipelinecontroller Task, What Possible Reason Does

[Assuming the above is what you are seeing]
What I "think" is happening is that the Pipeline creates its own Task. When the pipeline completes, it closes its own Task, basically making any later call to Task.current_task() return None, because there is no active Task. I think this is the reason that when you call process_results(...) you end up with None.
For a quick fix, you can do:
pipeline = Pipeline(...)
MedianPredictionCollector.process_results(pipeline._task)
Maybe we should...

3 years ago
0 Hi, I'Ve Got A Quick Question About

Where is the clearml-server running? GCP as well?

3 years ago
0 So, I Have Just Started Using Clearml For Local Data And Experiment Tracking And Its Been Super Helpful. Now That I Am Moving Towards Deploying And Serving The Models Using Clearml-Serving And Triton. I Have Done Some Basic Experimenting With The Provided

Suppose that a new model version 2 is trained, but it does not fulfill our target metrics. Is it possible to just save the model to the model repo and not serve it, if model version 1 is already being served?

Sure, just do not "publish" the model; it will be stored in the model repository, fully accessible, but clearml-serving will not serve it 🙂
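
A hedged sketch of that flow (project, task, and file names are placeholders):

from clearml import Task, OutputModel

task = Task.init(project_name='serving-demo', task_name='train model v2')
model = OutputModel(task=task, framework='PyTorch')
model.update_weights(weights_filename='model_v2.pt')  # placeholder weights file
# intentionally no model.publish(): the model stays in the repository as a
# draft, fully accessible, while clearml-serving keeps serving the published v1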

3 years ago
0 Hi All, Wanted To Know If There’S A Way (That’S Not A Hack) To Configure K8S Agents To Use Github Deploy Keys? As I Understand, Only User/Pass Combinations Are Possible With Agent Pods (Given By

Hi MassiveBat21
CLEARML_AGENT_GIT_USER is actually a git personal token.
The easiest is to have a read-only user/token for all the projects.
Another option is to use the ClearML vault (unfortunately not part of the open source) to automatically apply these configurations on a per-user basis.
wdyt?
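
For reference, a sketch of the two agent environment variables involved (values are placeholders):

CLEARML_AGENT_GIT_USER=readonly-ci-user        # or the personal access token itself, as noted above
CLEARML_AGENT_GIT_PASS=PERSONAL_ACCESS_TOKEN   # placeholder token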

2 years ago
0 Hello! Since Today I Get

Does clearml resolve the CUDA version from the driver or from conda?

Actually it starts with the default CUDA based on the host driver, but when it installs the conda env it takes it from the "installed packages" (i.e. the ones you used to execute the code in the first place).

Regarding the link, I could not find the exact version, but this is close enough I guess:
None
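
If you need to override what the agent detects, a sketch of pinning the CUDA version in the agent section of the config file (clearml.conf / trains.conf; values are placeholders):

agent {
    cuda_version: "10.1"   # force the CUDA version the agent resolves packages against
    cudnn_version: "7.6"
}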

4 years ago
0 My Other Issue Is That If I Want To Compare Two Experiments The Scalar Plots Do Not Load ( Loading Forever ). If I Select To Show Only The Minimum Values That One Loads And Also The Other Menu Points Working In The Comparison Mode Except That.

Hi @<1600299043865497600:profile|MagnificentSeaurchin90>
Any chance you can provide more info on the error?

if I want to compare two experiments, the scalar plots do not load (loading forever).

I'm assuming the issue is the Plots tab? Or is it the Scalars? What do you have in the Plots? Can you send an image of the single experiment?

one year ago
0 Hi All, I'M Trying To Deploy Trains On Rancher (Nice Kubernetes Cluster Orchestration Project) Where I'M Quite New To Rancher And Kubernetes. I Have Been Able To Install Trains Using Helm

Maybe the only thing to worry about is making sure the IP address is stable, so if k8s replaces the node, you do not have to reconfigure the clients 🙂

4 years ago
0 Does The New 2.0 Helm Charts (App Ver 1.1.0) Not Support Nfs?

I think this is the only mount you need:

Data persisted in every Kubernetes volume by ClearML will be accessible in /tmp/clearml-kind folder on the host.

SuccessfulKoala55 is this correct?

3 years ago
0 Hi All! Let'S Say I Have Two Functions Decorated With

Only those components that are imported in the script where the pipeline is defined would be included in the DAG plot, is that right?

Actually, the way it works currently (and we might change it if there is a better way) is that every time you call PipelineDecorator.component, a new component is stored on the Pipeline Task, which is later translated into a DAG graph and table (the next version will have a very nice UI to display/edit them).
The idea is first to have a representation of the p...
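
A minimal sketch of the decorator flow (project, pipeline, and function names are placeholders):

from clearml.automation.controller import PipelineDecorator

# each decorated function is registered as a component on the Pipeline Task
# and becomes a node in the DAG
@PipelineDecorator.component(return_values=['doubled'])
def step_one(x):
    return x * 2

@PipelineDecorator.pipeline(name='demo-pipeline', project='examples', version='0.1')
def run_pipeline(x=1):
    return step_one(x)

if __name__ == '__main__':
    PipelineDecorator.run_locally()  # debug the DAG locally before enqueuing
    run_pipeline(x=3)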

3 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

The quickest workaround would be, in your final code, to do something like:
my_params_for_hpo = {'key': omegaconf.key}
task.connect(my_params_for_hpo, name='hpo_params')
call_training_with_value(my_params_for_hpo['key'])
This will initialize my_params_for_hpo with the values from OmegaConf, and allow you to override them in the hyperparameter section (task.connect is two-way: in manual mode it stores the data on the Task; in agent mode it takes the values from the Task and puts them ba...

3 years ago
0 I'Ve Been Trying To Use The

Hi @<1610808279263350784:profile|FriendlyShrimp96>

Is there a way to get a list of variants given a metric, or even just a full list of metrics and variants for a given task id?

Try this
None

from clearml.backend_api.session.client import APIClient

# APIClient wraps the raw ClearML REST API
c = APIClient()
# returns the reported metric events for the given task(s) and event type
metrics = c.events.get_task_metrics(tasks=["TASK_ID_HERE"], event_type="training_debug_image")
print(metrics)

I think API ...

one year ago