AgitatedDove14
Moderator
0 Hi All, I'm New With Clearml And I Have A Question. I Have A Modular Code, And When I'm Trying To Run It In A Remote Machine With The Agent, I Get An Error On The Line 'From X Import Y', Which Says That There Isn't Such Module X. Any Help? Thanks.

Creating a dataset sounds like a good idea, but that does not seem to be the issue.
Can you verify you can manually clone using the same link? (Notice the log should specify the exact clone command it is using, with the password replaced with *.)

3 years ago
0 Hi Guys, Until Today I Always Requested Data Scientists To Use Cli To Create Tasks. After That I Usually Reconfigure Them So They Can Be Pointed On Git Repo And So On. Unfortunately This Is Becoming A Big Task Since Now We Have Pipelines With Many Tasks A

Nice guys! Notice that clearml-task can auto-add the Task.init call on the fly, so you can connect any arbitrary Task and control the argparser arguments (again, as parameters to clearml-task).
BTW: A fix for the --task-type issue will be pushed later today 😉

3 years ago
0 Is It Possible To Increase The Polling Interval For K8S Glue? Currently It Is 5 Seconds I Believe. Would Adding An Argument For It Help? Can Do A Pr If So

This is the thread checking the state of the running pods (and updating the Task status, so you have visibility into the state of the pod inside the cluster before it starts running)

3 years ago
0 Hi, Is It Possible To Specify Per Experiment (Task In Clearml) Where The Results (Artifacts) Are Saved?

Because we are working with very big files, having them stored at multiple locations is something we try to avoid

Just so I better understand, is this for storing files as part of a dataset, or as debug samples?
In other words, can two different processes create the exact same file (image)?

3 years ago
0 Hi, Can I Choose Not Print The Clearml-Agent Config Logs In The Console? Reason Is We Are Passing Credentials Via Env Var To The K8S Glue And Its Being Displayed In The Console As ...

Hi SubstantialElk6
Where exactly in the log do you see the credentials?

/tmp/.clearml_agent.234234e24s.cfg

What's the exact setup? (I mean, are you using the glue? If that's the case, I think the temp config file is only created inside the pod/docker, so upon completion it will be deleted along with the pod.)

3 years ago
0 Hello, I'm Trying To Save A Keras Model As A Task Artifact, And Then Upload It From Another Task. Does Anyone Know The Syntax For That? What I've Seen Is Not Quite Working.

You can always log it manually:
from clearml import InputModel
input_model = InputModel.import_model(weights_url='/tmp/keras_example/weight.6.hdf5')
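If the goal is to then pick that model up from another task, a rough sketch could look like the following (the project/task names are placeholders, not from the original thread):

from clearml import Task, InputModel

task = Task.init(project_name='examples', task_name='consumer')
input_model = InputModel.import_model(weights_url='/tmp/keras_example/weight.6.hdf5')
task.connect(input_model)                  # register it as this task's input model
local_weights = input_model.get_weights()  # download a local copy of the weights file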

3 years ago
0 Hi, Does Anyone Know How To Setup Clearml In Google Colab/Jupyterlab? E.G., Where To Put The

Hi HollowDolphin18
Sure just use:
Task.set_credentials(
    api_host=None, web_host=None, files_host=None,
    key=None, secret=None, store_conf_file=False
)
https://github.com/allegroai/clearml/blob/912f6f5ba2328b26de042de03f02de5802df360f/clearml/task.py#L2153
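In a Colab/Jupyter cell this might look like the sketch below (the host URLs and key/secret values are placeholders for your own server and credentials):

from clearml import Task

Task.set_credentials(
    api_host='https://api.clear.ml',     # or your self-hosted api server
    web_host='https://app.clear.ml',
    files_host='https://files.clear.ml',
    key='<access_key>',                  # generated in the ClearML web UI
    secret='<secret_key>',
    store_conf_file=False,
)
task = Task.init(project_name='colab-demo', task_name='example')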

3 years ago
0 I Am Back With Another Question: Is There A File Similar To The

ReassuredTiger98 no, but I might be missing something.
How do you mean project-specific?

3 years ago
0 Hi. Is It Possible To Run Pipelines Clearml Using Yaml Manifests Like Kubeflow Style?

The imports are inside the functions because each function itself becomes a stand-alone job running on a remote machine, not the entire pipeline code. This is also how the packages to install on the remote machine are automatically picked up. Make sense?
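As a rough sketch of that pattern (component and pipeline names here are illustrative):

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['data'], cache=True)
def make_data(n: int):
    # imports live inside the function body, because only this function
    # is shipped to the remote machine, not the whole module
    import numpy as np
    return np.random.rand(n)

@PipelineDecorator.pipeline(name='example pipeline', project='examples', version='0.1')
def pipeline_logic(n: int = 10):
    data = make_data(n)
    print(data)

if __name__ == '__main__':
    PipelineDecorator.run_locally()  # remove to enqueue the steps on agents instead
    pipeline_logic(n=10)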

2 years ago
0 One More Follow-Up Still; We're Trying To Run Non-Gpu Scaler, And I've Finally Sorted Out Subnet And Security Groups Issues, Only To Run Into This:

git config --system credential.helper 'store --file /root/.git-credentials'

Maybe we should use this hack for cloning with user/token in general ...

2 years ago
0 Hi

I think you can watch it after GTC on the nvidia website, and a week after that we will be able to upload it to the youtube channel 🙂

2 years ago
0 Hmm Is There Any Clear (Pun Intended) Documentation On The Roles Of Storagemanager, Dataset And Artefacts? It Seems To Me There Are Various Overlapping Roles And I'm Not Sure I Fully Grasp The Best Way Of Using Them. Especially When Looking At The Way Da

Hi JealousParrot68
I'll try to shed some light on these modules and use cases.
StorageManager is, generally speaking, low-level access to http/object-storage/file utilities. In most cases there is no need to use it directly if objects are already stored/managed on ClearML (for example artifacts/models/datasets), but it is quite handy to use with your own S3 buckets etc.
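A minimal sketch of that direct-access use case (the bucket URLs and file names are placeholders):

from clearml import StorageManager

# download a file from your own bucket and get a locally cached copy
local_copy = StorageManager.get_local_copy(remote_url='s3://my-bucket/data/file.csv')

# upload a local file back to the bucket
StorageManager.upload_file(local_file='/tmp/results.json', remote_url='s3://my-bucket/results/results.json')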

Artifacts: Passing an artifact between Tasks will usually be something like:
` artifact_object = Task.get_task('task_id').artifa...
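A rough sketch of that artifact hand-off between two tasks (task and artifact names are placeholders):

from clearml import Task

# producer task: upload the artifact
task_a = Task.init(project_name='examples', task_name='producer')
task_a.upload_artifact(name='results', artifact_object={'accuracy': 0.9})

# consumer task: fetch it by the producer's task id
producer = Task.get_task(task_id='<producer_task_id>')
artifact_object = producer.artifacts['results'].get()  # downloads and deserializes the artifact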

3 years ago
0 Hi, Together With

"Updates a few seconds ago"

That just means that the process is not dead.

Yes that seemed to be stuck 😞
Any chance you can verify with the RC version?
I'll try to dig into the commits, maybe I can come up with an explanation ...

4 years ago
0 Hi, Together With

BTW:
Just making sure, 74 was not supposed to be the last checkpoint (in other words it is not stuck on leaving the training process, but actually in the middle)

4 years ago
0 Hi, Together With

JitteryCoyote63 while it's running, could you give me a few details on the setup, maybe I can reproduce it.
Is it using pytorch distributed ?
Are all models uploaded to S3 ?
etc.

4 years ago
0 So, Here's A Question. Does Clearml Automatically Save Everything Necessary To Continue Training A Pytorch Language Model? Specifically, I've Been Looking At The Checkpoint Folders Created When I'm Training A Huggingface Robertaformaskedlm. I Checked What

If you cannot change the "TrainerState" (i.e. inherit and pass it into the code)
you could also monkey-patch it, something like:

class OurTrainerState(TrainerState):
    def __init__(...):
        ...

    @classmethod
    def load_from_json(cls, json_path: str):
        state = super().load_from_json(json_path)
        Task.current_task().upload_artifact(...)
        return state

trainer.state = OurTrainerState(trainer.state)

3 years ago
0 Hi, I Noted That Clearml-Serving Does Not Support Spacy Models Out Of The Box And That Clearml-Serving Only Supports Following;
  1. Correct. Basically the order is: REST API body dictionary -> preprocess -> process -> postprocess -> REST API dictionary return
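For reference, a custom preprocessing module in clearml-serving typically looks something like the sketch below. The method names follow the flow above, but the exact signatures may differ between clearml-serving versions, so treat this as an assumption to verify against the examples in the repo:

from typing import Any

class Preprocess(object):
    def __init__(self):
        # called once when the endpoint is loaded
        pass

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # REST API body dictionary -> model input
        return [body.get('x0'), body.get('x1')]

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # model output -> REST API dictionary return
        return {'y': data.tolist() if hasattr(data, 'tolist') else data}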
2 years ago
0 Hi, I Think I Found A Bug: In The

Thanks StaleKangaroo85, the bug is verified. Let me check to see exactly where the bug is.

Two points:
Notice that x_labels should be the size of the histogram. It seems that you also have to pass the labels (otherwise you get "trace-0"), so if you add labels=['random histogram'] and labels=['random histogram2'], you'll get the correct legend.
Anyhow, I'll make sure we also fix it in code so that labels automatically default to [series] when not specified, thanks!
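A hedged sketch of reporting a histogram with explicit labels (parameter names are based on the discussion above; verify them against your clearml version's Logger.report_histogram):

from clearml import Task

task = Task.init(project_name='examples', task_name='histogram demo')
logger = task.get_logger()

values = [1, 3, 2, 5]
logger.report_histogram(
    title='my histograms',
    series='random histogram',
    values=values,
    iteration=0,
    labels=['random histogram'],   # legend entry, as suggested above
    xlabels=['a', 'b', 'c', 'd'],  # one x label per histogram bin
)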

4 years ago
0 Hello, Does Anybody Here Have Much Experience In Creating Sub-Tasks Or Sub-Pipelines? I'm Not Sure The Concept Is Particularly Well Established But The Docs Mention:

using caching where specified but the pipeline page doesn't show anything at all.

What do you mean by "the pipeline page doesn't show anything at all"? Are you running the pipeline? How?
Notice that PipelineDecorator.component needs to be top-level, not nested inside the pipeline logic, like in the original example:

@PipelineDecorator.component(
    cache=True,
    name=f'append_string_{x}',
)
one year ago