SmugDolphin23
Moderator
0 Questions, 418 Answers
  Active since 10 January 2023
  Last activity one year ago

Hi there, currently I have a ClearML pipeline that takes in a bunch of parameters for various tasks and passes these parameters via parameter_override for every pipe.add_step(). However, I have a lot of parameters, and so my pipeline code is a little unwi…

Hi @<1633638724258500608:profile|BitingDeer35> ! You could attach the configuration using set_configuration_object in a pre_execute_callback.

Basically, you would have something like:

def pre_callback(pipeline, node, params):
    # attach the config to the step's task as a configuration object
    # (the "my_config" name is illustrative)
    node.job.task.set_configuration_object(name="my_config", config_dict=config)
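For context, a minimal sketch of wiring that callback into a step (the project, task, and config names are placeholders, not from your setup):

from clearml import PipelineController

config = {"learning_rate": 0.1}  # hypothetical config handed to the step

pipe = PipelineController(name="example-pipeline", project="examples")
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="train-task",
    pre_execute_callback=pre_callback,
)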
11 months ago
Hi, I am switching from wandb to ClearML in my PyTorch DDP training script. With wandb I used to have worker nr 1 handle logging to wandb and initiating the connection. If I simply exchange wandb calls with ClearML calls, worker nr 1, which handles the co…

That makes sense. You should generally have only one task (initialized in the master process). The other subprocesses will inherit this task, which should speed up the process.
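A rough sketch of that pattern, assuming torch.multiprocessing spawns the workers (names are placeholders):

import torch.multiprocessing as mp
from clearml import Task

def worker(rank):
    # subprocesses inherit the task created in the master process
    task = Task.current_task()
    # ... training and logging via task.get_logger() ...

if __name__ == "__main__":
    # initialize exactly one task, in the master process only
    task = Task.init(project_name="examples", task_name="ddp-training")
    mp.spawn(worker, nprocs=2)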

10 months ago
How does one…

Hi @<1654294828365647872:profile|GorgeousShrimp11> ! add_tags is an instance method, so you will need the controller instance to call it. To get the controller instance, you can do PipelineDecorator.get_current_pipeline() then call add_tags on the returned value. So: PipelineDecorator.get_current_pipeline().add_tags(tags=["tag1", "tag2"])
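For example, inside a decorated pipeline (a minimal sketch; the pipeline name, version, and tags are placeholders):

from clearml import PipelineDecorator

@PipelineDecorator.pipeline(name="example-pipeline", project="examples", version="1.0.0")
def run_pipeline():
    # fetch the live controller instance and tag the running pipeline
    PipelineDecorator.get_current_pipeline().add_tags(tags=["tag1", "tag2"])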

9 months ago
Hi everyone, I have a question about using…

Hi @<1643060801088524288:profile|HarebrainedOstrich43> ! The rc is now out and installable via pip install clearml==1.14.1rc0

11 months ago
Hi all, I have a query regarding the retrieval of pipeline details. I have already created a pipeline under the project name "<project_name>" and the pipeline name is "<pipeline_name>". I would like to retrieve the version of this pipeline. I tried using…

Hi @<1626028578648887296:profile|FreshFly37> ! Indeed, the pipeline gets tagged once it is running. Actually, it just tags itself. That is why you are encountering this issue. The version is derived in one of two ways: either you manually add the version using the version argument in the PipelineController, or the pipeline fetches the latest version out of all the pipelines that have run, and auto-bumps that.
Please reference this function: https://github.com/allegroai/clearml/blob/05...
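For example, pinning the version manually (a sketch; the names are placeholders):

from clearml import PipelineController

# passing version explicitly skips the auto-bump behaviour
pipe = PipelineController(name="my-pipeline", project="my-project", version="1.0.0")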

10 months ago
Hi! I'm running launch_multi_mode with pytorch-lightning

Because I think that what you are encountering now is an NCCL error.

6 months ago
Hi team, I am trying to run a pipeline remotely using ClearML Pipeline and I'm encountering some issues. Could anyone please assist me in resolving them?

Oh I see. I think there is a mismatch between some clearml versions on your machine. How did you run these scripts exactly (via the CLI, for example python test.py)?

Or if you ran it via an IDE, what is the interpreter path?

11 months ago
Hi! I would like to report 2 "plt.imshow" images. Plain plotting (i.e. "plt.figure()") showed only the second one. When I tried to report through the logger via "report_confusion_matrix" it reported only the first one. Is there a better way of doing thi…

Hi @<1714813627506102272:profile|CheekyDolphin49> ! It looks as if we can't report these plots as plotly plots, so we default to Debug Samples. You should see both plots under Debug Samples, but make sure you are setting the Metric to -- All --
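A minimal sketch of reporting two matrices with distinct titles so neither overwrites the other (project, task, and title names are placeholders):

import numpy as np
from clearml import Task

task = Task.init(project_name="examples", task_name="two-matrices")
logger = task.get_logger()
# distinct titles keep the two reports from overwriting each other
logger.report_confusion_matrix(title="matrix_1", series="run", iteration=0, matrix=np.eye(3))
logger.report_confusion_matrix(title="matrix_2", series="run", iteration=0, matrix=np.ones((3, 3)))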

4 months ago
Hi, I am familiarising myself with clearml-serving and following the steps from the…

Hi @<1765547897220239360:profile|FranticShark20> ! Do you have any other logs that could help us debug this, such as tritonserver logs?
Also, can you use model.onnx as the model file name both in the upload and in default_model_filename, just to make sure this is not a file extension problem (this can happen with Triton)?

one month ago
Hi there, currently I have a ClearML pipeline that takes in a bunch of parameters for various tasks and passes these parameters via parameter_override for every pipe.add_step(). However, I have a lot of parameters, and so my pipeline code is a little unwi…
Would that mean that multiple pre_callback()s would have to be defined for every add_step, since every step would have different configs? Sorry if there's something I'm missing, I'm still not quite good at working with ClearML yet.

Yes, you could have multiple callbacks, or you could check the name of each step via node.name and map the name of the node to its config.

One idea would be to have only one pipeline config file, which would look like:

step_1:
  # step_1 confi...
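Then a single callback could look up each step's section by node name (a sketch, assuming the YAML above is saved as pipeline_config.yaml):

import yaml

with open("pipeline_config.yaml") as f:
    configs = yaml.safe_load(f)  # maps step name -> config dict

def pre_callback(pipeline, node, params):
    # pick this step's section based on the node's name
    node.job.task.set_configuration_object(name="config", config_dict=configs[node.name])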
11 months ago
Hello, is there a way to disable dataset caching so that when…

Hi FreshParrot56 ! This is currently not supported 🙁

2 years ago
Hi, following…

Hi HandsomeGiraffe70 ! We found the cause for this problem, we will release a fix ASAP

2 years ago
I am currently training a YOLO model using the YOLOv5 framework within a container. I am using the --project and --name flags during the training process, but unfortunately, the training results are not being sent to the server. Instead, they are being fo…

Hi @<1639074542859063296:profile|StunningSwallow12> !
This happens because the output_uri in Task.init is likely not set.
You could either set the env var CLEARML_DEFAULT_OUTPUT_URI to the file server you want the model to be uploaded to before running train.py, or set sdk.development.default_output_uri: true (or to the file server you want the model to be uploaded to) in your clearml.conf.
Also, you could call Task.init(output_uri=True) in your train.py scri...
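For instance (a minimal sketch; project and task names are placeholders):

from clearml import Task

# output_uri=True uploads output models to the default file server;
# a URI string such as "s3://bucket/path" would target custom storage instead
task = Task.init(project_name="examples", task_name="yolov5-train", output_uri=True)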

11 months ago
Why is async_delete not working?

Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! To help us debug this: are you able to simply use the boto3 python package to interact with your cluster?
If so, what does that code look like? This would give us some insight into how the config should actually look or what changes need to be made.
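For example, a minimal connectivity check might look like this (the endpoint and credentials are placeholders; mirror whatever your clearml.conf points at):

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://my-minio:9000",  # hypothetical S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
print(s3.list_buckets())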

10 months ago
Why is async_delete not working?

Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! We have someone investigating the UI issue (I mainly work on the sdk). They will get back to you once they find something...

10 months ago
Hi all, I have a query regarding the retrieval of pipeline details. I have already created a pipeline under the project name "<project_name>" and the pipeline name is "<pipeline_name>". I would like to retrieve the version of this pipeline. I tried using…

Hi @<1626028578648887296:profile|FreshFly37> ! You can get the version by doing:

p = PipelineController.get(...)
# the version is stored as a runtime property on the pipeline's backing task;
# note that _get_runtime_properties is a private API, so this may change
p._task._get_runtime_properties().get("version")

We will make the version more accessible in a future release

11 months ago
Hi, I am observing a strange behaviour when loading a dataset's local copy.

Hi @<1695969549783928832:profile|ObedientTurkey46> ! You could try increasing sdk.storage.cache.default_cache_manager_size to a very large number
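In clearml.conf that would look something like this (the value is an arbitrary example):

sdk {
    storage {
        cache {
            # maximum number of entries kept in the local cache
            default_cache_manager_size: 100000
        }
    }
}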

3 months ago
I tried using…

Hi @<1523708920831414272:profile|SuperficialDolphin93> ! What if you just do controller.start() (to start it locally)? The task should not quit in this case.

one month ago
Hi everyone, I have a question about using…

@<1643060801088524288:profile|HarebrainedOstrich43> we released 1.14.1 as an official version

11 months ago
Hi there. In a ClearML pipeline step with Docker, I specify the git repo and branch I want to use. How can I also specify a repo's optional dependencies? It uses poetry for dependency management

Hi @<1688721797135994880:profile|ThoughtfulPeacock83> ! Make sure you set agent.package_manager.type: poetry in your clearml.conf. If you do, the poetry.lock or pyproject.toml will be used to install the packages.
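That setting sits in the agent section of clearml.conf, roughly like this:

agent {
    package_manager {
        # have the agent resolve and install packages with poetry
        type: poetry
    }
}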

9 months ago
Hey, is there a way to set pipeline component return artifact compression at a pipeline level? It would allow big dataframes to flow across components without having to resort to defining temporary datasets; currently it's generating only raw pickles.

Hi @<1523702000586330112:profile|FierceHamster54> ! This is currently not possible, but I have a workaround in mind. You could use the artifact_serialization_function parameter in your pipeline. The function should return a bytes stream of the zipped content of your data with whichever compression level you have in mind.
If I'm not mistaken, you wouldn't even need to write a deserialization function in your case, because we should be able to unzip your data just fine.
Wdyt?
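A sketch of what that could look like (the compression choice and names are illustrative, not a definitive implementation):

import gzip
import pickle

from clearml import PipelineController

def serialize(obj):
    # return a compressed bytes stream for every component return artifact
    return gzip.compress(pickle.dumps(obj), compresslevel=9)

pipe = PipelineController(
    name="example-pipeline",
    project="examples",
    artifact_serialization_function=serialize,
)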

11 months ago
Hi everyone, I have a question about using…

Hi @<1643060801088524288:profile|HarebrainedOstrich43> ! Thank you for reporting. We will get back to you as soon as we have something

11 months ago
Hi everyone, I have a question about using…

Hi @<1643060801088524288:profile|HarebrainedOstrich43> ! Could you please share some code that could help us reproduce the issue? I tried cloning, changing parameters and running a decorated pipeline, but the whole process worked as expected for me.

11 months ago
Hi, I'm trying to upload output model files (like .pth) to the ClearML server. Assume my…

@<1523721697604145152:profile|YummyWhale40> are you able to manually save models from SageMaker using OutputModel?
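For reference, a manual model upload could look roughly like this (paths and names are placeholders):

from clearml import OutputModel, Task

task = Task.init(project_name="examples", task_name="sagemaker-train")
output_model = OutputModel(task=task, name="my-model")
# registers the local weights file and uploads it to the task's output destination
output_model.update_weights(weights_filename="model.pth")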

10 months ago
Hi, I'm trying to integrate Logger in my PipelineDecorator but I'm getting this error -

Your object is likely holding some file descriptor or something like that. The pipeline steps are all running in separate processes (they can even run on different machines while running remotely). You need to make sure that the objects you are returning are thus picklable and can be passed between these processes. You can check that the logger you are passing around is indeed picklable by calling pickle.dumps on it and then loading it in another run.
The best practice would ...
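A quick way to run that check on any object you pass between steps:

import pickle

def is_picklable(obj):
    # if dumps/loads raises, the object cannot cross the process
    # boundary between pipeline steps
    try:
        pickle.loads(pickle.dumps(obj))
        return True
    except Exception:
        return False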

9 months ago
Hi, I am struggling with the following points. 1. Trying to update model metadata through…

Hi @<1654294820488744960:profile|DrabAlligator92> ! The way chunk size works is:
the upload will try to obtain zips that are smaller than the chunk size. So it will continuously add files to the same zip until the chunk size is exceeded. If the chunk size is exceeded, a new chunk (zip) is created. The initial file in this chunk is the file that caused the previous size to be exceeded (regardless of the fact that the file itself might exceed the size).
So in your case: an empty chunk is creat...
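For reference, the chunk size is set at upload time (a sketch; paths and names are placeholders):

from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="my-dataset")
ds.add_files("data/")
# chunk_size is in MB; files are zipped together until the size is exceeded
ds.upload(chunk_size=100)
ds.finalize()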

12 months ago