SmugDolphin23
Moderator
0 Questions, 426 Answers
Active since 10 January 2023
Last activity 2 years ago

Reputation: 0
I'm trying to understand the execution flow of pipelines when translating from local to remote execution. I've defined a pipeline using the...

If the task is running remotely and the parameters are populated, the local run parameters will not be used; instead, the parameters already on the task will be used. This is because we want to allow users to change these parameters in the UI if they want to, so the parameters in the code are ignored in favor of the ones in the UI.
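A minimal sketch of that behavior with task.connect (the project, task, and parameter names here are illustrative):

from clearml import Task

task = Task.init(project_name="examples", task_name="params-demo")

# local default values; when an agent runs this task remotely,
# connect() replaces them with the values stored on the task
# (i.e. whatever was edited in the UI)
params = {"learning_rate": 0.1, "batch_size": 32}
task.connect(params)

print(params)  # remotely: the UI values, not the defaults above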

one year ago
Hi there, is there a way to upload/connect an artifact to a certain running/completed task, using a different scope other than the one that's running? (I mean, instead of using task.upload_artifact, use Task.get_tasks(task_id=<some_task_id>) and then use this...

Hi @<1539417873305309184:profile|DangerousMole43> ! You need to mark the task you want to upload an artifact to as running. You can use task.mark_started(force=True) to do so, then mark it back as completed using task.mark_completed(force=True)
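A minimal sketch of that flow (the task ID and artifact contents are hypothetical):

from clearml import Task

task = Task.get_task(task_id="<some_task_id>")  # hypothetical ID
task.mark_started(force=True)  # re-open the completed task
task.upload_artifact("my_artifact", artifact_object={"key": "value"})
task.mark_completed(force=True)  # close it again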

one year ago
For some reason, when I try to load a dataset (Dataset.get), the method _query_task is called and this method tries to call the _send method of the InterfaceBase class. This method may return None and this case is not handled by the _query_task method, which tries to rea...

Hello MotionlessCoral18! I have a few questions that might help us find out why you experience this problem:
Is there any chance you are running the program in offline mode?
Is there any other message being logged that might help? The error messages might include: Action failed, Failed sending, Retrying, previous request failed, contains illegal schema.
Are you able to connect to the backend at all from the program you are trying to get the dataset from?
Thank you!

2 years ago
Hi everyone! Could someone tell how to use...

Hi @<1569496075083976704:profile|SweetShells3> ! Can you reply with some example code on how you tried to use pl.Trainer with launch_multi_node ?

one year ago
Hi, after upgrading to ClearML SDK 1.6.0, I am getting an error when trying to work with Google GCP. Debugging the code, I see this line in StorageHelper.check_write_permissions:

Hi! Can you please provide us with code that would help us reproduce this issue? Is it just downloading from GCP?

2 years ago
Seems like ClearML tasks in offline mode cannot be properly closed, we get...

That is a clear bug to me. Can you please open a GH issue?

2 years ago
Can steps be removed from pipelines, and/or can pipelines be generally modified other than adding steps to them?

btw, to avoid clutter you could also archive runs you don't need anymore

one year ago
We do want to have control over which files are logged in the model registry. There is such an option in Task.init(), auto_connect_frameworks=False, or injecting a dict() into it. But our tasks are created as part of a pipeline with add_function_step(). So we...

Hi @<1543766544847212544:profile|SorePelican79> ! You could use the following workaround:

from clearml import Task
from clearml.binding.frameworks import WeightsFileHandler
import torch


def filter_callback(
    callback_type: WeightsFileHandler.CallbackType,
    model_info: WeightsFileHandler.ModelInfo,
):
    print(model_info.__dict__)
    if (
        callback_type == WeightsFileHandler.CallbackType.save
        and "filter_out.pt" in model_info.local_model_path
    ):
        retu...
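For reference, a complete version of that callback might look like this (a sketch assuming WeightsFileHandler.add_pre_callback is how the filter gets registered, and that returning None from the callback skips the upload):

from clearml import Task
from clearml.binding.frameworks import WeightsFileHandler


def filter_callback(
    callback_type: WeightsFileHandler.CallbackType,
    model_info: WeightsFileHandler.ModelInfo,
):
    # skip any saved checkpoint whose path contains "filter_out.pt"
    if (
        callback_type == WeightsFileHandler.CallbackType.save
        and "filter_out.pt" in model_info.local_model_path
    ):
        return None  # assumption: returning None drops this model
    return model_info


task = Task.init(project_name="examples", task_name="filter-models")
WeightsFileHandler.add_pre_callback(filter_callback)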
one year ago
Are there any resources on how I can implement hyperparameter optimisation using Ray Tune on ClearML?

Hi @<1581454875005292544:profile|SuccessfulOtter28> ! You could take a look at how the HPO was built using Optuna.
Basically: you should create a new class which inherits from SearchStrategy. This class should convert clearml hyper_parameters to parameters that Ray Tune understands, then create a Tuner and run the Ray Tune hyperparameter optimization.
The function Tuner will optim...
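A very rough skeleton of that idea (the class name, search space, and trainable are all hypothetical; only SearchStrategy and the Ray Tune Tuner API come from their respective libraries):

from clearml.automation.optimization import SearchStrategy
from ray import tune


class RayTuneStrategy(SearchStrategy):  # hypothetical name
    def start(self):
        # sketch: translate the clearml hyper-parameter definitions
        # handed to this strategy into a Ray Tune search space
        param_space = {"lr": tune.loguniform(1e-4, 1e-1)}  # illustrative

        def trainable(config):
            # launch/enqueue a clearml task using the sampled config
            pass

        tuner = tune.Tuner(trainable, param_space=param_space)
        tuner.fit()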

11 months ago
Is there any way to get the value of a step parameter/function kwarg? This is from the documentation, but I didn't manage to get the value.

Hi @<1702492411105644544:profile|YummyGrasshopper29> ! Parameters can belong to different sections, so you should prefix some_parameter with its section. You likely want ${step2.parameters.kwargs/some_parameter}
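For context, a sketch of how such a reference can be passed between steps (the step functions are hypothetical; the kwargs section matches the question above):

from clearml import PipelineController

pipe = PipelineController(name="demo", project="examples", version="1.0.0")
pipe.add_function_step(
    name="step2",
    function=step2_fn,  # hypothetical function
    function_kwargs={"some_parameter": 42},
)
pipe.add_function_step(
    name="step3",
    function=step3_fn,  # hypothetical function
    # reference step2's parameter, including its section
    function_kwargs={"value": "${step2.parameters.kwargs/some_parameter}"},
)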

8 months ago
Hi, do you know how to upload PySpark dataframes with ClearML as artifacts? For example, I have code:

Hi @<1547752791546531840:profile|BeefyFrog17> ! Are you getting any exception trace when you are trying to upload your artifact?

one year ago
Hello all, I want to clarify something. In the...

With that said, can I run another thing by you related to this? What do you think about a PR that adds the functionality I originally assumed schedule_function was for? By this I mean: adding a new parameter (this wouldn't change anything about schedule_function or how .add_task() currently behaves) that also takes a function, but the function expects to get a task_id when called. This function is run at runtime (when the task scheduler would normally execute the scheduled task) and use ...
one year ago
Hi everyone

Hi @<1546303293918023680:profile|MiniatureRobin9> ! When it comes to pipelines from functions/other tasks, this is not really supported. You could however cut the execution short when your step is being run, by evaluating the return values from other steps.

Note that you should be able to skip steps if you are using pipelines from decorators, as in the sketch below.
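A small sketch of that skipping logic with pipelines from decorators (function names and the condition are illustrative):

from clearml import PipelineDecorator


@PipelineDecorator.component(return_values=["result"])
def step_one():
    return 41


@PipelineDecorator.component(return_values=["result"])
def step_two(value):
    return value + 1


@PipelineDecorator.pipeline(name="demo", project="examples", version="1.0.0")
def pipeline_logic():
    result = step_one()
    # steps are plain function calls here, so they can simply be skipped
    if result < 100:  # illustrative condition
        result = step_two(result)
    print(result)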

one year ago
Reporting NoneType scalars.

By default, as 0 values

8 months ago
Does ClearML somehow...

Hi UnevenDolphin73 ! We were able to reproduce the issue. We'll ping you once we have a fix as well 👍

2 years ago
Can steps be removed from pipelines, and/or can pipelines be generally modified other than adding steps to them?

@<1523701083040387072:profile|UnevenDolphin73> are you composing the code you want to execute remotely by copy-pasting it from various cells into one standalone cell?

one year ago
Hi all, I currently have a pipeline with multiple steps using the functional API

Hi @<1523701168822292480:profile|ExuberantBat52> ! During local runs, tasks are not run inside the specified Docker container. You need to run your steps remotely. To do this, first create a queue, then run a clearml-agent instance bound to that queue. You also need to specify the queue in add_function_step. Note that the controller can still be run locally if you wish to do that.
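A brief sketch of that setup (the queue name and Docker image are illustrative). First start an agent on a worker machine with clearml-agent daemon --queue my_docker_queue --docker, then point the step at that queue:

from clearml import PipelineController

pipe = PipelineController(name="demo", project="examples", version="1.0.0")
pipe.add_function_step(
    name="step_one",
    function=step_one_fn,  # hypothetical step function
    execution_queue="my_docker_queue",
    docker="python:3.10",  # container image the agent runs the step in
)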

one year ago
I uploaded a direct access file to the ClearML dataset system like this one. How can I access the link of the uploaded item? Whenever I try to call...

Hi @<1570583237065969664:profile|AdorableCrocodile14> ! get_local_copy will always copy/download external files to a folder. To get the external files, there is a property on the dataset called link_entries, which returns a list of LinkEntry objects. Each object contains a link attribute, and each such link should point to an external file (in this case, your local paths prefixed with file://)
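A short sketch of reading those links (the dataset name is hypothetical):

from clearml import Dataset

dataset = Dataset.get(dataset_name="my_dataset")  # hypothetical name
for entry in dataset.link_entries:
    print(entry.link)  # e.g. file:///path/to/the/original/file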

one year ago
I have an issue getting a model from the model repository when running a task in a remote worker. I have a custom model that was saved with OutputModel:

Hi @<1523711002288328704:profile|YummyLion54> ! By default, we don't upload the models to our file server, so in the remote run we will try to pull the file from your local machine, which will fail most of the time. Specify the upload_uri to the api.files_server entry in your clearml.conf if you want to upload it to the clearml server, or to any s3/gs/azure link if you prefer a cloud provider.
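For instance, a sketch of uploading the weights explicitly when creating the model (the names and weights file are illustrative):

from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="model-upload")
output_model = OutputModel(task=task, name="my_model")
output_model.update_weights(
    weights_filename="model.pt",  # illustrative local weights file
    # your files server, or an s3://, gs:// or azure:// bucket
    upload_uri="<files_server_or_bucket_uri>",
)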

one year ago
Hey all, hope you're having a great day. Having an unexpected behavior with a training task of a YOLOv5 model on my pipeline; I specified a task in my training component like this:

FierceHamster54 I understand. I'm not sure why this happens then 😕 . We will need to investigate this properly. Thank you for reporting this and sorry for the time wasted training your model.

2 years ago
Hey, just a quick question. I'm trying to create a pipeline, and in one step I'm passing a model from the previous step. Is it possible to get the model by name and not by index? More concretely, I can do

@<1531445337942659072:profile|OddCentipede48> Looks like this is indeed not supported. What you could do is return the ID of the task that returns the models, then use Task.get_task and get the model from there. Here is an example:

from clearml import PipelineController


def step_one():
    from clearml import Task
    from clearml.binding.frameworks import WeightsFileHandler
    from clearml.model import Framework

    WeightsFileHandler.create_output_model(
        "obj", "file...
2 years ago
Hi, I have an issue, but let's start with the description. This is a snippet of my project's structure:

@<1554638160548335616:profile|AverageSealion33> looks like hydra pulls the config relative to the script's directory, not the current working directory. The pipeline controller actually creates a temp file in /tmp when it pulls the step, so the script's directory will be /tmp, and when searching for ../data, hydra will search in /. The .git likely caused your repository to be pulled, so your repo structure was created in /tmp, which caused the step to run correctly...

one year ago
Hi, I have an issue, but let's start with the description. This is a snippet of my project's structure:

@<1554638160548335616:profile|AverageSealion33> Can you run the script with HYDRA_FULL_ERROR=1? Also, what if you run the script without clearml? Do you get the same error?

one year ago
Hi everyone, I'm using torch.distributed for training on 2 GPUs. It works, but each GPU creates a new (duplicated) task, and I prefer to have only one ClearML experiment running. I looked here

Hi @<1578918167965601792:profile|DistinctBeetle43> ! This is currently not possible. A different task will be created for each instance

one year ago