SmugDolphin23
Moderator
0 Questions, 432 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0
0 Hi Team, I Am Trying To Run A Pipeline Remotely Using ClearML Pipeline And I'm Encountering Some Issues. Could Anyone Please Assist Me In Resolving Them?

Regarding pending pipelines: please make sure a free agent is bound to the queue you wish to run the pipeline in. You can check queue information by accessing the INFO section of the controller (as in the first screenshot).
Then, by clicking on the queue, you should see the worker status. There should be at least one worker with a blank "CURRENTLY EXECUTING" entry.
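If no worker is listening to that queue yet, you can start an agent and bind it yourself. A minimal example (the queue name default is a placeholder for your own queue):

clearml-agent daemon --queue default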

one year ago
0 Hi! I'm Currently Considering Switching To ClearML. In My Current Trials I Am Using Up The API Calls Very Quickly Though. Is There Some Way To Limit That? The Documentation Is A Bit Sparse On What Uses How Many API Calls. Is It Possible To Batch Them For

FlutteringWorm14, we do batch the reported scalars. The flow is like this: the task object creates a Reporter object, which spawns a daemon in another child process that batches multiple report events. The batching happens after a certain time in the child process, or the parent process can force it after a certain number of report events are queued.
You could try this hack to achieve what you want:
from clearml import Task
from clearml.backend_interface.metrics.repor...
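The snippet above is cut off; as a hedged alternative sketch (project and task names are placeholders), you can at least force any queued report events to be sent with Task.flush:

from clearml import Task

task = Task.init(project_name="examples", task_name="batched-reporting")
logger = task.get_logger()

for i in range(100):
    # scalar reports are queued and batched by the background reporter
    logger.report_scalar(title="loss", series="train", value=1.0 / (i + 1), iteration=i)

# force the queued events to be flushed to the server now
task.flush(wait_for_uploads=True)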

2 years ago
0 So From What I Can Tell Using

Hi SoggyHamster83! Any reason you can't use Task.init?

2 years ago
0 Hello All, Although I Call pipe.wait() Or pipe.start(wait=True), The PipelineController Does Not Wait In The Script Until The Pipeline Actually Terminates And Throws: Warning - Terminating Local Execution Process. Can Someone Please Help Me? Thanks A Lot

Oh, I see what you mean. start will enqueue the pipeline so that it can be run remotely by an agent. I think what you want to call is pipe.start_locally(run_pipeline_steps_locally=True) (and get rid of the wait).
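For reference, a minimal hedged sketch of the suggested call (pipeline and step names are placeholders):

from clearml import PipelineController


def step_one():
    return 42


pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")
pipe.add_function_step(name="step_one", function=step_one, function_return=["value"])

# runs the controller and all steps in the local process, so the script
# blocks until the pipeline actually finishes -- no agent or queue needed
pipe.start_locally(run_pipeline_steps_locally=True)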

2 years ago
0 I Have An Environment Error When Running HPO:

Hi @<1694157594333024256:profile|DisturbedParrot38>! If you want to override the parameter, you could add a DiscreteParameterRange to hyper_parameters when calling HyperParameterOptimizer. The DiscreteParameterRange should have just one value: the value you want to override the parameter with.
You could try setting the parameter to an empty string in order to mark it as cleared.
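A minimal hedged sketch of that override (the task ID, parameter name, queue, and metric names are placeholders):

from clearml.automation import DiscreteParameterRange, HyperParameterOptimizer

optimizer = HyperParameterOptimizer(
    base_task_id="TASK_ID",
    hyper_parameters=[
        # a single-value range effectively pins/overrides the parameter;
        # use values=[""] to mark it as cleared instead
        DiscreteParameterRange("General/my_param", values=["override_value"]),
    ],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    execution_queue="default",
)
optimizer.start()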

one year ago
0 Hi All

Thank you 😊

9 months ago
0 Hello, I Have A Question Regarding The Usage Of

Hi JumpyDragonfly13! Try using get_task_log instead of download_task_log.
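A short hedged sketch of how that call might look through the APIClient (the task ID is a placeholder, and the exact shape of the returned events may differ):

from clearml.backend_api.session.client import APIClient

client = APIClient()
response = client.events.get_task_log(task="TASK_ID")
for event in response.events:
    # each event should carry the log line in its "msg" field
    print(event.get("msg"))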

3 years ago
0 I Would Like To Use ClearML Together With Hydra Multirun Sweeps, But I'm Having Some Difficulties With The Configuration Of Tasks.

Hi SoreHorse95! I think that the way we interact with hydra doesn't account for overrides. We will need to look into this. In the meantime, do you also have some sort of stack trace or similar?

2 years ago
0 Hi, Bug Report. I Was Trying To Upload Data To S3 Via The clearml.Dataset Interface

Hi NonchalantGiraffe17! Thanks for reporting this. It would be easier for us to check whether something is wrong with ClearML if we knew the number and sizes of the files you are trying to upload (the content is not relevant). Could you maybe provide those?

3 years ago
0 Hi, Bug Report. I Was Trying To Upload Data To S3 Via The clearml.Dataset Interface

Perfect! Can you please provide the file sizes for the other two chunks as well?

3 years ago
0 Hi All

Hi @<1523701523954012160:profile|ShallowCormorant89>! This is not really supported, but you could use continue_on_fail to make sure you get to your last step.
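A hedged sketch of how that flag is wired in (pipeline, step, and task names are placeholders):

from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")
# continue_on_fail=True lets downstream steps run even if this one fails
pipe.add_step(
    name="flaky_step",
    base_task_project="examples",
    base_task_name="flaky task",
    continue_on_fail=True,
)
pipe.add_step(
    name="last_step",
    base_task_project="examples",
    base_task_name="final task",
    parents=["flaky_step"],
)
pipe.start()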

2 years ago
0 Hi, I'm Trying To Create A Dataset With 186 Parent Datasets. The Process Fails Due To OOM, The Machine Has 64 GB Of RAM. Does A Workaround Exist, For Example, Generating Intermediate Datasets? Or Does The Total Memory Consumed Depend On The Number Of

Hi @<1571308010351890432:profile|HurtAnt92>! Yes, you can create intermediate datasets. Just batch your datasets: for each batch, create a new child dataset, then create a final dataset that has all of these resulting children as parents.
I'm surprised you get OOM though; we don't load the files into memory, just the name/path of the files plus size, hash, etc. Could there be some other factor causing this issue?
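A rough sketch of the batching idea (dataset IDs, names, and the batch size are placeholders):

from clearml import Dataset

parent_ids = ["..."]  # your 186 parent dataset IDs
batch_size = 20

intermediate_ids = []
for i in range(0, len(parent_ids), batch_size):
    # each child dataset merges one batch of parents
    child = Dataset.create(
        dataset_name=f"intermediate_{i // batch_size}",
        dataset_project="examples",
        parent_datasets=parent_ids[i : i + batch_size],
    )
    child.finalize()
    intermediate_ids.append(child.id)

# the final dataset has only the intermediate children as parents
final = Dataset.create(
    dataset_name="merged",
    dataset_project="examples",
    parent_datasets=intermediate_ids,
)
final.finalize()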

2 years ago
0 I'm A Bit Confused. It Seems Like Something Has Changed With How ClearML Handles Recording Datasets In Tasks. It Used To Be The Case That When I Would Create A Dataset Under A Task, ClearML Would Record The ID Of The Dataset In The Hyperparameters/Datase

Hi @<1545216070686609408:profile|EnthusiasticCow4>! Note that the Datasets section is created only if you get the dataset with an alias. Are you sure that number_of_datasets_on_remote != 0?
If so, can you provide a short snippet that would help us reproduce the issue? The code you posted looks fine to me; I'm not sure what the problem could be.
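For comparison, a minimal hedged example of getting a dataset with an alias so it shows up under the task's Datasets section (IDs and names are placeholders):

from clearml import Dataset, Task

task = Task.init(project_name="examples", task_name="uses-dataset")

# the alias argument is what registers the dataset ID on the task
ds = Dataset.get(dataset_id="DATASET_ID", alias="my_dataset")
path = ds.get_local_copy()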

2 years ago
0 Hi There, Currently I Have A ClearML Pipeline That Takes In A Bunch Of Parameters For Various Tasks And Passes These Parameters Via parameter_override For Every pipe.add_step(). However, I Have A Lot Of Parameters, And So My Pipeline Code Is A Little Unwi

Hi @<1633638724258500608:profile|BitingDeer35>! You could attach the configuration using set_configuration_object inside a pre_execute_callback.

Basically, you would have something like:

def pre_callback(pipeline, node, params):
    node.job.task.set_configuration_object(config)...
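A fuller hedged sketch of the same idea (configuration contents, names, and base tasks are placeholders):

from clearml import PipelineController

config = "learning_rate: 0.1"  # placeholder configuration text


def pre_callback(pipeline, node, params):
    # attach the configuration to the step's task just before it runs
    node.job.task.set_configuration_object(name="my_config", config_text=config)
    return True  # returning False would skip this node


pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="train task",
    pre_execute_callback=pre_callback,
)
pipe.start()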
one year ago
0 Hey All, Hope You're Having A Great Day. Having An Unexpected Behavior With A Training Task Of A YOLOv5 Model On My Pipeline, I Specified A Task In My Training Component Like This:

FierceHamster54
> initing the task before the execution of the file like in my snippet is not sufficient?
It is not, because os.system spawns a whole different process than the one you initialized your task in, so no patching is done on the framework you are using. Child processes need to call Task.init because of this, unless they were forked, in which case the patching is already done.
> But the training.py has already a ClearML task created under the hood since its integratio...
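A hedged sketch of the pattern (file names are placeholders; whether the child attaches to the parent's task or creates its own depends on your setup):

# parent.py
import os
from clearml import Task

task = Task.init(project_name="examples", task_name="train")
# os.system starts a brand-new process; no framework patching carries over
os.system("python training.py")

# training.py
from clearml import Task

# calling Task.init in the child patches the frameworks in *this* process
task = Task.init(project_name="examples", task_name="train")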

2 years ago
0 Hey Guys! I Would Love To Know How To Integrate HPO Inside ClearML Pipelines. I Have Made A Continuous Learning Pipeline With Data ETL And Model Training And As A Next Step, It Would Be Cool To Add HPO. Most Of The Examples On The Website Create A New Ta

Hi @<1676400486225285120:profile|GracefulSquid84>! Each step is indeed a ClearML task. You could try using the step ID. Just make sure you pass the ID to the HPO step (you can do that by simply returning Task.current_task().id).
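A hedged sketch of passing the step's task ID into an HPO step (metric and parameter names are placeholders):

from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange


def training_step():
    # ... train the base model here ...
    # return this step's task ID so the HPO step can use it as the base task
    return Task.current_task().id


def hpo_step(base_task_id):
    optimizer = HyperParameterOptimizer(
        base_task_id=base_task_id,
        hyper_parameters=[
            UniformParameterRange("General/lr", min_value=1e-4, max_value=1e-1),
        ],
        objective_metric_title="validation",
        objective_metric_series="loss",
        objective_metric_sign="min",
    )
    optimizer.start_locally()
    optimizer.wait()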

one year ago
0 Hey All, Is There A Way To Upload A FiftyOne Dataset As An Artifact In A ClearML Pipeline? I Am Getting The Following Error When I Try To Upload It

Hi @<1610083503607648256:profile|DiminutiveToad80>! You need to somehow serialize the object. Note that we try different serialization methods and default to pickle if none work. If pickle doesn't work, then the artifact can't be uploaded by default. But there is a way around it: you can serialize the object yourself. The recommended way to do this is using the serialization_function argument in upload_artifact. You could try using something like dill, which can serialize more ob...
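A minimal hedged example of the workaround (the object and names are placeholders; whether dill can actually serialize your FiftyOne dataset is not guaranteed):

import dill
from clearml import Task

task = Task.init(project_name="examples", task_name="upload-fo-dataset")

my_object = {"stand-in": "for the fiftyone dataset"}

# serialization_function must turn the object into bytes; dill.dumps does
task.upload_artifact(
    name="fo_dataset",
    artifact_object=my_object,
    serialization_function=dill.dumps,
)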

one year ago
0 Hi Team, I Am Trying To Run A Pipeline Remotely Using ClearML Pipeline And I'm Encountering Some Issues. Could Anyone Please Assist Me In Resolving Them?

@<1657556312684236800:profile|ManiacalSeaturtle63> can you share how you are creating your pipeline?

one year ago
0 Can Steps Be Removed From Pipelines, And/Or Can Pipelines Be Generally Modified Other Than Adding Steps To Them?

Do you basically want to remove/add steps from the pipeline after it has run? If that is the case, then it is theoretically possible, but we don't expose any methods that would allow you to do that.
What you would need to do is modify all the pipeline configuration entries you find in the CONFIGURATION section (see the screenshot). Not sure that is worth the effort, though; I would simply create another version of the pipeline with the added/removed steps.

one year ago
0 Hi Team, I Am Trying To Run A Pipeline Remotely Using ClearML Pipeline And I'm Encountering Some Issues. Could Anyone Please Assist Me In Resolving Them?

Oh, I see. I think there is a mismatch between some clearml versions on your machine. How exactly did you run these scripts (from the CLI, for example python test.py)?

Or if you ran it via an IDE, what is the interpreter path?

one year ago
0 Hi! I'm Running launch_multi_mode With PyTorch-Lightning

Because I think that what you are encountering now is an NCCL error.

one year ago
0 Hi Team, I Am Trying To Run A Pipeline Remotely Using ClearML Pipeline And I'm Encountering Some Issues. Could Anyone Please Assist Me In Resolving Them?

@<1626028578648887296:profile|FreshFly37> can you please screenshot this section of the task? Also, what does your project's directory structure look like?

one year ago
0 What Exactly Triggers The "Automagic" Logging Of The Model And Weights? I've Pulled My Simple Test Project Out Of Jupyter Lab And The Same Problem Still Exists, So It Isn't A Jupyter Lab Issue. A Few Things Log, But Never The Model

Hi RoundMole15! Are you able to see a model logged when you run this simple example?

from clearml import Task
import torch.nn.functional as F
import torch.nn as nn
import torch


class TheModelClass(nn.Module):
    def __init__(self):
        super(TheModelClass, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        s...
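The snippet above is cut off; a hedged reconstruction of the rest (the third linear layer, forward pass, and save call are assumed) that should trigger the automatic model logging:

        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)


task = Task.init(project_name="examples", task_name="model-logging")
model = TheModelClass()
# torch.save is patched by Task.init, so this save is logged automatically
torch.save(model.state_dict(), "model.pt")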

2 years ago
0 We Do Want To Have Control Over Which Files Are Logged In The Model Registry. There Is Such An Option In Task.init(), auto_connect_frameworks=False Or Injecting A dict() Into It. But Our Tasks Are Created As Part Of A Pipeline With add_function_step(). So We

Hi @<1543766544847212544:profile|SorePelican79>! You could use the following workaround:

from clearml import Task
from clearml.binding.frameworks import WeightsFileHandler
import torch


def filter_callback(
    callback_type: WeightsFileHandler.CallbackType,
    model_info: WeightsFileHandler.ModelInfo,
):
    print(model_info.__dict__)
    if (
        callback_type == WeightsFileHandler.CallbackType.save
        and "filter_out.pt" in model_info.local_model_path
    ):
        retu...
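The answer is cut off at the return; presumably the callback returns None to drop that model and returns model_info otherwise, and is then registered with the handler. A hedged guess at the ending:

        return None  # returning None would drop this model from being recorded
    return model_info  # any other save proceeds as usual


# register the pre-save callback (assumed API: WeightsFileHandler.add_pre_callback)
WeightsFileHandler.add_pre_callback(filter_callback)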
one year ago