SmugDolphin23
Moderator
0 Questions, 433 Answers
Active since 10 January 2023
Last activity 2 years ago

0 Cannot Upload A Dataset With A Parent - Seems Very Odd! Clearml Versions I Tried: 1.6.1, 1.6.2 Scenario: * Create Parent Dataset (With Storage On S3) * Upload Data * Close Dataset * Create Child Dataset (Tried With Storage On Both S3 Or On Clearml Serv

Hi RoughTiger69 ! Can you try adding the files using a python script, so that we can get an exception traceback? Something like this:
` from clearml import Dataset

# or just use the ID of the dataset you previously created instead of creating a new one
parent_dataset = Dataset.create(dataset_name="xxxx", dataset_project="yyyyy", output_uri=" ")
parent_dataset.add_files("folder1")
parent_dataset.upload()
parent_dataset.finalize()

child_dataset = Dataset.create(dataset_name="xxxx", dat...

3 years ago
0 Hi Everyone, I Get An Error When I Add An Argument Of Type Enum To A Pipeline Component (@Pipelinedecorator.Component). At The Same Time Pipelines (@Pipelinedecorator.Pipeline) And Normal Functions Work Fine With Enums. The Error Message Looks Like This:

@<1643060801088524288:profile|HarebrainedOstrich43> you are right. We actually attempt to copy the default arguments as well. What happens is that we aggregate these arguments in the kwargs dict, then we dump str(kwargs) into the script of the pipeline step. The problem is that str(dict) actually calls __repr__ on each key/value of the dict, so you end up with repr(MyEnum.FALSE) in your code, which is <MyEnum.FALSE: 'FALSE'> . One way to work around this is to add somet...
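For illustration, here is a minimal sketch of the underlying issue; MyEnum and the kwargs dict are hypothetical stand-ins for the aggregated step arguments:

from enum import Enum

class MyEnum(Enum):
    FALSE = "FALSE"

kwargs = {"flag": MyEnum.FALSE}

# str() on a dict calls repr() on every key and value, so the dumped
# "code" ends up containing <MyEnum.FALSE: 'FALSE'>, which is not valid Python
print(str(kwargs))  # {'flag': <MyEnum.FALSE: 'FALSE'>}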

one year ago
0 Hi There, Currently I Have A Clearml Pipeline That Takes In A Bunch Of Parameters For Various Tasks And Passes These Parameters Via Parameter_Override For Every Pipe.Add_Step(). However, I Have A Lot Of Parameters, And So My Pipeline Code Is A Little Unwi

Hi @<1633638724258500608:profile|BitingDeer35> ! You could attach the configuration using set_configuration_object in a pre_execute_callback .

Basically, you would have something like:

def pre_callback(pipeline, node, params):
    node.job.task.set_configuration_object(config)...
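A fuller sketch of how the callback might be wired into a pipeline; the configuration name/payload and the base task ID are hypothetical:

from clearml import PipelineController

config_text = "learning_rate: 0.1"  # hypothetical configuration payload

def pre_callback(pipeline, node, params):
    # attach the configuration object to the step's task before it runs
    node.job.task.set_configuration_object(name="my_config", config_text=config_text)

pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")
pipe.add_step(name="step1", base_task_id="<base-task-id>", pre_execute_callback=pre_callback)
pipe.start()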
one year ago
0 I’M Trying To Understand The Execution Flow Of Pipelines When Translating From Local To Remote Execution. I’Ve Defined A Pipeline Using The

Yes, you need to call the function every time. The remote run might have some parameters populated which you can use, but the pipeline function needs to be called if you actually want to run the pipeline.
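As a sketch (names here are hypothetical), the script's entry point would call the decorated pipeline function unconditionally:

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.pipeline(name="my pipeline", project="examples", version="1.0.0")
def my_pipeline(param=1):
    ...  # steps are invoked here

if __name__ == "__main__":
    # called in both local and remote runs; remotely, some parameters
    # may already be populated by the backend before this call
    my_pipeline()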

one year ago
0 Hello All, I Want To Clarify Something. In The

No need, I think I will review it on Monday

one year ago
0 Hi Everyone, I Have A Question About Using

@<1643060801088524288:profile|HarebrainedOstrich43> we released 1.14.1 as an official version

one year ago
0 Does Clearml Somehow

UnevenDolphin73 looking at the code again, I think it is actually correct. It's a bit hackish, but we do use deferred_init as an int internally. Why do you need to close the task, exactly? Do you have a script that would highlight the behaviour change between versions <1.8.1 and >=1.8.1 ?

2 years ago
0 Hi, After Upgrading To Clearml Sdk 1.6.0, I Am Getting Error When Trying To Work With Google Gcp, Debugging The Code I See This Line In Storagehelper.Check_Write_Permissions :

Hi! Can you please provide us with code that would help us reproduce this issue? Is it just downloading from GCP?

3 years ago
0 Hi Everyone, Weird Problem With Dataset.Get_Local_Copy (Both From Sdk And From Clearml-Data): I Have A Dataset With A Single File And Lots Of S3 Links. Used To Work Perfectly Until Those Files Started Becoming Larger (Or It Is Just A Matter Of Bad Timing

Hi @<1523705721235968000:profile|GrittyStarfish67> ! This looks like a boto3 error. You could try lowering sdk.aws.s3.boto3.max_multipart_concurrency in clearml.conf and setting max_workers=1 when calling Dataset.get_local_copy
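A sketch of the second suggestion; the dataset ID is a placeholder:

from clearml import Dataset

# assumes sdk.aws.s3.boto3.max_multipart_concurrency was lowered in clearml.conf;
# max_workers=1 additionally serializes the download on the SDK side
ds = Dataset.get(dataset_id="<dataset-id>")
local_path = ds.get_local_copy(max_workers=1)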

one year ago
0 Hi Everyone! I'M Currently Using The Free Hosted Version (Open Source) Of Clearml. I'M Mainly Using Clearml-Data At To Manage Our Datasets At The Moment, And I'Ve Already Hit The Limit For The Free Metrics Storage. Since We Didn'T Store A Lot Of Metrics (

Hi @<1618418423996354560:profile|JealousMole49> ! To disable previews, you need to set all of the values below to 0 in clearml.conf :

dataset.preview.media.max_file_size
dataset.preview.tabular.table_count
dataset.preview.tabular.row_count
dataset.preview.media.image_count
dataset.preview.media.video_count
dataset.preview.media.audio_count
dataset.preview.media.html_count
dataset.preview.media.json_count
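For example, the corresponding clearml.conf entries might look like this, assuming the keys live under the sdk section (matching the flat-key form used in the debug-samples answer further down):

sdk.dataset.preview.media.max_file_size: 0
sdk.dataset.preview.tabular.table_count: 0
sdk.dataset.preview.tabular.row_count: 0
sdk.dataset.preview.media.image_count: 0
sdk.dataset.preview.media.video_count: 0
sdk.dataset.preview.media.audio_count: 0
sdk.dataset.preview.media.html_count: 0
sdk.dataset.preview.media.json_count: 0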

Also, I believe you could go through each dataset and remove the `Datase...

one year ago
0 What Exactly Triggers The "Automagic" Logging Of The Model And Weights? I'Ve Pulled My Simple Test Project Out Of Jupyter Lab And The Same Problem Still Exists, So It Isn'T A Jupyter Lab Issues. A Few Things Log, But Never The Model

Hi RoundMole15 ! Are you able to see a model logged when you run this simple example?
` from clearml import Task
import torch.nn.functional as F
import torch.nn as nn
import torch

class TheModelClass(nn.Module):
    def __init__(self):
        super(TheModelClass, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        s...

3 years ago
0 Hi, We Have Recently Upgraded To

Regarding number 2, that is indeed a bug and we will try to fix it as soon as possible.

3 years ago
0 If I Ran A Hyperparemeter Sweep And I Wanted To Create A Graph Where The X-Axis Was One Of The Hyperparameters, Let'S Say The Momentum Term Of The Optimizer, And I Wanted To Plot That Vs The Min-Loss Over All Epochs, Is There A Good Way To Do This With Cl

Hi @<1545216070686609408:profile|EnthusiasticCow4> ! Can't you just get the values of the hyperparameters and the losses, then plot them with something like matplotlib and report the plot to ClearML?
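A sketch of that approach; the project name, parameter key, and scalar title/series are all hypothetical and depend on what the sweep tasks actually reported:

import matplotlib.pyplot as plt
from clearml import Task

momenta, min_losses = [], []
for t in Task.get_tasks(project_name="my-sweep-project"):
    # hyperparameters come back as a flat dict, e.g. {"Args/momentum": "0.9", ...}
    momentum = float(t.get_parameters().get("Args/momentum", "nan"))
    # min loss over all epochs for the "loss"/"loss" scalar series
    scalars = t.get_reported_scalars()
    min_loss = min(scalars["loss"]["loss"]["y"])
    momenta.append(momentum)
    min_losses.append(min_loss)

fig = plt.figure()
plt.scatter(momenta, min_losses)
plt.xlabel("momentum")
plt.ylabel("min loss")
Task.init(project_name="my-sweep-project", task_name="sweep plot").get_logger().report_matplotlib_figure(
    title="sweep", series="momentum vs min loss", figure=fig
)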

2 years ago
0 Hi Team, I Am Trying To Run A Pipeline Remotely Using Clearml Pipeline And I’M Encountering Some Issues. Could Anyone Please Assist Me In Resolving Them?

@<1626028578648887296:profile|FreshFly37> I see that create_dataset doesn't have a repo set. Can you try setting it manually via the repo , repo_branch and repo_commit arguments in the add_function_step method?
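A sketch of what that could look like; the repository URL and branch are placeholders:

from clearml import PipelineController

def create_dataset():
    ...  # the original step logic

pipe = PipelineController(name="pipeline", project="examples", version="1.0.0")
pipe.add_function_step(
    name="create_dataset",
    function=create_dataset,
    repo="https://github.com/org/project.git",  # hypothetical repository URL
    repo_branch="main",
    repo_commit=None,  # or pin a specific commit hash
)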

one year ago
0 I Uploaded Direct Access File To Clearml Dataset System Like This One. How Can I Access The Link Of The Uploaded Item. Whenever I Try To Call

Hi @<1570583237065969664:profile|AdorableCrocodile14> ! get_local_copy will always copy/download external files to a folder. To get the external files, there is a property on the dataset called link_entries which returns a list of LinkEntry objects. Each object contains a link attribute, and each such link should point to an external file (in this case, your local paths prefixed with file:// ).
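A short sketch of reading those links; the dataset ID is a placeholder:

from clearml import Dataset

ds = Dataset.get(dataset_id="<dataset-id>")
for entry in ds.link_entries:
    # e.g. file:///home/user/data/image_01.png for locally-added external files
    print(entry.link)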

2 years ago
0 Hi, I'M Trying To Run My Code With A Pipelinecontroller Within A Docker Container Instead Of On My Local Computer. Currently Having Trouble Where The Code Doesn'T Run In The Repo As Expected; "No Repository Found, Storing Script Code Instead" Is The Warni

Hi @<1633638724258500608:profile|BitingDeer35> ! Looks like the SDK doesn't currently allow creating steps/controllers with a designated cwd. You will need to call the set_script function on your step's tasks and on the controller for now.
For the controller: if you are using the PipelineDecorator, you can do something like PipelineDecorator._singleton._task.set_script(working_dir="something") before you run the pipeline function. In the case of regular `PipelineControll...
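For the steps, one option is to borrow the pre_execute_callback pattern from the earlier answer and call set_script there; the working directory below is a placeholder:

def pre_callback(pipeline, node, params):
    # point the step's task at the right working directory inside the repo
    node.job.task.set_script(working_dir="path/inside/repo")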

one year ago
0 Hello, For Datasets, Is There An Explanation Somewhere As To How The Debug Samples Are Created? I'M Not Entirely Sure Which Permission It Uses If The Dataset Is Stored In A Private Bucket

Hi ShortElephant92 ! Random images, audio files, tables (trimmed to a few rows) are sent as Debug Samples for preview. By default, they are sent to our servers. Check this function if you wish to log the samples to another destination https://clear.ml/docs/latest/docs/references/sdk/logger/#set_default_upload_destination .
You could also add these entries in your clearml.conf to not send any samples for preview:
` sdk.dataset.preview.tabular.table_count: 0
sdk.dataset.preview.media.i...

2 years ago
0 Hi Team, I Am Trying To Run A Pipeline Remotely Using Clearml Pipeline And I’M Encountering Some Issues. Could Anyone Please Assist Me In Resolving Them?

Hi!
It is possible to use the same queue for the controller and the steps, but there need to be at least 2 agents pulling tasks from that queue. Otherwise, if there is only 1 agent, that agent will be busy running the controller and it won't be able to fetch the steps.

Regarding missing local packages: the step is run in a temporary directory that is different from the directory the script is originally in. To solve this, you could add all the modules/files you are interested in in a...

one year ago
0 Hi. I Have A Job That Processes Images And Creates ~5 Gb Of Processed Image Files (Lots Of Small Ones). At The End - It Creates A

PanickyMoth78 You might also want to set some lower values for sdk.google.storage.pool_connections/pool_maxsize in your clearml.conf . Newer clearml versions set max_workers to 1 by default, and the number of connections should be tweaked using these values. If it doesn't help, please let us know.
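A sketch of the relevant clearml.conf entries; the values are illustrative, not recommendations:

sdk.google.storage.pool_connections: 8
sdk.google.storage.pool_maxsize: 8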

2 years ago
0 Since Clearml 1.6.3, A Dataset Attached To A Task Now Renames That Task By Adding A

Can you see your task if you run this minimal example UnevenDolphin73 ?
` from clearml import Task, Dataset

task = Task.init(task_name="name_unique", project_name="project")
d = Dataset.create(dataset_name=task.name, dataset_project=task.get_project_name(), use_current_task=True)
d.upload()
d.finalize() `

3 years ago
0 Hi All, I'Ve Been Experimenting Around With Automating The Data Sync. This Is Related To This Thread:

@<1545216070686609408:profile|EnthusiasticCow4>
This:

            parent = self.clearml_dataset = Dataset.get(
                dataset_name="[LTV] Dataset",
                dataset_project="[LTV] Lifetime Value Model",
            )
            # generate the local dataset
            dataset = Dataset.create(
                dataset_name=f"[LTV] Dataset",
                parent_datasets=[parent],
                dataset_project="[LTV] Lifetime Value Model",
            )

should l...

2 years ago
0 Hello There Again! So, I Discovered By Accident (As It Usually Happens) That Apparently Clearml Uses

Hi @<1724235687256920064:profile|LonelyFly9> ! I assume in this case we fail to retrieve the dataset? Can you provide an example when this happens?

one year ago
0 Hello All, I Want To Clarify Something. In The
With that said, can I run another thing by you related to this? What do you think about a PR that adds the functionality I originally assumed schedule_function was for? By this I mean adding a new parameter (this wouldn't change anything about schedule_function or how .add_task() currently behaves) that also takes a function, but the function expects to get a task_id when called. This function is run at runtime (when the task scheduler would normally execute the scheduled task) and use ...
one year ago