SmugDolphin23
Moderator
0 Questions, 433 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0
0 Hi, I know that if you have a child dataset of a dataset with zips, and if the parent has been cached locally, the files in the zips would be symlinked to the parent's in

Hi @<1709015393701466112:profile|ScatteredPeacock14> ! I think you are right. We are going to look into fixing this

one year ago
0 Hi all, I'm trying to clone and run the

Hi @<1587615463670550528:profile|DepravedDolphin12> ! get() should indeed return a python object. What clearml version are you using? Also, can you share the code?

2 years ago
0 Hi, I am observing a strange behaviour when loading a dataset’s local copy.

Hi @<1695969549783928832:profile|ObedientTurkey46> ! You could try increasing sdk.storage.cache.default_cache_manager_size to a very large number
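If it helps, this is roughly where that setting lives in clearml.conf (a sketch; the value shown is an arbitrary example):

sdk {
    storage {
        cache {
            # maximum number of entries the cache manager keeps locally
            default_cache_manager_size: 100000
        }
    }
}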

one year ago
0 Hi, I am struggling for following points. 1. Trying to update model metadata through

There are only 2 chunks because we don't split large files into multiple chunks

one year ago
0 Hi everyone, I'm using torch.distributed for training on 2 GPUs. It works, but each GPU creates a new (duplicated) task, and I prefer to have only one ClearML experiment running. I looked here

Hi @<1578918167965601792:profile|DistinctBeetle43> ! This is currently not possible. A different task will be created for each instance

2 years ago
0 Hi, I am struggling for following points. 1. Trying to update model metadata through

Hi @<1654294820488744960:profile|DrabAlligator92> ! The way chunk size works is:
the upload will try to produce zips that are smaller than the chunk size, so it will keep adding files to the same zip until the chunk size is exceeded. When that happens, a new chunk (zip) is created, and its initial file is the one that caused the previous chunk's size to be exceeded (regardless of the fact that that file might itself exceed the chunk size).
So in your case: an empty chunk is creat...
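To make the rule above concrete, the behavior could be sketched like this (illustrative pseudocode, not ClearML's actual implementation):

def plan_chunks(file_sizes, chunk_size):
    # Keep appending files to the current chunk; the file that would push
    # it over the limit instead becomes the first file of the next chunk,
    # even if that file alone is larger than the chunk size.
    chunks, current, current_size = [], [], 0
    for size in file_sizes:
        if current and current_size + size > chunk_size:
            chunks.append(current)  # close the exceeded chunk
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        chunks.append(current)
    return chunks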

one year ago
0 I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML. It shows the experiment running (for days) and it's running fine on the PC but no scalers or debug samples are shown. How do we troubleshoot t

Hi @<1719524641879363584:profile|ThankfulClams64> ! What tensorflow/keras version are you using? I noticed that in the TensorBoardImage you are using tf.Summary, which no longer exists since tensorflow 2.2.3, which I believe is too old to work with tensorboard==2.16.2.
Also, how are you stopping and starting the experiments? When starting an experiment, are you resuming training? In that case, you might want to consider setting the initial iteration to the last iteration your prog...
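A minimal sketch of that last suggestion (project/task names and the iteration value are placeholders):

from clearml import Task

task = Task.init(project_name="example", task_name="resumed-training")
# Hypothetical: restore the last reported iteration from your checkpoint,
# then offset reporting so new scalars continue from that point.
last_iteration = 1000
task.set_initial_iteration(last_iteration)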

one year ago
0 Hey all, basically, is there a way for a pipeline task method (e.g. registered via

Hi @<1834401593374543872:profile|SmoggyLion3> ! There are a few things I can think of:

  • If you need to continue a task that is marked as completed, you can call clearml.Task.get_task(ID).mark_stopped(force=True) to mark it as stopped. You can do this in the job that picks up the task and wants to continue it, before calling Task.init, or in a post_execute_callback in the pipeline itself, so the pipeline function marks itself as aborted. For example:
from clearml import Pipeli...
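For the first variant, a minimal sketch (TASK_ID is a placeholder, and using continue_last_task to resume is my assumption about how the task is picked up):

from clearml import Task

TASK_ID = "..."  # placeholder: ID of the completed task to continue
# Flip the completed task to stopped so it can be picked up again
Task.get_task(task_id=TASK_ID).mark_stopped(force=True)
# Assumption: resume it by passing the ID via continue_last_task
task = Task.init(continue_last_task=TASK_ID)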
5 months ago
0 Hi everyone. If I edit a file in configuration objects in ClearML UI, will the new parameters be injected in my code when I run this?

Hi PetiteRabbit11. This snippet works for me:

from clearml import Task
from pathlib2 import Path

t = Task.init()
config = t.connect_configuration(Path("config.yml"))
print(open(config).read())

Note that you need to use the return value of connect_configuration when you open the configuration file.

2 years ago
0 Hi everyone! I've a question concerning the integration with Optuna. I've been able to run the hyperparameter optimization sample successfully (

Hi @<1555000557775622144:profile|CharmingSealion31> ! When creating the HyperParameterOptimizer, pass the argument optuna_sampler=YOUR_SAMPLER.
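For example, a minimal sketch (the base task ID, parameter range, and metric names are placeholders; assumes optuna is installed):

import optuna
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
from clearml.automation.optuna import OptimizerOptuna

optimizer = HyperParameterOptimizer(
    base_task_id="YOUR_BASE_TASK_ID",
    hyper_parameters=[UniformParameterRange("General/lr", 1e-5, 1e-1)],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=OptimizerOptuna,
    optuna_sampler=optuna.samplers.TPESampler(),  # your custom sampler here
)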

2 years ago
0 Hi everyone, I'm currently trying to add a csv-file that is located in an S3-bucket to an existing ClearML dataset using the following code:

You could consider downgrading to something like 1.7.1 in the meantime; it should work with that version.

2 years ago
0 For some reason, when I try to load a dataset (Dataset.get), method _query_task is called and this method try to call _send method of InterfaceBase class. This method may return None and this case is not handled by the _query_task method that tries to rea

Hello MotionlessCoral18. I have a few questions that might help us find out why you experience this problem:

  • Is there any chance you are running the program in offline mode?
  • Is there any other message being logged that might help? The error messages might include: Action failed, Failed sending, Retrying, previous request failed, contains illegal schema.
  • Are you able to connect to the backend at all from the program you are trying to get the dataset from?
Thank you!

3 years ago
0 Hi, I’m trying to integrate logger in my PipelineDecorator but I’m getting this error -

Each step is a separate task, with its own separate logger. You will not be able to reuse the same logger. Instead, you should get the logger in the step where you want to use it by calling current_logger.
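For example, inside a decorated step (the metric names are placeholders):

from clearml import Logger

# Fetch the step's own logger from within the step itself
logger = Logger.current_logger()
logger.report_scalar(title="metrics", series="loss", value=0.5, iteration=0)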

one year ago
0 Hi all, after upgrading to SDK 1.8.0 we are having issue adding external files to dataset from GCS. This is the code we use:

You could try this in the meantime if you don't mind temporary workarounds:
dataset.add_external_files(source_url=" ", wildcard=["file1.csv"], recursive=False)

2 years ago
0 Hey everyone, I have been trying to get the PyTorch Lightning CLI to work with remote task execution, but it just won't work. I took the

Hi HomelyShells16! How about doing things this way? Does it work for you?

class ClearmlLightningCLI(LightningCLI):
    def __init__(self, *args, **kwargs):
        Task.add_requirements("requirements.txt")
        self.task = Task.init(
            project_name="example",
            task_name="pytorch_lightning_jsonargparse",
        )
        super().__init__(*args, **kwargs)

    def instantiate_classes(self, *args, **kwargs):
        super().instantiate_classes(*args, **kwargs)
  ...
3 years ago
0 Hi, I have a case when I want to clone tasks and set some parameters for them. I noticed, that I can't pass numbers, only strings are possible there. When I'm trying to pass a number, the default value is not overriden. Do you know maybe if numbers can be

RoundMosquito25 you might need to use cast=True when you get the parameters.
See this snippet:
from clearml import Task

t = Task.init()
params = {}
params["Function"] = {}
params["Function"]["number"] = 123
t.set_parameters_as_dict(params)
t.close()

cloned = Task.clone(t.id)
s = cloned.get_parameters_as_dict(cast=True)
s["Function"]["number"] = 321
cloned.set_parameters_as_dict(s)
print(type(cloned.get_parameters_as_dict(cast=True)["Function"]["number"]))  # will print 'int'

2 years ago
0 Reporting NoneType scalars.

Hi @<1631102016807768064:profile|ZanySealion18> ! Reporting None is not possible, but you could report np.nan instead.
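A quick sketch of that workaround (assumes a task is already initialized; the metric names are placeholders):

import numpy as np
from clearml import Logger

# Report NaN in place of None so the scalar series stays intact
Logger.current_logger().report_scalar(
    title="metrics", series="value", value=np.nan, iteration=0
)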

one year ago
0 I'm a bit confused. It seems like something has changed with how ClearML handles recording datasets in tasks. It used to be the case that when I would create a dataset under a task, ClearML would record the ID of the dataset in the hyperparameters/datase

Hi @<1545216070686609408:profile|EnthusiasticCow4> ! Note that the Datasets section is created only if you get the dataset with an alias. Are you sure that number_of_datasets_on_remote != 0?
If so, can you provide a short snippet that would help us reproduce? The code you posted looks fine to me, not sure what the problem could be.
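For reference, a minimal sketch of getting a dataset with an alias (the ID and alias are placeholders):

from clearml import Dataset

# Passing `alias` is what records the dataset in the task's Datasets section
dataset = Dataset.get(dataset_id="YOUR_DATASET_ID", alias="my_dataset")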

2 years ago
0 Hello. I have a question regarding pipeline parameters. Is it possible to reference pipeline parameters in other fields of the

Hi DangerousDragonfly8! At the moment, this is not possible, but we do have it planned (we had some prior requests for this feature).

2 years ago
0 Hi, I tried this, but got unexpected result when set

This is a bug, we will fix it ASAP.

3 years ago
0 Hi, I’m trying to upload output model files (like .pth) to ClearML server. Assume my

@<1523721697604145152:profile|YummyWhale40> are you able to manually save models from SageMaker using OutputModel?
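In case it helps, a minimal sketch of saving a model manually with OutputModel (project/task names and the weights path are placeholders):

from clearml import Task, OutputModel

task = Task.init(project_name="example", task_name="manual-model-upload")
# Register an existing weights file as this task's output model
output_model = OutputModel(task=task, framework="PyTorch")
output_model.update_weights(weights_filename="model.pth")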

one year ago
0 Hi everyone! I'm currently using the free hosted version (open source) of ClearML. I'm mainly using clearml-data at to manage our datasets at the moment, and I've already hit the limit for the free metrics storage. Since we didn't store a lot of metrics (

The config values are not yet documented, but they all default to 10 (except for max_file_size) and represent the number of images/tables/videos etc. that are reported as previews to the dataset. Setting them to 0 disables previewing.

To clear the configurations, you should use something like Dataset.list_datasets to get all the dataset IDs, then something like:

from clearml import Task


id_ = "229f14fe0cb942708c9c5feb412a7ffe"
task = Task.get_task(id_)
original_status = task.s...
one year ago
0 Hi, I have an issue, but lets start with the description. This is snippet of my project's structure:

@<1554638160548335616:profile|AverageSealion33> Can you run the script with HYDRA_FULL_ERROR=1? Also, what happens if you run the script without clearml? Do you get the same error?

2 years ago
0 Hello :wave: ! I am trying to leverage the `retry_on_failure` with a `PipelineController` (using functions aka `add_function_step` ) to update my step parameters for the next retry. My understanding is that the step (init with `function_kwargs`) use a pic

Hi @<1558986821491232768:profile|FunnyAlligator17> ! There are a few things you should consider:

  • Artifacts are not necessarily pickles. The objects you upload as artifacts can be serialized in a variety of ways. Our artifacts manager handles both serialization and deserialization. Because of this, you should not pickle the objects yourself, but specify artifact_object as being the object itself.
  • To get the deserialized artifact, just call task.artifacts[name].get() (not get_local...
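Putting those two points together, a minimal sketch (the artifact name and object are placeholders):

from clearml import Task

task = Task.init(project_name="example", task_name="artifacts-demo")
# Pass the object itself; ClearML picks the serialization
task.upload_artifact(name="my_dict", artifact_object={"a": 1, "b": 2})

# Later, fetch and deserialize in one call
restored = Task.get_task(task_id=task.id).artifacts["my_dict"].get()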
2 years ago