SmugDolphin23
Moderator
0 Questions, 418 Answers
  Active since 10 January 2023
  Last activity 2 years ago

0 Why Is Async_Delete Not Working?

You might want to prefix both the host in the configuration file and the URI in Task.init / StorageHelper.get with "s3.". See if the script above works when you do that.
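For reference, a sketch of the relevant clearml.conf fragment, assuming the standard aws.s3.credentials layout; the endpoint and keys below are placeholders:

```
# clearml.conf (fragment) - note the "s3." prefix on the host
aws {
    s3 {
        credentials: [
            {
                host: "s3.my-storage-host:9000"  # placeholder endpoint
                key: "ACCESS_KEY"                # placeholder
                secret: "SECRET_KEY"             # placeholder
            }
        ]
    }
}
```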

11 months ago
0 Reporting Nonetype Scalars.

By default, NoneType scalars are reported as 0 values.

4 months ago
0 Hi, We Have Recently Upgraded To

Regarding 1., are you trying to delete the project from the UI? (I can't see an attached image in your message.)

2 years ago
0 Hi, We Have Recently Upgraded To

OutrageousSheep60 that is correct, each dataset is in a different subproject. That is why bug 2. happens as well

2 years ago
0 Hi, We Have Recently Upgraded To

Regarding number 2., that is indeed a bug and we will try to fix it as soon as possible.

2 years ago
0 Since Clearml 1.6.3, A Dataset Attached To A Task Now Renames That Task By Adding A

UnevenDolphin73 Yes, it makes sense. At the moment, this is not possible. When using use_current_task=True, the task gets attached to the dataset and moved under dataset_project/.datasets/dataset_name. Maybe we could make the task not disappear from its original project in the near future.
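As a rough sketch of that behaviour (not a definitive recipe; the project and dataset names are placeholders, and a configured ClearML server is assumed, which is why the imports are deferred into the function):

```python
# Sketch: a dataset created with use_current_task=True reuses the calling
# task, which the UI then shows under my_project/.datasets/my_dataset.
def build_dataset_from_current_task():
    from clearml import Dataset, Task  # deferred: needs a ClearML server

    Task.init(project_name="my_project", task_name="build dataset")
    ds = Dataset.create(
        dataset_project="my_project",
        dataset_name="my_dataset",
        use_current_task=True,  # attach this task to the dataset
    )
    return ds
```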

2 years ago
0 Since Clearml 1.6.3, A Dataset Attached To A Task Now Renames That Task By Adding A

I don't think the version makes the task disappear. You should still see the task in the Datasets section. Maybe there is something you do with that task/dataset that makes it disappear (even though it shouldn't)?

2 years ago
0 Hello. I Am Using Hydra As Configuration Manager And I Am Using A Decorator To Specify The File And The Folder It Is Contained In (Typical Hydra Syntax). The Code Now Runs Into This Error That Says, "Primary Config Directory Not Found. Set The Environment

Hi @<1715175986749771776:profile|FuzzySeaanemone21> ! Are you running this remotely? If so, you should work inside a repository so that the agent can clone it; the repository should include the config as well. Otherwise, the script will run as a "standalone".

6 months ago
0 Hi Everyone, I Have A Question About Using

Hi @<1643060801088524288:profile|HarebrainedOstrich43> ! Could you please share some code that could help us reproduce the issue? I tried cloning, changing parameters and running a decorated pipeline, but the whole process worked as expected for me.

11 months ago
0 When I Run An Experiment (Self Hosted), I Only See Scalars For Gpu And System Performance. How Do I See Additional Scalars? I Have

Hi BoredHedgehog47 ! We tried to reproduce this, but failed. What we tried is running the attached main.py, which launches sub.py via Popen.
Can you please run main.py as well and tell us whether you still encounter the bug? If not, is there anything else you can think of that could trigger this bug besides creating a subprocess?
Thank you!

2 years ago
0 I’M Trying To Understand The Execution Flow Of Pipelines When Translating From Local To Remote Execution. I’Ve Defined A Pipeline Using The

If the task is running remotely and the parameters are populated, then the local run parameters will not be used; instead, the parameters that are already on the task will be used. This is because we want to allow users to change these parameters in the UI if they want to, so the parameters in the code are ignored in favor of the ones in the UI.
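A minimal sketch of this behaviour (the project and parameter names are hypothetical, and a ClearML server is assumed, so the call is wrapped in a function):

```python
def run_with_connected_params():
    from clearml import Task  # deferred import: needs a configured server

    task = Task.init(project_name="my_project", task_name="params demo")
    params = {"learning_rate": 0.1, "batch_size": 32}
    # Locally: params keeps the values above (and they are recorded on the task).
    # Remotely: connect() overwrites params in place with the values stored on
    # the task, i.e. whatever was edited in the UI.
    task.connect(params)
    return params
```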

9 months ago
0 I Am Using Clearml Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To Clearml. It Shows The Experiment Running (For Days) And It'S Running Fine On The Pc But No Scalers Or Debug Samples Are Shown. How Do We Troubleshoot T

Hi @<1719524641879363584:profile|ThankfulClams64> ! What tensorflow/keras version are you using? I noticed that in the TensorBoardImage you are using tf.Summary, which no longer exists since tensorflow 2.2.3, which I believe is too old to work with tensorboard==2.16.2.
Also, how are you stopping and starting the experiments? When starting an experiment, are you resuming training? In that case, you might want to consider setting the initial iteration to the last iteration your prog...
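For the resume case, a sketch using Task.set_initial_iteration (the checkpoint dict format here is a hypothetical example, not something ClearML prescribes):

```python
def resume_scalar_axis(task, checkpoint: dict) -> int:
    """Continue scalar reporting from a checkpoint's last iteration.

    `task` is a clearml Task; `checkpoint` is assumed to carry the last
    completed iteration under the "iteration" key.
    """
    last_iteration = int(checkpoint.get("iteration", 0))
    # Without this, a restarted run reports scalars starting from 0 again.
    task.set_initial_iteration(last_iteration)
    return last_iteration
```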

5 months ago
0 I Get These Warnings Whenever I Run Pipelines And I Have No Idea What It Means Or Where It Comes From:

Hi @<1694157594333024256:profile|DisturbedParrot38> ! We weren't able to reproduce this, but you can find the source of the warning by appending the following code at the top of your script:

import traceback
import warnings
import sys

def warn_with_traceback(message, category, filename, lineno, file=None, line=None):
    # print the stack at the point the warning is emitted
    log = file if hasattr(file, 'write') else sys.stderr
    traceback.print_stack(file=log)
    log.write(warnings.formatwarning(message, category, filename, lineno, line))

warnings.showwarning = warn_with_traceback
8 months ago
0 Hi All, I'Ve Been Experimenting Around With Automating The Data Sync. This Is Related To This Thread:

Hi @<1545216070686609408:profile|EnthusiasticCow4> ! I have an idea.
The flow would be like this: you create a dataset whose parent is the previously created dataset; the version will auto-bump. Then you sync this dataset with the folder. Note that sync returns the number of added/modified/removed files. If all of these are 0, you call Dataset.delete on this dataset and break/continue; otherwise you upload and finalize the dataset.

Something like:

parent =...
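A hedged sketch of that loop (the project and dataset names are placeholders; sync_folder is assumed to return the added/modified/removed counts as described above, and the driver needs a live server, so only the decision helper is meant to run anywhere):

```python
def should_keep(added: int, modified: int, removed: int) -> bool:
    """A synced dataset is only worth finalizing if something changed."""
    return added > 0 or modified > 0 or removed > 0

def sync_step(parent_dataset, folder):
    from clearml import Dataset  # deferred: needs a ClearML server

    ds = Dataset.create(
        dataset_project="my_project",
        dataset_name="my_dataset",
        parent_datasets=[parent_dataset],  # version auto-bumps from the parent
    )
    added, modified, removed = ds.sync_folder(folder)
    if not should_keep(added, modified, removed):
        Dataset.delete(dataset_id=ds.id)  # nothing changed: drop this version
        return None
    ds.upload()
    ds.finalize()
    return ds
```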
one year ago
0 Hello! I Can'T Seem To Be Able To Stop Clearml From Automatically Logging Model Files (Optimizer, Scheduler). It'S A Useful Feature But I'D Like To Have Some Control Over It, So That The Disk Space In My File Storage Isn'T Overused. I'M Using

Hi @<1523701345993887744:profile|SillySealion58> ! We allow finer grained control over model uploads. Please refer to this GH thread for an example on how to achieve that: None

6 months ago
0 Hi! I'M Running Launch_Multi_Mode With Pytorch-Lightning

Hi @<1578555761724755968:profile|GrievingKoala83> ! It looks like lightning uses the NODE_RANK env var to get the rank of a node, instead of NODE (which is used by pytorch).
We don't set NODE_RANK yet, but you could set it yourself after launch_multi_node:

import os
current_conf = task.launch_multi_node(2)
os.environ["NODE_RANK"] = str(current_conf.get("node_rank", ""))

Hope this helps

6 months ago
0 Hi All, I Observed That When I Get A Dataset With

SmallGiraffe94 You should use dataset_version="2022-09-07" (not version=...). This should work for your use-case.
Dataset.get shouldn't actually accept a version kwarg, but it does because it accepts some **kwargs that are used internally. We will make sure to warn users who pass values through **kwargs from now on.
Anyway, this issue still exists, but in another form:
Dataset.get can't get datasets with a non-semantic version, unless the version is sp...
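For clarity, a sketch of the corrected call (the project and dataset names are placeholders, and a ClearML server is assumed, hence the deferred import):

```python
def get_versioned_dataset():
    from clearml import Dataset  # deferred: needs a configured server

    # dataset_version is the supported kwarg; version= is silently swallowed
    # by **kwargs, which is the pitfall described above.
    return Dataset.get(
        dataset_project="my_project",
        dataset_name="my_dataset",
        dataset_version="2022-09-07",
    )
```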

2 years ago
0 Hi Guys, Are There Any Ways To Suppress Clearml’S Console Messages? I’M Not Interested In Messages Like This, Especially About Uploading Models. I Tried Some Stuff With Loggers ” Logging.Basicconfig(Format=‘%(Name)S - %(Levelname)S - %(Message)S’, Level=

Hi @<1715900760333488128:profile|ScaryShrimp33> ! You can set the log level by setting the CLEARML_LOG_LEVEL env var before importing clearml. For example:

import os
os.environ["CLEARML_LOG_LEVEL"] = "ERROR"  # or str(logging.CRITICAL) / whatever level also works

Note that the ClearML Monitor warning is most likely logged to stdout, in which case that message can't really be suppressed, but model-upload-related messages should be.

6 months ago
0 Seems Like Clearml Tasks In Offline Mode Cannot Be Properly Closed, We Get

That is a clear bug to me. Can you please open a GH issue?

one year ago
0 Does Clearml Somehow

UnevenDolphin73 did that fix the logging for you? It doesn't seem to work on my machine. This is what I'm running:

from clearml import Task
import logging

def setup_logging():
    level = logging.DEBUG
    logging_format = "[%(levelname)s] %(asctime)s - %(message)s"
    logging.basicConfig(level=level, format=logging_format)

t = Task.init()
setup_logging()
logging.info("HELLO!")
t.close()
logging.info("HELLO2!")

2 years ago
0 Does Clearml Somehow

I see. We need to fix both anyway, so we will just do that

2 years ago
0 Does Clearml Somehow

UnevenDolphin73 looking at the code again, I think it is actually correct. It's a bit hackish, but we do use deferred_init as an int internally. Why do you need to close the task exactly? Do you have a script that would highlight the behaviour change between <1.8.1 and >=1.8.1?

2 years ago
0 Hello Everyone, I Want To Run A Github Action On Each Repo Pull Request To Create A Task In Clearml To Basically Do Check Of Current Pr Code With Some Scenarios. Clearml Task Gets Repo And Commit Id As Follows (From Console):

Hi @<1693795212020682752:profile|ClumsyChimpanzee88> ! Not sure I understand the question. If the commit ID does not exist remotely, then it can't be pulled. How would you pull the commit to another machine otherwise, is this possible using your current workflow?

8 months ago
0 Is There Any Way To Get Dataset Size Without Downloading State.Json? Im Doing Ds = Clearml.Dataset.Get(Dataset_Id=D_Id), But It Instantly Tries To Download State.Json Which Is On S3. Im Only Interested In Size And File Count Which I Then Get From Calling

Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! You could get the Dataset Struct configuration object and read the job_size from there, which is the dataset size in bytes. The task ID of a dataset is the same as the dataset's ID, by the way, so you can call all the ClearML task-related functions on the task you get via Task.get_task("dataset_id").
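A sketch of reading the job_size from that configuration object. The exact JSON layout of the Dataset Struct is an assumption here: it is taken to be a mapping of version entries that each carry a job_size field.

```python
import json

def dataset_size_bytes(task) -> int:
    """Sum job_size over the entries of the task's "Dataset Struct" config.

    `task` is the clearml Task backing the dataset, e.g. obtained with
    Task.get_task(task_id=dataset_id); the struct layout assumed here is
    a dict of entries each holding a job_size in bytes.
    """
    struct = json.loads(task.get_configuration_object("Dataset Struct"))
    return sum(int(entry.get("job_size", 0)) for entry in struct.values())
```

Usage would then be something like dataset_size_bytes(Task.get_task(task_id=dataset_id)), without ever downloading state.json.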

7 months ago