SmugDolphin23
Moderator
0 Questions, 425 Answers
Active since 10 January 2023
Last activity 2 years ago

What sort of integration is possible with ClearML and SageMaker? On the page

Hi @<1532532498972545024:profile|LittleReindeer37> @<1523701205467926528:profile|AgitatedDove14>
I got the session with a bit of "hacking".
See this script:

import boto3, requests, json
from urllib.parse import urlparse

def get_notebook_data():
    # SageMaker writes the notebook instance metadata to this local file
    log_path = "/opt/ml/metadata/resource-metadata.json"
    with open(log_path, "r") as logs:
        _logs = json.load(logs)
    return _logs

notebook_data = get_notebook_data()
client = boto3.client("sagemaker")
response = client.create_...
2 years ago
Hi, I'm trying to upload output model files (like .pth) to ClearML server. Assume my

Hi @<1523721697604145152:profile|YummyWhale40> ! Are you able to upload artifacts of any kind other than models to the CLEARML_DEFAULT_OUTPUT_URI?
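
A minimal sketch (project, task name, and URI are hypothetical) that uploads a plain artifact alongside the run, so you can tell whether only model uploads fail:

from clearml import Task

task = Task.init(
    project_name="example",
    task_name="upload-check",
    output_uri="s3://my-bucket/models",  # stands in for CLEARML_DEFAULT_OUTPUT_URI
)

# if this artifact uploads fine while .pth files do not, the problem is model-specific
task.upload_artifact(name="sanity-check", artifact_object={"status": "ok"})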

one year ago
Hi all, after upgrading to SDK 1.8.0 we are having an issue adding external files to a dataset from GCS. This is the code we use:

You could try this in the meantime if you don't mind temporary workarounds:
dataset.add_external_files(source_url=" ", wildcard=["file1.csv"], recursive=False)

2 years ago
Are ClearML Datasets intended to be static, or can they be dynamic?

Hi @<1523701279472226304:profile|SoreHorse95> ! add_external_files only stores the links. If a file changes and you don't have a dataset version with updated links, I would expect some caching mechanisms to break, resulting in some files not being re-downloaded into the cache after getting the dataset.
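
A rough sketch (names and IDs are hypothetical) of refreshing the links by creating a new dataset version whenever the external files change:

from clearml import Dataset

new_version = Dataset.create(
    dataset_name="my-dataset",
    dataset_project="my-project",
    parent_datasets=["<previous_dataset_id>"],
)
new_version.add_external_files(source_url="s3://my-bucket/data/")  # re-records the links
new_version.upload()
new_version.finalize()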

one year ago
Hi all, I'm trying to clone and run the

Hi @<1587615463670550528:profile|DepravedDolphin12> ! get() should indeed return a python object. What clearml version are you using? Also, can you share the code?

one year ago
Hi guys, are there any ways to suppress ClearML's console messages? I'm not interested in messages like this, especially about uploading models. I tried some stuff with loggers: logging.basicConfig(format='%(name)s - %(levelname)s - %(message)s', level=

Hi @<1715900760333488128:profile|ScaryShrimp33> ! You can set the log level by setting the CLEARML_LOG_LEVEL env var before importing clearml. For example:

import os

os.environ["CLEARML_LOG_LEVEL"] = "ERROR"  # str(logging.CRITICAL) or any other level also works

import clearml  # the env var must be set before this import runs

Note that the ClearML Monitor warning is most likely logged to stdout, in which case that message can't really be suppressed, but model-upload-related messages should be.
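
Setting the variable in the shell before launching the script achieves the same thing; a one-line sketch (script name hypothetical):

CLEARML_LOG_LEVEL=ERROR python train.py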

8 months ago
Hello all, I'm an ML engineer looking to transition our company to a new MLOps system. Many of our projects are currently built around Hydra and I'm attempting to see what I would need to do to integrate ClearML into our workflow. I'm fully aware that you

Hi @<1545216070686609408:profile|EnthusiasticCow4> !

So you can inject new command line args that hydra will recognize.

This is true.

However, if you enable _allow_omegaconf_edit_: True, I think ClearML will "inject" the OmegaConf saved under the configuration object of the prior run, overwriting the overrides.

This is also true.

2 years ago
Hi, I have an issue when running a pipeline controller remotely in Docker. Basically I have a module that reads a config file into a dict and calls the pipeline controller, like

Hi @<1570220858075516928:profile|SlipperySheep79> ! What happens if you do this:

import yaml
import argparse
from my_pipeline.pipeline import run_pipeline
from clearml import Task

parser = argparse.ArgumentParser()
parser.add_argument('--config', type=str, required=True)

if __name__ == '__main__':
    if not Task.current_task():
        # only parse the CLI args and read the YAML when running locally,
        # i.e. before a ClearML task exists
        args = parser.parse_args()
        with open(args.config) as f:
            config = yaml.load(f, yaml.FullLoader)
    run_pipeline(config)

one year ago
Hi, I'm trying to upload output model files (like .pth) to ClearML server. Assume my

Could you please try with an older sdk version just to make sure there were no regressions?

one year ago
Hello, community, I hope this message finds you all well. I am currently working on a project involving hyperparameter optimization (HPO) using the Optuna optimizer. Specifically, I've been trying to navigate the parameters 'min_iteration_per_job' and 'm

Hi @<1523703652059975680:profile|ThickKitten19> ! Could you try increasing max_iteration_per_job and check if that helps? Also, any chance you are fixing the number of epochs to 10, either through a hyperparameter, e.g. DiscreteParameterRange("General/epochs", values=[10]), or simply hard-coding it when calling something like model.fit(epochs=10)?
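
A hedged sketch (base task ID and metric names are hypothetical) showing where min_iteration_per_job and max_iteration_per_job sit next to a fixed-epochs hyperparameter:

from clearml.automation import DiscreteParameterRange, HyperParameterOptimizer
from clearml.automation.optuna import OptimizerOptuna

optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",
    hyper_parameters=[
        # if epochs are pinned like this, jobs can end long before max_iteration_per_job matters
        DiscreteParameterRange("General/epochs", values=[10]),
    ],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=OptimizerOptuna,
    min_iteration_per_job=100,
    max_iteration_per_job=10000,  # the value to try increasing
    total_max_jobs=20,
)
optimizer.start()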

11 months ago
I am using ClearML Pro and pretty regularly I will restart an experiment and nothing will get logged to ClearML. It shows the experiment running (for days) and it's running fine on the PC but no scalars or debug samples are shown. How do we troubleshoot t

@<1719524641879363584:profile|ThankfulClams64> you could try using the compare function in the UI to compare experiments from the machine where the scalars are not reported properly against experiments from a machine that reports them properly. I would then suggest replicating the environment exactly on the problematic machine.

7 months ago
Hello all, I want to clarify something. In the
With that said, can I run another thing by you related to this? What do you think about a PR that adds the functionality I originally assumed schedule_function was for? By this I mean: adding a new parameter (this wouldn't change anything about schedule_function or how .add_task() currently behaves) that also takes a function, but one that expects to receive a task_id when called. This function would run at runtime (when the task scheduler would normally execute the scheduled task) and use ...
one year ago
I uploaded a direct access file to the ClearML Dataset system like this one. How can I access the link of the uploaded item? Whenever I try to call

Hi @<1570583237065969664:profile|AdorableCrocodile14> ! get_local_copy will always copy/download external files to a folder. To get the external files, there is a property on the dataset called link_entries, which returns a list of LinkEntry objects; each object has a link attribute, and each such link should point to an external file (in this case, your local paths prefixed with file:// ).
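
A small sketch (dataset ID hypothetical) of reading those links back:

from clearml import Dataset

dataset = Dataset.get(dataset_id="<dataset_id>")
for entry in dataset.link_entries:
    print(entry.link)  # e.g. file:///original/local/path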

one year ago
Hi, we have recently upgraded to

OutrageousSheep60 that is correct, each dataset is in a different subproject. That is why bug 2 happens as well.

2 years ago
Hey all, hope you're having a great day. Having an unexpected behavior with a training task of a YOLOv5 model on my pipeline, I specified a task in my training component like this:

FierceHamster54

initing the task before the execution of the file like in my snippet is not sufficient?

It is not, because os.system spawns a whole different process than the one you initialized your task in, so no patching is done on the framework you are using. Child processes need to call Task.init because of this, unless they were forked, in which case the patching is already done.

But the training.py already has a ClearML task created under the hood since its integratio...
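
A minimal sketch (script names hypothetical) of the pattern described above:

# parent.py
import os
from clearml import Task

task = Task.init(project_name="example", task_name="parent")
os.system("python training.py")  # new process: the parent's patching does not apply here

# training.py
from clearml import Task

task = Task.init(project_name="example", task_name="training")  # patches the framework in the child
# ... training code whose checkpoints/metrics are now auto-logged ...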

2 years ago
Hi all

Hi @<1780043419314294784:profile|LargeHamster21> ! Looks like you are using Python 3.11 (agent.default_python=3.11), while Pyro4 is incompatible with that Python version.
I would suggest downgrading the Python version or migrating to Pyro5.
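
A clearml.conf sketch for pinning the agent to an older interpreter (the exact version is illustrative):

agent {
    default_python: 3.10
}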

2 months ago
Hello all! Is it possible to utilize shared memory in ClearML for tasks like model inference, where instead of transferring images over the network (e.g., HTTP, RPC), we use a shared memory extension? Please refer to the link below:

Hi @<1657918706052763648:profile|SillyRobin38> ! If it is compatible with HTTP/REST, you could try setting api.files_server to the endpoint, or sdk.storage.default_output_uri in clearml.conf (depending on your use-case).
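
A clearml.conf sketch (the endpoint is hypothetical) for pointing the file server at such an extension:

api {
    files_server: "http://shared-memory-gateway:8081"
}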

11 months ago
Hey there, I am a new user of ClearML and really enjoying it so far! I noticed that my model checkpoints are saved after each epoch. Instead I would like to only save the best and last model checkpoint. Is that possible? I could not find something regardi

Hi @<1547390464557060096:profile|NuttyKoala57> ! You can use wildcards in auto_connect_frameworks to filter your models; check the documentation on Task.init. You might also want to look at the related GitHub thread for another way to do this.
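
A short sketch (patterns are illustrative) that keeps only checkpoints whose filenames match the given wildcards:

from clearml import Task

task = Task.init(
    project_name="example",
    task_name="filtered-checkpoints",
    auto_connect_frameworks={"pytorch": ["*best*", "*last*"]},
)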

2 years ago
Cannot upload a dataset with a parent - seems very odd! ClearML versions I tried: 1.6.1, 1.6.2. Scenario: * Create parent dataset (with storage on S3) * Upload data * Close dataset * Create child dataset (tried with storage on both S3 or on ClearML serv

Hi RoughTiger69 ! Can you try adding the files using a Python script, so that we can get an exception traceback? Something like this:
from clearml import Dataset

# or just use the ID of the dataset you previously created instead of creating a new one
parent_dataset = Dataset.create(dataset_name="xxxx", dataset_project="yyyyy", output_uri=" ")
parent_dataset.add_files("folder1")
parent_dataset.upload()
parent_dataset.finalize()

child_dataset = Dataset.create(dataset_name="xxxx", dat...

2 years ago
Hey everyone, I have been trying to get the PyTorch Lightning CLI to work with remote task execution, but it just won't work. I took the

Hi HomelyShells16 ! How about doing things this way? Does it work for you?
class ClearmlLightningCLI(LightningCLI):
    def __init__(self, *args, **kwargs):
        # log the requirements before the task is created
        Task.add_requirements("requirements.txt")
        self.task = Task.init(
            project_name="example",
            task_name="pytorch_lightning_jsonargparse",
        )
        super().__init__(*args, **kwargs)

    def instantiate_classes(self, *args, **kwargs):
        super().instantiate_classes(*args, **kwargs)
      ...
2 years ago