SmugDolphin23
Moderator
0 Questions, 433 Answers
Active since 10 January 2023
Last activity 2 years ago

Reputation: 0
0 I Have An Issue Using Clearml Integrations With Kerastuner. I Have Followed The Guide In

Hi @<1581454875005292544:profile|SuccessfulOtter28>! The logger is likely outdated. Can you please open a GitHub issue about it?

one year ago
0 Since Clearml 1.6.3, A Dataset Attached To A Task Now Renames That Task By Adding A

Can you see your task if you run this minimal example, UnevenDolphin73?
```
from clearml import Task, Dataset

task = Task.init(task_name="name_unique", project_name="project")
d = Dataset.create(dataset_name=task.name, dataset_project=task.get_project_name(), use_current_task=True)
d.upload()
d.finalize()
```

3 years ago
0 Does Clearml Somehow

UnevenDolphin73, did that fix the logging for you? It doesn't seem to work on my machine. This is what I'm running:
```
from clearml import Task
import logging

def setup_logging():
    level = logging.DEBUG
    logging_format = "[%(levelname)s] %(asctime)s - %(message)s"
    logging.basicConfig(level=level, format=logging_format)

t = Task.init()
setup_logging()
logging.info("HELLO!")
t.close()
logging.info("HELLO2!")
```

2 years ago
0 Hi Everyone, I Get An Error When I Add An Argument Of Type Enum To A Pipeline Component (@Pipelinedecorator.Component). At The Same Time Pipelines (@Pipelinedecorator.Pipeline) And Normal Functions Work Fine With Enums. The Error Message Looks Like This:

Hi @<1643060801088524288:profile|HarebrainedOstrich43>! At the moment, we don't support default arguments that are typed via a class implemented in the same module as the function.
The way pipelines work is: we copy the code of the function steps (and eventually their decorators as well, if declared in the same file), then we copy all the imports in the module. The problem is, we don't copy classes.
You could keep your enum in a separate file and import it; that should work, as in the sketch below.
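For illustration, a minimal sketch of that workaround (the module and names here are hypothetical, not from the original thread):
```
# task_enums.py -- the enum lives in its own module, so it is imported rather than copied
from enum import Enum

class Color(Enum):
    RED = "red"
    BLUE = "blue"

# pipeline.py
from clearml import PipelineDecorator
from task_enums import Color  # imported from another module, so the typed default below works

@PipelineDecorator.component(return_values=["result"])
def pick(color: Color = Color.RED):
    return color.value

@PipelineDecorator.pipeline(name="enum-example", project="examples", version="1.0.0")
def run():
    pick()
```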

one year ago
0 Hi! Is There A Way To

Hi @<1523707653782507520:profile|MelancholyElk85>! I don't think this is possible at the moment 😕 Feel free to open a GH issue proposing this feature, though.

2 years ago
0 Hi! I'M Currently Considering Switching To Clearml. In My Current Trials I Am Using Up The Api Calls Very Quickly Though. Is There Some Way To Limit That? The Documentation Is A Bit Sparse On What Uses How Many Api Calls. Is It Possible To Batch Them For

FlutteringWorm14, we do batch the reported scalars. The flow is like this: the task object creates a Reporter object, which spawns a daemon in another child process that batches multiple report events. The batching is done after a certain time in the child process, or the parent process can force the batching after a certain number of report events are queued.
You could try this hack to achieve what you want:
```
from clearml import Task
from clearml.backend_interface.metrics.repor...
```
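Separately, if the goal is simply fewer API calls, the report period in clearml.conf can be raised; a sketch, assuming sdk.development.worker.report_period_sec is the relevant knob (the value below is only an example, not a recommendation):
```
sdk {
    development {
        worker {
            # report period in seconds; larger values mean fewer, bigger batches
            report_period_sec: 30
        }
    }
}
```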

3 years ago
0 Hi Everyone, I Have A Question About Using

Hi @<1643060801088524288:profile|HarebrainedOstrich43>! The RC is now out and installable via pip install clearml==1.14.1rc0

one year ago
0 Hi. I Have A Job That Processes Images And Creates ~5 Gb Of Processed Image Files (Lots Of Small Ones). At The End - It Creates A

PanickyMoth78 You might also want to set some lower values for sdk.google.storage.pool_connections/pool_maxsize in your clearml.conf. Newer clearml versions set max_workers to 1 by default, and the number of connections should be tweaked using these values. If that doesn't help, please let us know.
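For reference, a sketch of where those keys live in clearml.conf (the values are placeholders to tune, not recommendations):
```
sdk {
    google.storage {
        # connection pool settings used when uploading to GCS
        pool_connections: 16
        pool_maxsize: 32
    }
}
```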

2 years ago
0 Hello! I Have The Following Error In The Task'S Console:

Btw, to specify a custom package, add the path to that package to your requirements.txt (the path can also be a GitHub link, for example).
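For instance, hypothetical requirements.txt entries (the path and URL are placeholders):
```
# a local path to the package
./libs/my_custom_package
# or a git link
git+https://github.com/someuser/my_custom_package.git@main
```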

2 years ago
0 Anyone Here With Any Idea Why My Service Tasks Get Aborted When Going To Sleep?

Hi @<1523701868901961728:profile|ReassuredTiger98>! Looks like the task actually somehow gets run by both an agent and locally at the same time, so one of them is aborted. Any idea why this might happen?

2 years ago
0 Anyone Here With Any Idea Why My Service Tasks Get Aborted When Going To Sleep?

There might be something wrong with the agent using ubuntu:22.04. Anyway, good to know everything works fine now.

2 years ago
0 Hi Everyone. Anyone Else Encountering Model Upload Failure To S3 On Clearml 1.12.0? I Get 0:21:32,292 - Clearml.Storage - Error - Failed Uploading: ‘Lazyevalwrapper’ Object Cannot Be Interpreted As An Integer 2023-07-31 10:21:32,499 - Clearml.Storage - E

Hi @<1523705721235968000:profile|GrittyStarfish67>! Please install the latest RC: pip install clearml==1.12.1rc0 to fix this. We will have an official release soon as well.

2 years ago
0 Hi

Hi @<1546303293918023680:profile|MiniatureRobin9>! The PipelineController has a property called id, so just doing something like pipeline.id should be enough.
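A minimal sketch (the pipeline name and project are placeholders):
```
from clearml import PipelineController

pipeline = PipelineController(name="my-pipeline", project="examples", version="1.0.0")
# ... add steps here ...
print(pipeline.id)  # the id of the controller's task
```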

one year ago
0 Hey Everyone, I Have Been Trying To Get The Pytorch Lightning Cli To Work With Remote Task Execution, But It Just Won'T Work. I Took The

Hi HomelyShells16! How about doing things this way? Does it work for you?
```
class ClearmlLightningCLI(LightningCLI):
    def __init__(self, *args, **kwargs):
        Task.add_requirements("requirements.txt")
        self.task = Task.init(
            project_name="example",
            task_name="pytorch_lightning_jsonargparse",
        )
        super().__init__(*args, **kwargs)

    def instantiate_classes(self, *args, **kwargs):
        super().instantiate_classes(*args, **kwargs)
        ...
```
3 years ago
0 Hello, For Some Reason My Upload Speed To S3 Is Insanely Slow, I Noticed In Logs That It Upoads To /Tmp Folder. What Does That Mean? Why Tmp?

Hi @<1590514584836378624:profile|AmiableSeaturtle81>! What function are you using to upload the data?

one year ago
0 Hi Team, I Am Trying To Run A Pipeline Remotely Using Clearml Pipeline And I’M Encountering Some Issues. Could Anyone Please Assist Me In Resolving Them?

Hi!
It is possible to use the same queue for the controller and the steps, but there need to be at least 2 agents pulling tasks from that queue (see the sketch below). Otherwise, if there is only 1 agent, that agent will be busy running the controller and won't be able to fetch the steps.

Regarding missing local packages: the step is run in a temporary directory that is different from the directory the script is originally in. To solve this, you could add all the modules/files you are interested in in a...
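For the first point, a sketch of launching two agents on one queue (the queue name is a placeholder):
```
clearml-agent daemon --queue my_queue --detached
clearml-agent daemon --queue my_queue --detached
```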

one year ago
0 Hello Everyone, I Want To Run A Github Action On Each Repo Pull Request To Create A Task In Clearml To Basically Do Check Of Current Pr Code With Some Scenarios. Clearml Task Gets Repo And Commit Id As Follows (From Console):

Hi @<1693795212020682752:profile|ClumsyChimpanzee88>! Not sure I understand the question. If the commit ID does not exist remotely, then it can't be pulled. How would you otherwise pull the commit to another machine; is this possible using your current workflow?

one year ago
0 Since Clearml 1.6.3, A Dataset Attached To A Task Now Renames That Task By Adding A

Can you please provide a minimal example that may make this happen?

3 years ago
0 I’M Trying To Understand The Execution Flow Of Pipelines When Translating From Local To Remote Execution. I’Ve Defined A Pipeline Using The

If the task is running remotely and the parameters are populated, then the local run parameters will not be used; instead, the parameters that are already on the task will be used. This is because we want to allow users to change these parameters in the UI if they want to, so the parameters in the code are ignored in favor of the ones in the UI.
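A small sketch of that behavior (the project, task name, and parameter are made up):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="param-demo")
params = {"learning_rate": 0.1}  # defaults used on a local run
# when executed remotely, values already stored on the task
# (e.g. edited in the UI) take precedence over these defaults
params = task.connect(params)
print(params["learning_rate"])
```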

one year ago
0 Hi Team, I Am Trying To Run A Pipeline Remotely Using Clearml Pipeline And I’M Encountering Some Issues. Could Anyone Please Assist Me In Resolving Them?

How about this one?

```
import clearml
import os

print("\n".join(open(os.path.join(clearml.__path__[0], "automation/controller.py")).read().split("\n")[310:320]))
```
one year ago
0 For Some Reason I Can'T Delete A Pipeline Projet, The Deletion Is Running Indefinitely. Is There A Way To Force The Deletion Of A Project Via The Apiclient?

Hi SmugSnake6! If you want to delete a project using the APIClient:
```
from clearml.backend_api.session.client import APIClient
from clearml.backend_interface.util import exact_match_regex

api_client = APIClient()
id = api_client.projects.get_all(
    name=exact_match_regex("pipeline_project/.pipelines/pipeline_name"),
    search_hidden=True,
)[0].id
api_client.projects.delete(project=id)
```
Notice that the tasks need to be archived.

3 years ago
0 Hello! When I Squash Multiple Datasets (E.G.

Hi SmallGiraffe94! Dataset.squash doesn't set the ids you specify in dataset_ids as parents. Also, notice that the current behaviour of squash is pulling the files of all the datasets into a temp folder and re-uploading them. How about creating a new dataset with id1, id2, id3 as parents instead: Dataset.create(..., parent_datasets=[id1, id2, id3])? Would this fit your use case?
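A minimal sketch of that suggestion (the names and ids are placeholders):
```
from clearml import Dataset

id1, id2, id3 = "<dataset-id-1>", "<dataset-id-2>", "<dataset-id-3>"

# parenting the new dataset avoids squash's re-upload of every file
merged = Dataset.create(
    dataset_name="merged_dataset",
    dataset_project="examples",
    parent_datasets=[id1, id2, id3],
)
merged.upload()
merged.finalize()
```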

3 years ago
0 Hi, I Have An Issue, But Lets Start With The Description. This Is Snippet Of My Project'S Structure:

Hi @<1554638160548335616:profile|AverageSealion33>! We pull git repos to copy the directory your task is running in. Because you deleted .git, we can't do that anymore. I think that, to fix this, you could just run the agent in the directory where .git previously existed.

2 years ago
0 Hi, I Am Trying To Use

Hi DrabOwl94! Looks like this is a bug; strange that no one found it until now. Anyway, you can just add a --params-override at the end of the command line and it should work (as well as --max-iteration-per-job <YOUR_INT> and --total-max-job <YOUR_INT>, as Optuna requires these). We will fix this one in the next patch.
Also, could you please open a GitHub issue? It should contain your command line and this error.
Thank you

2 years ago
0 Hi All, After Upgrading To Sdk 1.8.0 We Are Having Issue Adding External Files To Dataset From Gcs. This Is The Code We Use:

This only affects single files; if you wish to add directories (with wildcards as well), you should be able to.
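For reference, a sketch of adding an external directory (the bucket and wildcard are made up):
```
from clearml import Dataset

ds = Dataset.create(dataset_name="external-demo", dataset_project="examples")
# adding a whole external directory (optionally filtered by wildcard) still works;
# only adding individual external files hits the issue above
ds.add_external_files(source_url="gs://my-bucket/images/", wildcard="*.jpg")
ds.upload()
ds.finalize()
```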

2 years ago
0 Hello Everyone! I Ran A Test Experiment And Got An Error. I'M Running On An M1 Mac. Worker Local Without Gpu. Has Anyone Already Solved This Problem?

We used to have "<=20" as the default pip version in the agent. It looks like this default value still exists on your machine, but that version of pip doesn't know how to install your version of pytorch...
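If that's the case, the pinned version can be overridden in clearml.conf; a sketch, assuming agent.package_manager.pip_version is the relevant key (the version spec is only an example):
```
agent {
    package_manager {
        # replace the stale default pip pin
        pip_version: ">=20.2,<23"
    }
}
```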

2 years ago