SmugDolphin23
Moderator
0 Questions, 418 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0
0 Hello. When I Use

Hi DangerousDragonfly8! The file is there to test the upload to the bucket, as the name suggests. I don't think deleting it is a problem, and we will likely do that automatically in a future version.

2 years ago
0 Hi, With Clearml-Agent 1.5.1, I Tried To Run An Experiment Within A Docker With Image Python3:8 And It Failed Executing The Task While Trying To Call Python3.9. I Am Not Sure Why It's Using Python3.9, Since The agent.default_python Is 3.8 And The Image Is

Yes. Even if you use a docker image with 3.8, the agent doesn't really know that you have 3.8 installed. If it is run with 3.9, it will assume that is the version you want to use, so you need to change it in the config.
Not really sure why default_python is ignored (we will need to look into this), but python_binary should work...
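
For example, a minimal clearml.conf sketch (the interpreter path is an assumption; point it at the python binary inside your image):

    agent {
        # assumption: path of the python binary inside the docker image
        python_binary: "/usr/bin/python3.8"
    }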

2 years ago
0 Hi Everyone, I Get An Error When I Add An Argument Of Type Enum To A Pipeline Component (@PipelineDecorator.component). At The Same Time Pipelines (@PipelineDecorator.pipeline) And Normal Functions Work Fine With Enums. The Error Message Looks Like This:

@<1643060801088524288:profile|HarebrainedOstrich43> you are right. We actually attempt to copy the default arguments as well. What happens is that we aggregate these arguments in the kwargs dict, then we dump str(kwargs) in the script of the pipeline step. Problem is, str(dict) actually calls __repr__ on each key/value of the dict, so you end up with repr(MyEnum.FALSE) in your code, which is <MyEnum.FALSE: 'FALSE'>. One way to work around this is to add somet...
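
A minimal sketch of the problem described above (MyEnum is a stand-in for the enum in the report):

    from enum import Enum

    class MyEnum(Enum):
        FALSE = "FALSE"

    kwargs = {"arg": MyEnum.FALSE}
    # str() on a dict calls __repr__ on each value, so the dumped code
    # contains <MyEnum.FALSE: 'FALSE'>, which is not valid Python source
    print(str(kwargs))  # {'arg': <MyEnum.FALSE: 'FALSE'>}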

11 months ago
0 Hello! When Running This Code:

Please let me know if this works!

one year ago
0 Hello! When Running This Code:

Hi @<1523702652678967296:profile|DeliciousKoala34>! Looks like this is a bug in set_metadata. The model ID is not set, and set_metadata doesn't set it automatically. I would first upload the model file, then set the metadata, to avoid this bug. You can call update_weights to do that.
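
A minimal sketch of that order of operations (the project, task, and file names are placeholders):

    from clearml import OutputModel, Task

    task = Task.init(project_name="examples", task_name="model-metadata")
    model = OutputModel(task=task)
    # upload the weights first, so the model gets an ID assigned
    model.update_weights(weights_filename="model.pkl")
    # only then attach the metadata
    model.set_metadata("my-key", "my-value")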

one year ago
0 Since Clearml 1.6.3, A Dataset Attached To A Task Now Renames That Task By Adding A

Can you see your task if you run this minimal example, UnevenDolphin73?

    from clearml import Task, Dataset

    task = Task.init(task_name="name_unique", project_name="project")
    d = Dataset.create(dataset_name=task.name, dataset_project=task.get_project_name(), use_current_task=True)
    d.upload()
    d.finalize()

2 years ago
0 Hi. I Have A Job That Processes Images And Creates ~5 Gb Of Processed Image Files (Lots Of Small Ones). At The End - It Creates A

Hi PanickyMoth78 ! I ran the script and yes, it does take a lot more memory than it should. There is likely a memory leak somewhere in our code. We will keep you updated

2 years ago
0 Hello, I'm Having Huge Performance Issues On Large Clearml Datasets. How Can I Link To Parent Dataset Without Parent Dataset Files? I Want To Create A Smaller Subset Of Parent Dataset, Like 5% Of It. To Achieve This, I Have To Call remove_files() To 60K It

otherwise, you could run this as a hack:

    dataset._dataset_file_entries = {
        k: v
        for k, v in dataset._dataset_file_entries.items()
        if k not in files_to_remove  # you need to define this
    }

then call dataset.remove_files with a path that doesn't exist in the dataset.

7 months ago
0 Hi All! Question About Pipelines Using Decorators. The First Step Of My Pipeline Uses A Specific Repo, Specified Using

Hi ObedientDolphin41! Python allows you to decorate functions dynamically. See this example:

    from clearml.automation.controller import PipelineDecorator

    @PipelineDecorator.component(repo=" ", repo_branch="master")
    def step_one():
        print("step_one")
        return 1

    def step_two_dynamic_decorator(repo=" ", repo_branch="master"):
        @PipelineDecorator.component(repo=repo, repo_branch=repo_branch)
        def step_two(arg):
            print("step_two")
            return arg
        return step_two

2 years ago
0 Hi Everyone, I'm Currently Trying To Add A Csv-File That Is Located In An S3-Bucket To An Existing Clearml Dataset Using The Following Code:

You could consider downgrading to something like 1.7.1 in the meantime; it should work with that version.

2 years ago
0 I Have An Issue Getting A Model From The Model Repository When Running A Task In A Remote Worker. I Have A Custom Model That Was Saved With OutputModel:

Hi @<1523711002288328704:profile|YummyLion54>! By default, we don't upload the models to our file server, so in the remote run we will try to pull the file from your local machine, which will fail most of the time. Set the upload_uri to the api.files_server entry in your clearml.conf if you want to upload it to the ClearML server, or to any s3/gs/azure link if you prefer a cloud provider.
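
A minimal sketch (the URI and file name are placeholders; use your files_server address or a cloud bucket):

    from clearml import OutputModel, Task

    task = Task.init(project_name="examples", task_name="custom-model")
    model = OutputModel(task=task)
    # upload the weights somewhere the remote worker can actually reach
    model.update_weights(weights_filename="model.pkl", upload_uri="s3://my-bucket/models")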

one year ago
0 Hi Everyone. If I Edit A File In Configuration Objects In Clearml Ui, Will The New Parameters Be Injected In My Code When I Run This?

Hi PetiteRabbit11. This snippet works for me:

    from clearml import Task
    from pathlib2 import Path

    t = Task.init()
    config = t.connect_configuration(Path("config.yml"))
    print(open(config).read())

Note that you need to use the return value of connect_configuration when you open the configuration file.

2 years ago
0 Does Clearml Somehow

UnevenDolphin73 looking at the code again, I think it is actually correct. It's a bit hackish, but we do use deferred_init as an int internally. Why do you need to close the task, exactly? Do you have a script that would highlight the behaviour change between <1.8.1 and >=1.8.1?

2 years ago
0 Does Clearml Somehow

I see. We need to fix both anyway, so we will just do that

2 years ago
0 Does Clearml Somehow

So the flow is like:
MASTER PROCESS -> (optional) calls Task.init -> spawns some children
CHILD PROCESS -> calls Task.init. The init is deferred even though it should not be?
If so, we need to fix this for sure.
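
A minimal reproduction sketch of that flow (project and task names are placeholders):

    from multiprocessing import Process

    from clearml import Task

    def child():
        # CHILD PROCESS -> calls Task.init; the question is whether this init
        # ends up deferred even though deferred_init was not requested
        t = Task.init(project_name="examples", task_name="child")
        print(type(t))

    if __name__ == "__main__":
        # MASTER PROCESS -> (optionally) calls Task.init, then spawns a child
        Task.init(project_name="examples", task_name="master")
        p = Process(target=child)
        p.start()
        p.join()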

2 years ago
0 Hi! I'M Currently Considering Switching To Clearml. In My Current Trials I Am Using Up The Api Calls Very Quickly Though. Is There Some Way To Limit That? The Documentation Is A Bit Sparse On What Uses How Many Api Calls. Is It Possible To Batch Them For

FlutteringWorm14 we do batch the reported scalars. The flow is like this: the task object will create a Reporter object, which will spawn a daemon in another child process that batches multiple report events. The batching is done after a certain time in the child process, or the parent process can force the batching after a certain number of report events are queued.
You could try this hack to achieve what you want:

    from clearml import Task
    from clearml.backend_interface.metrics.repor...
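
The hack above is cut off in this view. As a rough public-API alternative (an assumption, not the original snippet), you can force queued events to be sent with Task.flush:

    from clearml import Task

    task = Task.init(project_name="examples", task_name="batched-reporting")  # placeholder names
    logger = task.get_logger()
    for i in range(100):
        logger.report_scalar(title="loss", series="train", value=1.0 / (i + 1), iteration=i)
    # flush the queued report events now instead of waiting for the daemon
    task.flush()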

2 years ago
0 Hello! I Have The Following Error In The Task's Console:

    [package_manager.force_repo_requirements_txt=true] Skipping requirements, using repository "requirements.txt"

Try adding clearml to the requirements.txt in your repository.

2 years ago
0 Dear Community, I Have Tried To Use

@<1668427963986612224:profile|GracefulCoral77> You can either create a child or keep the same dataset, as long as it is not finalized.
You can skip the finalization using the --skip-close argument. Anyhow, I can see why the current workflow is confusing. I will discuss it with the team; maybe we should allow syncing unfinalized datasets as well.
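
For example, a sketch of the sync invocation (the dataset ID and folder are placeholders, and the other arguments are assumed to match your clearml-data version):

    clearml-data sync --id <dataset_id> --folder ./local_data --skip-close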

11 months ago
0 Hello! I Have The Following Error In The Task's Console:

Can you try setting the repo when calling add_function_step?
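
Something like this minimal sketch (the repo URL and names are placeholders, and the repo/repo_branch arguments are assumed to be supported by your clearml version):

    from clearml import PipelineController

    def step_one():
        return 1

    pipe = PipelineController(name="pipeline", project="examples")
    pipe.add_function_step(
        name="step_one",
        function=step_one,
        repo="https://github.com/user/repo.git",  # placeholder URL
        repo_branch="master",
    )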

2 years ago
0 Hello, Is There A Way To Disable Dataset Caching So That When

Hi FreshParrot56 ! This is currently not supported 🙁

2 years ago
0 Hey Everyone, I Have Been Trying To Get The Pytorch Lightning Cli To Work With Remote Task Execution, But It Just Won't Work. I Took The

HomelyShells16 looks like some changes have been made to jsonargparse and pytorch_lightning since we released this binding feature. Could you try with jsonargparse==3.19.4 and pytorch_lightning==1.5.0? (No namespace parsing hack should be needed with these versions, I believe.)
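
For example:

    pip install jsonargparse==3.19.4 pytorch_lightning==1.5.0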

2 years ago
0 So From What I Can Tell Using

ShinyPuppy47 do you have a small example we could take a look at?

2 years ago
0 Hi Everyone, I Have A Question About Using

Hi @<1643060801088524288:profile|HarebrainedOstrich43>! Could you please share some code that could help us reproduce the issue? I tried cloning, changing parameters and running a decorated pipeline, but the whole process worked as expected for me.

11 months ago
0 Hi Everyone, I'm Currently Trying To Add A Csv-File That Is Located In An S3-Bucket To An Existing Clearml Dataset Using The Following Code:

Hi EnergeticGoose10 . This is a bug we are aware of. We have already prepared a fix and we will release it ASAP.

2 years ago
0 So From What I Can Tell Using

ShinyPuppy47 Try this: use task = Task.init(...) (not Task.create), then call task.set_base_docker.
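
A minimal sketch (the names and image are placeholders):

    from clearml import Task

    task = Task.init(project_name="examples", task_name="docker-task")
    # set the docker image the agent should use when running this task remotely
    task.set_base_docker("python:3.8")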

2 years ago
0 Hi, I Am Using

Hi @<1576381444509405184:profile|ManiacalLizard2> ! Can you please share a code snippet that I could run to investigate the issue?

8 months ago