SmugDolphin23
Moderator
0 Questions, 418 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
0 Hi everyone, weird problem with Dataset.get_local_copy (both from the SDK and from clearml-data): I have a dataset with a single file and lots of S3 links. Used to work perfectly until those files started becoming larger (or it is just a matter of bad timing

Hi @<1523705721235968000:profile|GrittyStarfish67> ! This looks like a boto3 error. You could try lowering sdk.aws.s3.boto3.max_multipart_concurrency in clearml.conf and setting max_workers=1 when calling Dataset.get_local_copy
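As a sketch, the clearml.conf override mentioned above could look like this (the key path is taken from the answer; the value 1 is illustrative):

```
sdk {
    aws {
        s3 {
            boto3 {
                # limit concurrent multipart transfers per download
                max_multipart_concurrency: 1
            }
        }
    }
}
```

On the SDK side, passing max_workers=1 to Dataset.get_local_copy further limits download parallelism.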

one year ago
0 Hi All

That's unfortunate. Looks like this is indeed a problem 😕 We will look into it and get back to you.

one year ago
0 What exactly triggers the "automagic" logging of the model and weights? I've pulled my simple test project out of Jupyter Lab and the same problem still exists, so it isn't a Jupyter Lab issue. A few things log, but never the model

Hi RoundMole15 ! Are you able to see a model logged when you run this simple example?
```python
from clearml import Task
import torch.nn.functional as F
import torch.nn as nn
import torch

class TheModelClass(nn.Module):
    def __init__(self):
        super(TheModelClass, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        s...
```

2 years ago
0 Hi everyone! Before I ask a question - you develop a very cool tool, thanks! I tried to write a simple pipeline using the PipelineDecorator, but found that ClearML modifies the code, for example it replaces

Hi @<1523703107031142400:profile|FlatOctopus65> ! Python 3.9 introduced a breaking change for codebases that parse code containing slices. You can read more about it in the Python 3.9 release notes. Notably:

* The code that produces a Python code from AST will need to handle indexing with tuples specially (see Tools/parser/unparse.py) because d[(a, b)] is valid syntax (although parenthesis are redundant), but d[(a, b:c)] is not.

What you could do is downgrade to...

2 years ago
0 Hi, do you know how to upload PySpark dataframes with ClearML as artifacts? For example, I have code:

Anyhow, there is a serialization_function argument you could use in upload_artifact. I could imagine that we don't properly serialize your artifacts by default. You could use the argument to pass a callback that would efficiently serialize the artifact. Note that getting the artifact back requires a deserialization function.
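As a minimal sketch of such a pair of callbacks (plain pickle is used here purely for illustration; for a real PySpark DataFrame you would likely serialize via obj.toPandas() or Parquet bytes instead, since a live DataFrame is tied to its Spark session):

```python
import pickle

def serialize_artifact(obj):
    # Callback passed as `serialization_function` to upload_artifact:
    # it must turn the object into bytes. For a PySpark DataFrame you
    # might instead return obj.toPandas().to_parquet() bytes.
    return pickle.dumps(obj)

def deserialize_artifact(blob):
    # Matching callback for retrieval: turns the bytes back into the object.
    return pickle.loads(blob)

# Wiring it up (sketch, assumes an initialized ClearML task):
# task.upload_artifact("my_df", df, serialization_function=serialize_artifact)
# df_back = task.artifacts["my_df"].get(deserialization_function=deserialize_artifact)
```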

11 months ago
0 Hey All, Is There Any Reason The Python Sdk

Do you have the full exception trace?

2 years ago
0 Hi, I Am Using

Hi @<1576381444509405184:profile|ManiacalLizard2> ! Can you please share a code snippet that I could run to investigate the issue?

8 months ago
0 Hi, I am trying to upload a model using PipelineController but I get the following error. clearml==1.8.3 Can anyone help here?

Don't call PipelineController functions after start has finished. Use a post_execute_callback instead
```python
from clearml import PipelineController

def some_step():
    return

def upload_model_to_controller(controller, node):
    print("Start uploading the model")

if __name__ == "__main__":
    pipe = PipelineController(name="Yolo Pipeline Controller", project="yolo_pipelines", version="1.0.0")

    pipe.add_function_step(
        name="some_step",
        function=some_st...
```
2 years ago
0 Since ClearML 1.6.3, a dataset attached to a task now renames that task by adding a

Can you see your task if you run this minimal example UnevenDolphin73 ?
```python
from clearml import Task, Dataset

task = Task.init(task_name="name_unique", project_name="project")
d = Dataset.create(dataset_name=task.name, dataset_project=task.get_project_name(), use_current_task=True)
d.upload()
d.finalize()
```

2 years ago
0 Since ClearML 1.6.3, a dataset attached to a task now renames that task by adding a

Can you please provide a minimal example that may make this happen?

2 years ago
0 Hello everyone! I am using ClearML to manage my model training for a thesis project. I am currently at the stage of hyper-parameter tuning my YOLOv5u model and am testing out the

Hi @<1691620883078057984:profile|ConfusedSeaanemone5> ! Those are the only 3 charts that the HPO constructs and reports. You could construct other charts/plots yourself and report them when a job completes using the job_completed_callback parameter.

8 months ago
0 Hi ClearMLers, I'm trying to create a dataset with tagged batches of data. I first create an empty dataset with dataset_name = 'name_dataset', and then create another tagged dataset with the first batch and with parent_datasets=['name_dataset']. It's

Hi @<1668427950573228032:profile|TeenyShells80> , the parent_datasets should be a list of dataset IDs or clearml.Dataset objects, not dataset names. Maybe that is the issue

10 months ago
0 Hi! Is There A Way To

I left another comment today. It’s about something raising an exception when creating a set from the file entries

one year ago
0 Hi! Is There A Way To

We would appreciate a PR! Just open a GH issue, then the PR, and we will review it.

one year ago
0 Hi! Is There A Way To

Hi @<1523707653782507520:profile|MelancholyElk85> ! I left you a comment on the PR

one year ago
0 Hi, with clearml-agent 1.5.1, I tried to run an experiment within a docker with image python3:8 and it failed executing the task while trying to call python3.9. I am not sure why it's using python3.9, since the agent.default_python is 3.8 and the image is

Yes - even if you use a docker image with 3.8, the agent doesn't really know that you have 3.8 installed. If it is run with 3.9, it will assume that is the desired version you want to use, so you need to change it in the config.
Not really sure why default_python is ignored (we will need to look into this), but python_binary should work...
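A sketch of the relevant clearml.conf override on the agent machine (the binary path is illustrative and assumes 3.8 is actually installed in the image):

```
agent {
    # point the agent at the interpreter you want used inside the container
    python_binary: "/usr/bin/python3.8"
}
```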

one year ago
0 Does Clearml Somehow

UnevenDolphin73 Looking at the code again, I think it is actually correct. It's a bit hackish, but we do use deferred_init as an int internally. Why do you need to close the task exactly? Do you have a script that would highlight the behaviour change between <1.8.1 and >=1.8.1?

2 years ago
0 Hi! Is There A Way To

@<1523707653782507520:profile|MelancholyElk85> my bad, I forgot to press on "Submit Review" :face_palm:

one year ago
0 I configured S3 storage in my clearml.conf file on a worker machine. Then I run an experiment which produced a small artifact and it doesn't appear in my cloud storage. What am I doing wrong? How to make artifacts appear on my S3 storage? Below is a sample o

@<1526734383564722176:profile|BoredBat47> Yeah. This is an example:

```
s3 {
    key: "mykey"
    secret: "mysecret"
    region: "us-east-1"
    credentials: [
        {
            bucket: ""
            key: "mykey"
            secret: "mysecret"
            region: "us-east-1"
        },
    ]
}
# some other config
default_output_uri: ""
```
one year ago
0 So From What I Can Tell Using

ShinyPuppy47 Try this: use task = Task.init(...) (not Task.create) and then call task.set_base_docker.

2 years ago
0 Hello! I Have The Following Error In The Task'S Console:

Btw, to specify a custom package, add the path to that package to your requirements.txt (the path can also be a github link for example).
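For illustration, such a requirements.txt could mix regular packages with a local path or a GitHub link (both custom entries below are made-up examples):

```
numpy
# custom package from a local path (illustrative)
./libs/my_custom_package
# or directly from GitHub (illustrative URL)
git+https://github.com/example/my_custom_package.git
```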

2 years ago
0 Hello, community, I hope this message finds you all well. I am currently working on a project involving hyperparameter optimization (HPO) using the Optuna optimizer. Specifically, I've been trying to navigate the parameters 'min_iteration_per_job' and 'm

Hi @<1523703652059975680:profile|ThickKitten19> ! Could you try increasing the max_iteration_per_job and check if that helps? Also, any chance that you are fixing the number of epochs to 10, either through a hyper_parameter e.g. DiscreteParameterRange("General/epochs", values=[10]), or it is simply fixed to 10 when you are calling something like model.fit(epochs=10) ?

8 months ago
0 Hi all, how can I get the status of a component from another component in the ClearML pipeline (end, pending, running)? I want to run the Triton server as a "daemon" thread inside the component so that other pipeline components can access it (request)

Hi @<1603198163143888896:profile|LonelyKangaroo55> ! Each pipeline component runs in a task, so you first need the IDs of each component you want to query. Then you can use Task.get_task to get the task object, and Task.get_status to get its status.

To get the ids, you can use something like [None](https://clear.ml/docs/...

one year ago
0 Hello! I Have The Following Error In The Task'S Console:

Can you try setting the repo when calling add_function_step?

2 years ago