SmugDolphin23
Moderator
0 Questions, 433 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0
0 Why Is Async_Delete Not Working?

Hi @<1590514584836378624:profile|AmiableSeaturtle81> ! To help us debug this: are you able to simply use the boto3 python package to interact with your cluster?
If so, what does that code look like? This would give us some insight into how the config should actually look, or what changes need to be made.

one year ago
0 Hi! I'M Running Launch_Multi_Mode With Pytorch-Lightning

Hi @<1578555761724755968:profile|GrievingKoala83> ! It looks like lightning uses the NODE_RANK env var to get the rank of a node, instead of NODE (which is used by pytorch).
We don't set NODE_RANK yet, but you could set it yourself after launch_multi_node :

import os

current_conf = task.launch_multi_node(2)
os.environ["NODE_RANK"] = str(current_conf.get("node_rank", ""))

Hope this helps

one year ago
0 Hi All, I Have A Question Regarding

@<1634001100262608896:profile|LazyAlligator31> it looks like the args get passed to a python thread, so they should be specified the same way as you would pass them to the args argument of a thread (a tuple of positional arguments): func_args=("something", "else") . It looks like passing kwargs is not directly supported, but you could build a partial :

from functools import partial
scheduler.add_task(schedule_function=partial(clone_enqueue, arg_1="something", arg_2="else")...
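To make the partial idea concrete, here is a minimal standalone sketch. Note that clone_enqueue below is a stand-in for your own function, not a ClearML API:

```python
from functools import partial

# Stand-in for the user's clone_enqueue function (hypothetical)
def clone_enqueue(arg_1, arg_2):
    return f"{arg_1}-{arg_2}"

# Bind keyword arguments up front; the scheduler can then call
# the resulting object like a zero-argument callable.
bound = partial(clone_enqueue, arg_1="something", arg_2="else")
print(bound())  # -> something-else
```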
one year ago
0 Hi, I Have Noticed That Dataset Has Started Reporting My Dataset Head As A Txt File In "Debug Samples -> Metric: Tables". Can I Disable It? Thanks!

Hi HandsomeGiraffe70 ! You could try setting dataset.preview.tabular.table_count to 0 in your clearml.conf file
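As a sketch, the clearml.conf entry could look something like this (I'm assuming the key sits under the sdk section; the exact layout may differ across SDK versions):

```
sdk {
    dataset {
        preview {
            tabular {
                table_count: 0
            }
        }
    }
}
```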

3 years ago
0 Hi There, Is There A Way To Upload/Connect Artifact To A Certain Running/Completed Task, Using A Different Scope Other Then The One That'S Running? (I Mean, Instead Of Use Task.Upload_Artifact, Use Task,Get_Tasks(Task_Id=<Some_Task_Id>) And Then Use This

Hi @<1539417873305309184:profile|DangerousMole43> ! You need to mark the task you want to upload an artifact to as running. You can use task.mark_started(force=True) to do so, then mark it back as completed using task.mark_completed(force=True)

one year ago
0 Hi Everyone. If I Edit A File In Configuration Objects In Clearml Ui, Will The New Parameters Be Injected In My Code When I Run This?

Hi PetiteRabbit11 . This snippet works for me:

from clearml import Task
from pathlib2 import Path

t = Task.init()
config = t.connect_configuration(Path("config.yml"))
print(open(config).read())

Note that you need to use the return value of connect_configuration when you open the configuration file.

2 years ago
0 Hi All

Hi @<1780043419314294784:profile|LargeHamster21> ! Looks like you are using python3.11 (agent.default_python=3.11), while Pyro4 is incompatible with this python version.
I would suggest downgrading the python version or migrating to Pyro5

10 months ago
0 Hi Guys, I'M Trying To Familiarize Myself With Hyperparameter Optimization Using Clearml. It Seems Like There Is A Discrepancy Between

Hi GiganticMole91 . You could use something like

from clearml.automation import DiscreteParameterRange

HyperParameterOptimizer(
    ...,
    hyper_parameters=[DiscreteParameterRange("epochs", values=[100]), ...]  # epochs is static; ... represents the other params
)

to get the same behaviour --params-override provides.

3 years ago
0 I Configured S3 Storage In My Clearml.Conf File On A Worker Machine. Then I Run Experiment Which Produced A Small Artifact And It Doesn'T Appear In My Cloud Storage. What Am I Doing Wrong? How To Make Artifacts Appear On My S3 Storage? Below Is A Sample O

@<1526734383564722176:profile|BoredBat47> Yeah. This is an example:

s3 {
    key: "mykey"
    secret: "mysecret"
    region: "us-east-1"
    credentials: [
        {
            bucket: ""
            key: "mykey"
            secret: "mysecret"
            region: "us-east-1"
        },
    ]
}
# some other config
default_output_uri: ""
2 years ago
0 Hi Team,When Clearml-Agent Is Used To Run The Code,I T Will Setup The Environment ,How It Take The Python Package Version?

Hi @<1533257278776414208:profile|SuperiorCockroach75> Try setting packages in your pipeline component to your requirements.txt, or simply add the list of packages (with the specific versions).

2 years ago
0 Hi, I Have An Issue When Running A Pipeline Controller Remotely In Docker. Basically I Have A Module That Reads A Config File Into A Dict And Calls The Pipeline Controller, Like

Hi @<1570220858075516928:profile|SlipperySheep79> ! What happens if you do this:

import yaml
import argparse
from my_pipeline.pipeline import run_pipeline
from clearml import Task

parser = argparse.ArgumentParser()
parser.add_argument('--config', type=str, required=True)

if __name__ == '__main__':
    if not Task.current_task():
        args = parser.parse_args()
        with open(args.config) as f:
            config = yaml.load(f, yaml.FullLoader)
    run_pipeline(config)
2 years ago
0 Hi, I Have An Issue, But Lets Start With The Description. This Is Snippet Of My Project'S Structure:

@<1554638160548335616:profile|AverageSealion33> looks like hydra pulls the config relative to the script's directory, not the current working directory. The pipeline controller actually creates a temp file in /tmp when it pulls the step, so the script's directory will be /tmp and, when searching for ../data , hydra will search in / . The .git likely caused your repository to be pulled, so your repo structure was recreated in /tmp , which caused the step to run correctly...

2 years ago
0 Reporting Nonetype Scalars.

By default, they are reported as 0 values

one year ago
0 Hello, For Some Reason My Upload Speed To S3 Is Insanely Slow, I Noticed In Logs That It Upoads To /Tmp Folder. What Does That Mean? Why Tmp?

@<1590514584836378624:profile|AmiableSeaturtle81> note that we zip the files before uploading them as artifacts to the dataset task. Any chance you are specifying the default output uri as being a local path, such as /tmp ?

one year ago
0 Can Steps Be Removed From Pipelines, And/Or Can Pipelines Be Generally Modified Other Than Adding Steps To Them?

@<1523701083040387072:profile|UnevenDolphin73> are you composing the code you want to execute remotely by copy pasting it from various cells in one standalone cell?

one year ago
0 Hi, I'M Running

hi OutrageousSheep60 ! We didn't release an RC yet; we will a bit later today, though. We will ping you when it's ready, sorry for the delay

2 years ago
0 Hi Team, I Am Trying To Run A Pipeline Remotely Using Clearml Pipeline And I’M Encountering Some Issues. Could Anyone Please Assist Me In Resolving Them?

Regarding pending pipelines: please make sure a free agent is bound to the queue you wish to run the pipeline in. You can check queue information by accessing the INFO section of the controller (as in the first screenshot),
then by pressing on the queue you should see the worker status. There should be at least one worker with a blank "CURRENTLY EXECUTING" entry

one year ago
0 I Know At Least One Other Person Has Posted About This Previously, But When I Interact With

Hi @<1533620191232004096:profile|NuttyLobster9> We likely print the warning by mistake. We will look into it soon and handle it properly

2 years ago
0 Hi, I'm using `PipelineController` to launch remote pipelines from a local orchestration script. For each input file, I create a pipeline like this sequentially: ```for file in files: pipeline = PipelineController(...) pipeline.add_step(...) pip

Hi @<1861218295315697664:profile|FlutteringLobster45> ! You should use PipelineController.create and PipelineController.enqueue in this case: create builds the pipeline without executing it locally, and enqueue then submits it for remote execution

3 months ago
0 Https://Clearml.Slack.Com/Archives/Ctk20V944/P1713357955958089

@<1523701949617147904:profile|PricklyRaven28> Can you please try clearml==1.16.2rc0 ? We have released a fix that will hopefully solve your problem

one year ago
0 Hello All! Is It Possible To Utilize Shared Memory In Clearml For Tasks Like Model Inference, Where Instead Of Transferring Images Over The Network (E.G., Http, Rpc), We Use A Shared Memory Extension? Please Refer To The Link Below:

I think I understand. In general, if your communication worked without clearml, it should also work when using clearml.
But you won't be able to upload an artifact to the shared memory, for example. Same thing for debug samples etc.

one year ago
0 Hello There! After Updating Clearml Server To The Latest Version I'M Not Able To Download Old Datasets. I Got An Error

Hi EcstaticMouse10 ! Are you using the latest clearml sdk version? If not, can you please upgrade and tell us if you still have this issue?

2 years ago
0 Hi! I Would Like To Report 2 "Plt.Imshow" Images. Plain Plotting (I.E. "Plt.Figure()") Showed Only The Second One. When I Tried To Report Through The Logger Via "Report_Confusion_Matrix" It Reported Only The First One. Is There A Better Way Of Doing Thi

Hi @<1714813627506102272:profile|CheekyDolphin49> ! It looks as if we can't report these plots as plotly plots, so we default to Debug Samples. You should see both plots under Debug Samples , but make sure you are setting the Metric to -- All --

one year ago
0 Hello All, I Want To Clarify Something. In The

I think we should just have a new parameter

one year ago
0 Does Clearml Somehow

So the flow is like:
MASTER PROCESS -> (optional) calls Task.init -> spawns some children
CHILD PROCESS -> calls Task.init. The init is deferred even though it should not be?
If so, we need to fix this for sure

2 years ago
0 Does Clearml Somehow

Hi UnevenDolphin73 ! We were able to reproduce the issue. We'll ping you once we have a fix as well 👍

2 years ago
0 Hey Everyone, I Have Been Trying To Get The Pytorch Lightning Cli To Work With Remote Task Execution, But It Just Won'T Work. I Took The

HomelyShells16 it looks like some changes have been made to jsonargparse and pytorch_lightning since we released this binding feature. Could you try with jsonargparse==3.19.4 and pytorch_lightning==1.5.0 ? (No namespace parsing hack should be needed with these versions, I believe.)

3 years ago
0 So From What I Can Tell Using

Hi SoggyHamster83 ! Any reason you can't use Task.init?

2 years ago