SoreSparrow36
Moderator
4 Questions, 47 Answers
  Active since 21 July 2023
  Last activity 7 months ago

Reputation: 0
Badges: 1 (22 × Eureka!)
0 Votes
5 Answers
2K Views
How can I control the ~/clearml.conf file being used by agent-services in the docker-compose stack for clearml-server ? namely, if I enqueue a task, I notice...
2 years ago
0 Votes
19 Answers
2K Views
2 years ago
0 Votes
3 Answers
900 Views
is there somewhere I can track upcoming releases by any chance? Trying to plan an upgrade of our services. namely I'm wondering if I need to continue using m...
10 months ago
0 Votes
10 Answers
853 Views
7 months ago
0 Does Anyone Have Experience With Integrating Clearml And Slurm? If So, What Pattern Did You Use? (Did You Submit Tasks And Just Use Clearml As Tracker, Or Did You Start Agents With Slurm?) Would Love To Hear From The Community Before Trying To Diy

I think he's saying you'd want an intermediary layer that acts like the daemon.
Why you can't run the daemon directly, I'm not sure, but I suspect it's because it doesn't have an "end time" for execution (it stays up).

7 months ago
0 Hello! I Created A

In the ClearML GitHub repo, search for a file named cleanup_service.py (or something to that effect).

2 years ago
0 Hello! I Created A

Credentials for the server to do things with S3 will be in /opt/clearml/apiserver.conf.

2 years ago
0 I Am Still Going Through All The Docs And Intro Videos … But: Is The Only Way To Create A New Experiment To Run The Script That Contains The Experiment At Least Once? I Wonder About This B.C. Most Of What I Want To Run Are Quite Long Jobs, So Even Running

Yup if you scroll through the logs in the console, near the top (post config dump), you’ll see a git clone and checkout to the specific hash.

PS You can actually change this parameter in an experiment’s configuration if it is in draft mode.

2 years ago
0 Hello! I Created A

I think you’d have to run the cleanup service. That seems to be what controls deletion, based on archived status and some other temporal filters.

2 years ago
0 How Can I Control The

thank you!
I'll add a volume mount to the services-agent container, and from what I understand that will become the template it uses?

is this the structure of the file?
None

or is it the "dot" syntax (like what shows up in the console when the task executes / your snippet)?
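For what it's worth, clearml.conf is parsed as HOCON, so the nested-braces structure and the "dot" syntax from the console are equivalent ways to write the same setting. A minimal sketch (the image value here is just illustrative):

```
agent {
    default_docker {
        image: "nvidia/cuda:11.8.0-base-ubuntu22.04"
    }
}

# ...is equivalent to the dotted form:
agent.default_docker.image: "nvidia/cuda:11.8.0-base-ubuntu22.04"
```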

2 years ago
0 How Would Ya'Ll Approach Backing Up The Elastic-Search/Redis/Etc. Data In Self-Hosted Clearml? Any Drawbacks/Risks Of Doing A Simple Process That Periodically Zips Up The

Can vouch, this works well. Had my server hard reboot (maybe because of ClearML? maybe the hardware, maybe both… haven’t figured it out), and busy remote workers still managed to update the backend once it came back up.

Re: backups… what would happen if the data were zipped while the server was running but no work was being performed? Still potentially an issue?

and what happens if docker compose down is run while there’s work in the services queue? Will it be restored? What are the implications if a backup is perform...
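The safest pattern I've seen is to stop the stack before archiving so the Elasticsearch/MongoDB/Redis files are quiescent. A sketch, assuming the default /opt/clearml layout and compose file location (adjust paths to your deployment):

```shell
# Stop the stack so the databases flush to disk and nothing writes mid-archive
docker compose -f /opt/clearml/docker-compose.yml down

# Archive the data and config directories (default self-hosted locations)
tar czf clearml-backup-$(date +%F).tar.gz /opt/clearml/data /opt/clearml/config

# Bring the stack back up; agents reconnect and report once it's reachable
docker compose -f /opt/clearml/docker-compose.yml up -d
```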

2 years ago
0 Hello! I Created A

Might be under examples

2 years ago
0 Hey All, Very New To Clearml! I Am Trying To Design An Hpo Setup Using The Optuna Configuration, And I'M Working On Getting My Template Trainer Set Up. The Issue I'M Having Is It'S Unclear To Me How To Define One Of My Hyperparameters Whose Size Is Dynami

You're basically asking to sample from a distribution where not all parameters are mutually independent.

The short answer is no, this is not directly supported. Optuna needs each hyperparameter to be independent, so it's up to you to handle the dependencies between parameters yourself, unfortunately.

Your solution of defining them independently and then using num_layers to potentially ignore other parameters is a valid one.

7 months ago
0 I Am Still Going Through All The Docs And Intro Videos … But: Is The Only Way To Create A New Experiment To Run The Script That Contains The Experiment At Least Once? I Wonder About This B.C. Most Of What I Want To Run Are Quite Long Jobs, So Even Running

You can put task.execute_remotely() to create it in draft mode. I've taken to configuring defaults that run things very quickly just in case I forget, though (e.g. a placeholder string for the dataset, bail out early if it's not changed… or just do one epoch on a small subset of samples, etc.).
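A sketch of both ideas together; the project/parameter names are hypothetical, and make_draft assumes a configured clearml.conf pointing at your server:

```python
PLACEHOLDER = "CHANGE_ME"

def make_draft(project: str, name: str, params: dict) -> None:
    """Register this script on the server as a draft task without running it."""
    from clearml import Task  # third-party; needs a configured clearml.conf

    task = Task.init(project_name=project, task_name=name)
    task.connect(params)
    # Uploads the task (code ref, packages, params) and exits this process;
    # edit the parameters in the UI, then enqueue it for an agent.
    task.execute_remotely(queue_name=None, exit_process=True)

def params_are_safe(params: dict) -> bool:
    """Cheap-defaults guard: refuse a real run while placeholders remain."""
    return PLACEHOLDER not in params.values()
```

The guard keeps an accidental local run from training for hours on default values.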

2 years ago
0 Hi All

Oh I see, you're talking about the agent-services, not a separate agent in a container.
Yup, I've got the same thing going there.
FWIW, for me HOST_IP is 0.0.0.0 and the other "HOSTS" env vars don't contain "http" in them.
And my server is publicly reachable; not sure if that matters either.

2 years ago
0 Hi All

I ran into something similar during deployment. Hopefully this helps with your debugging: if the agent was launched separately from the rest of the stack, it may not have proper docker-DNS resolution to None . (e.g. if it's in the same docker-compose, perhaps you didn't add the backend network field; or if it was launched separately through docker run without an explicit external network defined)

if the agent's on the same machine, try docker network connect to add...
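The connect step is roughly this (the network and container names below are assumptions; compose usually prefixes the network with the project name):

```shell
# Find the network the clearml-server compose stack created
docker network ls

# Attach the separately-launched agent container to it
docker network connect clearml_backend clearml-agent
```

After that, the agent can resolve the in-stack service hostnames directly instead of going out over the public address.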

2 years ago
0 Hi All

hm, you should be able to hit None if docker networking is working properly. it shouldn't need to go through the internet to get back to your machine.

2 years ago
0 Is There Somewhere I Can Track Upcoming Releases By Any Chance? Trying To Plan An Upgrade Of Our Services. Namely I'M Wondering If I Need To Continue Using My Own Forked Image Of

ah, thank you for the clarity. A quarterly release schedule makes sense, it's about what I've observed.
Let me know if I can be of any assistance in early testing!

10 months ago
0 Does Anyone Have Experience With Integrating Clearml And Slurm? If So, What Pattern Did You Use? (Did You Submit Tasks And Just Use Clearml As Tracker, Or Did You Start Agents With Slurm?) Would Love To Hear From The Community Before Trying To Diy

Ah, that's a shame it's under Enterprise only. No wonder I missed it.

I'm helping train my friend @<1798162804348293120:profile|FlutteringSeahorse49> on ClearML to assist with his astrophysics research, and his university has a Slurm cluster. So we're trying to figure out if we can launch an agent process on the cluster to pull work from the ClearML queue (FWIW: containers are not supported on their cluster).

7 months ago
0 I Just Encountered A Really Frightening Bug. Best I Can Explain What Happened Was This: Data Scientist Created New Venv, Installed Clearml==1.11.0 Instead Of Clearml[S3]==1.11.1, And Upon Re-Running A Pipeline From Cli, The Entire Project "Disappeared" (W

One note: it happened after I tried deploying a set of workers to a new queue, which she tried to use to run the tasks in parallel instead of our default queue, which is only serviced by one worker (a container I built).

2 years ago
0 I Am Still Going Through All The Docs And Intro Videos … But: Is The Only Way To Create A New Experiment To Run The Script That Contains The Experiment At Least Once? I Wonder About This B.C. Most Of What I Want To Run Are Quite Long Jobs, So Even Running

For reproducibility, it kind of makes sense though. The existence of the file is contingent on the worker cloning the source code. I'm sure things can be done to maintain state differently but I personally adapted to the git-based workflow for managing files pretty quickly.

Though yes, I will admit I had the same thought at first: why must I run it each time?

Beware: squash merges will ruin the ability to reproduce the experiment at that time since the git commit will be lost (presuming th...

2 years ago
0 Does Anyone Have Experience With Integrating Clearml And Slurm? If So, What Pattern Did You Use? (Did You Submit Tasks And Just Use Clearml As Tracker, Or Did You Start Agents With Slurm?) Would Love To Hear From The Community Before Trying To Diy

@<1798162804348293120:profile|FlutteringSeahorse49> wants to start HPO though, so the desire is to deploy agents to listen to queues on the slurm cluster (perhaps the controller runs on his laptop).

would that still make sense?
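Since containers aren't an option there, one shape this could take is submitting the agent itself as a Slurm job in venv mode. A sketch, with all resource values and the queue name as assumptions:

```shell
#!/bin/bash
#SBATCH --job-name=clearml-agent
#SBATCH --gres=gpu:1
#SBATCH --time=08:00:00

# Run a queue-listening agent inside the allocation; --foreground keeps it
# in this process so Slurm tracks it, and the job's time limit bounds it.
clearml-agent daemon --queue hpo --foreground
```

The HPO controller can then run anywhere (e.g. a laptop) and simply enqueue trials to that queue.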

7 months ago
0 Hey All, Very New To Clearml! I Am Trying To Design An Hpo Setup Using The Optuna Configuration, And I'M Working On Getting My Template Trainer Set Up. The Issue I'M Having Is It'S Unclear To Me How To Define One Of My Hyperparameters Whose Size Is Dynami

You could also take the route of NOT specifying num_layers, and instead write your own code to create a set of viable layer designs to choose from and pass that as a parameter, so Optuna selects from a countable set instead of suggesting integer values.

The downside of this is the lack of gradient information in the optimization process.

7 months ago
0 Hi Guys, I'M Trying To Deploy An Image Segmentation Model, So I Expect That The Front-End Of The Endpoint Will Allow Users To Upload Images, Get Their Segmented Images & Option To Annotate The Images If The Results Are Not Good Enough. My Question Is: How

If you can hit the endpoint with curl, you for sure can hook it up to many frontend frameworks.

Personal recs: gradio, streamlit

Abstract the interaction into a function call, and wrap it all in some UI elements using python.
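That abstraction can be a single function; the endpoint URL and JSON schema below are assumptions, so adapt them to whatever your serving layer actually expects:

```python
import base64
import json
import urllib.request

# Hypothetical endpoint URL -- substitute your model-serving endpoint
ENDPOINT = "http://localhost:8080/serve/segmenter"

def build_payload(image_bytes: bytes) -> bytes:
    """Encode an image into the JSON body the endpoint is assumed to expect."""
    return json.dumps({"image": base64.b64encode(image_bytes).decode()}).encode()

def segment(image_bytes: bytes) -> dict:
    """POST the image and return the parsed JSON response."""
    req = urllib.request.Request(
        ENDPOINT,
        data=build_payload(image_bytes),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A UI framework (gradio, streamlit) would then wrap `segment` in an
# upload widget and render the returned mask next to the input image.
```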

2 years ago
0 I Just Encountered A Really Frightening Bug. Best I Can Explain What Happened Was This: Data Scientist Created New Venv, Installed Clearml==1.11.0 Instead Of Clearml[S3]==1.11.1, And Upon Re-Running A Pipeline From Cli, The Entire Project "Disappeared" (W

The project wasn't hidden before. I'm aware of the pipeline tasks being hidden, and that makes sense for organization, but now the actual project itself, in its entirety, has a ghost icon.

she created a new project and started working in there, it was visible in the UI... and just now it disappeared again. it's kind of like running the pipeline makes it disappear.

2 years ago
0 Hi Everybody! I'M Running An Example Pipeline From A Web Ui. I Notice Very Strange Behavior. After The First Local Run, I Can Create A New Run And Pass Initialization Parameters There, But After A Successful Run, I Lose The Ability To Create New Runs With

Tasks that create pipelines feel like a hack, and I found they don't show up in the UI (you have to use the link in the console).

I've found that sometimes I need to right-click "Run" a couple of times before the parameters are filled in properly.

2 years ago