AgitatedDove14 (Moderator)
0 In My Git Repo, I Have A

Yea the "-e ." seems to fit this problem the best.

👍

It seems like whatever I add to

docker_bash_setup_script

is having no effect.

If this is running with the k8s glue, the console output of the docker_bash_setup_script is currently not logged into the Task (this bug will be solved in the next version), but the code is being executed. You can see the full logs with kubectl, or test with a simple export in the docker_bash_setup_script, e.g. export MY...
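For reference, a minimal sketch of such an export test (assuming the Task.create API with its docker_bash_setup_script argument; the project, repo, script and queue names below are placeholders, not from this thread):

```python
from clearml import Task

# Hypothetical illustration: create a task whose container runs a simple
# export in the bash setup script before the code itself starts.
task = Task.create(
    project_name="examples",                       # placeholder project
    task_name="bash setup script test",
    repo="https://github.com/allegroai/clearml.git",
    script="examples/reporting/scalar_reporting.py",
    docker="python:3.9",
    docker_bash_setup_script="export MY_TEST_VAR=1\necho MY_TEST_VAR=$MY_TEST_VAR",
)
Task.enqueue(task, queue_name="k8s_scheduler")     # queue name is an assumption
```

The echo output should then show up in the pod logs (kubectl logs <pod-name>), even though, per the note above, it is not yet attached to the Task console.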

one year ago
0 When Running Jobs, My Pipeline Controller Always Updates To The Latest Git Commit Id But Sometimes My Pipeline Steps Do Not. This Appears To Be Somewhat Random So I Believe It Is Due To Caching. Has Anyone Else Encountered This Or Have Any Idea How To Fix

AdventurousRabbit79 are you passing cache_executed_step=False to the PipelineController ?
https://github.com/allegroai/clearml/blob/332ceab3eadef4997e897d171957975a247a6dc1/clearml/automation/controller.py#L129
Could you send a usage example ?

my pipeline controller always updates to the latest git commit id

This will only happen if the Task the pipeline creates has no specific commit ID, and instead just uses the latest from the git repo. Is this the case ?
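For reference, a minimal sketch of explicitly disabling step caching (assuming the standard PipelineController.add_step signature; project and task names are placeholders):

```python
from clearml import PipelineController

# Build a pipeline whose step is never taken from cache, so every run
# re-clones the base task and re-resolves the repository state.
pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")
pipe.add_step(
    name="train",
    base_task_project="examples",        # placeholder
    base_task_name="training task",      # placeholder
    cache_executed_step=False,           # do not reuse previously executed steps
)
pipe.start()
```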

3 years ago
0 Hi, Can We Custom Default Output_Uri For

Hi QuaintJellyfish58
You can always set it inside the function, with
Task.current_task().output_uri = "s3://"
I have to ask, I would assume the agents are pre-configured with "default_output_uri" in the clearml.conf, why would you need to set it manually?
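For context, a hedged sketch of the two usual places to set the upload destination in code (the bucket path is a placeholder), in addition to the default_output_uri setting in clearml.conf that the reply mentions:

```python
from clearml import Task

# Option 1: per task, at creation time
task = Task.init(
    project_name="examples",
    task_name="upload destination demo",
    output_uri="s3://my-bucket/models",   # placeholder bucket
)

# Option 2: from inside the running code, as in the reply above
Task.current_task().output_uri = "s3://my-bucket/models"
```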

one year ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

I figured out the problem...

Nice!

Unfortunately, the hyperparameters in the configuration object seem to take precedence over the hyperparameters in the Hyperparameter section

Hmm, what do you mean by that? How did you construct the code itself? (you should be able to "prioritize" one over the other)

2 years ago
0 For Remote Execution Where The Queue Has

@<1523701083040387072:profile|UnevenDolphin73> it's looking for any of the files:
None

one year ago
0 When I Setup My Local Virtual Environment I Use A Combination Of Conda And Pip. I Use Conda As My Environment Manager, And Then Use Pip For Packages That Are Not In The Conda Repositories.

Thanks VivaciousPenguin66 !
BTW: if you are running the local code with conda, you can set the agent to use conda as well (notice that if you are running locally with pip, the agent's conda env will use pip to install the packages to avoid version mismatch)

3 years ago
0 Hi, I'M Getting A Lot Of The Following Logs

Hi PompousBeetle71
Could you test the latest RC? I think the warnings were fixed:
pip install trains==0.16.2rc0
Let me know...

4 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

Hi CurvedHedgehog15

I would like to optimize hparams saved in Configuration objects.

Yes, this is a tough one.
Basically the easiest way to optimize is with hyperparameter sections, as they are basically key/value pairs you can control from the outside (see the HPO process).
Configuration objects are, well, blobs of data that "someone" can parse. There is no real restriction on them, since there are many standards to store them (yaml, json, ini, dot notation, etc.)
The quickest way is to add...
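Independent of whatever the truncated reply goes on to suggest, here is a hedged sketch of the key/value approach it describes, using task.connect so the values land in a Hyperparameter section the HPO can override (names and values are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="hparam sections demo")

# A plain dict connected to the task shows up as a Hyperparameter section
# (named "training" here), i.e. key/value pairs the optimizer can control.
hparams = {"learning_rate": 0.001, "batch_size": 64}
task.connect(hparams, name="training")

# From here on, read the (possibly overridden) values from `hparams`,
# e.g. when building the OmegaConf/Hydra config for the actual run.
print(hparams["learning_rate"], hparams["batch_size"])
```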

2 years ago
0 How Do I Create Sub Projects With The New Version 1.0?

Add '/', like you would with a file system.
Task.init(project_name='main_project/sub_project', task_name='test')

3 years ago
0 Is There A Way To Output The Cleamrl Reports Scalars / Configuration Etc Into A Output Pdf ? If Not Available, Is It On The Near Term Pipeline ?

Hi DeliciousBluewhale87
You mean per Task? Is it reporting? Is it like the project overview?

3 years ago
0 Hi, I Tried To Setup Clearml Serving And Ran The Example Given

Hi GrittyHawk31

but it could not connect to the grafana dashboard through port 3000, is there any particular reason for that? I may have missed something.

Did you run the full docker-compose.yml ?
Are you able to curl to the endpoints ?
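If it helps, a quick reachability check you could run from the host (port 3000 is the Grafana port mentioned above; any other ports depend on your docker-compose.yml):

```python
import urllib.request

# Minimal connectivity check, roughly equivalent to `curl http://localhost:3000`
url = "http://localhost:3000"
try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(f"Grafana reachable, HTTP {resp.status}")
except Exception as exc:
    print(f"Grafana not reachable: {exc}")
```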

2 years ago
0 Hi! Trying To Run The Following Very Basic Code. The First Few Parts Works As They Should:

2021-07-11 19:17:32,822 - clearml.Task - INFO - Waiting to finish uploads

I'm assuming very large uncommitted changes 🙂

3 years ago
0 Hi All! I Can'T Use Scalar Tab In All Experiments Due To Elastic Search Error:

Hi @<1569496075083976704:profile|SweetShells3>
Are you using the standard docker-compose? Are you using the default elastic container?
What exactly changed ?

one year ago
0 Hi. I Spent Some Time This Week Trying To Optimise File Transfer Time In And Out Of Processes That Use Google'S Gcs (In Vertex Ai Pipelines). It Seems That In The Case Where I Have A Lot Of Very Small Files, It Made More Sense To Tar.Gz Them And Send A Bi

Generally speaking, for exactly that reason: if you are passing a list of files or a folder, it will actually zip them and upload the zip file. Specifically for pipelines it should be similar. BTW, I think you can change the number of parallel upload threads in StorageManager, but as you mentioned, it is faster to zip into one file. Make sense?
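A hedged sketch of the tar-and-upload pattern, using the standard library plus StorageManager.upload_file / get_local_copy (paths and the GCS bucket are placeholders):

```python
import tarfile
from clearml import StorageManager

# Producer side: bundle many small files into one archive and upload it in one call.
archive = "batch.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add("local_data_folder", arcname="data")   # placeholder folder

remote_url = StorageManager.upload_file(archive, "gs://my-bucket/batches/batch.tar.gz")
print("uploaded to", remote_url)

# Consumer side: download (cached locally) and extract.
local_copy = StorageManager.get_local_copy("gs://my-bucket/batches/batch.tar.gz")
with tarfile.open(local_copy, "r:gz") as tar:
    tar.extractall("extracted")
```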

one year ago
0 Hey! I Would Like To Connect To Same Task From Multiple Consumer And Upload Debug Image. Is It Possibile? It Seems Like I Can Connect To The Task. Get The Logger But Nothing Is Uploaded.

Should work out of the box, as long as the task was started. You can forcefully start the task with:
task.mark_started()
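For example, a minimal sketch of attaching to an existing task from another process and reporting a debug image (the task ID and image path are placeholders):

```python
from clearml import Task

# Attach to an already-created task by ID and make sure it is marked as started.
task = Task.get_task(task_id="aabbccddeeff00112233445566778899")
task.mark_started()

# Report a debug image through the task's logger; multiple consumers can do
# the same against the same task ID, each with its own series name.
logger = task.get_logger()
logger.report_image(
    title="debug samples",
    series="consumer 1",
    iteration=0,
    local_path="sample.jpg",
)
logger.flush()
```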

4 years ago
0 I Have Install A Python Environment By Virtualenv Tool, Let'S Say

I have installed a python environment with the virtualenv tool, let's say

/home/frank/env

and python is

/home/frank/env/bin/python3.

How can I reuse this virtualenv when configuring the clearml-agent?

So the agent is already caching the entire venv for you, nothing to worry about; just make sure you have this line in your clearml.conf:
https://github.com/allegroai/clearml-agent/blob/249b51a31bee97d63f41c6d5542e657962008b68/docs/clearml.conf#L131
No need to provide it an existing...

one year ago
0 So, I Did A Slew Of Pretrainings, Then Finetuned Those Pretrained Models. Is There A Way To Go Backwards From The Finetuning Task Id To The Pretraining Task Id? What I Tried Was:

Thanks SmallDeer34, I think you are correct: the 'output' model is returned properly, but 'input' models are returned as model names, not model objects.
Let me check something

2 years ago
0 Hey All, Hope You’Re All Doing Well. I’M Running A Self-Deployed Server (0.17, I Think, Where Can You Find The Version In Use?). I’M Having Trouble With The Automatic Plot Capture. If I Run

Sure thing, hopefully I'll remember to ping tomorrow once GitHub is synced, I'd appreciate it if you could verify the fix works 🙂

3 years ago
0 Hello All, I'M Trying To Adapt Clearml With My Workflow. I Installed A Server At My Server, With Workers Attached To It. I'M Trying To Execute A Task From My Local Within One Of My Workers. Trying To Use Docker Mode And A Custom Image. I Also Have A Local

ZanyPig66 this should have worked, any chance you can send the full execution log (in the UI "results -> console" download full log) and attach it here? (you can also DM it so it is not public)

2 years ago
0 Any Idea Why Only A Single Instance Of Mujoco Can Be Run With Clearml-Agent? I Run 2 Clearm-Agents, One Per Gpu On My Workstation. However, The Second Task Failes With One Of The Following Errors:

Well (yes, I think), the environment section is used mostly for logging; the next version of the clearml-agent will have full support (due next week), and the next release of clearml-server will add bash-script support.

3 years ago