AgitatedDove14
Moderator
48 Questions, 8051 Answers
  Active since 10 January 2023
  Last activity 8 months ago

0 Quick Question On

(Do notice that even though you can spin up two agents on the same GPU, the NVIDIA drivers cannot share allocated GPU memory, so if one Task consumes too much memory the other will not have enough free GPU memory to run)

Basically the same restriction as manually launching two processes using the same GPU
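
For reference, spinning up two agents on the same GPU would look roughly like this (the queue names here are just placeholders):
```
clearml-agent daemon --queue single_gpu_a --gpus 0 --detached
clearml-agent daemon --queue single_gpu_b --gpus 0 --detached
```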

4 years ago
0 So I'M In A Colab Notebook, And After Running My Trainer(), How Do I Upload My Test Metrics To Clearml? Clearml Caught These Metrics And Uploaded Them:

By default the pl Trainer will output everything to TB, which we automatically store. But verify that TB is installed
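
A minimal sketch of that setup (project/task names are placeholders, and TensorBoard needs to be installed):
```python
from clearml import Task
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# Once a Task exists, whatever the Trainer writes to TensorBoard is stored as ClearML scalars
task = Task.init(project_name="examples", task_name="pl test metrics")

trainer = Trainer(logger=TensorBoardLogger(save_dir="lightning_logs"), max_epochs=1)
# trainer.fit(model, datamodule=dm)
# trainer.test(model, datamodule=dm)  # test metrics logged here appear under the Task's scalars
```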

3 years ago
0 Clearml Pipelines Can Be Built From Tasks, Functions, And Decorated Functions, According To The Examples In
  • "Components anyway need to be available when you define the pipeline controller/decorator, i.e. same codebase": No, you can specify a different code base, see here: None (and the sketch below)
  • "The component code still needs to be self-composed (or, function component can also be quite complex)": Well, it can address the additional repo (it will be automatically added to the PYTHONPATH), and you c...
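
A rough sketch of pointing a function component at a different code base (assuming a clearml version where component() accepts repo / repo_branch; the repo URL is a placeholder):
```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    repo="https://github.com/me/other_repo.git",  # placeholder: a different code base for this step
    repo_branch="main",
    packages=["pandas"],
)
def preprocess(source_url: str):
    # the additional repo is cloned for this step and added to the PYTHONPATH
    import pandas as pd
    return pd.read_csv(source_url)
```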
one year ago
0 Hi People! I Think The Clearml

Is it the same with the latest RC 1.10.0rc?

one year ago
0 When Running An Experiment From A Notebook, It Knows It’S A Notebook And Automatically Adds The Notebook As An Artifact Right? And The Uncommitted Changes Becomes The Notebook Converted To A Script? In One Case I Am Seeing Actual Git Diff Coming In Instea

I always have my notebooks in git repo but suddenly it's not running them correctly.

What do you mean?

Can I switch off git diff (change detection?)

Yes, Task.init(..., auto_connect_frameworks={"detect_repository": False})

3 years ago
0 Is It Possible To Set An Environment Variable For A Task?

Have a wrapper over Task to ensure S3 usage, tags, version number etc and project name can be skipped and it picks from the env var

Cool. Notice that when you clone the Task and the agent executes it, the project is already defined, so this env variable is meaningless, no ?
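
For illustration, such a wrapper could look roughly like this (all the env var names here are made up for the example):
```python
import os
from clearml import Task

def init_task(task_name: str, default_project: str = "Default") -> Task:
    # pick project name, output bucket and tags from env vars when present (names are hypothetical)
    project = os.environ.get("MY_CLEARML_PROJECT", default_project)
    task = Task.init(
        project_name=project,
        task_name=task_name,
        output_uri=os.environ.get("MY_CLEARML_OUTPUT_URI"),  # e.g. "s3://my-bucket/experiments"
    )
    tags = [t for t in os.environ.get("MY_CLEARML_TAGS", "").split(",") if t]
    if tags:
        task.add_tags(tags)
    return task
```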

3 years ago
0 Greetings, Could You Please Clarify If It Is Possible To Reinstall All Packages Every Time? For Example, I Tried To Start The Agent With Docker Options And Got The Following Message:

Hi ResponsiveCamel97
Let me explain how it works: essentially it creates a new venv inside the docker, inheriting all the packages from the main system packages.
This allows it to use the installed packages if the versions match, and upgrade/change them if you need, all without the need to rebuild a new container. Make sense ?
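
The relevant agent setting (a clearml.conf sketch, assuming the system_site_packages option) would be:
```
agent {
    package_manager {
        # let the venv created inside the docker see the packages already installed in the image
        system_site_packages: true
    }
}
```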

3 years ago
0 I Have A Question Regarding Reducing Execution Time Of Pulling Results From The Server With The Python Api. As Part Of Some Pipeline, After Running Hpo I Am Pulling All The Results From My Optimizer Task And Also Pulling All The Scalars Associated With Th

You can try just pulling the "metric" section of the Task, but I cannot imagine the network bandwidth is the issue?
Could it be load on the clearml-server (i.e. it needs to handle lots of requests ?)

3 years ago
0 I Have A Question Regarding Reducing Execution Time Of Pulling Results From The Server With The Python Api. As Part Of Some Pipeline, After Running Hpo I Am Pulling All The Results From My Optimizer Task And Also Pulling All The Scalars Associated With Th

Hmm check if this one works:
optimizer._get_child_tasks_ids(
    parent_task_id=optimizer._job_parent_id or optimizer._base_task_id,
    order_by=optimizer._objective_metric._get_last_metrics_encode_field(),
    additional_filters={'page_size': int(top_k), 'page': 0})
If it does, let's PR it as a dedicated function
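
Wrapped as the dedicated function suggested above (same call, still relying on private members, so treat it as a sketch):
```python
from clearml.automation import HyperParameterOptimizer

def get_top_child_task_ids(optimizer: HyperParameterOptimizer, top_k: int):
    # returns the child Task ids of the optimizer, ordered by the objective metric
    return optimizer._get_child_tasks_ids(
        parent_task_id=optimizer._job_parent_id or optimizer._base_task_id,
        order_by=optimizer._objective_metric._get_last_metrics_encode_field(),
        additional_filters={'page_size': int(top_k), 'page': 0},
    )
```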

3 years ago
0 Hi, I’M Having Troubles Initializing Connection To Clearml (“Error: Could Not Verify Credentials:“). Who Can Help? Thanks

Hi IrateBee40
What do you have in your ~/clearml.conf ?
Is it pointing to your clearml-server ?
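
For reference, the api section of ~/clearml.conf should look roughly like this (server URLs and keys below are placeholders):
```
api {
    web_server: https://app.clear.ml
    api_server: https://api.clear.ml
    files_server: https://files.clear.ml
    credentials {
        "access_key" = "YOUR_ACCESS_KEY"
        "secret_key" = "YOUR_SECRET_KEY"
    }
}
```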

3 years ago
0 I Have A Question Regarding Reducing Execution Time Of Pulling Results From The Server With The Python Api. As Part Of Some Pipeline, After Running Hpo I Am Pulling All The Results From My Optimizer Task And Also Pulling All The Scalars Associated With Th

or creating a dedicated function I would suggest also including the actual sampled point in the HP space.

Could you expand ?

This would be the most common use case, and essentially the reason for running the HPO: understanding the sensitivity of metrics with respect to hyper-parameters

Does this relate to:
https://github.com/allegroai/clearml/issues/430

manually" filtering the keys I've put in for the HP space. I find it a bit strange that they are not saved as part of t...

3 years ago
0 Hi Everyone! I Have A Question About The Pipeline Controller: I Would Like To Build A Ml Pipeline Similar To The One At

Hi LovelyHamster1
That is a good point, since the Pipeline kind of assumes the tasks are already in the system, it clones them (leaving you with the original Draft Task).
I think we should add a flag to that pipeline so that if the Task is in draft it will use it (instead of cloning it). Since it seems your pipeline is quite straightforward, I'm not sure you actually need the pipeline controller class; you can perform the entire thing manually, see example here: https://github.com/allegroai/clea...

3 years ago
0 Hi, Can We Upload Our Project Repository To Trains Server? If We Can, How Should We Do? I Know When We Write "Task.Init()", It Uploads Our Experiment Into Server, But It Also Run The Experiment. However, I Want To Upload All My Experiments In Draft Status

MysteriousBee56 I would do Task.create()
you can get the full Task internal representation with task.data
Then call task._edit(script={'repo': ...}) to edit/update all the Task entries.
You can check the full details of the task object here: https://github.com/allegroai/trains/blob/master/trains/backend_api/services/v2_8/tasks.py#L954
BTW: when you have a sample script working, consider PR-ing it, I'm sure it will be useful for others πŸ™‚ (also a great way to get us involved with debuggin...
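
A rough sketch of that flow (the keys inside the script dict are assumptions, check the tasks service definition linked above):
```python
from clearml import Task

# create a draft Task without running anything locally
task = Task.create(project_name="My Project", task_name="imported experiment")

# full internal representation of the Task
print(task.data)

# update the repository info via the private _edit call mentioned above
task._edit(script={
    "repository": "https://github.com/me/my_repo.git",  # placeholder repo
    "branch": "master",
    "entry_point": "train.py",
    "working_dir": ".",
})
```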

4 years ago
0 How Can I Remove A Service With Clearml-Serving?

Hi ConvolutedSealion94
You can archive / delete the SERVING-CONTROL-PLANE Task from the DevOps project in the UI.
Do notice you will need to make sure the clearml-serving is updated with a new session ID or remove it (i.e. take down the pods / docker-compose)
Make sense ?
Were you able to interact with the service that was spun up? (how was it spun up?)

2 years ago
0 Hi, I Want To Install A Local Package Using Our Package Index, But I'M Struggling With Trying To Make Pip Trust This Host. If I Want To Install It In My Venv I Can Just Pass The

are you referring to extra_docker_shell_script

Correct

the thing is that this runs before you create the virtual environment, so then in the new environment those settings are no longer there

Actually that is better, because this is what we need in order to set up pip before it is used. So instead of passing --trusted-host just do:
extra_docker_shell_script: ["echo "[global] \n trusted-host = pypi.python.org pypi.org files.pythonhosted.org YOUR_S...
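
The idea, as a clearml.conf sketch (the private index host below is a placeholder), is roughly:
```
agent {
    extra_docker_shell_script: [
        "echo '[global]' > /etc/pip.conf",
        "echo 'trusted-host = my.private.index pypi.org files.pythonhosted.org' >> /etc/pip.conf",
        "echo 'extra-index-url = https://my.private.index/simple' >> /etc/pip.conf"
    ]
}
```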

3 years ago
0 Hi All! I'M Using Clearml With Hydra As Configuration Manager. I'M Trying To Rerun A Task By Overriding Some Of The Configurations From The Ui. I Tried To Change The Config_Name Args In The Args Section And Also The Omegaconf Configuration In Configuratio

Hi LovelyHamster1
As you noted, passing overrides in Args/overrides, for example ['training.max_epochs=1000'],
should work when running with the agent.

Could you verify with the latest RC, there was a fix to support the latest hydra version
pip install clearml==0.17.5rc5

3 years ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn'T It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'M Using Clearml V1.1.3Rc0 And Clearml-Agent 1.1.0

However, are you thinking of including this callbacks features in the new pipelines as well?

Can you see a good use case ? (I mean the infrastructure supports it, but sometimes too many arguments is just confusing, no?!)

3 years ago
0 Question About The Usage Of Trains Agents. In Our Company We Have 3 Hpc Servers, Two Of Them Have Multiple Gpus, One Is Cpu Only. I Saw In The Docs The Multiple Agents Can Be Run Separately Assigning Gpus In Whatever Manner You Want. My Questions Are 1

Hi WackyRabbit7 ,
Running in Docker mode provides you greater flexibility in terms of environment control, from switching cuda versions, to pre-compiled packages that are needed (think apt-get) etc. Specifically for DL if you are using multiple tensorflow versions, they are notorious for compiling against a specific CUDA version, and the only easy way to be able to switch between them would be different dockers. If you are a PyTorch user, then you are in luck, they have all the pytorch ver...
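
For example, running the agent in docker mode with a specific CUDA image (the image tag is just an illustration):
```
clearml-agent daemon --queue default --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04
```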

4 years ago
0 Is It Possible To View The Actual Code Of A Task? As In The Script That Created The Task?

WackyRabbit7 if this is a single script running without git repo, you will actually get the entire code in the uncommitted changes section.
Do you mean get the code from the git repo itself ?

4 years ago
0 With

So when the agent fires up it gets the hostname, which you can then get from the API.

I think it does something like "getlocalhost", a python function that is OS agnostic

3 years ago
0 Hi, I Would Like To Pass In Some Pip Arguments That Clearml-Agent Would Include When Setting Up The Venv On The Containers. How Should I Specify This? The Argument In Question Are --Trusted-Host And --Find-Links . I Need Them As I'Ve Installed A Pypi Repo

The --template-yaml allows you to use a full k8s YAML template (the overrides is just overrides, which do not include most of the configuration options; we should probably deprecate it)

3 years ago
0 Hi Guys! Is There A Way To Tell An Agent To Run A Task In An Existing Venv (Without Creating A New One)?

ExcitedFish86 this is a general "dummy agent" that takes Tasks and executes them (no env created, no code cloned, as you suggested)

hows does this work with HPO?

The HPO clones Tasks, changes arguments, pushes them into a queue, and monitors the metrics in real time. The missing part (from my understanding) was that the execution of the Tasks themselves required setup, and that you wanted multiple-machine support; in order to overcome it, I posted a dummy agent that just runs the Tasks.
(Notice...

2 years ago
0 Hi Everyone, Does Anybody Now If The Latest Release 1.15 Is Still Vulnerable To

Hi @<1689808977149300736:profile|CharmingKoala14> , let me double check that

8 months ago
0 I Am Also Experiencing A Weird Behaviour When Running A Script Using The Module Flag. For Example I Run:

Assuming the git repo looks something like:
.git
readme.txt
module
 |
 +---- script.py
The working directory should be "."
The script path should be: "-m module.script"
And under the Configuration/Args, you should have:
args1 = value
args2 = another_value
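
Locally, the equivalent run would be something like (assuming args1/args2 are argparse arguments; the repo root path is a placeholder):
```
cd /path/to/repo_root
python -m module.script --args1 value --args2 another_value
```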
Make sense?

4 years ago