AgitatedDove14
Moderator
48 Questions, 8044 Answers
  Active since 10 January 2023
  Last activity 5 months ago

0 I Have A Question Regarding Reducing Execution Time Of Pulling Results From The Server With The Python Api. As Part Of Some Pipeline, After Running Hpo I Am Pulling All The Results From My Optimizer Task And Also Pulling All The Scalars Associated With Th

I pull all the parameters, and then manually filter on the HP keys (manually = I have to plug them in; they are not part of the optimizer object)

So would this be an improvement to the optimizer._get_child_tasks_ids(...) interface?
e.g. return a structure like:
[ { 'id': task_id, 'hp1': value, 'hp2': value, 'hp3': value, 'objective': dict(title='title', series='series', value=42) }, ]
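
In the meantime, a minimal sketch of pulling the hyper-parameters and the objective scalar per child task manually (the task IDs, HP keys and metric title/series below are placeholders):

from clearml import Task

child_task_ids = ['<child-task-id-1>', '<child-task-id-2>']  # e.g. from optimizer._get_child_tasks_ids(...)
hp_keys = ['General/hp1', 'General/hp2']  # the HP keys you care about (placeholders)

results = []
for task_id in child_task_ids:
    child = Task.get_task(task_id=task_id)
    params = child.get_parameters()  # flat dict of all task parameters
    metrics = child.get_last_scalar_metrics()  # {title: {series: {'last': ..., 'min': ..., 'max': ...}}}
    results.append({
        'id': task_id,
        **{k: params.get(k) for k in hp_keys},
        'objective': metrics.get('title', {}).get('series', {}).get('last'),
    })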

2 years ago
0 Hi Everyone, I'M Using The

AttractiveCockroach17 could it be that Hydra actually kills these processes?
(I'm trying to figure out if we can fix something with the hydra integration so that it marks them as aborted)

2 years ago
3 years ago
0 Hi, Can’T I Embed Scalars To Notion Using Clearml Sdk?

Hi @<1524922424720625664:profile|TartLeopard58>

can’t i embed scalars to notion using clearml sdk?

I think you need the hosted version for it (it needs some special CORS configuration on the server side to make it work).
Did you try it in a ClearML report? Does that work?

one year ago
0 Hello, In The Following Context:

Hi JitteryCoyote63
If you want to refresh the task object, call task.reload(); it will also refresh the artifacts.
The reason for not always doing so when accessing the .artifacts object is speed optimization (it might be slow compared to dict access, and we assume users expect it to behave like a dict).
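
A minimal sketch (the task ID is a placeholder):

from clearml import Task

task = Task.get_task(task_id='<your-task-id>')  # placeholder ID
# .artifacts is cached for speed; reload() refreshes the task object,
# including the artifacts, from the server
task.reload()
print(list(task.artifacts.keys()))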

4 years ago
0 Is There Any Reason Why Doing The Following Is Not Possible? Am I Doing It Right? I Want To Run A Pipeline With Different Parameters But I Get The Following Error?

but does that mean I have to unpack all the dictionary values as parameters of the pipeline function?

I was just suggesting a hack 🙂 The fix itself is transparent (I'm expecting it to be pushed tomorrow); basically it will make sure the sample pipeline works as expected.
Regardless, and out of curiosity: if you only pass one dict to the pipeline function, why not use named arguments?

2 years ago
0 Hi, I Have Another Problem

Hi JitteryCoyote63
What do you have in the agent.cuda_version ?
(you can see it printed at the beginning of the log)

4 years ago
0 Hi, We Have A Bit Old Open Source Clearml Instance. I Want To Create A New Instance On A New Infrastructure. Is There An Easy Way To Migrate Data Between Clearml Instances?

Hi @<1544128915683938304:profile|DepravedBee6>
You mean like backup the entire instance and restore it on another machine? Or are you referring to specific data you want to migrate?

BTW, if you are upgrading from an old version of the server I would recommend upgrading through every version in between (a few of them have migration scripts that need to be run)

one year ago
0 Encountered An Odd Bug. Upon Attempting To Write Images To Clearml (3D Projected, Matplotlib),

The issue only arises upon sending Images. (Both numpy, mpl and PIL)

BTW: they should appear under the Debug Samples tab in the results
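
For reference, a minimal sketch of reporting an image as a debug sample (the project/task/title/series names are placeholders):

import numpy as np
from clearml import Task

task = Task.init(project_name='examples', task_name='debug sample demo')  # placeholder names
image = np.random.randint(0, 255, size=(64, 64, 3), dtype=np.uint8)
# report_image sends the image as a debug sample, so it shows up under the Debug Samples tab
task.get_logger().report_image(title='projection', series='3d', iteration=0, image=image)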

3 years ago
0 I Want To Execute A Script Via Trains-Agent, But I Want To Be Able To Provide The Location Of A Config File By Specifying The Path Before Trains-Agent Executes The Script (Like A Flag Or Command Line Argument). How Can I Accomplish This?

Can I change the parameters before executing the draft task

Yes you can, after you clone the experiment everything becomes editable, so you can edit the config in the UI.
For example, let's assume I have config.yml, and in my code I do:
my_file = task.connect_configuration('config.yml')
with open(my_file, 'rt') as f:
    ...
Then after I clone it in the UI and edit the configuration, when it is executed remotely,
my_file will contain the content of the configuration as s...

3 years ago
0 Hi, I Am Trying To Upload A Plot To An Existing Task Using The

Weird that this code is also uploading to the 'Plots'. I replicated the same thing as my main script, but main script is still uploading to Debug Samples.

SmarmyDolphin68 are you saying the same code behaves differently ?

3 years ago
0 Hello Everyone! I'M Trying To Add Functionality Where I Need To Rotate Artifacts. Psedocode:

Hi GrotesqueDog77

and after some time I want to delete artifact with

You can simply upload with the same local file name and the same artifact name; it will overwrite the target storage. wdyt?
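
A minimal sketch of that rotation pattern (the project/task/artifact/file names are placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='artifact rotation demo')  # placeholder names
# Re-uploading with the same artifact name (and the same local file name) replaces
# the previous copy in the target storage instead of accumulating new files
task.upload_artifact(name='rolling_state', artifact_object='state.pkl')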

one year ago
0 Hi, Is It Possible To Re-Use Task-Id, But Keep The Old Execution Tab ? (Git Diff Specifically).

Hi BoredPigeon26
What do you mean by "reuse the task"? Is this a manual execution (i.e. from code)?
How about archiving the old version?
You can also force Task.init to always create a new Task (which preserves the previous run together with its Execution tab).
Basically, what's the specific use case?
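
If forcing a new Task is the route you want, a minimal sketch (the project/task names are placeholders):

from clearml import Task

# reuse_last_task_id=False forces a brand-new Task on every run, so the previous
# run (including its Execution tab / git diff) is kept untouched
task = Task.init(project_name='examples', task_name='my experiment', reuse_last_task_id=False)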

2 years ago
0 Hello, I Am Trying To Retrieve A Simple Dict Artifact Uploaded In A Previous Task With

JitteryCoyote63 with pleasure πŸ™‚
BTW: the fix for the Ignite TrainsLogger bug ElegantKangaroo44 found is coming soon (I think it's already on a branch by SuccessfulKoala55); it should be in an RC next week

4 years ago
0 Another Issue Is The Agent Uses Python 2 For Some Reason Even Though Locally I’M Using Python 3 And The Agent Is Supposed To Use A Python 3 Venv.

If this doesn't help:
Go to your ~/clearml.conf file; at the bottom of the file you can add agent.python_binary and set it to the location of python3.6 (you can run which python3.6 to get the full path):
agent.python_binary: /full/path/to/python3.6

3 years ago
0 Is It Necessary To Serve Keras Model Using Triton Engine? I'M Trying To Serve An Endpoint, And Trying To Debug, But The Error Given Not Helping Much. Is There A Flag I Can Pass To See More Logs?

Hi @<1567321739677929472:profile|StoutGorilla30>

Is it necessary to serve keras model using triton engine?

It is not, but it is the most efficient way to serve Keras models, which is why clearml-serving uses Nvidia Triton by default (we are talking 10x factors).
I would start with the Keras example, see that it works, and then work your way to your own example (notice you always need to provide the layers for the input/output of the model).
[None](https://github.com/allegroai/clearml-s...

one year ago
0 Hi Everyone, Additional Arguments To The Script Execution, Is It Possible? How Can It Be Done? So At The Moment When My Script Is Being Executed The

PompousBeetle71 a few questions:
is this like using PyTorch distributed, only manually? Why don't you call trains.init in all the sub-processes? We had a few threads on that; it seems like a recurring question, so I'll make sure we have an example on GitHub. Basically trains will take care of passing the arg-parser commands to the sub-processes, and also of the torch node settings. It will also make sure they all report to the same experiment. What do you think?

4 years ago
0 Afaiu By Default Trains Logs All Tensorboard Things, Can This Be Turned Off?

Hi HealthyStarfish45
You can disable the entire TB logging :
Task.init('examples', 'train', auto_connect_frameworks={'tensorflow': False})

3 years ago
0 Assuming I Have A

(without having to execute it first on Machine C)

Someone somewhere has to create the definition of the environment...
The easiest way to go about it is to execute it once.
You can add the following line to your code:
task.execute_remotely(queue_name='default')
This will cause your code to stop running and enqueue itself on a specific queue.
Quite useful if you want to make sure everything works (like running a single step), then continue on another machine.
Notice that switching between cpu...
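
A minimal sketch of the flow (the project/task/queue names are placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='remote run')  # placeholder names
# ... any local sanity checks (e.g. a single training step) go here ...
# execute_remotely() stops the local run and enqueues the task on the 'default' queue;
# an agent listening on that queue picks it up, and execution continues remotely from here
task.execute_remotely(queue_name='default')
# code below this line only runs in the remote execution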

4 years ago
0 Hello,

Is this reproducible ?

2 years ago
0 Hi, I Have Another Problem

It is configured as CPU (i.e. no CUDA)

4 years ago
0 Reducing Docker Container Spin-Up Time With Clearml Agent

Woot woot!
awesome, this RC is stable so feel free to use it; the official release is probably due out next week :)

2 years ago
0 I Am Trying To Use

if it ain't broke, don't fix it

😄

Up to you, just a few features & nicer UI.
BTW: everything is backwards compatible; there is no need to change anything, all the previous trains/trains-agent packages will keep working 🙂
(This even includes the configuration file, so you can keep the current ~/trains.conf and work with whatever combination you like of trains/clearml on the same machine)

3 years ago