AgitatedDove14
Moderator
48 Questions, 8051 Answers
Active since 10 January 2023
Last activity 7 months ago

Reputation: 0
Badges: 1 (25 × Eureka!)
0 Hi People! I Think The Clearml

Should be fixed soon (1.10 is supposed to be released next week)

one year ago
0 Question About The Trains Agent And The Git Credentials When Setting A Trains Agent, It Is Possible To Configure Git Credentials For It And I'm Trying To Figure Out In Which Cases It Is Necessary. When Executing A Task Remotely (

Hi WackyRabbit7 ,
Regarding git credentials, see here in the trains.conf https://github.com/allegroai/trains-agent/blob/master/docs/trains.conf#L18
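For reference, the section at that link looks roughly like this (an abridged sketch; check the link for the exact, current keys):
```
agent {
    # Set GIT user/pass credentials
    # leave blank for GIT SSH credentials
    git_user: ""
    git_pass: ""
}
```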

Trains assumes one of two (almost three) possible setups:
Your code/script is in a git repository. When executing manually, all the git references, including uncommitted changes, are stored. Then, when executing with the trains-agent, it will clone the code based on these references, apply the uncommitted changes, and run your code. To do that the ...

4 years ago
0 Hi, Happy Friday To Everyone, Is There Anyone Who Can Ref Me To How You Would Work With Ref/Loading A Dataset With Netapp (Astra Trident) Integration From The Ide.

Hi OutrageousReindeer5
Is NetApp S3 protocol enabled or are you referring to NFS mounts?

6 months ago
0 Trains[Azure] Install - Azure Dependencies Not Latest. Trains Depends On Older Version Of Azure Python Sdk. My Project Already Has Dependency On The Latest Version. How Can This Be Resolved? Installing Collected Packages: Azure-Storage-Common, Azure-Stor

trains[azure] gives you the possibility to do the following:
```
from trains import StorageManager

my_local_cached_file = StorageManager.get_local_copy('azure://bucket/folder/file.bin')
```
This means you do not have to manually download files and maintain the local cache; the StorageManager does that for you.
If you do not need that ability, there is no need to install trains[azure]; you can just install trains.
Unfortunately, we haven't had the time to upgrade to the Azure storage v...

4 years ago
0 Hi There, Are There Any Plans To Add Better Documentation/Examples To

Hi ElegantCoyote26

but I can't see any documentation or examples about the updates done in version 1.0.0

So actually the docs are only for 1.0... https://clear.ml/docs/latest/docs/clearml_serving/clearml_serving

Hi there, are there any plans to add better documentation/example

Yes, this is work in progress; the first item on the list is a custom model serving example (kind of like this one https://github.com/allegroai/clearml-serving/tree/main/examples/pipeline )

about...

2 years ago
0 Hi, Together With

"Updates a few seconds ago"

That just means that the process is not dead.

Yes that seemed to be stuck 😞
Any chance you can verify with the RC version?
I'll try to dig into the commits, maybe I can come up with an explanation ...

4 years ago
0 I Want To Retrieve The Logged Metrics To Be Able To Save The Best Model From My Training. This Is My Step:

Here you go 🙂
(using trains_agent for easier access to all the data)
```
from trains_agent import APIClient

client = APIClient()
log_events = client.events.get_scalar_metric_data(
    task='11223344aabbcc', metric='valid_average_dice_epoch')
print(log_events)
```

4 years ago
0 Hi Everyone! Is Anybody Using Log-Scale Parameter Ranges For Hyper-Parameter Optimization? It Seems That There Is A Bug In The Hpbandster Module. I'M Getting Negative Learning Rates..

Hmm GreasyLeopard35, can you specify the range you are passing to the HPO, as well as the type of optimization class? (grid/random/optuna etc.)
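For reference, defining the range and the optimizer class usually looks roughly like the sketch below (a minimal sketch, not your setup: the base task id, parameter name, and metric title/series are hypothetical; OptimizerBOHB is clearml's hpbandster-backed class):
```
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
# BOHB backend; requires the hpbandster package to be installed
from clearml.automation.hpbandster import OptimizerBOHB

optimizer = HyperParameterOptimizer(
    base_task_id='11223344aabbcc',  # hypothetical template task
    hyper_parameters=[
        # learning-rate range passed to the optimizer
        UniformParameterRange('General/learning_rate', min_value=1e-5, max_value=1e-1),
    ],
    objective_metric_title='validation',  # hypothetical metric title
    objective_metric_series='loss',       # hypothetical metric series
    objective_metric_sign='min',
    optimizer_class=OptimizerBOHB,
)
optimizer.start_locally()
optimizer.wait()
optimizer.stop()
```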

2 years ago
0 Hi Guys, I'm Trying To Install It On My Lab Server, But When I Try To Create Credentials, It Says Error And Gives More Info: Error 301 : Invalid User Id: Id=F46262Bde88B4928997351A657901D8B, Company=D1Bd92A3B039400Cbafc60A7A5B1E52B

Yes, let's assume we have a task with id aabbcc.
On two different machines you can do the following:
```
trains-agent execute --docker --id aabbcc
```
This means you manually spin two simultaneous copies of the same experiment. Once they are up and running, will your code be able to make the connection between them? (i.e. openmpi, torch distributed, etc.)

3 years ago
0 Hi! In My Project I Need To Run A Lot Of Experiments On Different Subsets Of My Trainset, Collect Score And Perform Some Calculations Based On It. I Have

Hi UpsetCrocodile10

First, I perform many experiments in one process, ...

How about this one:
https://github.com/allegroai/trains/issues/230#issuecomment-723503146
Basically you could utilize create_function_task.
This means you have Task.init() on the main "controller" and each "train_in_subset" as a "function_task". Then the controller can wait on them and collect the data (like the HPO does).

Basically:
```
controller_task = Task.init(...)
children = []
for i, s in enumer...
```
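To make the pattern concrete, a fuller version of that sketch might look like the following (hedged: the project/task names, the subsets, and the body of train_in_subset are made up; create_function_task is the SDK call referenced above):
```
from trains import Task

def train_in_subset(subset_id):
    # train on one subset; report scalars through Task.current_task().get_logger()
    ...

controller_task = Task.init(project_name='examples', task_name='controller')
children = []
for i, subset in enumerate(['subset_a', 'subset_b']):
    # each call creates a new (draft) Task that will run the function
    child = controller_task.create_function_task(
        train_in_subset,
        func_name='train_in_subset_{}'.format(i),
        task_name='train subset {}'.format(i),
        subset_id=subset,
    )
    children.append(child)
# the controller can then enqueue the children, wait on them,
# and collect their scalars (see the follow-up answer below)
```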

3 years ago
0 Hi! In My Project I Need To Run A Lot Of Experiments On Different Subsets Of My Trainset, Collect Score And Perform Some Calculations Based On It. I Have

UpsetCrocodile10

Does this method expect my_train_func to be in the same file as

As long as you import it and you can pass it, it should work.

Child exp gets aborted immediately ...

It seems it cannot find the file "main.py"; it assumes all code is part of a single repository. Is that the case? What do you have under the "Execution" tab for the experiment?

3 years ago
0 Hi! In My Project I Need To Run A Lot Of Experiments On Different Subsets Of My Trainset, Collect Score And Perform Some Calculations Based On It. I Have

Hi UpsetCrocodile10

execute them and return scalars.

This should be a good start (I hope 🙂):
```
for child in children:
    # put the Task into an execution queue
    Task.enqueue(child, queue_name='my_queue_here')
    # wait for the task to finish
    child.wait_for_status(status=['completed'])
    # reload all the metrics
    child.reload()
    # get the metrics
    print(child.get_last_scalar_metrics())
```

3 years ago