AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 6 months ago

Hi, can you pls help me? I am using v 0.14 (will update it soon) and I got the following error: /usr/bin/python3.6: No module named virtualenv trains_agent: ERROR: Command '['python3.6', '-m', 'virtualenv', '/home/ubuntu/.trains/venvs-builds.2/3.6']' ret…

Yes, actually that might be it. Here is how it works:
It launches a thread in the background to do all the analysis of the repository, extracting all the packages.
If the process ends (for any reason), it gives the background thread 10 seconds to finish and then gives up. If the repository is big, the analysis can take longer than that, so it quits before the packages are fully collected.
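Roughly, the mechanism described above is a background thread joined with a timeout on exit. A minimal sketch of that pattern (not the actual SDK code; the function name and structure are illustrative, only the 10-second timeout comes from the description above):

import threading

def analyze_repository():
    # placeholder for the repository / package analysis work
    ...

# run the analysis in the background so the user script is not blocked
analysis_thread = threading.Thread(target=analyze_repository, daemon=True)
analysis_thread.start()

# ... the user script runs to completion ...

# on process exit, wait up to 10 seconds for the analysis to finish
analysis_thread.join(timeout=10)
if analysis_thread.is_alive():
    # the analysis did not finish in time (e.g. a very large repository) - give up
    pass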

4 years ago
Hi, I would like to follow-up in this

JitteryCoyote63 oh dear, let me see if we can reproduce (version 1.4 is already in internal testing, I want to verify this was fixed)

2 years ago
Hello! I have an issue reproducing my runs. The Task.create completes successfully. When I clone and enqueue a completed task the clone fails. It fails during the Python requirements installation. Why is this? Do you know how I can debug? Thank you in adv…

How are you getting:

beautifulsoup4 @ file:///croot/beautifulsoup4-split_1681493039619/work

is this what you had on the original manual execution? (i.e. not the one executed by the agent) - you can also look under the "org_pip" dropdown in the "installed packages" of the failed Task

one month ago
Hi guys, how does Allegro keep track of the requirements (I'm running the scripts on a remote train-agent with…

Back to the feature request: if this is taken care of (both adding a missed package, and the S3 upload), do you still believe there is room for this kind of feature?

3 years ago
Hi there! Can anybody help me with specifying the 'platform' for a model in clearml-serving? I am using the K8s clearml-serving setup (version 1.3.1). I already tried a bunch of variants like…

I'm assuming those errors are from the Triton containers? Were you able to run the simple PyTorch MNIST example serving from the repo?

4 months ago
Hi, I'm trying to use…

I think they should not 🙂

2 years ago
Hey, I was wondering how can I do hparams tuning with trains? Couldn't find anything on the documentation

ShaggyHare67 could you send the console log trains-agent outputs when you run it?

Now the trains-agent is running my code but it is unable to import trains

Do you have the package "trains" listed under "installed packages" in your experiment?

3 years ago
Hello! Getting credential errors when attempting to pip install transformers from git repo, on a GPU queue.

1e876021bbef49a291d66ac9a2270705 just make sure you reset it 🙂

3 years ago
Hi there, I used…

I remember there were some issues with it ...

I hope not 😞 Anyhow, the only thing that matters is the auto_connect arguments (meaning if you want to disable some, you should pass them when calling Task.init).
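As a rough illustration of passing those arguments (the project/task names and the specific frameworks toggled here are placeholders, not from the original thread):

from clearml import Task

task = Task.init(
    project_name="examples",                 # placeholder
    task_name="selective auto-connect",      # placeholder
    # enable/disable automatic framework logging per framework
    auto_connect_frameworks={"matplotlib": False, "tensorboard": True},
    # disable automatic argparse capture entirely
    auto_connect_arg_parser=False,
)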

2 years ago
Hi guys, I have some questions: 1. Can I backup all my experiments? 2. Can I add my old experiments to a new server? 3. Can I add some information to one experiment which was finished (maybe I want to reevaluate some model)?

Hi SubstantialBaldeagle49
1. Yes, you can backup the entire trains-server (see the GitHub docs on how).
2. You mean upgrading the server?
3. Yes, you can change the name or add comments (Info tab / description), and you can add key/value descriptions (under the Configuration tab, see User Properties).
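For item 3, a small sketch of adding information to a finished experiment (the task ID, comment text and property values are placeholders):

from clearml import Task

task = Task.get_task(task_id="aa")  # placeholder ID of the finished experiment

# free-text description (shows up under the Info tab)
task.set_comment("re-evaluated on the new holdout set")

# key/value pairs (show up as User Properties under the Configuration tab)
task.set_user_properties(evaluated_by="me", dataset_revision="v2")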

4 years ago
When I run an experiment (self hosted), I only see scalars for GPU and system performance. How do I see additional scalars? I have…

So in summary: subprocess calls appear to break clearML tracking, even if I do Task.init() in both main.py and train.py.

Okay let me see if we can reproduce & fix this, it should not be long

one year ago
Hi all, I've successfully run a task locally, and now I'm trying to clone it and send it to a queue. It looks like the environment is built successfully, but it hangs here:

I managed to set up my (Windows) laptop as a worker and reproduce the issue.

Any insight on how we can reproduce the issue?

2 months ago
How come I use…

I see it's a plotly plot, even though I report a matplotlib one

ClearML tries to convert matplotlib figures into plotly objects so they are interactive; if that fails, it falls back to a static image, just as in matplotlib.
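For context, a minimal sketch of reporting a matplotlib figure explicitly (project/task/plot names are placeholders; by default the SDK attempts the plotly conversion described above):

import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="matplotlib demo")  # placeholder names

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [1, 4, 9])

task.get_logger().report_matplotlib_figure(
    title="my plot", series="squares", iteration=0, figure=fig,
    report_image=False,  # set True to force a static image instead of the interactive conversion
)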

3 years ago
Hi guys, I have some questions: 1. Can I backup all my experiments? 2. Can I add my old experiments to a new server? 3. Can I add some information to one experiment which was finished (maybe I want to reevaluate some model)?

Hi SubstantialBaldeagle49
2. Sure, follow the backup procedure and restore on the new server.
3. Yes:
task = Task.get_task(task_id='aa')
task.get_logger().report_scalar(title='reevaluation', series='accuracy', value=0.9, iteration=0)  # illustrative arguments

4 years ago
Hi everyone, I'm using clearml-serving with Triton and have a couple of questions regarding model management:

That speed depends on model sizes, right?

in general yes

Hope that makes sense. This would not work under heavy loads, but e.g. we have models used once a week only. They would just stay unloaded until use - and could be offloaded afterwards.

but then you might still encounter a timeout the first time you access them, no?

3 months ago
Hi everyone! Is it possible to read data directly from server w/o using get_local_copy()?

No, it is zipped and stored, so in order to open the zipfile and read the files you have to download them.
That said, everything is cached, so if the machine already downloaded the dataset there is zero download / unzipping.
Make sense?
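A small sketch of the usual access pattern (the dataset ID and file name are placeholders):

from clearml import Dataset

# returns a cached local folder; repeated calls on the same machine
# skip the download / unzip step entirely
dataset_path = Dataset.get(dataset_id="<your dataset id>").get_local_copy()

# read files from the extracted local copy as usual
with open(f"{dataset_path}/labels.csv") as f:  # hypothetical file name
    print(f.read())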

3 months ago
I .

MelancholyElk85 I just ran a single-step pipeline and it seemed to use the "base_task_id" without cloning it...
Any insight on how to reproduce?

3 years ago
Hi, I'm getting a lot of the following logs

Thanks PompousBeetle71
Quick question, what frameworks are you using?
Do you use the save method directly on a file stream (or any other direct storage)?

4 years ago
Hi team, me again! I'm curious if someone can explain to me better how Task and optimisers integrate with each other. In the example hyperparameter optimisation, there is both a Task initialised with…

The easiest would be as an artifact (I think).
Let's assume you put it into a csv file (with pandas or manually).
To upload (from the pipeline Task itself):
task.upload_artifact(name='summary', artifact_object='~/my/summary.csv')
Then if you want to grab it from anywhere else:
task = Task.get_task(task_id='HPO controller Task id here')
my_csv = task.artifacts['summary'].get_local_copy()
If you want to store it as a dict it might be even easier:
task.upload_artifact(name='summary', artifa...
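Putting the (truncated) snippet above together, a self-contained sketch of both directions; the task ID and the summary contents are placeholders:

from clearml import Task

# inside the HPO controller task: upload the summary
controller_task = Task.current_task()
controller_task.upload_artifact(name='summary', artifact_object={'best_lr': 0.001})  # placeholder dict

# from anywhere else: fetch it back using the controller's task ID
source_task = Task.get_task(task_id='<HPO controller Task id here>')
summary = source_task.artifacts['summary'].get()  # deserialized object (here, the dict)
# for file-based artifacts (e.g. the csv above), grab a local copy instead:
# local_csv = source_task.artifacts['summary'].get_local_copy()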

3 years ago
Sorry folks, too many questions - if I have a project (and I set the output URI in it while creating, to an S3 folder) how can I ensure that an experiment (task) that I run on my local outputs the model to the URI?

Sounds good, I assumed that was the case but I was not sure.
Let's make sure that in clearml.conf we write it in the comment above the use_credentials_chain option, so that when users look for IAM roles configuration they can quickly search for it 🙂

3 years ago
Hi everyone, I have questions related to clearml-serving.

the trend step artifact is used to keep track of the time of the data, so we know the expected trend of the input data. For example, on the first data point, which is trend_step = 1, the trend value is 10; then if trend_step = 10 (the tenth data point), our regressor will predict the trend value of the selected trend_step. This method is still being researched to make it more efficient so it doesn't need to upload an artifact on every request

Makes sense! I would suggest you open a GitHub issue with the feature request ...

2 years ago
Is there any testing suite that ships with ClearML? If we'd like to make some unit tests for our code?

mostly by using Task.create instead of Task.init.

UnevenDolphin73, now I'm confused - Task.create is not meant to be used as a replacement for Task.init; it is so you can manually create an additional Task (not the current process Task). How are you using it?
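To illustrate that distinction, a minimal sketch (project names, repo and script path are placeholders):

from clearml import Task

# Task.init: binds a Task to the *current* running process (auto-logging, etc.)
current = Task.init(project_name="examples", task_name="my run")

# Task.create: registers an *additional*, standalone Task (e.g. to enqueue later),
# without attaching it to this process
extra = Task.create(
    project_name="examples",
    task_name="standalone task",
    repo="https://github.com/allegroai/clearml.git",   # placeholder repo
    script="examples/reporting/scalar_reporting.py",   # placeholder script path
)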

Regarding the second - I'm not doing anything per se. I'm running in offline mode and I'm trying to create a dataset, and this is the error I get...

I think the main thing we need to...

one year ago
Hey all. Quick question about the…

Can you send the full log?

3 years ago
When I run experiments I set…

Hi IntriguedRat44
Sorry, I missed this message...
I'm assuming you are running in manual mode (i.e. not through the agent); in that case we do not change CUDA_VISIBLE_DEVICES.
What do you see in the resource monitoring? Is it a single GPU or multiple GPUs?
(Check the :monitor:gpu in the Scalars tab under Results.)
Also, what Trains/ClearML version are you using, and which OS?
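A quick way to check what the process itself sees (a sketch assuming a PyTorch setup; purely illustrative):

import os
import torch

# in manual (non-agent) runs ClearML does not touch CUDA_VISIBLE_DEVICES,
# so the process sees whatever the shell / launcher set
print("CUDA_VISIBLE_DEVICES =", os.environ.get("CUDA_VISIBLE_DEVICES"))
print("visible GPUs:", torch.cuda.device_count())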

3 years ago
0 "Clearml-Data Sync --Folder ." Doesn'T Work

Clearml 1.13.1

Could you try the latest (1.16.2)? I remember there was a fix specific to Datasets

3 months ago