AgitatedDove14
Moderator
49 Questions, 8124 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Votes 0 Answers 2K Views
Hello Everyone!
5 years ago
0 Votes 0 Answers 2K Views
Is it a one time thing? or recurring?
5 years ago
0 Votes 1 Answer 1K Views
πŸ™ Please skip cleaml python package v1.0.1 and just move on to v1.0.2 😊 apologies for the inconvenience πŸ™‚ pip install clearml==1.0.2
4 years ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 0 Answers 2K Views
YEY!!!! Download as CSV 🤯
3 years ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 0 Answers 2K Views
4 years ago
0 Votes 0 Answers 2K Views
New video is out 🙂 Cloud Autoscalers are awesome https://www.youtube.com/watch?v=j4XVMAaUt3E
3 years ago
0 Votes 0 Answers 2K Views
Gals, Guys & :robot_face: If you want to get some inspiration on building DL Continuous Integration pipelines, I suggest this post (obviously built on top of...
5 years ago
0 Votes 2 Answers 2K Views
Hi
Hi ClearML v0.17.1 and ClearML-Agent v0.17.0 are now the official packages & repositories 🎉 🎊 👋 🛤️ This new name brings on many changes, mainly replace a...
4 years ago
0 Votes 6 Answers 2K Views
Hi
Hi! ClearML Server + SDK v1.9.0 is out! 🎉 🚀 🎊 Happy Holidays and Happy New Year! ❇️ 🎇 🎄
2 years ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 7 Answers 1K Views
Thank you all for taking the time to answer our survey (If you haven't already, we urge you to do so). Your feedback has a major impact on what we build, do...
5 years ago
0 Votes 3 Answers 2K Views
We recently released a new version of clearml-session with Persistent Workspace support! 🚀 🎉 Finally you can develop on remote machines with workspace fold...
one year ago
0 Votes 0 Answers 2K Views
We are at AAAI NY, come look us up :)
5 years ago
0 Votes 0 Answers 2K Views
YummyWhale40 awesome thanks!
5 years ago
0 Votes 3 Answers 1K Views
ConvolutedSealion94 these are xgboost internal metrics that are automatically picked up by clearml
2 years ago
0 Votes 0 Answers 2K Views
Slack security ... Go figure 😉
5 years ago
0 Votes 1 Answer 2K Views
This is usually due to enterprise-level issued HTTPS certificates that are not part of the local installation (basically any Python-generated SSL request will fail)
5 years ago
0 When Viewing Keras Scalar Result For One Experiment, The Graphs (Train And Validation) Are Super-Imposed. However When I Compare Experiment, The Graphs Are Now Separate. Can I Super-Impose The Graphs While Comparing Experiments?

Hi TeenyFly97

Can I super-impose the graphs while comparing experiments?

Hmm not at the moment, I think someone asked for the option to control it, in both comparison mode and "standalone" mode.
There is a long discussion on this feature here:
https://github.com/allegroai/trains/issues/81#issuecomment-645425450
Feel free to chime in 🙂
I think that the latest agreement is a switch in the UI, separating or combining (super-imposing) those graphs.

5 years ago
0 Hi, I Am Trying To Execute My Code On Nvidia/Cuda Docker, But It Keeps Running, It Is Not Failed Or Not Aborted. The Last Log Message Is

Yey! MysteriousBee56 kudos for keeping at it!
I'll make sure we report those errors, because this debug process should have been much shorter 🙂

5 years ago
0 What Is Being Stored Exactly In

Ohh... I would not delete them then ... 😞
Maybe some kind of heuristic (files created a week ago can be deleted?!)

3 years ago
0 What Is Being Stored Exactly In

I have a process that cleans the /tmp each day,

WackyRabbit7 the files (configuration etc.) that are mapped into the containers are stored there.
They should clean themselves; that said, we have noticed that the services-mode skips this cleanup, and it will be solved in the next RC of clearml-agent.
Make sense?

3 years ago
0 Hi, I Am Trying To Execute My Code On Nvidia/Cuda Docker, But It Keeps Running, It Is Not Failed Or Not Aborted. The Last Log Message Is

MysteriousBee56 Okay, let's try this one:
docker run -t --rm nvidia/cuda:10.1-base-ubuntu18.04 bash -c "echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/docker-clean && apt-get update && apt-get install -y git python3-pip && python3 -m pip install trains-agent && echo done"

5 years ago
0 Hi All. I Am Wondering How People Tend To Use Clearml With Cross-Validation. Do You Tend To Create Separate Experiments For Each Fold? And If So, Would You Then Create Another Experiment For The Aggregated Results?

One thought is to initialise a new clearML task in each fold to capture the iteration-level metrics, and then create another task/experiment at the end to capture the aggregated metrics across folds.

That is probably the easiest, and the most scalable.
BTW: with the new reporting feature, you can integrate the comparison of the CV directly into your final report 🙂
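
To make the per-fold idea concrete, here is a minimal sketch; the project and task names, the metric, and train_and_eval are placeholders I made up for illustration, not from the original thread:

from clearml import Task

def train_and_eval(fold):
    # placeholder for your own training / validation code
    return 0.9

scores = []
for fold in range(5):
    # one experiment per fold; reuse_last_task_id=False forces a fresh task each run
    task = Task.init(project_name="cv-demo", task_name=f"fold-{fold}", reuse_last_task_id=False)
    score = train_and_eval(fold)
    task.get_logger().report_scalar("validation", "accuracy", value=score, iteration=0)
    scores.append(score)
    task.close()

# a separate experiment that aggregates the fold-level metrics
summary = Task.init(project_name="cv-demo", task_name="cv-summary", reuse_last_task_id=False)
summary.get_logger().report_scalar("cv", "mean_accuracy", value=sum(scores) / len(scores), iteration=0)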

2 years ago
0 Hi, I Am Trying To Execute My Code On Nvidia/Cuda Docker, But It Keeps Running, It Is Not Failed Or Not Aborted. The Last Log Message Is

MysteriousBee56 not a different port, just not with "localhost" but with your machine's IP

5 years ago
0 Is The App/Ui/Backend Customizable? Any Tutorials For That?

Hi CleanWhale17, at least for the moment the code, although open ( https://github.com/allegroai/trains-web ), has no external theme/customization interface.
That said, we do have some thoughts on it. What did you have in mind?

5 years ago
0 Hi Community! I'M Currently Trying To Serve My Ai Model Using Clearml-Serving So I Can Access And Try My Model Through The Model Endpoint. Currently The Dataflow Of Clearml-Serving I Know Looks Like On This Diagram 1 (Model As A Rest Service). How Ever I

Oh I see, this seems like a Triton configuration issue; usually dim -1 means flexible. I can also mention that serving 1.1 should be released later this week with better multiple-input support for Triton. Does that make sense?

3 years ago
0 Hello! I'm Wondering If There Is An Option To Run A Termination Hook Script

Lambdas are designed to be short-lived; I don't think it's a good idea to run it in a loop TBH.

Yeah, you are right, but maybe it would be fine to launch, have the lambda run for 30-60 sec (i.e. checking idle time for 1 min, stateless, only keeping track inside the execution context), then take it down.
What I'm trying to solve here is (1) a quick way to understand if the agent is actually idling or just between Tasks, and (2) still avoid having the "idle watchdog" short-lived, so that it can...

3 years ago
0 Hello, Community. I Hope You Are All Doing Well. I'M Seeking Information Regarding A Specific Problem, Especially In The Field Of Computer Vision. Typically, An App In The Field Of Computer Vision Will Have Multiple Models, Each With Its Own Preprocessing,

Hi @<1657918706052763648:profile|SillyRobin38>

In the preprocess.py files, we will have so many similar lines which is not good.

Actually clearml-serving also supports directories, i.e. you can package an entire module as part of the preprocess, which would be easier for your code.
Another option is to package your code as a Python package and have it installed in the container (there is a special env var that allows you to add such packages to the serving container)
...
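
A rough sketch of the "shared package" idea; my_vision_utils and resize_and_normalize are hypothetical names, and the Preprocess class layout follows the common clearml-serving examples, so treat this as an illustration rather than the exact recommended setup:

# my_vision_utils/transforms.py - hypothetical shared package, installed into the
# serving container once and reused by every endpoint's preprocess code.
def resize_and_normalize(image, target_size=(224, 224)):
    # ... shared resize / normalize logic would live here ...
    return image

# preprocess.py for one endpoint - only the model-specific bits remain.
from my_vision_utils.transforms import resize_and_normalize

class Preprocess(object):
    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # shared logic lives in the package; only model-specific settings stay here
        return resize_and_normalize(body["image"], target_size=(224, 224))

    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        return {"prediction": list(data)}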

one year ago
0 Any Info On The Lifecycle Of Datasets Downloaded To $Home/.Clearml/Cache/Storage_Manager/Datasets Via Get_Local_Copy I Have A Task Running And I Was Watching The Above Path And Datasets Were Being Downloaded And Then They Are All Removed And For A Partic

And I think the default is 100 entries, so it should not get cleaned.

and then they are all removed and for a particular task it even happens before my task is done

Is this reproducible? Who is cleaning it, and when?

4 years ago
0 Is The App/Ui/Backend Customizable? Any Tutorials For That?

CleanWhale17 what is "Online-Training Support (for Dataset Shifts)"?

5 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

CurvedHedgehog15 the agent has two modes of operation:
1. A single script file (or Jupyter notebook), where the Task stores the entire file on the Task itself.
2. Multiple files, which is only supported if you are working inside a git repository (basically the Task stores a reference to the git repository and the agent pulls it from the git repo).
Seems you are missing the git repo, could that be?

3 years ago
0 Hi, I Expect There Is A Limitation In Time The Free Service

I think the limit is a few GB, I'm not sure, I'll have to check
And yes the oldest experiments will be deleted first (with the exception of published experiments, they will be deleted last)

4 years ago
0 Hi! I Have A Question Regarding Performances Of The Clearml-Server: Are The Calls From The Agents Made Asynchronously/In A Non Blocking Separate Thread? Is The Connection To The Clearml-Server Expected To Be A Bottleneck If The Clearml-Server Is Far From

JitteryCoyote63

are the calls from the agents made asynchronously/in a non blocking separate thread?

You mean whether request processing on the apiserver is multi-threaded / multi-processed?

4 years ago
0 Hello! Since Today I Get

Damn, okay I'll make sure we fix the order.
Could you verify that ~= works as intended (if the order is correct)?

4 years ago
0 Hi, I Have Been Getting The Following For A While. Is There A More Detailed Log I Can Look Into? This Happens On Both Https And Http.

Hmm that makes sense. I "think" the enterprise offering has a solution for that as well (i.e. full separation over a static cluster), but probably the best way to pursue this avenue is to talk to Sales (I'm assuming they'll set up a call to discuss the details).

Going back to the open source, I think that adding the credentials as part of the source code might allow the "credentials" to auto-populate as part of the remote execution, wdyt?
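
A minimal sketch of what "credentials as part of the source code" could look like; the host values and keys are placeholders, and calling Task.set_credentials before Task.init is my reading of the suggestion, not a confirmed recipe from this thread:

from clearml import Task

# Placeholder values - in practice read them from a secret store or environment variables
Task.set_credentials(
    api_host="https://api.clear.ml",
    web_host="https://app.clear.ml",
    files_host="https://files.clear.ml",
    key="YOUR_ACCESS_KEY",
    secret="YOUR_SECRET_KEY",
)

task = Task.init(project_name="demo", task_name="run-with-inline-credentials")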

4 years ago
0 Hello,

Is this reproducible?

3 years ago
0 [Clearml Task Querying] How Would I Find Tasks That Have The Same Code With Different Inputs/Parameters? I’M Interested In “Diff”Ing The Inputs To/Outputs From A Task To Do Pipeline “Caching” In A More Intelligent Way (For My Use Case) Than Clearml Does B

Hi ReassuredOwl55

How would I find Tasks that have the same code with different inputs/parameters?

Assuming you have the git repo, you can do:
Task.query_tasks(..., task_filter={'_all_': dict(fields=['script.repository'], pattern='github.com/user/repo')})
wdyt?
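
Expanded into a runnable sketch; the project name and repository pattern are placeholders, and printing the parameters is just one way to diff the inputs across runs:

from clearml import Task

# Find task IDs whose recorded git repository matches the pattern, then pull
# their parameters so the inputs can be compared across runs.
task_ids = Task.query_tasks(
    project_name="demo",  # placeholder project
    task_filter={"_all_": dict(fields=["script.repository"], pattern="github.com/user/repo")},
)
for task_id in task_ids:
    t = Task.get_task(task_id=task_id)
    print(task_id, t.get_parameters())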

2 years ago
0 I Have A Questions About Queue Priorities With Clearml-Agent. I Have Two Queues,

To summarize: The scheduler should assign tasks to the agent first, which gives a queue the highest priority.

The issue here is that you assume both are idle and you need global priority based on resource preference. I understand your scenario now, but it will only hold if the enqueuing order is "optimal". For example, if machine Y is running a Task B that is about to be completed (e.g. in a minute), then machine X will still pick the new Task B, and again we end up in the scenario where Task A i...

4 years ago
0 I'M Running A Simple Experiment (One Training Task, Nothing Else) And I'M Getting A Puzzling Message. Any Help Deciphering That Is Appreciated. I'M Pasting Part Of The Warnings Below:

Hi WittyOwl57
I think what happens is it auto-logs the joblib load/save calls; these calls track models used/created by the code and attach them to the model repository entries representing these models.
I'm assuming there are multiple load/save calls, and there are multiple model instances pointing to the same local file "file:///tmp/..." . The warning basically says it is re-registering existing models.
Make sense?
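
A minimal sketch of the kind of code that triggers this; the model, file path, and names are placeholders, and the point is simply that repeated joblib.dump/load calls on the same local file are auto-logged once Task.init has run:

import joblib
from sklearn.linear_model import LogisticRegression
from clearml import Task

task = Task.init(project_name="demo", task_name="joblib-autolog")  # enables auto-logging

model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])

# Repeated dump/load calls on the same local path are picked up by the auto-logger,
# which is what produces the "re-registering existing model" style warnings.
joblib.dump(model, "/tmp/model.pkl")
restored = joblib.load("/tmp/model.pkl")
joblib.dump(restored, "/tmp/model.pkl")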

4 years ago
0 Hey Folks, When I Run

seems okay to me

4 years ago