AgitatedDove14
Moderator
49 Questions, 8094 Answers
  Active since 10 January 2023
  Last activity 10 months ago

Reputation: 0
Badges (1): 25 × Eureka!
0 Votes 1 Answer 1K Views
This is usually due to enterprise-level issued HTTPS certificates that are not part of the local installation (basically any Python-generated SSL request will fail)
5 years ago
0 Votes 3 Answers 1K Views
Hi, v0.15 is out, 🎉 🚀 Your feedback had a major influence on the features we added 🙂 thank you! A selected list of features: Column resizing / ordering /...
4 years ago
0 Votes 3 Answers 1K Views
This will close it: Task.current_task().close() I think we should rename completed() because it just marks the Task as completed on the backend but does not ac...
3 years ago
0 Votes 0 Answers 1K Views
docs are up
5 years ago
0 Votes 0 Answers 1K Views
Is your server using https?!
5 years ago
0 Votes 3 Answers 632 Views
@<1523703325881536512:profile|ConvolutedSealion94> these are xgboost internal metrics that are automatically picked up by ClearML
2 years ago
0 Votes 0 Answers 1K Views
New RC for trains-agent is out: pip install trains-agent==0.13.2rc1
5 years ago
0 Votes 0 Answers 2K Views
I would guess connectivity issues; the TLS error is probably Python's inaccurate response (I mean, in a way it is also a TLS error, but I would imagine this has more...
5 years ago
0 Votes 0 Answers 1K Views
New releases: pip install trains==0.13.3 ( https://github.com/allegroai/trains/releases/tag/0.13.3 ) and pip install trains-agent==0.13.2 ( https://github.com/allegroai/...
4 years ago
0 Votes 1 Answer 724 Views
LSTMeow is back! Bots/Gals/Guys feel free to 👍 None
4 years ago
0 Votes 10 Answers 746 Views
Happy Friday everyone ! We have a new repo release we would love to get your feedback on 🚀 🎉 Finally easy FRACTIONAL GPU on any NVIDIA GPU 🎊 Run our nvidi...
11 months ago
0 Votes 0 Answers 1K Views
YummyWhale40 awesome thanks!
4 years ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 2 Answers 1K Views
Hi! trains 0.16.2 is finally out with the new pipelines interface! Check out the new example https://github.com/allegroai/trains/blob/master/examples/pipeli...
4 years ago
0 Votes 0 Answers 1K Views
Hi Guys/Gals, if you want to check out the latest RC we have 0.15.0rc0 out: pip install trains==0.15.0rc0 and pip install trains-agent==0.15.0rc0. Many of the impr...
4 years ago
0 Votes 2 Answers 646 Views
OMG Look who just joined the PyTorch EcoSystem None Yes! it is TRAINS 🚆 🎉 🎈
4 years ago
0 Votes 0 Answers 1K Views
Finally
5 years ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 0 Answers 1K Views
https://allegro.ai/docs
5 years ago
0 Votes 0 Answers 1K Views
New video is out 🙂 Cloud Autoscalers are awesome https://www.youtube.com/watch?v=j4XVMAaUt3E
2 years ago
0 Votes 0 Answers 1K Views
YEY!!!! Download as CSV 🤯
2 years ago
0 Votes 0 Answers 2K Views
Hello Everyone!
5 years ago
0 Votes 6 Answers 1K Views
Hi! ClearML Server + SDK v1.9.0 is out! 🎉 🚀 🎊 Happy Holidays and Happy New Year! ❇️ 🎇 🎄
2 years ago
0 Votes 2 Answers 1K Views
Hi, ClearML v0.17.1 and ClearML-Agent v0.17.0 are now the official packages & repositories 🎉 🎊 👋 🛤️ This new name brings on many changes, mainly replace a...
4 years ago
0 Votes 0 Answers 1K Views
We are at AAAI NY, come look us up :)
5 years ago
0 Votes 0 Answers 1K Views
apparently everyone can ...
5 years ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 1 Answer 1K Views
Quick note: v1.3.1 caused PipelineDecorator Tasks to disable the automagic framework connection by default; this bug is solved in the latest RC: pip install ...
2 years ago
0 Votes 1 Answer 1K Views
Gals, Guys & 🤖, if you want to check out the Hyper-Parameters automation (using Bayesian Optimization Hyper-Band), we have an example on the demo s...
4 years ago
0 Hi Everybody, I'm Running Experiments Inside A Docker Which Includes Multiple Python Instances, Some Of Them Are Inside Conda Environments. How Can I Specify The Agent To Use A Specific Conda Environment Inside The Docker?

Hi CrookedWalrus33

docker_setup_bash_script= ["export PATH=""/workspace/miniconda/bin:$PATH"])

Oh I think you are correct, this should do the trick:
docker_setup_bash_script = ["export PATH=/workspace/miniconda/bin:$PATH", "export LOCAL_PYTHON=/workspace/miniconda/bin/python3"]
This will make sure both the agent and the script execute with the same Python.
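A minimal sketch of where such a setup script might be attached from code, assuming the Task.set_base_docker() keyword names below (docker_image, docker_setup_bash_script) match your clearml version; the project/task names and image are placeholders:

from clearml import Task

task = Task.init(project_name="examples", task_name="conda-inside-docker")  # hypothetical names
# The agent is expected to run these lines inside the container before executing the task,
# so the conda environment's python becomes the interpreter used for execution.
task.set_base_docker(
    docker_image="nvcr.io/nvidia/pytorch:23.10-py3",  # assumption: any image with the conda env baked in
    docker_setup_bash_script=[
        "export PATH=/workspace/miniconda/bin:$PATH",
        "export LOCAL_PYTHON=/workspace/miniconda/bin/python3",
    ],
)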

but to run a script inside a docker which already has the environment built in.

If this is already activated, the latest agent w...

2 years ago
0 Hello Everyone! Is It Possible To Deactivate Package Analysis For Remote Execution? I Run My Code With Clearml-Agent In Docker Mode With Nvidia:Pytorch Container. When Clearml Is Running Inside The Docker The Installed Packages Of The Webui Get Updated. H

preinstalled in the environment (e.g. nvidia docker). These packages may not be available via pip, so the run will fail.

Okay, that's the part that I'm missing: how come in the first run the packages existed and in the cloned Task they are missing? I'm assuming the agents are configured basically the same (i.e. docker mode with the same network access). What did I miss here?

3 years ago
0 Hi, I’m Trying To Figure Out What Do The Clearml Agents Use The Webserver Endpoint For And What Would Break If One Didn’t Have Access? For Context: I’m Trying To Have A Self-Hosted Server With Endpoints Accessible Publicly, But Securely. The Webserver En

Hi HollowFish37
I think I have good news for you, the clearml-agent is only communicating with the api endpoint, so as long as this is secure, you should be fine. Do notice that the default files server endpoint should be secure as well, as by default it will allow any upload/download
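For reference, a sketch of the relevant clearml.conf section on the agent machine; the endpoint URLs and keys below are placeholders:

api {
    web_server: https://app.example.com      # UI endpoint, not needed by the agent itself
    api_server: https://api.example.com      # the endpoint the agent actually talks to
    files_server: https://files.example.com  # default artifact storage, keep this secured too
    credentials {
        access_key: "AGENT_ACCESS_KEY"
        secret_key: "AGENT_SECRET_KEY"
    }
}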

3 years ago
0 I Am Using Pipeline From Decorators. In The Pipeline, There Is A Training Step That Returns A Model (I Want This Model To Also Be Uploaded As An Artifact On Clearml). But This Results In The Following Error:

Hi DilapidatedCow43
I'm assuming the returned object cannot be pickled (which is ClearML's way of serializing it)
You can upload it as a model with
uploaded_model_url = Task.current_task().update_output_model(model_path="/path/to/local/model")
...
return uploaded_model_url
wdyt?

2 years ago
0 Hey Guys, Trying To Save A Model Via The OutputModel.update_weights Function I Get The Following Error:
task.mark_completed()

You have that at the bottom of the script; never call it on your own Task, it will kill the actual process.
So what is going on is that you are marking your own process for termination; it then terminates itself, leaving the interpreter, and that is the reason for the errors you are seeing.

The idea of mark_* is to mark an external Task, forcefully.
By just completing your process with exit code (0) (i.e. no error) the Task will be marked as completed anyhow, no need to call...
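A minimal sketch of the distinction, with a hypothetical task ID; mark_completed() is meant for a Task other than the running process, while close() is safe to call on your own Task:

from clearml import Task

# Inside the running script: flush and close the current task cleanly.
Task.current_task().close()

# From another script / controller: forcefully mark a different task as completed.
external = Task.get_task(task_id="abcd1234")  # hypothetical task ID
external.mark_completed()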

one year ago
0 Good Day! I Ran Into A Problem When Running Two Or More Identical Nodes In A Pipeline (multi_instance_support=True), Only One Of Them Uses An Already Created Venv From Cache For This Task. And The Other Node Starts To Re-Create The Same Virtual Envi

Hi @<1598487094601191424:profile|MysteriousCow84>

only one of them uses an already created venv from cache for this task. And the other node starts to re-create the same virtual environment.

Just to be clear, the second one is running, but it does not use the same venv as the other one (that is running in parallel), is that correct?

one year ago
0 When Launching A Task To Trains Agent, I'm Having Trouble Getting The Imports From Other Files Working Correctly. For Instance, If My Task Imports A Function From Another File Within The Same Git Repo [

So to conclude: it has to be executed manually first, then with trains agent?

Yes. That said, as you mentioned, you can always edit the "installed packages" manually once; from that point you are basically cloning the experiment, including the "installed packages", so it should work if the original worked.
Make sense ?

4 years ago
0 I Have A Self-Hosted Clearml-Server And Clearml-Agent Started With

Great!
I'll make sure the agent outputs the proper error 🙂

4 years ago
0 Hi Guys, I Configured A Trains Server And A Trains Agent. I Have Some Code I Want To Run In The Trains Agent, However The Code Is In A Local Branch On My Client (I Can't Push It To Remote Yet Because Of Internal Practices). Is There A Way To Do So? Currentl

SmugOx94 Yes, we just introduced it 🙂 with 0.16.3
Discussion was here (I'll make sure to update the issue that the version is out)
https://github.com/allegroai/trains/issues/222
In your trains.conf add the following line:
sdk.development.store_code_diff_from_remote = true
It will store the diff from the remote HEAD instead of the local one.
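The same key in nested form, in case your trains.conf is organized by section (a sketch; only this one key is relevant here):

sdk {
    development {
        # store the git diff against the remote HEAD rather than the local commit
        store_code_diff_from_remote: true
    }
}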

4 years ago
0 When We Run Some Agents And Then Kill Them, They Remain In The Ui For Quite A Long Time (Even If They Don't Exist Anymore) - It Is Like 5 Min. Is There Some Way To Make The Ui More Responsive? I Mean To Have A Shorter Timeout After Which The Worker Is Invisible?

RoundMosquito25 are you using clearml-agent daemon --stop or are you killing them?

Killing them basically means you lose them in the UI when they time out; the backend does not see them for 10 min, so it assumes they died. When you call clearml-agent daemon --stop they will unregister themselves and disappear immediately.

2 years ago
0 Hi, Is It Possible To Change The Visibility Of The Projects On The Dashboard So That Only Specific Users Can See The Projects?

Hi SoreDragonfly16
Sadly no, the idea is to provide full visibility to all users in the system (basically saying: share everything with your colleagues).
That said, I know the enterprise version has permission/security features; I'm sure it covers this scenario as well.

3 years ago
0 Hi! I'm Currently Considering Switching To Clearml. In My Current Trials I Am Using Up The Api Calls Very Quickly Though. Is There Some Way To Limit That? The Documentation Is A Bit Sparse On What Uses How Many Api Calls. Is It Possible To Batch Them For

restart_period_sec

I'm assuming development.worker.report_period_sec, correct?

The configuration does not seem to have any effect, scalars appear in the web UI in close to real time.

Let me see if we can reproduce this behavior and quickly fix it
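For reference, a sketch of where that key sits in clearml.conf (the value is in seconds; 30 is just an example):

sdk {
    development {
        worker {
            # how often collected scalars/metrics are flushed to the server
            report_period_sec: 30
        }
    }
}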

2 years ago
0 Hello! I Have A Problem With Tutorial Client Code Crashes On Starting Pipelines Remotely Via

Hi FancyWhale93
pipe.start() should actually stop the local pipeline logic execution and fire it on the "services queue".
The idea is that you can launch the pipeline locally, but the actual execution of the entire logic is remote.
You can have the pipeline logic running locally if you call pipe.start_locally, and also run the steps themselves locally (as sub-processes) with pipe.start_locally(run_pipeline_steps_locally=True)
BTW: based on your example, a more intuitive code might be the pi...
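A minimal sketch of the two modes, assuming a pipeline built with PipelineController (project, names, and queue below are placeholders):

from clearml import PipelineController

pipe = PipelineController(name="demo-pipeline", project="examples", version="1.0.0")
pipe.add_step(name="train", base_task_project="examples", base_task_name="train-task")

# Remote execution: the controller logic itself is enqueued (e.g. on the "services" queue)
# and each step runs on whatever execution queue it is configured for.
pipe.start(queue="services")

# Fully local debugging instead: the controller logic runs in this process and, with
# run_pipeline_steps_locally=True, the steps run locally as sub-processes as well.
# pipe.start_locally(run_pipeline_steps_locally=True)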

2 years ago
0 Hi There - I Am Attempting To Use The Hp Optimization Feature, But Keep Getting The Following Error:

Hi CharmingBeetle38
On the base task, do you see those arguments under the Configuration tab?
Also, if they are under Args section, you should add "Args/" prefix to the HP optimization (this is how you differentiate between the sections)
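A minimal sketch of how the "Args/" prefix might appear in an optimizer setup; the base task ID, metric names, and ranges are placeholders, and the class names assume the clearml.automation API:

from clearml.automation import (
    HyperParameterOptimizer, UniformParameterRange, DiscreteParameterRange, RandomSearch,
)

optimizer = HyperParameterOptimizer(
    base_task_id="abcd1234",  # hypothetical base experiment
    hyper_parameters=[
        # "Args/" tells the optimizer these parameters live in the Args section of the base task
        UniformParameterRange("Args/learning_rate", min_value=1e-4, max_value=1e-1),
        DiscreteParameterRange("Args/batch_size", values=[16, 32, 64]),
    ],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=RandomSearch,
    execution_queue="default",
)
optimizer.start()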

4 years ago
0 I Am Moving Code Using The Clearml Python Library To Use Its Api As A Docker Container Image. Is There Any Alternative, Such As Using Access/Secret Keys, Instead Of Copying Clearml.Conf In The Dockerfile?

Fully automatic, just have them defined, and Task.init() and everything else will work out of the box.
Notice the Env will override clearml.conf, so you can have clearml.conf with other default values inside the container, and have the Env override the definition
(not to worry, it is Not a must to have clearml.conf , it's just a nice way to add default values)
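A sketch of the environment-variable route when running the container, assuming the standard ClearML variable names (all values below are placeholders):

docker run \
  -e CLEARML_API_HOST=https://api.example.com \
  -e CLEARML_WEB_HOST=https://app.example.com \
  -e CLEARML_FILES_HOST=https://files.example.com \
  -e CLEARML_API_ACCESS_KEY=<access_key> \
  -e CLEARML_API_SECRET_KEY=<secret_key> \
  my-training-image:latest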

2 years ago
0 Folks, Could You Please Clarify/Help? Do I Understand Correctly That If --Docker Is Enabled, Every New Experiment Will Be Executed In A Dedicated Agent Worker Container? Also I See For

Hi UnevenOstrich23

if --docker is enabled, does that mean every new experiment will be executed in a dedicated agent worker container?

Correct

I think the missing part is how to specify the docker for the experiment?
If this is the case, in the web UI, clone your experiment (which will create a draft copy, that you can edit), then in the Execution tab, scroll down to the "base docker image" and specify the docker image to use.
Notice that you can also add flags after the docker im...
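The same clone-and-edit flow can also be scripted; a sketch with placeholder IDs and image name, assuming set_base_docker() on your clearml version accepts the image string (extra docker flags appended to it):

from clearml import Task

original = Task.get_task(task_id="abcd1234")  # hypothetical experiment to clone
draft = Task.clone(source_task=original, name="clone with custom docker")
draft.set_base_docker("nvcr.io/nvidia/pytorch:23.10-py3 --ipc=host")  # base image + docker flags
Task.enqueue(draft, queue_name="default")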

3 years ago
0 Adding

Makes sense to add it to docker run by default if GPUs are mentioned in agent.

I think this is an Arch Linux thing; --privileged is not needed on the Ubuntu flavor. That said, you can always have it if you add it here:
https://github.com/allegroai/clearml-agent/blob/178af0dee84e22becb9eec8f81f343b9f2022630/docs/clearml.conf#L149

clearml-agent daemon --gpus 0 --queue default --docker
But docker still sees all GPUs.

Yes, --gpus should be enough; are you sure regarding the --privileged flag?

2 years ago
0 What Sort Of Integration Is Possible With Clearml And Sagemaker? On The Page

@<1532532498972545024:profile|LittleReindeer37> nice!!! 😍
Do you want to PR? it will be relatively easy to merge and test, and I think that they might even push it to the next version (or worst case quick RC)

one year ago
0 Qq: I'm Trying To Run The

I think so (you can also comment out the Task.init() just to verify this is not a clearml issue)

3 years ago
0 Hello Guys, I Have A Strange Situation With A Pipeline Controller I'm Testing Atm. If I Run The Controller Directly In My Pycharm On Notebook It Connects Correctly To The K8S Cluster With Trains Installed. After This, If I Go Directly In The Ui, I Reset T

Hi JuicyFox94
I think you are correct, this bug will explain the entire thing.
Basically what happens is that remote_execute stops the local run before the configuration is set on the Task. Then, running remotely, the code pulls the configuration, sees that it is empty, and does nothing.
Let me see if I can reproduce it...

4 years ago
0 Hello! I’m Currently Using Clearml-Server As An Artifact Manager And Clearml-Serving For Model Inference, With Each Running On Separate Hosts Using Docker Compose. I’ve Successfully Deployed A Real-Time Inference Model In Clearml-Serving, Configured Withi

Let's start small. Do you have grafana enabled in your docker compose, and can you log in to your grafana web UI?
Notice that grafana needs to access the prometheus container directly, so the easiest way is to have everything in the same docker compose.

9 months ago
0 I Got An Interesting Question From My Devs. If They Wish To Do Distributed Training, Is Clearml K8S Glue Suitable For It? Local Multiple Gpu: Just A Matter Of Assigning More Than One Gpu In The Yaml File Sent To The K8S Glue. Question Is How To Make This

HI SubstantialElk6
Yes, you are correct: the glue only needs to change the YAML and it will work.
When you say "Dev end", what do you mean? I was thinking of adding an additional glue for multi-node and just adding queues, for example adding a 4nodes queue and attaching a glue to it, wdyt?
Regarding Horovod: Horovod spins up its own nodes, so integration with k8s is not trivial (regardless of ClearML). That said, I know that they do have support for Horovod in the Enterprise edition, but I'm not sure ...

3 years ago
0 Hi, I Was Trying To Install Clearml Agent Using The Helm Chart But My K8S Version Is Not Compatible. I Have An Older K8S Version. Is There Anywhere I Could Get A Chart That Can Work With A Lower Version Of K8S? Or Any Other Methods?

Hi @<1523701304709353472:profile|OddShrimp85>

Is there anywhere I could get a chart that can work with a lower version of k8s? Or any other methods?

I think the solution is to install it manually from the helm chart (basically take it out and build a Job YAML), wdyt?

one year ago
0 Hello, Where Can I Find The Dockerfile For These Images?

Hi @<1535793988726951936:profile|YummyElephant76>
None
None
None

one year ago
0 Hi All, Are There Any Alternatives To Storing User Credentials In

Do you have a roadmap which includes resolving things like this

Security, SSO, etc. are usually out of scope for the open-source platform, as they really make the entire thing a lot harder to install and manage. That said, I know that the Enterprise solution does have SSO and LDAP support and probably way more security features. I hope it helps 🙂

4 years ago
0 Hi! Is There A Way To Export The Credentials Of The Aws Account Only During The Creation Of The Docker? I Don’t Want Every User In My Team To Know The Credentials To Access S3 Buckets. I Just Want Them To Be Able To Write In The Bucket Without The Credent

Hi AbruptCow41

I just want them to be able to write to them without the credentials appearing in their clearml.conf or in their environment variables.

So where would they put them? (or are they pre-baked into the docker?)

2 years ago