AgitatedDove14
Moderator
48 Questions, 8049 Answers
Active since 10 January 2023
Last activity 5 months ago

Reputation: 0
Badges: 1 (25 × Eureka!)
0 Votes · 0 Answers · 934 Views
4 years ago

0 Votes · 7 Answers · 406 Views
Thank you all for taking the time to answer our survey (If you haven't already, we urge you to do so ). Your feedback has a major impact on what we build, do...
4 years ago

0 Votes · 0 Answers · 1K Views
YEY!!!! Download as CSV 🤯
2 years ago

0 Votes · 0 Answers · 980 Views
3 years ago

0 Votes · 2 Answers · 379 Views
OMG Look who just joined the PyTorch EcoSystem None Yes! it is TRAINS 🚆 🎉 🎈
4 years ago

0 Votes · 3 Answers · 962 Views
This will close it Task.current_task().close()I think we should rename completed() because it just marks the Task as completed on the backend but does not ac...
3 years ago

0 Votes · 0 Answers · 1K Views
YummyWhale40 awesome thanks!
4 years ago

0 Votes · 0 Answers · 1K Views
Is it a one time thing? or recurring?
4 years ago

0 Votes · 0 Answers · 1K Views
🎊 🍾 Happy new year ! 🎆 🎇 We wanted to thank you all for the great feedback, contribution and general support you guys give us. It is truly fulfilling to ...
3 years ago

0 Votes · 0 Answers · 1K Views
4 years ago

0 Votes · 6 Answers · 972 Views
Hi ! ClearML Server + SDK v1.9.0 is out! 🎉 🚀 🎊 Happy Holidays and Happy New Year! ❇️ 🎇 🎄
one year ago

0 Votes · 0 Answers · 1K Views
We are at AAAI NY, come look us up :)
4 years ago

0 Votes · 9 Answers · 957 Views
Hi https://github.com/allegroai/trains/releases/tag/0.15.1 / https://github.com/allegroai/trains-server/releases/tag/0.15.1 / https://github.com/allegroai/tr...
4 years ago

0 Votes · 0 Answers · 1K Views
New RC for trains-agent is out pip install trains-agent==0.13.2rc1
4 years ago

0 Votes · 0 Answers · 1K Views
4 years ago

0 Votes · 0 Answers · 1K Views
Hello Everyone!
4 years ago

0 Votes · 0 Answers · 961 Views
4 years ago

0 Votes · 0 Answers · 873 Views
4 years ago
0 Can You Please Tell Me How To Make The Agent Use The Docker Env By Default? Instead Of Creating Venv It Already Has All The Necessary Environment And Libraries Installed

Can you please tell me how to return the folder where the script should run?

add it to the python path

PYTHONPATH="/src/project"
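
If you prefer to handle it inside the script itself rather than via the environment variable, a minimal sketch (the "src/project" path is just a placeholder for your own layout):

import os
import sys

# hypothetical layout: project sources live under ./src/project relative to the working directory
sys.path.insert(0, os.path.join(os.getcwd(), "src", "project"))
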
one year ago
0 Continuing On

Docker cmd is basically the docker image name, but you can add parameters as well.
For example "nvidia/cuda" or "nvidia/cuda -v /mnt/data:/mnt/data".
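
The same thing can also be set from code; a minimal sketch, assuming the single-string form of Task.set_base_docker (image name followed by extra docker arguments; project/task names are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="docker base image")
# assumption: single-string form, i.e. image name followed by extra docker arguments
task.set_base_docker("nvidia/cuda -v /mnt/data:/mnt/data")
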

3 years ago
0 Hi There! Can Anybody Help Me With Specifying The 'Platform' For A Model In Clearml-Serving. I Am Using The K8S Clearml-Serving Setup (Version 1.3.1). I Already Tried A Bunch Of Variants Like

Ohh! I see now
@<1526371965655322624:profile|NuttyCamel41> the backend: "pytorch" setting is not really supported because it does not use the optimized Triton engine (which is the reason to run the Triton server in the first place).
In order to use PyTorch you need to convert the model to TorchScript and then deploy it; see the example here:
[None](https://github.com/allegroai/clearml-serving/blob/7ba356efc97a6ae2159283d198d981b3c1ab85e6/examples/pytor...

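For reference, converting a PyTorch model to TorchScript before deploying looks roughly like this (a sketch using plain PyTorch; the stand-in model and input shape are placeholders for your own):

import torch
import torch.nn as nn

# stand-in model just for the sketch; use your own trained model here
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
).eval()

example_input = torch.randn(1, 3, 224, 224)

# trace the model (torch.jit.script also works, e.g. for models with control flow)
scripted = torch.jit.trace(model, example_input)
scripted.save("model.pt")  # the TorchScript file is what gets deployed
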
4 months ago
0 Hi, I'M Running My Experiments On Remote Ec2 Machine. It Works Good But I Noticed That It Doesn'T Logs My Repo/Branch/Working_Dir, Probably Since My Git Is Local And Not On The Machine. I Used Set_Script() To Log This Details. Is There Any Other More Auto

Hi HappyDove3
task.set_script is a great way to add the info (assuming the .git is missing)
Are you running it using PyCharm? (If so use the clearml pycharm plugin, it basically passes the info from your local git to the remote machine via OS environment)

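For completeness, a minimal sketch of filling in the missing repository info manually (assuming set_script accepts these keyword arguments; repository URL, branch, and paths below are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="remote ec2 run")
# assumption: these keyword arguments are accepted by set_script; all values are placeholders
task.set_script(
    repository="git@github.com:org/repo.git",
    branch="main",
    working_dir=".",
    entry_point="train.py",
)
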
2 years ago
0 Hi, I Am Trying To Use Agent With A Sample, Very Simple Task. But It Stucks And Task Does Not Finish. In Ui In Console I See What I Pasted On Image. Do You Know What I Might Be Doing Wrong? Agent Is Run In Virtual Env Mode

RoundMosquito25 do notice the agent is pulling the code from the remote repo, so you do need to push the local commits, but ClearML will take care of the uncommitted changes for you. Make sense?

2 years ago
0 Hello, I Am Using Clearml In Docker Mode. I Have A Simple Script That Runs Locally, Runs On The Target Machine Running The Same Tensorflow Container, But Doesn'T Run When I Deploy It Using Clearml. Here'S The Log Of The Error:

It runs directly but leads to the above error with clearml

Both manually (i.e. calling Task.init and running it without an agent) and with an agent? Same exact behavior?

one year ago
0 Hello I'M Running A Local Agent . While Its Running The Task I Get This Error. Any Suggestion? Uccessfully Installed Numpy-1.24.4 Found Pytorch Version Torch==2.0.1 Matching Cuda Version 0 Found Pytorch Version Torchaudio==2.0.2 Matching Cuda Version 0 Er

You should manually remove the cudatoolkit from the installed packages section in the UI, then try to send it to the agent and see if it works. The question is how it ended up there in the first place.

one year ago
0 Hi All, I Have A Question Regarding Multi-Node Training Using The Clearml-Agent. What Is The Recommended Setup In This Case? Say I Have 3 Nodes With 3 Agents Running On Them. How Do I Make Sure They All Run The Same Job?

The problem is not really for the agents to wait (that is easily solved by an additional high-priority queue); the problem is whether you will have a "free" agent... you see my point?

3 years ago
0 Whats The Main Difference Between Creating A Task And Using Init?

SweetGiraffe8 Task.init will auto-log everything (git / python packages / console, etc.) for your existing process.
Task.create purely creates a new Task in the system and lets you manually fill in all the details on that Task.
Make sense?

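A minimal sketch contrasting the two (project and task names are placeholders):

from clearml import Task

# Task.init attaches to the current process and auto-logs git, packages, console output, etc.
task = Task.init(project_name="examples", task_name="my experiment")

# Task.create only registers a new (empty) Task in the system; nothing is executed or auto-logged
draft = Task.create(project_name="examples", task_name="manually defined task")
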
3 years ago
0 Hi! I Noticed A Bug Related To Reusing The Same Component In A Pipeline. I Have Prepared A Mock Example So That You Can Reproduce It:

Building the pipeline in runtime from external configuration is very cool!!
I think nested components is exactly the correct solution, and it is a great use case.

2 years ago
0 Hi, Another Question If You May. Is It Possible To Edit A Logged Task? For Instance - Remove All The Metrics From Some Step Onward?

I see now.
Let's assume you know which snapshot that was:

prev_task = Task.get_task(task_id='the_first_training_task_id')
# get the second-from-last checkpoint
checkpoint_url = prev_task.models['output'][-2].url
prev_scalars = prev_task.get_reported_scalars()

new_task = Task.init('example', 'new task')
logger = new_task.get_logger()
# loop over prev_scalars and re-report them with logger.report_scalar(...)

new_task.flush(wait_for_uploads=True)
new_task.set_initial_iteration(22000)
# start the training

3 years ago
0 Is There Any Way To Clear The Installed Packages Of A Task Programmatically? (I.E. Using The Python Sdk And Not The Ui)

task = Task.init(...)
if task.running_locally():
    # wait for the repo detection and requirements update
    task._wait_for_repo_detection()
    # reset requirements
    task._update_requirements(None)
🙂

3 years ago
0 Oh Also, May I Inquire About The Clearml Professional And Enterprise Pricing?

Hi PunyGoose16 ,
I think the website is probably the easiest 🙂
https://clear.ml/contact-us/
I think they get back to you quite quickly

3 years ago
0 I Have An Experiment That Generates Many Plots, But Not All Of Them Show Up In The “Plots” Section Of The Experiment Results. I Thought I Read Somewhere About A Limit On The Number Of Plots That Would Be Shown In That Section, But I Couldn’T Find It In Th

Hi NastyFox63
This seems like most of the reports are converted to PNGs (which is what the automagic does if it fails to convert the matplotlib figure into an interactive plot).

no more than 114 plots are shown in the plots tab.

Are you saying we have a 114-plot limit?
Is this true for "full screen" mode as well (i.e. not in the experiments table, but when you switch to the full detailed view)?

3 years ago
0 Hello Everyone, I Have A Question Regarding Datasets. I Writing A Python Script Where It Takes As Inputs A Project Name And Returns All Datasets That Exist Within That Project. I Am Using

I have to assume that I do not know the dataset ID

Sorry I mean:

datasets = Dataset.list_datasets(dataset_project="some_project") 
for d in datasets:
  d["version"] = Dataset.get(dataset_id=d["id"]).version

wdyt?

one year ago
0 Happy Friday Everyone ! We Have A New Repo Release We Would Love To Get Your Feedback On

@<1545216070686609408:profile|EnthusiasticCow4>

Is there currently a way to bind the same GPU to multiple queues? I believe the agent complains last time I tried (which was a bit ago)

Run multiple agents on the same GPU:

CLEARML_WORKER_NAME=host-gpu0a clearml-agent daemon --gpus 0
CLEARML_WORKER_NAME=host-gpu0b clearml-agent daemon --gpus 0
7 months ago
0 Regarding The New Version 1.1.2, I Have Noticed Type Hints Are Now Included In The Script Generated By

GiganticTurtle0 this one worked for me 🙂
from clearml import Task
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["msg"], execution_queue="myqueue1")
def step_1(msg: str):
    msg += "\nI've survived step 1!"
    return msg

@PipelineDecorator.component(return_values=["msg"], execution_queue="myqueue2")
def step_2(msg: str):
    msg += "\nI've also survived step 2!"
    return msg

@PipelineDecorator.component(return_values=["m...

2 years ago
0 I Saw Some Talk Of Clearml + Kedro On Reddit. Is That A Good Approach?

one can containerise the whole pipeline and run it pretty much anywhere.

Does that mean the entire pipeline will be running on the instance spinning up the container?
From here, this is what I understand:
https://kedro.readthedocs.io/en/stable/10_deployment/06_kubeflow.html

My thinking was I can use one command and run all steps locally while still registering all "nodes/functions/inputs/outputs etc" with clearml such that I could also then later go into the interface and clone an...

3 years ago
0 Hi, Is There A Simple Way To Make

Sorry if it's something trivial. I recently started working with ClearML.

No worries, this actually has more to do with how you work with Dask.
The Task ID is the unique ID of any Task in the system (task.id will return the UID as a str).
Can you post a toy Dask snippet here? I'll explain how to make it compatible with clearml 🙂

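A minimal sketch of the idea (assuming the Dask workers can reach the ClearML server; the function and values are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="dask integration")
task_id = task.id  # a plain string, safe to pass to Dask workers

def log_from_worker(tid: str, value: float, iteration: int) -> None:
    # re-attach to the same Task inside a worker process and report a metric
    worker_task = Task.get_task(task_id=tid)
    worker_task.get_logger().report_scalar("dask", "value", value, iteration)
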
3 years ago
0 Dear Developers, I Encountered A Question That The Local Module Cannot Be Found When Pulling Task From Queue. I Opened A Issue Here

So is there any tutorial on this topic

Dude, we just invented it 🙂
Any chance you feel like writing something in a github issue, so other users know how to do this ?

Guess I’ll need to implement job schedule myself

You have a scheduler, it will pull jobs from the queue by order, then run them one after the other (one at a time)

2 years ago
0 Is It Possible To Select A Bunch Of Experiment And Archive Them All At Once ? I Tried With The Checkbox But There Is No Option To Archive Them All. I Do It One By One By Hand At The Moment.

Hmm, now that you mention it, it's not that obvious when I'm on a Mac either; maybe we should have the archive button at the bottom as well..
SteadyFox10 what do you think?

4 years ago
0 Hi There,

correct (i.e. not frontend)

one year ago
0 I Have Set

from task pick-up to "git clone" is now ~30s, much better.

This is "spent" calling apt update && update install && pip install clearml-agent
if you have those preinstalled it should be quick

though as far as I understand, the recommendation is still to not run workers-in-docker like this:

if you do not want it to install anything and just use the existing venv (leaving the venv as is), and if something is missing then so be it, then yes, sure, that's the way to go

4 months ago
0 Another Question: How Can I Make Clearml-Agent Use Pre-Installed Version From The Nvidia/Pytorch (

LOL, if this is important we probably could add some support (meaning you will be able to specify it in the "installed packages" section, per Task).
If you find an actual scenario where it is needed, I'll make sure we support it 🙂

2 years ago
0 Hello, Community. I Hope You Are All Doing Well. I'M Seeking Information Regarding A Specific Problem, Specially In The Field Of Computer Vision. Typically, An App In The Field Of Computer Vision Will Have Multiple Models, Each With Its Own Preprocessing,

You mean to add these two to the model when deploying?

    │   ├── model_NVIDIA_GeForce_RTX_3080.plan
    │   └── model_Tesla_T4.plan

Notice that preprocess.py is not running on the GPU instance; it is running on a CPU instance (technically not the same machine).

7 months ago
0 Do I Understand Correctly, That Running

Task.get_task(..).system_tags

3 years ago