AgitatedDove14
Moderator
49 Questions, 8124 Answers
Active since 10 January 2023
Last activity: one year ago
Reputation: 0
Badges: 25 × Eureka!
0 Is It Possible To Perform Debugging Operations With Pycharm Integration Using Remote Session?

is it possible to perform debugging operations with pycharm integration using remote session?

Sure, use clearml-session. It will open an SSH connection to the remote machine, and then you can use PyCharm.

3 years ago
0 Hi, I'M Configuring An Agent. After Pasting The Credentials, I Get:

GiddyTurkey39 can you ping the server address? (Just making sure: this should be the IP of the server, not 'localhost'.)

4 years ago
0 Hey There, Does Trains Support

This means that you guys internally catch the argparser object somehow, right?

Correct 🙂 this is how you get the type checking / casting abilities, and a few other perks.

5 years ago
0 Hi. I Have A Question About Pipelines And Their Generated Dependency Graphs. I Took The Code Of The Clearml Pipeline From Decorator Example:

I imagine that these phantom dependencies will prevent parallelization. Is there a workaround?

Yes, they might... The workaround might be a bit ugly: copy-paste the functions and change their names.
BTW: I'll check when the next RC is scheduled; maybe it will already contain a fix 🤞
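A rough sketch of that workaround, assuming the pipeline uses PipelineDecorator as in the decorator example; the step bodies and names below are made up for illustration:
```python
from clearml import PipelineDecorator

# Hypothetical sketch: two identical step bodies registered under different
# names, so the controller does not infer a phantom dependency between them.
@PipelineDecorator.component(return_values=["result"], cache=False)
def step_variant_a(data):
    return sum(data)

@PipelineDecorator.component(return_values=["result"], cache=False)
def step_variant_b(data):  # copy of step_variant_a under a new name
    return sum(data)

@PipelineDecorator.pipeline(name="phantom-dep-workaround", project="examples", version="0.0.1")
def pipeline_logic():
    a = step_variant_a([1, 2, 3])
    b = step_variant_b([4, 5, 6])  # independent of step_variant_a, so it can run in parallel
    return a, b

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # for quick local testing; drop to run on agents
    pipeline_logic()
```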

3 years ago
0 Hi, Is There An Equivalent For Set_Name To Change The Task'S Project Name? I'M Stuck In A Loop, I Have To Run Task.Init Right At The Start Of The File Because I Give It

SmarmySeaurchin8 regarding the original question:
task.set_project(project_id)
Task.get_projects() to get all the project names/ids
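A minimal sketch putting the two calls together; the project name used in the lookup is a placeholder:
```python
from clearml import Task

task = Task.init(project_name="temp", task_name="my experiment")

# Look up the id of the project we actually want ("My Real Project" is a placeholder)
project_id = next(p.id for p in Task.get_projects() if p.name == "My Real Project")

# Move the current task into that project
task.set_project(project_id)
```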

4 years ago
0 Hello! Getting Credential Errors When Attempting To Pip Install Transformers From Git Repo, On A Gpu Queue.

Let's try:
echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/docker-clean ; chown -R root /root/.cache/pip ; export DEBIAN_FRONTEND=noninteractive ; export CLEARML_APT_INSTALL="$CLEARML_APT_INSTALL libsm6 libxext6 libxrender-dev libglib2.0-0" ; [ ! -z $(which git) ] || export CLEARML_APT_INSTALL="$CLEARML_APT_INSTALL git" ; declare LOCAL_PYTHON ; for i in {10..5}; do which python3.$i && python3.$i -m pip --version && export LOCAL_PYTHON=$(which python3.$i) && b...

4 years ago
0 Is There An Option To Separate The Storage From The Server? E.G. Deploying My Trains Server On Some Light Machine, And Confguring The Storage To Be Aws S3 Or Something Similar

WackyRabbit7

Cool - so that means the fileserver which comes with the host will stay empty? Or is there anything else being stored there?

Debug Images and artifacts will be automatically stored to the file server.
If you want your models to be automagically uploaded, add the following:
task = Task.init('example', 'experiment', output_uri=' ')
(You can obviously point it to any other http/S3/GS/Azure storage.)
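A minimal sketch of the same call pointed at S3 instead; the bucket path is a placeholder:
```python
from clearml import Task

# output_uri controls where models (and other auto-uploaded outputs) are stored;
# 's3://my-bucket/models' is a placeholder destination.
task = Task.init('example', 'experiment', output_uri='s3://my-bucket/models')
```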

5 years ago
0 Hi, I'D Like To Know If It'S Possible To Change The Artifact File Path That Is Shown In The Ui. I'D Need This Because I Have Clearml Agents That Are Running In The Same Vpc Of The Server, So They Use The Internal Dns For The Api Server And Files Server An

Hi LovelyHamster1
That is a good point. I think the safest / most robust way is to configure both to use the same DNS name(s), so both (internal/external) are accessible.
Some background: the URL on the artifact is basically standalone; once registered on the Task, the UI will not replace it, but uses it as is (the UI has no "understanding" of which server it is on, it will just fetch the file).
Are you also using a different port on the load balancer?
(because the easiest fix is on your external ...

4 years ago
0 Hi All, I Am Trying To Execute Somewhat Custom Hpo Scheme With Clearml. I Would Want That A Single Running Python Script Will Be Able To Sample The Optimizer, Init A Task And Report The Result Multiple Times. I Didn'T Find Anything Similar In The Docs Or

that machine will be able to pull and report multiple trials without restarting

What do you mean by "pull and report multiple trials"? Spawn multiple processes with different parameters?
If this is the case: the internals of the optimizer could be synced to the Task so you can access them, but this is basically the internal representation, which is optimizer-dependent; which one did you have in mind?
Another option is to pull Tasks from a dedicated queue and use the LocalClearMLJob ...
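A rough sketch of running several trials as separate Tasks from a single script; the parameter sampling below is just a stand-in, not the optimizer internals mentioned above:
```python
import random
from clearml import Task

# Placeholder sampler standing in for the real optimizer
def sample_params(trial):
    return {"lr": random.choice([0.1, 0.01, 0.001]), "trial": trial}

for trial in range(5):
    params = sample_params(trial)
    task = Task.init(project_name="hpo-example", task_name=f"trial_{trial}",
                     reuse_last_task_id=False)
    task.connect(params)

    # Placeholder "training" step: report the objective back to the server
    objective = params["lr"] * 42
    task.get_logger().report_scalar(title="objective", series="value",
                                    value=objective, iteration=0)
    task.close()  # close so the next Task.init creates a fresh task
```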

4 years ago
2 years ago
0 Hey Guys, In Your Opinion, What The Best Way To Upload An Artifact To An Existing Experiment From A Storage-Server (E.G., S3)? In The Storage Module Documentation, I Saw A Function That Uploads An Object (E.G., Dataframe) To The Storage-Server, And It Is

Hi SpotlessFish46 ,
Is the artifact already in S3 ?
Is the S3 configured as the default files_server in the trains.conf ?
You can always use the StorageManager to upload to wherever you want and register the URL on the artifacts.
You can also programmatically change the artifact destination server to S3, then upload the artifact as usual.
What would be the best match for you?
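A sketch of both options; the bucket, paths, and names are placeholders:
```python
from clearml import Task, StorageManager

# Option 1 (sketch): upload with StorageManager; the returned URL can then be
# registered on the task's artifacts.
url = StorageManager.upload_file(
    local_file="results.csv",
    remote_url="s3://my-bucket/experiments/results.csv",
)

# Option 2 (sketch): point the task's default output destination at S3 and
# upload the artifact as usual.
task = Task.init(project_name="example", task_name="upload artifact",
                 output_uri="s3://my-bucket/experiments")
task.upload_artifact(name="results", artifact_object="results.csv")
```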

4 years ago
0 When Using Docker Mode (And Specifically K8S Glue), What Are The Options For Caching? One Option Is Definitely Having A Base Image That Has The Things Needed. Anything Else? Thanks!

Gitlab has support for S3 based cache btw.

This might still be considered "slow" compared to a local disk / cluster mount.

Would adding support for some sort of post task script help? Is something already there?

Interesting, can you expand on the use case? (currently there is only pre-task script, for setup)

4 years ago
0 Currently Clearml-Agent In Services-Mode Supports Cpu Only Configuration.

The reasoning is that simultaneous processes will most likely fail on the GPU due to memory limits.

4 years ago
0 Hello Clearml Community, Does Anyone Have An Idea How I Could Integrate/Manager Carla (

Do you mean the Task already exists, or do you want to create a Task from the code?

3 years ago
0 I Don'T Quite Understand The Way

MagnificentSeaurchin79 you can delay it with:
task.set_resource_monitor_iteration_timeout(seconds_from_start=1800)
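In context, a minimal sketch:
```python
from clearml import Task

task = Task.init(project_name="example", task_name="experiment")
# The call from the answer above: 1800 seconds = 30 minutes from task start
task.set_resource_monitor_iteration_timeout(seconds_from_start=1800)
```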

4 years ago
0 Hi All, I Am Getting A Bunch Of This Kind Of Log Messages "Clearml.Storage - Info - Starting Upload: /Tmp/.Clearml.Upload_Model_6Ou50Pb1.Tmp =>" I Am Pretty Sure They Happen As A Part Of The Model Initialization About 10 Of Those, My Guess Is That Every T

RipeGoose2 you can put it before/after the Task.init; the idea is for you to set it before any of the real training starts.
As for it not affecting anything:
Try adding the callback and just have it return None (which means skipping the model log process). Let me know if this one works.

4 years ago
0 When Running An Experiment From A Notebook, It Knows It’S A Notebook And Automatically Adds The Notebook As An Artifact Right? And The Uncommited Changes Becomes The Nottebook Converted To A Script? In One Case I Am Seeing Actual Git Diff Coming In Instea

it knows it’s a notebook and automatically adds the notebook as an artifact right?

correct

and the uncommitted changes become the notebook converted to a script?

correct

In one case I am seeing actual git diff coming in instead of the notebook.

It might be that there are both a git repository and a notebook, and the git diff shows before the notebook is detected and shown instead? (There is a watchdog refreshing the notebook every 30 sec or so.)

4 years ago
0 Hi, I Am Considering Making Automated Backups Of My Clearml-Server Using Amazon Ebs Snapshots. Should I Be Concerned With The Same Problem Described Here >

I can probably have a python script that checks if there are any tasks running/pending, and if not, run docker-compose down to stop the clearml-server, then use boto3 to trigger the creation of an EBS snapshot, wait until it is finished, then restart the clearml-server. Wdyt?

I'm pretty sure there is a nice way, let me check something
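A rough sketch of the flow described in the question, assuming Task.get_tasks, boto3, and docker-compose; the volume id and compose path are placeholders:
```python
import subprocess
import boto3
from clearml import Task

COMPOSE = ["docker-compose", "-f", "/opt/clearml/docker-compose.yml"]  # placeholder path
VOLUME_ID = "vol-0123456789abcdef0"  # placeholder EBS volume id

# Only snapshot when nothing is running or queued (queried while the server is still up)
active = Task.get_tasks(task_filter={"status": ["in_progress", "queued"]})
if not active:
    subprocess.run(COMPOSE + ["down"], check=True)

    ec2 = boto3.client("ec2")
    snap = ec2.create_snapshot(VolumeId=VOLUME_ID, Description="clearml-server backup")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    subprocess.run(COMPOSE + ["up", "-d"], check=True)
```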

4 years ago
0 Is It Possible To Give The Agent Access To Install Private Pip Packages (Needs To Be Installed From The Repo)?

P.S. you should remove this line 🙂
extra_index_url: ["git@github.com:salimmj/xxxx"]

4 years ago
0 Hi, Anyone Seen This Issue?

On the machine running the docker-compose (i.e. the clearml-server)

3 years ago
0 Anyone Doing Sagemaker With Clearml - Something Like The K8S Glue But The Tasks Are Pulled Into Sagemaker Training Jobs

The AWS autoscaler will work with IAM rules as long as you have them configured on the machine itself. For SageMaker job scheduling (I'm assuming this is what you are referring to, and not the notebook) you need to select the instance as well (basically the same as EC2). What do you mean by using the k8s glue, like inheriting and implementing the same mechanism but for SageMaker instead of kubectl?

4 years ago