AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 6 months ago

Reputation

0

Badges 1

25 × Eureka!
0 How can I clone a task and execute_remotely the cloned task with exit_process=False? It currently kills the notebook kernel. If I say exit_process=False, it says clone cannot be False. Why the restriction? What to do in a notebook to run a task remotely?

I am providing a helper to run a task in queue after running it locally in the notebook

Is this part of a pipeline process or just part of the workflow?
(reason for asking is that if this is a pipeline thing we might be able to support it in v2)

3 years ago
0 I have set

"regular" worker will run one job at a time, services worker will spin multiple tasks at the same time But their setup (i.e. before running the actual task) is one at a time..

5 months ago
0 Is it necessary to serve a Keras model using the Triton engine? I'm trying to serve an endpoint and trying to debug, but the error given is not helping much. Is there a flag I can pass to see more logs?

Hi StoutGorilla30

Is it necessary to serve a Keras model using the Triton engine?

It is not, but it is the most efficient way to serve Keras models, which is why clearml-serving uses Nvidia Triton by default (we are talking 10x factors).
I would start with the Keras example, see that it works, and then work your way up to your example (notice you always need to provide the in/out layers of the model).
https://github.com/allegroai/clearml-s...

one year ago
0 With

think perhaps it came across as way more passive aggressive than I was intending.

Dude, you are awesome for saying that! no worries 🙂 we try to assume people have the best intention at heart (the other option is quite depressing 😉 )

I've been working on an Azure load balancer example, ...

This sounds exciting, let me know if we can help in any way

3 years ago
0 Congrats on the clearml-serving 0.9.0 release! I'll try it for sure!

Even before we had a chance to properly notify everyone 🙂
Thank you! All the details will follow in a dedicated post; for the time being, I can say that pushing a model with pre/post-processing Python code and a fully scalable inference solution has never been easier:
https://github.com/allegroai/clearml-serving/tree/main/examples/sklearn

2 years ago
0 Apart from having packages in requirements.txt, does ClearML expect them to be actually installed to add them as installed packages for a task?

It analyses the script code itself, going over all imports and adding only the directly imported packages.
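For example, in a minimal sketch like the following (project/task names are made up), only numpy and clearml would be listed for the task, no matter what requirements.txt contains:

import numpy as np  # directly imported -> recorded with its installed version
from clearml import Task

task = Task.init(project_name="debug", task_name="requirements-analysis")
print(np.zeros(3))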

3 years ago
0 Any way to make a job fail if the required Python version (3.7 vs 3.8, for example) is not available in the agent?

Hmm, interesting, why would you want that? Is this because some of the packages will fail?

3 years ago
0 I'm using CatBoost for training, but sadly it does not have a native integration with ClearML (XGBoost and LightGBM do have integrations). But CatBoost writes down training logs in TensorBoard format (into a

Hmm, I think everything is generated inside the C++ library code, and Python is just an external interface. That means there is no way to collect the metrics as they are created (i.e. inside the C++ code), which means the only way to collect them is to actively analyze/read the tfrecord created by CatBoost 😞
Is there Python code that does that (reads the tfrecords it creates)?
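As a rough sketch of that direction (the paths, project/task names and CatBoost output layout below are assumptions on my part), one could walk the event files CatBoost writes into its train_dir and re-report the scalars to ClearML:

import tensorflow as tf
from pathlib import Path
from clearml import Task

task = Task.init(project_name="debug", task_name="catboost-tensorboard-import")
logger = task.get_logger()

train_dir = Path("catboost_info")  # CatBoost's default train_dir (assumption)
for event_file in train_dir.rglob("events.out.tfevents.*"):
    # each event file is a TFRecord stream of TensorBoard Event protos
    for event in tf.compat.v1.train.summary_iterator(str(event_file)):
        for value in event.summary.value:
            if value.HasField("simple_value"):
                logger.report_scalar(
                    title=event_file.parent.name,  # e.g. the "learn" / "test" subfolder
                    series=value.tag,
                    value=value.simple_value,
                    iteration=event.step,
                )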

3 years ago
0 Hello everyone. After restarting the self-hosted ClearML server, the data in the Plots, Console and Scalars tabs is gone for every one of my previous experiments. But in folders

Hi AdorableSeaurchin58
Notice that the scalars and console logs are stored in the Elasticsearch DB; this is usually under
/opt/clearml/data/elastic_7

one year ago
0 Hey ClearML team, we created an account, set up our data pipeline, and now we can't get back in. Nothing is in the project. Can someone from support reach out to help?

For visibility: after close inspection of the API calls, it turns out there was no work against the SaaS server, hence no data.

one year ago
0 Hey, since Hydra does not work with

I see. TrickyFox41, try the following:

--args overrides="param=value"

Notice this will change the Args/overrides argument that will be parsed by Hydra to override its params.
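For context, a minimal Hydra entry point that ClearML picks up looks roughly like this (config path/name and project/task names here are assumptions on my part); as far as I understand, the value passed via --args overrides=... lands in the task's Args/overrides and is reapplied as Hydra overrides when the task runs:

import hydra
from omegaconf import DictConfig, OmegaConf
from clearml import Task

@hydra.main(config_path=".", config_name="config")
def my_app(cfg: DictConfig) -> None:
    task = Task.init(project_name="examples", task_name="hydra-overrides")
    print(OmegaConf.to_yaml(cfg))  # the effective config after overrides

if __name__ == "__main__":
    my_app()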

one year ago
0 When I run an experiment (self-hosted), I only see scalars for GPU and system performance. How do I see additional scalars? I have

Okay, here is standalone code that should be close enough (if I missed anything let me know):

import tempfile
from datetime import datetime
from pathlib import Path

import tensorflow as tf
import tensorflow_datasets as tfds
from clearml import Task

task = Task.init(project_name="debug", task_name="test")
(ds_train, ds_test), ds_info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

def normalize_img(image, labe...

one year ago
0 When I run an experiment (self-hosted), I only see scalars for GPU and system performance. How do I see additional scalars? I have

callbacks.append(
    tensorflow.keras.callbacks.TensorBoard(
        log_dir=str(log_dir),
        update_freq=tensorboard_config.get("update_freq", "epoch"),
    )
)

Might be! What's the actual value you are passing there?
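For reference, a compact sketch (all names and paths here are hypothetical, not from the thread) of the wiring that makes Keras scalars show up in ClearML, i.e. Task.init plus a TensorBoard callback passed to model.fit:

import tensorflow as tf
from clearml import Task

task = Task.init(project_name="debug", task_name="keras-scalars")

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0

# The TensorBoard callback is what gets auto-captured, so its scalars appear in the task
tb_callback = tf.keras.callbacks.TensorBoard(log_dir="./tb_logs", update_freq="epoch")
model.fit(x_train, y_train, epochs=1, callbacks=[tb_callback])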

one year ago
0 Hi, is it possible to migrate a dataset from a self-hosted ClearML solution to the ClearML hosted solution?

Hi ShortElephant92
You could get a local copy from the local server, then switch credentials to the hosted server and upload again, would that work?
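Something along these lines (dataset/project names are placeholders, and the two steps need to run separately, each with the matching clearml.conf credentials):

from clearml import Dataset

# Step 1 - run with credentials pointing at the self-hosted server:
local_copy = Dataset.get(
    dataset_project="my_project", dataset_name="my_dataset"
).get_local_copy()

# Step 2 - run after switching clearml.conf to the hosted server's credentials:
new_ds = Dataset.create(dataset_project="my_project", dataset_name="my_dataset")
new_ds.add_files(local_copy)
new_ds.upload()
new_ds.finalize()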

one year ago
0 Hello! Getting credential errors when attempting to pip install transformers from a git repo, on a GPU queue.

Yes please, just to verify my hunch.
I think that somehow the docker mounts the agent is creating are (for some reason) messing it up.
Basically you can just run the following (it will do everything automatically); replace <TASK_ID_HERE> with the actual one:
docker run -it --gpus "device=1" -e CLEARML_WORKER_ID=Gandalf:gpu1 -e CLEARML_DOCKER_IMAGE=nvidia/cuda:11.4.0-devel-ubuntu18.04 -v /home/dwhitena/.git-credentials:/root/.git-credentials -v /home/dwhitena/.gitconfig:/root/.gitconfig ...

3 years ago
0 Hi, I expect there is a limitation in time the free service

WickedGoat98 Forever 🙂
The limitation is on the storage size

3 years ago
0 from datetime import datetime import hashlib from clearml import Task previous_timestamp = 0 task_filter = {} task_filter.update( { 'page_size': 100, 'page': 0, 'status_changed': ['>{}'.format(datetime.utcfromtimestamp(previou

And is there an easy way to get all the metrics associated with a project?

Metrics are per Task, but you can get the min/max/last of all the tasks in a project. Is that it?
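For example, a minimal sketch (the project name is a placeholder) that pulls the last/min/max scalar values for every task in a project:

from clearml import Task

for task in Task.get_tasks(project_name="my_project"):
    # returns {title: {series: {"last": ..., "min": ..., "max": ...}}}
    metrics = task.get_last_scalar_metrics()
    print(task.id, metrics)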

3 years ago
0 My autoscaled instance fails when running "git clone" on a private repo. I

Hi BattyCrocodile47

I do have the SSH key placed at /root/.ssh/id_rsa on the machine,

Notice that the .ssh folder is mounted from the host (EC2 / GCP) into the container,

'-v', '/tmp/clearml_agent.ssh.cbvchse1:/.ssh'

This is odd, why is it mounting it to /.ssh and not /root/.ssh?

one year ago