AgitatedDove14
Moderator
49 Questions, 8122 Answers
  Active since 10 January 2023
  Last activity one year ago

0 Hello, "In The Last Period I Pushed To Adopt Clearml Company Wide As It Is A Great Tool. We Actually Have A Data Center And All Nodes Are Managed By Rancher Meaning, Everything We Use Is Purely Kubernetes Stuff. I Deployed Clearml Server In Our

But once I see it on the UI, that means it is already launched somewhere, so I didn't quite get you.

The idea is you run it locally once (think debugging your code, or testing it).
While the code runs, the Task is automatically created; once it is in the system you can clone / launch it.
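
Cloning and launching can be done from the UI or programmatically; here is a minimal sketch of the programmatic path (project, task, and queue names are illustrative):

from clearml import Task

# Running this locally once auto-registers the Task on the server
task = Task.init(project_name="examples", task_name="my experiment")
# ... your training / debugging code ...

# Later, clone the registered Task and enqueue the clone for an agent to run
cloned = Task.clone(source_task=task, name="my experiment (clone)")
Task.enqueue(cloned, queue_name="default")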

Also, I want to launch my experiments on a Kubernetes cluster, and I don't actually have any docs on how to do that, so an example would be helpful here.

We are working on documenting the full process, ...

4 years ago
0 Is There A Way Clearml Can Be Stopped From Updating Dependencies When Cloning?

BroadSeaturtle49 the agent RC is out with a fix:
pip3 install clearml-agent==1.5.0rc0
Let me know if it solved the issue

2 years ago
0 Clearml Pipelines Can Be Build From Tasks, Functions, And Decorated Functions, According To The Examples In

@<1523704157695905792:profile|VivaciousBadger56>

Is the idea here the following? You want to use inversion-of-control such that I provide a function f to a component that takes the above dict as an input. Then I can do whatever I like inside the function f and return a different dict as output. If the output dict of f changes, the component is rerun; otherwise, the old output of the component is used?

Yes, exactly! This way you...
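
As a minimal sketch of the dict-in / dict-out component pattern discussed above (assuming the decorator-based pipeline API; names are illustrative, and caching is keyed on the step's code and inputs):

from clearml import PipelineDecorator

# Illustrative component: takes a dict, applies the user-supplied logic ("f"),
# and returns a new dict. With cache=True, an unchanged step (same code and
# same input dict) reuses its previously stored output instead of re-running.
@PipelineDecorator.component(return_values=["out_dict"], cache=True)
def run_f(in_dict: dict):
    out_dict = {k: v * 2 for k, v in in_dict.items()}  # stand-in for your "f"
    return out_dict

@PipelineDecorator.pipeline(name="ioc example", project="examples", version="0.0.1")
def pipeline_logic():
    print(run_f({"a": 1, "b": 2}))

if __name__ == "__main__":
    PipelineDecorator.run_locally()
    pipeline_logic()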

2 years ago
0 Hi Anyone

That is a good question. Usually the CUDA version is automatically detected, unless you override it with the conf file or an OS environment variable. What's the setup? What are you using as the package manager? (conda actually installs CUDA drivers; if the original Task was executed on a machine with conda, it will take the CUDA version automatically, the reason being to match the CUDA/Torch/TF versions.)

4 years ago
0 Hi Anyone

AstonishingWorm64 I found the issue.
clearml-serving assumes the agent is working in docker mode, as it has to have the Triton docker image (where the Triton engine is installed).
Since you are running in venv mode, tritonserver is not installed, hence the error.

4 years ago
0 Hi Anyone

Hi AstonishingWorm64
I think you are correct, there is no external interface to change the docker image.
Could you open a GitHub issue so we do not forget to add an interface for that?
As a temp hack, you can manually clone the "triton serving engine" Task and edit the container image (under the Execution tab).
wdyt?

4 years ago
0 Hi Everyone, I Have Questions Related To Clearml-Serving.

If there is a new issue I will let you know in a new thread

Thanks! I would really like to understand what is the correct configuration

3 years ago
0 Hi! Can Someone Show Me An Example Of How

can someone show me an example of how PipelineController.create_draft

I think the idea is to store a draft version of the pipeline (not the decorator type, I think, but the one launching pre-executed Tasks).
GiganticTurtle0 I'm not sure I fully understand how / why you are using it, can you expand?

EDIT:

However, my intention is ONLY to create it to be executed later on.

Hmm, so maybe like enqueue it?
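
For reference, a minimal sketch of what using create_draft could look like with pre-executed Tasks (project, task, and step names are illustrative):

from clearml import PipelineController

# Build a pipeline out of Tasks that are already registered in the system
pipe = PipelineController(name="my pipeline", project="examples", version="0.0.1")
pipe.add_step(
    name="stage_data",
    base_task_project="examples",
    base_task_name="data preprocessing",
)
pipe.add_step(
    name="stage_train",
    parents=["stage_data"],
    base_task_project="examples",
    base_task_name="model training",
)

# Store the pipeline as a draft only; it can be enqueued / executed later
pipe.create_draft()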

3 years ago
0 Hi Again

Hmm should be pushed later today, meanwhile:
from clearml import Task
from clearml.automation.trigger import TriggerScheduler


def func(*args, **kwargs):
    print('test', args, kwargs)


if __name__ == '__main__':
    s = TriggerScheduler(pooling_frequency_minutes=1.0)
    s.add_model_trigger(
        name='trigger 1', schedule_function=func,
        trigger_project='examples', trigger_on_tags=['deploy']
    )
    s.add_model_trigger(
        name='trigger 2',
        schedule_task_id='3f7...

4 years ago
0 Hey, Don'T Really Understand Why The Clearml Worker Needs To Pull The Repository Where My Pipeline (Defined With Decorators) Is Written Is Since Apparently A Temporary Python File (Containing At Least The Code And Imports For The Executed Component) Seems

Oh I see the pipeline controller itself (not the components) is the one with the repo
To fix that add at the top of the script the following:
from clearml import Task

Task.force_store_standalone_script()


@PipelineDecorator.pipeline(...)

That should do the trick

2 years ago
0 Hi Folks, I Did A Deployment Of Clearml Using The K8S Helm Chart, And I Set The Agent Using K8S Glue. I Run A Task Locally, And I Went To The Ui Cloned The Experiment And Scheduled It In The Default Queue. After Doing This, I See That The Experiment Is Q

The way I understand it is that the K8s glue agent is enabled by default (and I do see a Deployment for clearml-k8sagent)

SarcasticSquirrel56
Good start. When you say you see the Task in the "k8s_scheduler" queue, did you originally enqueue it to "default"?

3 years ago
0 Hi Folks, I Did A Deployment Of Clearml Using The K8S Helm Chart, And I Set The Agent Using K8S Glue. I Run A Task Locally, And I Went To The Ui Cloned The Experiment And Scheduled It In The Default Queue. After Doing This, I See That The Experiment Is Q

Click on the "k8s_schedule" queue, then on the right hand side, you should see your Task, click on it, it will open the Task page. There click on the "Info" Tab, there look for "STATUS MESSAGE" and "STATUS REASON". What do you have there?

3 years ago
0 Task Struck At

I think this was the issue: None
And that caused TF binding to skip logging the scalars and from that point it broke the iteration numbering and so on.

2 years ago
0 Task Struck At

Hi PanickyMoth78

it was uploading fine for most of the day but now it is not uploading metrics and at the end

Where are you uploading metrics to (i.e. where is the clearml-server) ?
Are you seeing any retry logging on your console ?
packages/clearml/backend_interface/metrics/reporter.py", line 124, in wait_for_events
This seems to be consistent with waiting for metrics to be flushed to the backend, but usually you will see retry messages on your console when that happens.

2 years ago
0 Task Struck At

Thanks @<1523701713440083968:profile|PanickyMoth78> for pinging, let me check if I can find something in the commit log, I think there was a fix there...

2 years ago
0 When I Run An Experiment (Self Hosted), I Only See Scalars For Gpu And System Performance. How Do I See Additional Scalars? I Have

Okay, here is standalone code that should be close enough (if I missed anything, let me know):

import tempfile
from datetime import datetime
from pathlib import Path

import tensorflow as tf
import tensorflow_datasets as tfds
from clearml import Task

task = Task.init(project_name="debug", task_name="test")

(ds_train, ds_test), ds_info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)


def normalize_img(image, labe...

2 years ago
0 Hi, I’M Having Troubles Initializing Connection To Clearml (“Error: Could Not Verify Credentials:“). Who Can Help? Thanks

IrateBee40
Check the first steps here:
https://clear.ml/docs/latest/docs/getting_started/ds/ds_first_steps
(Basically you have to generate credentials / configure your machine so it knows where the server is and how to access it.)
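
If you prefer setting the credentials in code instead of via clearml.conf, a minimal sketch (server URLs and keys below are placeholders):

from clearml import Task

# Placeholders: use the URLs and credentials generated in the ClearML Web UI
Task.set_credentials(
    api_host="https://api.clear.ml",
    web_host="https://app.clear.ml",
    files_host="https://files.clear.ml",
    key="YOUR_ACCESS_KEY",
    secret="YOUR_SECRET_KEY",
)

task = Task.init(project_name="examples", task_name="credentials check")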

Make sense ?

3 years ago
0 When I Run An Experiment (Self Hosted), I Only See Scalars For Gpu And System Performance. How Do I See Additional Scalars? I Have

callbacks.append(
    tensorflow.keras.callbacks.TensorBoard(
        log_dir=str(log_dir),
        update_freq=tensorboard_config.get("update_freq", "epoch"),
    )
)

Might be! What's the actual value you are passing there?

2 years ago
0 When I Run An Experiment (Self Hosted), I Only See Scalars For Gpu And System Performance. How Do I See Additional Scalars? I Have

I basically moved the Task.init() call below the imports

Okay, that is odd. Can you copy paste the before/after of the import, so we can fix that?!

2 years ago
0 Hi Folks! I'M Using  

ExcitedFish86 verified, fix will be available on GitHub soon :)

4 years ago
0 When Clearml Converts A

Okay, the type is inferred from the default value of the function step itself. That means that both
data_frame = step_one(pickle_url, extra=1337)
and
data_frame = step_one(pickle_url, 1337)
will pass extra as an int.
That said, if the default value of the argument is missing, it will revert to str.
In order to use the type hints as casting hints, we actually need to improve task.connect to support the type casting (they are stored).
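
A minimal sketch of how the default value drives the inferred type (assuming the decorator-based pipeline components; names are illustrative):

from clearml import PipelineDecorator

# Because 'extra' has an int default, both step_one(url, extra=1337) and
# step_one(url, 1337) pass it as int; without a default it would revert to str.
@PipelineDecorator.component(return_values=["data_frame"])
def step_one(pickle_url, extra=1337):
    import pandas as pd  # imports inside the component run on the executing machine
    return pd.read_pickle(pickle_url).assign(extra=extra)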

3 years ago
0 Is It Possible To Increase The Polling Interval For K8S Glue? Currently It Is 5 Seconds I Believe. Would Adding An Argument For It Help? Can Do A Pr If So

Ex: Expecting value: line 1 column 1 (char 0)
K8S Glue pods monitor: Failed parsing kubectl output:

Run with --debug as the first parameter
Are you running the latest from the git repo ?

4 years ago
0 Running Into A Strange Issue—

Seems correct.
I'm assuming something is wrong with the key/secret quoting ?!
Could you generate another one and test it ?
(you can have multiple key/secret pairs on the same user)

4 years ago