PanickyMoth78
Moderator
34 Questions, 167 Answers
  Active since 10 January 2023
  Last activity 2 months ago

Reputation

0

Badges 1

166 × Eureka!
0 Votes 2 Answers 998 Views
Hi. Suppose I want to report on what my task has done by having it generate a markdown (.md) file with links to some "local" figure files. Looking at the rep...
one year ago
0 Votes 3 Answers 861 Views
2 years ago
0 Votes 7 Answers 884 Views
Hi. I am experimenting with clearml.Dataset and encountering an error: LockException: [Errno 11] Resource temporarily unavailable. In my experiment, I make a ...
2 years ago
0 Votes 20 Answers 990 Views
Task stuck at task.flush(wait_for_uploads=True): I've been running a model training task - a variation on this clearml dataset example: https://github.com/...
one year ago
0 Votes 16 Answers 1K Views
Hi. Question about Dataset upload errors: When uploading a clearml.Dataset created with output_uri=" gs://lavi_test/datasets after adding 20 files of size 50...
gcp
one year ago
0 Votes 6 Answers 922 Views
Is there some built-in way in clearml to trigger further action on task fail (or pipeline fail)?
2 years ago
0 Votes 8 Answers 1K Views
2 years ago
0 Votes 13 Answers 897 Views
Another question on the topic of how a remote execution of a pipeline kills the calling process (previously discussed https://clearml.slack.com/archives/CTK2...
2 years ago
0 Votes 14 Answers 1K Views
Hi. I have a job that processes images and creates ~5 GB of processed image files (lots of small ones). At the end - it creates a clearml.Dataset and perform...
one year ago
0 Votes 1 Answer 984 Views
one year ago
0 Votes 7 Answers 867 Views
I have 5 unarchived pipeline runs that were defined with this decorator: @PipelineDecorator.pipeline( name="fastai_image_classification_pipeline", project="l...
2 years ago
0 Votes 30 Answers 951 Views
Hi. I'd like to try the GCP autoscaler. What permissions does the service account that I provide to clearml need? (and what GCP API should I enable in the GC...
2 years ago
0 Votes 7 Answers 898 Views
Hi. I have a problem accessing repo code in pipeline components running in an AWS autoscaler (first attempts at doing this) My local clearml.conf file has ag...
2 years ago
0 Votes 27 Answers 1K Views
Hi. I'm running this little pipeline: from clearml.automation.controller import PipelineDecorator from clearml import TaskTypes @PipelineDecorator.component(...
2 years ago
0 Votes 14 Answers 914 Views
Hi there. I'm trying to switch pipeline code from a local run using PipelineDecorator.run_locally() to a slightly-less-local run using PipelineDecorator.set_d...
2 years ago
0 Votes 3 Answers 929 Views
Hi. Should this command succeed in the presence of project lavi-testing and absence of dataset tmp_datset within it? from clearml import Dataset tmp_dataset ...
2 years ago
0 Votes 3 Answers 912 Views
Hi. First time user here 👋 I have experienced a problem following the getting started documentation. I opened an account on https://app.clear.ml/ I then fol...
2 years ago
0 Votes 2 Answers 1K Views
I am using the AWS autoscaler and I wish to set my files server to be gs. I tried to do so by having this in the ADDITIONAL CLEARML CONFIGURATION window: api...
2 years ago
0 Votes 2 Answers 1K Views
I have a training task that auto-magically saves a model for me to GCS task = Task.init( project_name=project_name, task_name=f"Image classification training...
one year ago
0 Votes 7 Answers 955 Views
Hi I'm looking into how clearml supports datasets and dataset versioning and I'm a bit confused. Is dataset versioning not supported at all in the non-enterp...
2 years ago
0 Votes 22 Answers 936 Views
Hi. I'm encountering a problem with model.name, at least for models that were auto-magically uploaded. I see it in my own code but you can see it if you run...
one year ago
0 Votes 2 Answers 897 Views
Hi. I'm using @PipelineDecorator.component to define a task from a function (to run in a pipeline) I'd like to get the task object within this function so th...
2 years ago
0 Votes 22 Answers 1K Views
I started two pipelines (using AWS autoscaler in app.clear.ml ). The pipelines ran concurrently, using the same pipeline code. Both failed in the same compon...
2 years ago
0 Votes 25 Answers 1K Views
Autoscaler parallelization issue: I have an AWS Autoscaler set up with a resource that has a max of 3 instances assigned to the default queue I've given it a...
2 years ago
0 Votes 2 Answers 906 Views
Hi. I've noticed that my clearml.conf has both: agent.git_user="" agent.git_pass="" and agent { ... git_user: "" git_pass: "" ... } What's the difference? Shou...
2 years ago
0 Votes 11 Answers 921 Views
Hi. I have a few questions about the snippet attached re-running this code produces the same printouts... I chose 47 out of 100 in the pipeline ... I chose 8...
2 years ago
0 Votes 14 Answers 887 Views
Bug? dataset name is ignored if use_current_task=True
one year ago
0 Votes 9 Answers 1K Views
Hi. I have a question about pipelines and their generated dependency graphs. I took the code of the clearml pipeline from decorator example: https://github.c...
2 years ago
0 Votes 4 Answers 316 Views
Hi. I'm using clearml agent 1.16.1 My code is running a multi-process pool with "spawn" (see here for why) from multiprocessing import get_context ... with g...
2 months ago
0 Votes 1 Answer 955 Views
suppose I use a pipeline decorator to define a pipeline: @PipelineDecorator.pipeline(name='my-pipeline', project='my-project', version='0.2') def my_pipeline...
2 years ago
0 Hi. I Have A Job That Processes Images And Creates ~5 Gb Of Processed Image Files (Lots Of Small Ones). At The End - It Creates A

Q: Is there an equivalent env var for sdk.google.storage.pool_connections/pool_maxsize? My jobs are running remotely and not within a clearml agent at the moment, so they get clearml config through env vars.
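
For reference, a minimal sketch of the matching clearml.conf entries (key names taken from the question itself; the values shown are placeholders):

sdk {
    google.storage {
        pool_connections: 50  # placeholder value
        pool_maxsize: 50      # placeholder value
    }
}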

one year ago
0 I Started Two Pipelines (Using Aws Autoscaler In App.Clear.Ml). The Pipelines Ran Concurrently, Using The Same Pipeline Code. Both Failed In The Same Component Half-Way Through The Pipeline Run With:

here is the log from the failing component:
File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/clearml/utilities/locks/portalocker.py", line 140, in lock fcntl.flock(file_.fileno(), flags) BlockingIOError: [Errno 11] Resource temporarily unavailable

2 years ago
0 Hi. I Am Experimenting With

I'm on clearml==1.6.3rc1

2 years ago
0 Hi I'M Looking Into How Clearml Supports Datasets And Dataset Versioning And I'M A Bit Confused. Is Dataset Versioning Not Supported At All In The Non-Enterprise Or Is Versioning Available By A Different Mechanism? I See That

console output shows uploads of 500 files on every new dataset. The lineage is as expected: each additional upload is the same size as the previous ones (~50 MB), and Dataset.get on the last dataset's ID retrieves all the files from the separate parts into one local folder.
Checking the remote storage location (gs://) shows artifact zip files, each with 500 files
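
A minimal sketch of that retrieval flow (the dataset ID is a placeholder; get_local_copy is the standard materialize-locally call):

from clearml import Dataset

# fetch a dataset version by ID and pull all its files,
# including those inherited from parent versions, into one local folder
ds = Dataset.get(dataset_id="<dataset-id>")
local_folder = ds.get_local_copy()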

2 years ago
0 Hi. I Have A

also, whereas the pipeline agent's log has:
Executing task id [7a0ad1fb243a4ff3b9e6c477442ded4a]: repository = git@github.com:shpigi/clearml_evaluation.git branch = main version_num = e045904094cf2f4fa61ce92f7b91682f5de64ab8
The component agent's log has:
Executing task id [90de043e354b4b28a84d5cc0788fe63c]: repository = branch = version_num =

2 years ago
0 Hi. Help

essentially, several running processes were performing:
model_evals_dataset = Dataset.get(
    dataset_project=dataset_project,
    dataset_name=f"model_evals",
)
model_evals_dataset.add_files(run_eval_path)
model_evals_dataset.upload()

2 years ago
0 Bug?

hmm.
this isn't supported though:
dataset_args = dataset.connect(dataset_args)

one year ago
0 Hi. I'M Encountering A Problem With

another weird thing:
Before my training task is done:
print(task.models['output'].keys())
outputs
odict_keys(['Output Model #0', 'Output Model #1', 'Output Model #2'])
after task.close()
I can do:
task = Task.get_task(task_id)
for i in range(100):
    print(task.models["output"].keys())
which prints
odict_keys(['Output Model #0', 'Output Model #1', 'Output Model #2'])
in the first iteration
and prints the file names in the latter iterations:
od...

one year ago
0 Hi. I Have A Few Questions About The Snippet Attached

Something else that I feel is missing from the docs regarding pipelines, as someone who has given kubeflow pipelines a try (in the http://vertex.ai pipelines environment), is some explanation of how functions become pipelines and components.
More specifically, I've learned to watch out for kubeflow pipeline code which is run at definition time (at compilation time, to be more accurate) instead of at pipeline execution time.
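
For illustration, a minimal hypothetical sketch of the same distinction with ClearML's decorators (names are made up):

from clearml.automation.controller import PipelineDecorator
from clearml import TaskTypes

@PipelineDecorator.component(return_values=["x"], task_type=TaskTypes.data_processing)
def step():
    # component body: runs at pipeline *execution* time, as its own task
    return 42

@PipelineDecorator.pipeline(name="demo-pipeline", project="demo", version="0.1")
def pipe():
    # controller body: also runs at execution time, orchestrating the steps;
    # only the decoration itself happens at definition time
    x = step()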

This whole experiment with random numbers started as my attempt ...

2 years ago
0 Hi. Question About Dataset Upload Errors: When Uploading A

I can't find version 1.8.1rc1, but I believe I see a relevant change in the code of Dataset.upload in 1.8.1rc0.

one year ago
0 Hi (Again... Sorry For Asking So Many Questions) Question About Using Google Cloud Storage In A Clearml Agent Running In Aws Ec2 Instance. My

For anyone following, you can "inject" a credentials json file for a google cloud service account so as to get access to your google cloud storage from agents on aws ec2 instances that are managed by the AWS autoscaler, by providing the following in the ADDITIONAL CLEARML CONFIGURATION when starting the autoscaler:
sdk.google.storage.credentials_json: "/root/gs.cred"
sdk.google.storage.project: "<my-gcp-project-id>"
files {
    gsc {
        contents: """<copy-paste the contents of yo...

2 years ago
0 Hi There. I'M Trying To Switch Pipeline Code From A Local Run Using

Thanks for the fix and the mock HPO example code!
Pipeline behaviour with the fix is looking good.
I see the point about changes to data inside the controller possibly causing dependencies for step 3 (or, at least, making it harder for the interpreter to know).

2 years ago
0 Autoscaler Parallelization Issue: I Have An Aws Autoscaler Set Up With A Resource That Has A Max Of 3 Instances Assigned To The

erm,
this parallelization has led to the pipeline task issuing a bunch of:
model_path/run_2022_07_20T22_11_15.209_0.zip , err: [Errno 28] No space left on device
and quitting on me.
my train_image_classifier_component is programmed to save model files to a local path which is returned (and, thanks to clearml, the path's contents are zipped and uploaded to the files service).

I take it that these files are also brought into the pipeline task's local disk?
Why is that? If that is indeed what...

2 years ago
0 Task Stuck At

no retry messages
CLEARML_FILES_HOST is gs
CLEARML_API_HOST is a self hosted clearml server (in google compute engine).

Note that earlier in the process the code uploads a dataset just fine
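
For reference, that env-var setup looks something like this (both variable names appear above; the values are placeholders):

export CLEARML_FILES_HOST=gs://<bucket>        # placeholder bucket
export CLEARML_API_HOST=http://<server>:8008   # placeholder self-hosted server address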

one year ago
0 Hi. I Have A Job That Processes Images And Creates ~5 Gb Of Processed Image Files (Lots Of Small Ones). At The End - It Creates A

I tried playing with those parameters on my laptop to no great effect.

Here is code you can use to reproduce the issue:

import os
from pathlib import Path
from tqdm import tqdm
from clearml import Dataset, Task

def dataset_upload_test(project_id: str, bucket_name: str):
    def _random_file(fpath, sizekb):
        fileSizeInBytes = 1024 * sizekb
        with open(fpath, "wb") as fout:
            fout.write(os.urandom(fileSizeInBytes))

    def random_dataset(dataset_path, num_files, file...
one year ago
0 Hi. I Have A Few Questions About The Snippet Attached

That is a good point, I'll make sure we mention it somewhere in the docs. Any thoughts on where?

maybe in (all of) these places:
https://clear.ml/docs/latest/docs/faq
https://clear.ml/docs/latest/docs/fundamentals/task
https://clear.ml/docs/latest/docs/clearml_sdk/task_sdk

2 years ago
0 Hi. Help

I had several pipeline components getting it and uploading files to it concurrently.
Can Datasets handle that?

2 years ago
0 Hi. I'M Encountering A Problem With

Ooh nice.
I wasn't aware task.models["output"] also acts like a dict.
I can get the one I care about in my code with something like task.models["output"]["best_model"]
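
For reference, a minimal sketch of that access pattern (the task id is a placeholder, and it assumes an output model named "best_model"):

from clearml import Task

task = Task.get_task(task_id="<task-id>")
best_model = task.models["output"]["best_model"]  # dict-style access, keyed by model name
print(best_model.name)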
however, can you see the inconsistency between the key and the name there:

one year ago
0 Hi. I'D Like To Try The Gcp Autoscaler.

Is there any chance the experiment itself has a docker image specified?

It does not as far as I know. The decorators do not have docker fields specified
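
For context, a hedged sketch of what specifying an image on a decorated component could look like (not used in this pipeline, per the reply above; the image name is hypothetical):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    return_values=["x"],
    docker="nvidia/cuda:11.8.0-runtime-ubuntu22.04",  # hypothetical image
)
def step():
    return 42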

2 years ago
0 Hi. I'M Running This Little Pipeline:

Is there a way to set the default upload destination for all tasks in my ~/clearml.conf?

Yes, by setting files_server: gs://clearml-evaluation/
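
A minimal sketch of where that setting lives in ~/clearml.conf (bucket name from this thread; assuming the standard api section):

api {
    files_server: gs://clearml-evaluation/
}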

2 years ago
0 Hi. Should This Command Succeed In The Presence Of Project

That would be a better message. However, I must have misunderstood the meaning of auto_create=True.
I thought that flag made the get function into a "get-or-create".
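
A sketch of that reading (project and dataset names from the question; whether the call actually creates the dataset is the open question in this thread):

from clearml import Dataset

tmp_dataset = Dataset.get(
    dataset_project="lavi-testing",
    dataset_name="tmp_datset",  # name as spelled in the question
    auto_create=True,           # expected to act as get-or-create
)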

2 years ago
0 Hi. I'M Running This Little Pipeline:

Hi again.
Thanks for the previous replies and links, but I haven't been able to find the answer to my question: how do I prevent the content of a URI returned by a task from being saved by clearml at all?

I'm using this simplified snippet (that avoids fastai and large data)
from clearml.automation.controller import PipelineDecorator
from clearml import TaskTypes

@PipelineDecorator.component(
    return_values=["run_datasets_path"], cache=False, task_type=TaskTypes.data_processing
)
def ma...

2 years ago