SweetBadger76
Moderator
1 Question, 239 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 1

4 × Eureka!
0 Votes
8 Answers
1K Views
Hello TartSeagull57. This is a bug introduced with version 1.4.1, for which we are working on a patch. The fix is currently in testing, and should be released ver...
2 years ago
0 Hey,

Hi WickedElephant66
You can log your models as artifacts on the pipeline task, from any pipeline step. Have a look here:
https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#models-artifacts-and-metrics
I am trying to find you an example, hold on 🙂
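In the meantime, a rough, untested sketch of the idea (the step, project and artifact names are made up): each step runs as its own task, and the monitor_artifacts / monitor_models arguments of add_function_step mirror what a step logs onto the pipeline task itself.

` from clearml import PipelineController

def train_step():
    # assumption: the step produces a local weights file my_model.pt
    from clearml import Task
    Task.current_task().upload_artifact(name='my_model', artifact_object='my_model.pt')

pipe = PipelineController(name='my_pipeline', project='my_project', version='1.0')
pipe.add_function_step(
    name='train',
    function=train_step,
    monitor_artifacts=['my_model'],  # mirror the step artifact onto the pipeline task
)
pipe.start_locally(run_pipeline_steps_locally=True) `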

2 years ago
0 Hi Community, Is There A Way To Download All The Logged Scalars/Plots Using Code Itself?

Hi TenderCoyote78
Here is a snippet to illustrate how to retrieve the scalars and the plots from a task

` from clearml.backend_api.session.client import APIClient
from clearml import Task

task = Task.get_task(project_name=xxxx, task_name=xxxx)  # or task_id=xxxx
client = APIClient()

# retrieving the scalars
client.events.scalar_metrics_iter_histogram(task=task.id)

# retrieving the plots
client.events.get_task_plots(task=task.id) `

2 years ago
0 Hello! Is There Any Way To Download A Part Of Dataset? For Instance, I Have A Large Dataset Which I Periodically Update By Adding A New Batch Of Data And Creating A New Dataset. Once, I Found Out Mistakes In Data, And I Want To Download An Exact Folder/Ba

Hi TeenyBeetle18
If the dataset can basically be built from a local machine, you can use sync_folder (SDK https://clear.ml/docs/latest/docs/references/sdk/dataset#sync_folder or CLI https://clear.ml/docs/latest/docs/clearml_data/data_management_examples/data_man_folder_sync#syncing-a-folder ). Then you would be able to modify any part of the dataset and create a new version, with only the items that changed.

There is also an option to download only parts of the dataset, have a l...
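For the SDK route, something like this should do it - a rough sketch, with placeholder project and dataset names:

` from clearml import Dataset

# new version based on the latest one
parent = Dataset.get(dataset_project='my_project', dataset_name='my_dataset')
ds = Dataset.create(dataset_project='my_project', dataset_name='my_dataset',
                    parent_datasets=[parent.id])
ds.sync_folder(local_path='/data/my_dataset_folder')  # registers only the diffs vs the parent
ds.upload()
ds.finalize() `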

2 years ago
0 Hello! Is There Any Way To Download A Part Of Dataset? For Instance, I Have A Large Dataset Which I Periodically Update By Adding A New Batch Of Data And Creating A New Dataset. Once, I Found Out Mistakes In Data, And I Want To Download An Exact Folder/Ba

If the data is updated in the same local/network folder structure, which serves as the dataset's single source of truth, you can schedule a script that uses the dataset sync functionality to update the dataset based on the modifications made to the folder.

You can then modify precisely what you need in that structure and get a new, updated dataset version. For example:
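Here is a sketch of fixing just one bad folder in a new version (the names and paths are placeholders):

` from clearml import Dataset

parent = Dataset.get(dataset_project='my_project', dataset_name='my_dataset')
ds = Dataset.create(dataset_project='my_project', dataset_name='my_dataset',
                    parent_datasets=[parent.id])
ds.remove_files('bad_batch/*')                               # drop the faulty files
ds.add_files('/data/fixed_batch', dataset_path='bad_batch')  # add the corrected ones
ds.upload()
ds.finalize() `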

2 years ago
0 Hi There! We Work On The Project Together With My Partner. He Shared His Workspace With Me And I Have An Access To His Projects And Tasks. I Am Trying To Launch Project From His Working Space On My Remote Server. I Ran Clearml Daemon But It Does Not See T

You can run a clearml agent on your machine, dedicated to a certain queue. You can then clone the experiment you are interested in (belonging either to your workspace or to your partner's), and enqueue it into the queue you assigned your worker to.
clearml-agent daemon --queue 'my_queue'
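If you prefer doing it from code rather than from the UI, roughly like this (the task ID and queue name are placeholders):

` from clearml import Task

source = Task.get_task(task_id='abc123')     # experiment from either workspace
cloned = Task.clone(source_task=source, name='cloned experiment')
Task.enqueue(cloned, queue_name='my_queue')  # picked up by the agent polling my_queue `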

2 years ago
0 Hello Community! How I Can Add S3 Credentials To S3 Bucket In Example.Env For Clearml-Serving-Triton? I Need To Add Bucket Name, Keys And Endpoint

Hi AbruptHedgehog21
Which S3 service provider are you going to use?
Do you have a precise list of the variables you need to add to the configuration to access your bucket? 🙂

2 years ago
0 Hey,

Can you share an example or part of your code with me? I might be missing something in what you intend to achieve.

2 years ago
0 Hello

Great!
Concerning the running status:
- In the first case, the program failed, so the server has no way to be informed of a status change.
- In the second case, this is not a task status but a dataset status, so the status will change when you publish the dataset.
The fix is in the last phases of testing; I hope it will be released very soon.

2 years ago
0 Since V1.4.0, Our

Hi UnevenDolphin73
The difference between v1.3.2 and v1.4.x (regarding download_folder) is that in 1.4.x the subfolder structure is maintained, so the .env file would not be downloaded directly into the provided local folder (hence "./") if it is not in the bucket's main folder. The function will reproduce the subdirectory structure of the bucket, so you will need to pass load_dotenv() the full path to the .env file (including the filename).

For example, if I do:
` StorageManager.down...
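The snippet got cut off here, so this is a rough reconstruction of the idea, not the original example (the bucket and folder names are made up):

` from clearml import StorageManager
from dotenv import load_dotenv

# 1.4.x reproduces the bucket's subdir structure under local_folder
StorageManager.download_folder(remote_url='s3://mybucket/config', local_folder='./')

# so load_dotenv needs the full path, including the subfolder and the filename
load_dotenv(dotenv_path='./config/.env') `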

2 years ago
0 Hey,

BTW, here is the content of the imported file:

` import torch
from torchvision import datasets, transforms
import os

MY_GLOBAL_VAR = 32

def my_dataloder():
    return torch.utils.data.DataLoader(
        datasets.MNIST(os.path.join('./', 'data'), train=True, download=True,
                       transform=transforms.Compose([
                           transforms.ToTensor()
` ...

2 years ago
0 Since V1.4.0, Our

The fact that the minio server is called "bucket" in the docs is for sure confusing. I will check the reason for this choice, and also why we don't begin to build the structure from the bucket (the real one).
I'll keep you updated.

2 years ago
0 Since V1.4.0, Our

On the other hand, when you browse your minio console, all the buckets are shown as directories, right? There is no file in the root dir. So we used the same logic and decided to reproduce that very same structure. Thus, when you browse the local_folder, you will have the same structure as shown in the console.

2 years ago
0 Hey,

Can you share your logs?

2 years ago
0 Since V1.4.0, Our

Hi UnevenDolphin73

I have reproduced the error.
Here is the behavior of that line, according to the version: StorageManager.download_folder('s3://mybucket/my_sub_dir/files', local_folder='./')

1.3.2 downloads the my_sub_dir content directly into ./
1.4.x downloads the my_sub_dir content into ./my_sub_dir/ (so the dotenv module can't find the file)

Please keep in touch if you still have some issues, or if this helps you to solve them.

2 years ago
0 Since V1.4.0, Our

Hi UnevenDolphin73
Let me summarize, so that I'll be sure I got it 🙂

You have a minio server somewhere like some_ip on port 9000, which contains a clearml bucket.
If you do StorageManager.download_folder(remote_url='s3://some_ip:9000/clearml', local_folder='./', overwrite=True)
then you'll have a clearml bucket directory created in ./ (local_folder), which will contain the bucket files.

2 years ago
0 Hey,

I managed to import a custom package the same way you did: I added the current dir path to my system path.
I have a two-step pipeline:

1. Run a function from a custom package. This function returns a DataLoader (built from torchvision.MNIST).
2. This step receives the DataLoader built in the first step as a parameter; it shows random samples from it.

There was no error returning the DataLoader at the end of step 1, nor importing it at step 2. Here is my code:

` from clearml import Pi...
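The snippet was cut off above, so here is a minimal sketch of that two-step pattern (the function, package and project names are made up, not the original code):

` from clearml import PipelineController

def step_one():
    # assumption: my_package is importable from the repo root
    from my_package import my_dataloader
    return my_dataloader()

def step_two(dataloader):
    # show one batch from the dataloader built in step one
    images, labels = next(iter(dataloader))
    print(images.shape, labels.shape)

pipe = PipelineController(name='mnist_pipeline', project='examples', version='1.0')
pipe.add_function_step(name='step_one', function=step_one, function_return=['dataloader'])
pipe.add_function_step(name='step_two', function=step_two,
                       function_kwargs=dict(dataloader='${step_one.dataloader}'),
                       parents=['step_one'])
pipe.start_locally(run_pipeline_steps_locally=True) `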

2 years ago
0 Hello

Hey TartSeagull57
We have released a version that fixes the bug. It is an RC, but it is stable. The version number is 1.4.2rc1.

2 years ago
0 Since V1.4.0, Our

Interesting. We are opening a discussion to weigh the pros and cons of those different approaches - I'll of course keep you updated.
Could you please open a GitHub issue about that topic? 🙏
http://github.com/allegroai/clearml/issues

2 years ago
0 Hey,

Can you run docker ps to check if there are running containers that already bind the port?

2 years ago
0 Since V1.4.0, Our

This means that the function will create a directory structure at local_folder that mirrors the minio server's: it will create directories corresponding to the buckets there - thus your clearml directory, which is the bucket the function found in the server root.

2 years ago
0 Hey,

You can also specify a package, with or without its version:
https://clear.ml/docs/latest/docs/references/sdk/task#taskadd_requirements
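A quick sketch (the package and version are just examples) - note that add_requirements must be called before Task.init:

` from clearml import Task

Task.add_requirements('torchvision')       # no version: let the agent resolve it
Task.add_requirements('torch', '1.13.1')   # or pin an exact version
task = Task.init(project_name='examples', task_name='requirements demo') `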

2 years ago
0 Since V1.4.0, Our

Do you think you could send us a bit of code so we can better understand how to reproduce the bug? In particular, how you use dotenv...
So far, something like this works normally with both clearml 1.3.2 & 1.4.0:

`
task = Task.init(project_name=project_name, task_name=task_name)

img_path = os.path.normpath("**/Images")
img_path = os.path.join(img_path, "*.png")

print("==> Uploading to Azure")
remote_url = "azure://****.blob.core.windows.net/*****/"
StorageManager.uplo...

2 years ago
0 Hey,

No, it is supposed to have its status updated automatically. We may have a bug. Can you share some example code with me, so that I can try to figure out what is happening here?

2 years ago
0 Since V1.4.0, Our

This is because the server is treated as a bucket too - the root, to be precise. Thus you will always have at least one subfolder created in local_folder, corresponding to the bucket found at the server root.

2 years ago
0 Hey,

TenderCoyote78 I'm PMing you to avoid overfilling the thread here.

2 years ago
0 Since V1.4.0, Our

If I got you right: clearml is a bucket, and my_folder_of_interest is a sub-bucket inside clearml, right?

2 years ago
0 Hey,

TenderCoyote78
The status should normally be updated automatically. Do all the steps finish successfully? And the pipeline as well?

2 years ago
0 Hello

Thanks for your patience

2 years ago