SweetBadger76
Moderator
1 Question, 239 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0

Badges (1): 4 × Eureka!
0 Votes 8 Answers 2K Views
Hello TartSeagull57, this is a bug introduced with version 1.4.1, for which we are working on a patch. The fix is currently in test and should be released ver...
3 years ago
0 Hi, Is There Any Approach To Export The Selected Experiments To Csv Or Excel In A Project? Just Like To Export The Following Tables. Thanks.

You need to use the API to export experiments to CSV/Excel. I am preparing an example for you.
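
In the meantime, here is a minimal sketch of what such an export could look like (not the exact example mentioned above; it assumes the pandas package is installed, and the project name and output path are placeholders):

` import pandas as pd
from clearml import Task

# fetch all experiments (tasks) in the project
tasks = Task.get_tasks(project_name="MyProject")

rows = []
for t in tasks:
    # get_last_scalar_metrics() -> {title: {series: {"last": value, ...}}}
    row = {"task": t.name, "id": t.id}
    for title, series in t.get_last_scalar_metrics().items():
        for name, values in series.items():
            row[f"{title}/{name}"] = values.get("last")
    rows.append(row)

pd.DataFrame(rows).to_csv("experiments.csv", index=False) `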

3 years ago
0 Hi We Are Getting The Following Error When We Are Trying To Run A Task On Our On Premis

By the way, can you send a screenshot of your clearml-agent list and of the UI, please?

3 years ago
0 I Have An Inference Task In Clearml Where I Apply A Model (Defined In Input Params) To A Dataset. Clearml Registers The Model As An Input Model, Which Is Nice. But When I Clone The Task And Modify Input Param To Apply Another Model To The Same Dataset, Th

hi FiercePenguin76
Can you also send your clearml package versions?
I would like to sum up your issue, so that you can check I got it right:

you have a task with a model that you use to run some inference on a dataset; you clone the task and would like to run inference on the same dataset, but with another model; the problem is that the cloned task still uses the first model....

How have you registered the second model? Also, can you share your logs?

3 years ago
0 Hi, Bug Report. I Was Trying To Upload Data To S3 Via Clearml.Dataset Interface

Hi,
It would be great if you could also send your clearml package version 🙂

3 years ago
0 Hi Everybody, I’M Getting Errors With Automatic Model Logging On Pytorch (Running On A Dockered Agent).

It works locally but not on remote execution: can you check that the machine the agent is executed from is correctly configured? The agent there needs to be provided with the correct credentials. Also, the autologger uses the file extension to determine what it is reporting; can you try using the regular .pt extension?
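
For example, a minimal sketch of saving a checkpoint with the regular .pt extension (the model here is just a placeholder):

` import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # placeholder model

# the autologger keys off the file extension when registering output models,
# so prefer the regular .pt extension
torch.save(model.state_dict(), "model.pt") `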

3 years ago
0 I Am Doing Port Forwarding Of Ports From Localhost Clearml Server In Ec2 Instance To The Ports In Laptop Locally. I Am Able To Login To The Server And Generate The Credentials But I Am Not Able To Create Task

Hello DepravedSheep68,

In order to store your info into the S3 bucket, you will need two things:
1. Specify the URI where you want to store your data when you initialize the task (see the output_uri parameter of Task.init: https://clear.ml/docs/latest/docs/references/sdk/task#taskinit ).
2. Specify your S3 credentials in the clearml.conf file (which you did).
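
A minimal sketch of the first point (project, task and bucket names are placeholders):

` from clearml import Task

task = Task.init(
    project_name="MyProject",
    task_name="my task",
    # artifacts and models produced by the task will be uploaded here
    output_uri="s3://my-bucket/clearml",
) `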

3 years ago
0 Hi We Are Getting The Following Error When We Are Trying To Run A Task On Our On Premis

hi OutrageousSheep60
it sounds like the agent is, in reality, dead. That would be logical, because you cannot see it using ps.
However, it would be worth checking whether you can still see it in the UI.

3 years ago
0 Hi All! Any Example Or Doc To Use Clearml With Slurm As A Workload Manager ?

Hi MoodySparrow34
We have a user who wrote this example: https://github.com/marekcygan/clearml-slurm-workers
It is simple glue code to spin up SLURM workers when tasks are enqueued. Hope it helps!

3 years ago
0 Hey,

when you spin up a container, you map a host port to a container port using the -p parameter:
docker run -v ~/clearml.conf:/root/clearml.conf -p 8080:8080 -e CLEARML_SERVING_TASK_ID=<service_id> -e CLEARML_SERVING_POLL_FREQ=5 clearml-serving-inference:latest
Here you map your computer's port 8080 to the container's port 8080. If your port 8080 is already in use, you can pick another one, for example -p 8081:8080.

3 years ago
0 Hello

Hey TartSeagull57
We have released a version that fixes the bug. It is an RC, but it is stable. The version number is 1.4.2rc1.
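If you want to try it, upgrading should be as simple as the following (assuming the fix is in the clearml package itself and not in one of the other components):
pip install clearml==1.4.2rc1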

3 years ago
0 Can We Use S3 Buckets To Cache Environments?

hey UnevenDolphin73
you can mount your s3 bucket as a local folder and point your clearml.conf file at that folder.
I used s3fs to mount my s3 bucket as a folder, then modified agent.venvs_dir and agent.venvs_cache
(as mentioned here: https://clear.ml/docs/latest/docs/clearml_agent#environment-caching )
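
A sketch of what that could look like; the mount point and paths below are placeholders, and the s3fs call is simplified (credentials not shown):

` # mount the bucket as a local folder, e.g.:
#   s3fs my-bucket /mnt/s3-cache
# then point the agent's venv directories at it in clearml.conf:
agent {
    venvs_dir: /mnt/s3-cache/venvs-builds
    venvs_cache: {
        path: /mnt/s3-cache/venvs-cache
    }
} `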

3 years ago
0 Hi. When Using The Logger'S

In the meantime, it is also possible to create a single figure containing two or more histograms and then report it to the logger using report_plotly.
You can have a look there :
https://plotly.com/python/histograms/#overlaid-histogram
https://plotly.com/python/histograms/#stacked-histograms

` import numpy as np
import plotly.graph_objects as go
from clearml import Task

task = Task.init(project_name='examples', task_name='overlaid histograms')  # placeholder names
log = task.get_logger()

x0 = np.random.randn(1500)
x1 = np.random.randn(1500) - 1

fig = go.Figure()
fig.add_trace(go.Histogram(y=x0))
fig.add_trace(go.Histogram(y=x1))

# overlay the two histograms in a single figure
fig.update_layout(barmode='overlay')

# report the combined figure through the ClearML logger
log.report_plotly(title='histograms', series='overlaid', figure=fig) `

3 years ago
0 Hey Guys! Has Anyone Ever Seen An Error Like This? I'M Using My Code In A

Hi SmugSnake6
I might have found you a solution 🎉 I answered on the GH thread https://github.com/allegroai/clearml-agent/issues/111

3 years ago
0 Since V1.4.0, Our

this is because the server is considered a bucket too - the root, to be precise. Thus you will always have at least one subfolder created in local_folder, corresponding to the bucket found at the server root.

3 years ago
0 Hi Community, Is There A Way To Download All The Logged Scalars/Plots Using Code Itself?

Hi TenderCoyote78
Here is a snippet to illustrate how to retrieve the scalars and the plots from a task

` from clearml.backend_api.session.client import APIClient
from clearml import Task

task = Task.get_task(project_name=xxxx, task_name=xxxx)  # or task_id=xxxx
client = APIClient()

# retrieving the scalars
client.events.scalar_metrics_iter_histogram(task=task.id)

# retrieving the plots
client.events.get_task_plots(task=task.id) `

3 years ago
0 Hi Folks, Is There A Way To Force Clear-Ml Agent With --Docker To

can you try creating an empty text file and providing its path to Task.force_requirements_env_freeze(force=True, requirements_file=your_empty_txt_file)?

3 years ago
0 Hi, Is There Any Manifest For The Relevant Polices Needed For The Aws Account (If We Are Using Autoscaling)? Also, Is There A Way To Use Github Deploy Key Instead Of Personal Token? Thanks !

Hi SmugTurtle78
We currently don't support GitHub deploy keys, but there might be a way to make the task use SSH (and not HTTPS), so that you could put the SSH key on the AWS machine. Please let me check if I can find such a solution and get back to you.
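
One direction worth trying in the meantime (an assumption on my side, not something confirmed in this thread) is forcing the agent to rewrite git URLs to SSH in clearml.conf:

` agent {
    # convert https:// repository links to ssh:// so the machine's SSH key is used
    force_git_ssh_protocol: true
} `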

3 years ago
0 Since V1.4.0, Our

Hi UnevenDolphin73
I am going to try to reproduce this issue, thanks for the details. I'll keep you updated.

3 years ago
0 If I Create A Dataset With

hey
"when cloning an experiment via the WebUI, shouldn't the cloned experiment have the original experiment as a parent? It seems to be empty"

you are right, I think there is a bug here. We will release a fix ASAP 🙂

3 years ago
0 Need

can you show the logs?

3 years ago
0 Hi, Is There Any Manifest For The Relevant Polices Needed For The Aws Account (If We Are Using Autoscaling)? Also, Is There A Way To Use Github Deploy Key Instead Of Personal Token? Thanks !

If the AWS machine has an ssh key installed, it should work - I assume it's possible to either use a custom AMI for that, or you can use the autoscaler instance startup bash script

3 years ago
0 Hi All! I Trying To Organize My Workflow With Clearml, And I Found Out About Datasets. I Like The Concept And I Wonder If I Can Connect A Dataset To A Task / Experiment? Currently The Dataset Appears As Another Task In The Project Page. Thanks!

You can initialize your task as usual. When a dataset is used in it - for example, if the task starts by retrieving it using Dataset.get - the dataset will be registered in the Info section (check it in the UI) 😊
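
A minimal sketch (project and dataset names are placeholders):

` from clearml import Task, Dataset

task = Task.init(project_name="MyProject", task_name="train")

# retrieving the dataset inside the task links it to the task
# (it then appears in the task's Info section in the UI)
dataset = Dataset.get(dataset_project="MyProject", dataset_name="my-dataset")
local_path = dataset.get_local_copy() `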

3 years ago
0 Hi, I Have A Local Package That I Use To Train My Models. To Start Training, I Have A Script That Calls

You can force the agent to install only the packages you need by using a requirements.txt file. List in it the packages you want the agent to install (pytorch, and possibly clearml). Then call this function before Task.init:
Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')
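
A minimal sketch of the call order (paths and names are placeholders):

` from clearml import Task

# must be called before Task.init
Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')

task = Task.init(project_name="MyProject", task_name="train") `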

3 years ago
0 Suppose I Use A Pipeline Decorator To Define A Pipeline:

hi PanickyMoth78
from within your function my_pipeline_function, here is how to access the project and task names:

task = Task.current_task()
task_name = task.name
full_project_path = task.get_project_name()
project_name = full_project_path.split('/')[0]

Note that you could also use full_project_path to get both project and task name:
task_name = full_project_path.split('/')[-1]

3 years ago