SweetBadger76
Moderator
1 Question, 239 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation

0

Badges 1

4 × Eureka!
0 Votes
8 Answers
1K Views
Hello TartSeagull57. This is a bug introduced with version 1.4.1, for which we are working on a patch. The fix is currently in testing and should be released ver...
2 years ago
0 Hi, I Have A Local Package That I Use To Train My Models. To Start Training, I Have A Script That Calls

Hey H4dr1en
You just specify the packages that you want installed (no need to specify the dependencies), and the version if needed.
Something like:

pytorch==1.10.0

2 years ago
0 Hey,

Hi WickedElephant66
You can log your models as artifacts on the pipeline task, from any pipeline step. Have a look here:
https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#models-artifacts-and-metrics
I am trying to find you an example, hold on 🙂
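In the meantime, here is a minimal sketch of the idea for a decorator-based pipeline (the step name, artifact name and file name are just placeholders, check the linked docs for the exact parameters):

`
from clearml import PipelineDecorator, Task

@PipelineDecorator.component(return_values=['model_path'], monitor_artifacts=['my_model'])
def train_step():
    # stand-in for real training: write some weights to a local file
    model_path = 'my_model.pt'
    with open(model_path, 'wb') as f:
        f.write(b'weights')

    # upload the file as an artifact on the step's task;
    # monitor_artifacts also logs it on the pipeline task itself
    Task.current_task().upload_artifact(name='my_model', artifact_object=model_path)
    return model_path
`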

2 years ago
0 Hi Community, Is There A Way To Download All The Logged Scalars/Plots Using Code Itself?

Hey TenderCoyote78
Here is an example of how to dump the plots to JPEG files:

` from clearml.backend_api.session.client import APIClient
from clearml import Task
import plotly.io as plio

task = Task.get_task(task_id='xxxxxx')

client = APIClient()

# fetch the plot events that were reported on the task
t = client.events.get_task_plots(task=task.id)

# rebuild each plotly figure and export it (static export needs the kaleido package)
for i, plot in enumerate(t.plots):
    fig = plio.from_json(plot['plot_str'])
    plio.write_image(fig=fig, file=f'./my_plot_{i}.jpeg') `

2 years ago
0 Hi All. I Was Using Clearml Server Hosted On A Box That I Reach Behind Traefik Using Alias For Web, File And Api. After Migration It Works Perfect For New Experiments. I Changed The Name Of The Alias From

Hi MotionlessCoral18
You need to run some scripts when migrating, to update your old experiments. I am going to try to find you some examples.

2 years ago
0 Hey.

Hi Max
You can configure a clearml agent to pull your docker image from ECR and run the experiment in it. Does that answer your question?
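If it helps, here is a rough sketch of the task side (the image URI and queue name are placeholders; the agent itself is started in docker mode and must be able to authenticate against ECR):

`
from clearml import Task

task = Task.init(project_name='examples', task_name='run in ECR image')

# ask the agent to run this task inside the given container image
task.set_base_docker('123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest')

# stop the local run and enqueue the task for an agent running in docker mode
task.execute_remotely(queue_name='default')
`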

2 years ago
0 Hi, I'M Trying To Use

Hi SoggyBeetle95
I reproduced the issue, could you confirm it is the same one?
Here is what I did:
I declared some secret env vars in the agent section of clearml.conf and used extra_keys to hide them in the console. They are indeed hidden there, but in the Execution -> Container section they appear in clear text.

2 years ago
0 Hi Everyone, Quick Question Regarding Minio And Logging:

Oops yes, you are right. output_uri is used for the artifacts.
For the logger it is https://clear.ml/docs/latest/docs/references/sdk/logger#set_default_upload_destination

Btw, what do you get when you call task.get_logger().get_default_upload_destination()?
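For reference, a small sketch of both destinations (the bucket address is a placeholder):

`
from clearml import Task

task = Task.init(
    project_name='examples',
    task_name='minio logging',
    output_uri='s3://my-minio-host:9000/bucket',  # artifacts and models
)

# debug samples / images reported through the logger follow this destination
task.get_logger().set_default_upload_destination('s3://my-minio-host:9000/bucket')
print(task.get_logger().get_default_upload_destination())
`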

2 years ago
0 Hi, I'M Trying To Use

Hey SoggyBeetle95
You're right, that's an error on our part 🙂
Could you please open an issue at https://github.com/allegroai/clearml-server/issues so we can track it?
We'll update there once a fix for that issue is released! 😄

2 years ago
0 Hey,

Hey
Unfortunately there is no way to manage CPU core allocation for an agent.

2 years ago
0 Hey,

For controlling the number of CPU cores, I am not sure. I will keep you updated asap.

2 years ago
0 Hey,

It is for the sake of the example. It lets the agents run in the background, and thus several agents can be fired from the same terminal.

2 years ago
0 Hi, Is There Any Manifest For The Relevant Polices Needed For The Aws Account (If We Are Using Autoscaling)? Also, Is There A Way To Use Github Deploy Key Instead Of Personal Token? Thanks !

If the AWS machine has an ssh key installed, it should work - I assume it's possible to either use a custom AMI for that, or you can use the autoscaler instance startup bash script

2 years ago
0 Upload_Artifact Not Working With Minio

hi GentleSwallow91
Concerning the warning message, there is an entry in the FAQ. Here is the link:
https://clear.ml/docs/latest/docs/faq/#resource_monitoring
We are working on reproducing your issue

2 years ago
0 Hey,

Hey
You can allocate resources to a worker by adding the --gpus parameter to the command line when you fire the agent. GPUs are designated by their index.

Example: spin up two agents, one per GPU, on the same machine:
clearml-agent daemon --detached --gpus 0 --queue default
clearml-agent daemon --detached --gpus 1 --queue default

2 years ago
0 Hi, I Am Trying To Run The Same Script On The Remote Machine. I Successfully Installed Clearml Agent And Initialized It On The Remote Server. Then I Ran Clearml-Agent Daemon --Docker. Then I Cloned The Project And Sent It To A Queue. But I Got The Followi

Hi VexedKoala41
Your agent is running in a docker container that may have a different version of Python installed. It tries to install a version of the package that doesn't exist for that Python version.
Try to specify the latest matching version: Task.add_requirements('ipython', '7.16.3')
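Something like this (project and task names are placeholders); note that add_requirements has to be called before Task.init:

`
from clearml import Task

# tell the agent which package version to install, before the task is created
Task.add_requirements('ipython', '7.16.3')

task = Task.init(project_name='examples', task_name='remote run')
`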

2 years ago
0 Hi, I Have A Local Package That I Use To Train My Models. To Start Training, I Have A Script That Calls

You can force the agent to install only the packages you need by using a requirements.txt file. Put in it what you want the agent to install (pytorch, and possibly clearml). Then call this function before Task.init:
Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')
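For example, something along these lines (paths and names are placeholders):

`
from clearml import Task

# requirements.txt (hypothetical content):
#   torch==1.10.0
#   clearml

# the agent will install exactly what is listed in the file, nothing else
Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')

task = Task.init(project_name='examples', task_name='train with local package')
`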

2 years ago
0 Hey,

Yep, I am working on it - I have something that I suspect does not work as expected, nothing sure though.
For the step that reports the model:
`
@PipelineDecorator.component(return_values=['res'],
                             parents=['step_one'],
                             cache=False,
                             monitor_models=['mymodel'])
def step_two():
    import torch
    from clearml import Task
    import torch.nn as nn

    class nn_model(nn.Module):
        def __init__(self):
            ...

2 years ago
0 Hey,

Regarding the file extension, it should not be a problem.

2 years ago
0 I Am Doing Port Forwarding Of Ports From Localhost Clearml Server In Ec2 Instance To The Ports In Laptop Locally. I Am Able To Login To The Server And Generate The Credentials But I Am Not Able To Create Task

Hello DepravedSheep68,

In order to store your info in the S3 bucket you will need two things:
1. Specify the URI where you want to store your data when you initialize the task (see the parameter output_uri in the Task.init function: https://clear.ml/docs/latest/docs/references/sdk/task#taskinit ).
2. Specify your S3 credentials in the clearml.conf file (which you did).
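Putting the first point into code, it would look roughly like this (the bucket name is a placeholder; the credentials stay in clearml.conf):

`
from clearml import Task

task = Task.init(
    project_name='examples',
    task_name='store output on S3',
    # artifacts and models uploaded by this task go to the bucket
    output_uri='s3://my-bucket/clearml',
)
`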

2 years ago
0 Hey,

To provide an upload destination for the artifacts, you can:
- add the parameter output_uri to Task.init ( https://clear.ml/docs/latest/docs/references/sdk/task#taskinit ), or
- set the destination in clearml.conf: sdk.development.default_output_uri ( https://clear.ml/docs/latest/docs/configs/clearml_conf#sdkdevelopment )
To enqueue the pipeline, you simply call it, without run_locally or debug_pipeline.
You will have to provide the parameter execution_queue to your steps, or defau...
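For the enqueueing part, here is a minimal sketch assuming a decorator-based pipeline (queue, project and pipeline names are placeholders):

`
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['res'], execution_queue='default')
def step_one():
    return 42

@PipelineDecorator.pipeline(name='demo pipeline', project='examples', version='0.0.1')
def my_pipeline():
    step_one()

if __name__ == '__main__':
    # no run_locally() / debug_pipeline() call, so the steps are enqueued for the agents
    my_pipeline()
`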

2 years ago
0 I’M Trying To Get The Meta-Information About The Code (Section Execution) To Be Auto-Filled, However When I Run The Script With The Pycharm Testrunner, It Is Missing. If I Use

Hi EnormousWorm79

The PyCharm test runner wraps the script into a local script (the JetBrains pytest runner), and that's what you are getting. Because it is local, you lose the source info.

Let me check if I have a workaround or a solution for you. I will keep you updated.

2 years ago
0 Hey, So I'M Trying To Upload An Artefact To Clearml’S Fileserver(I Have A Self Hosted Clearml Server Running), I'Ve Uploaded The File Using Storagemanager.Upload_File(Path, Url) And Giving The Url As “

Hi WickedElephant66
When you are in the Projects section of the WebApp (second icon on the left), enter either "All Experiments" or any project you want to access. At the top center is the Models section. You can find the URL the model can be downloaded from in the details section.

2 years ago
0 Hi Everyone, Quick Question Regarding Minio And Logging:

Yes, everything that is downloaded is cached. The cache folder is configured in your config file:

` sdk {
    # ClearML - default SDK configuration

    storage {
        cache {
            # Defaults to system temp folder / cache
            default_base_dir: "~/.clearml/cache"
            size {
                # max_used_bytes = -1
                min_free_bytes = 10GB
                # cleanup_margin_percent = 5%
            }
        }

        direct_access: [
            # Objects matching are...
2 years ago
0 Hi, I Am Trying To Use The Parameterset For Hyper-Parameter Tuning With Dependencies, An Example Of How I Use It: Parameterset([{“Prm1”:1, “Prm2": 1},{“Prm1”:2, “Prm2":2}]) But I Get A Warning :

Hi MoodySheep3
I think that you use ParameterSet the way it is supposed to be used 🙂
When I run my examples, I also get this warning - which is weird, because:
- it is just a warning, and the script continues anyway (it reaches the end without issue)
- those hyperparameters exist, and all the sub-tasks corresponding to a given parameter set find them!
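For reference, this is roughly how I build it in my examples (the base task id and metric names are placeholders, and parameter names include their section, e.g. General/...):

`
from clearml.automation import HyperParameterOptimizer, ParameterSet

optimizer = HyperParameterOptimizer(
    base_task_id='xxxxxx',  # template task to clone for each parameter set
    hyper_parameters=[
        ParameterSet([
            {'General/prm1': 1, 'General/prm2': 1},
            {'General/prm1': 2, 'General/prm2': 2},
        ]),
    ],
    objective_metric_title='validation',
    objective_metric_series='loss',
    objective_metric_sign='min',
    execution_queue='default',
)

# run the optimization loop from this process and wait for it to finish
optimizer.start_locally()
optimizer.wait()
optimizer.stop()
`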

2 years ago
0 Hi, Bug Report. I Was Trying To Upload Data To S3 Via Clearml.Dataset Interface

Thanks! We have added quite a lot of new dataset features in our latest releases. I would encourage you to update your clearml packages 🙂

2 years ago