CostlyOstrich36 · Moderator · 0 Questions, 4212 Answers · Reputation 0 · Active since 10 January 2023 · Last activity 2 years ago

Hi all! I can't use the Scalars tab in all experiments due to an Elasticsearch error:

Please run the following commands and share the results. Chances are that somehow the default mappings that we apply on index creation were not applied to your events scalar index.

  1. First, run the following command:
curl -XGET "localhost:9200/_cat/indices/events-training_stats_scalar-*"
  2. Then, for each of the returned indices, run:
curl -XGET "localhost:9200/<index_name>/_mappings"
2 years ago
Hi, I'm looking for documentation on GCP autoscalers. When I search on the docs site, it shows me the AWS autoscaler but not the GCP one. Can someone point me to the right docs page? Thanks!

Hi JumpyPig73 ,

It appears that only the AWS autoscaler is in the open version; the other autoscalers are only in advanced tiers (Pro and onwards):
https://clear.ml/pricing/

3 years ago
Is there a robust way (using the SDK and not the UI) to add tags to a task regardless of where it is executed?

I think you can get the task object from outside the run and then add tags to it.
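
A minimal sketch of that approach (the task ID and tag names are placeholders):

from clearml import Task

# Fetch an existing task by its ID - works from any machine with API access,
# regardless of where the task itself is executing
task = Task.get_task(task_id="<your_task_id>")
task.add_tags(["needs-review", "experiment-v2"])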

3 years ago
Hi everyone! I've noticed that if I run an experiment and it fails, the ClearML agent will delete all datasets that have been downloaded during the run. Is it correct behavior? How can I force the agent to preserve such datasets?

ExcitedSeaurchin87 , Hi 🙂

I think it's correct behavior - you wouldn't want leftover files flooding your computer.

Regarding preserving the datasets - I'm guessing that you're doing the pre-processing & training in the same task, so if the training fails you don't want to re-download the data?

3 years ago
I assume I can ask a question here. The ClearML orchestrator looks interesting. But the website suggests that K8s is required. We have a Linux training box (Lambdabox) where we want to run training. Can we place the ClearML orchestrator agent on the m

RobustFlamingo1 , I think this is because you looked at 'Orchestrate for DevOps' and not 'Automate for Data Scientist'. If you switch to the other option you will see that no K8s is required 🙂

I am guessing that the use case shown there is closer to what you're looking for. K8s is meant for larger-scale deployments, where the DevOps team sets up the system to run on a K8s cluster.

3 years ago
Hi, what would be the recommended way to add/track arbitrary models to/with OutputModels? Currently hacking it by using joblib dump and subsequently deleting unwanted "local" files. Arbitrary in this case just extensions to some scikit-learn classes.

If you set Task.init(..., output_uri=<PATH_TO_ARTIFACT_STORAGE>), everything will be uploaded to your artifact storage automatically.
Regarding models: to skip the joblib dump hack, you can simply connect the models manually to the task with this method:
https://clear.ml/docs/latest/docs/references/sdk/model_outputmodel#connect
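
For reference, a minimal sketch of that approach, assuming a fitted scikit-learn estimator and a placeholder storage URI:

import joblib
from sklearn.linear_model import LogisticRegression
from clearml import Task, OutputModel

# output_uri is a placeholder - point it at your own artifact storage
task = Task.init(project_name="examples", task_name="sklearn model tracking",
                 output_uri="s3://my-bucket/models")

clf = LogisticRegression().fit([[0.0], [1.0]], [0, 1])  # stand-in estimator

# Register an output model on the task and connect it, instead of
# managing local files by hand
output_model = OutputModel(task=task, framework="ScikitLearn")
output_model.connect(task=task)

# Serialize the estimator and attach the weights; the file is uploaded to output_uri
joblib.dump(clf, "model.pkl")
output_model.update_weights(weights_filename="model.pkl")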

3 years ago
We are getting "Fetch experiment failed" in the UI in one of the projects. Other projects are OK. In

Are you running a self-deployed server? If that is the case, what is the version?

3 years ago
Hi! I'm trying pipelines with data creation and model training.

Hi AbruptHedgehog21 , it looks like you need to set parameters.dataset_id on step data_creation
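
For reference, a minimal sketch of overriding that parameter on a pipeline step (project, task names, and the dataset ID are placeholders):

from clearml import PipelineController

pipe = PipelineController(name="train-pipeline", project="examples", version="1.0")

# parameter_override feeds dataset_id to the data_creation step at runtime
pipe.add_step(
    name="data_creation",
    base_task_project="examples",
    base_task_name="data creation",
    parameter_override={"General/dataset_id": "<your_dataset_id>"},
)
pipe.add_step(
    name="model_training",
    parents=["data_creation"],
    base_task_project="examples",
    base_task_name="model training",
)
pipe.start()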

2 years ago
Hey everyone, is there an option in the UI/Scalars wizard to have metrics aggregated into a single graph, e.g. the ability to see

Hi @<1523701842515595264:profile|PleasantOwl46> , I suggest opening a GitHub feature request for this 🙂

3 months ago
Hello, I deployed ClearML in Docker and trained a YOLOv8 model. Training finished, but no pictures show in the Plots tab. Why?

Are you using the community server, or are you self-hosting the open source version?

one year ago
Hi, is there a way to get clearml.conf using the ClearML SDK? TIA

What is the use case of accessing clearml.conf during runtime?

2 years ago
Hello! How can I save and resume studies with Optuna, e.g. if the best quality has not been reached yet, or for other reasons? Optuna has capabilities for resuming studies.

Hi!
I believe you can stop and resume studies by adding these actions to your script:

Add save points via joblib.dump()
and connect them to ClearML via clearml.model.OutputModel.connect()

Then, when you want to start or resume a study, load the latest study file via joblib.load() and connect it to ClearML with clearml.model.InputModel.connect()

This way you can stop your training sessions with the agent and resume them from nearly the same point

I think all the required references are h...
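
A minimal sketch of that save/resume flow (the objective, project/task names, and the saved model ID are placeholders; update_weights()/get_weights() are used here as one way to upload and fetch the study file):

import joblib
import optuna
from clearml import Task, InputModel, OutputModel

def objective(trial):  # stand-in objective - replace with your own
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2

task = Task.init(project_name="examples", task_name="optuna study")

# Save point: run some trials, then dump the study and upload it
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
joblib.dump(study, "study.pkl")
OutputModel(task=task).update_weights(weights_filename="study.pkl")

# Resume later: fetch the saved study file and keep optimizing
input_model = InputModel(model_id="<saved_study_model_id>")
study = joblib.load(input_model.get_weights())
study.optimize(objective, n_trials=20)  # continues near where it stopped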

4 years ago
[clearml-session question] Why does Jupyter Lab have only a token in the URL while code-server doesn't?

Hi @<1524922424720625664:profile|TartLeopard58> , can you elaborate on what you mean by code-server?

2 years ago
I am experiencing performance issues with using ClearML together with PyTorch Lightning CLI for experiment tracking. Essentially what we're doing is fetching the logger object through Task.get_logger() and then using the reporting methods. However, it ad

Hi SoreHorse95 ,

Does ClearML not automatically log all outputs?

Regarding logging, maybe try the following setting in ~/clearml.conf: sdk.network.metrics.file_upload_threads: 16
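
In the HOCON layout of clearml.conf that setting would sit roughly like this (a sketch; keep the rest of your file as-is):

sdk {
    network {
        metrics {
            # number of parallel threads used to upload metric files/images
            file_upload_threads: 16
        }
    }
}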

3 years ago
Hi, thanks a lot for your product! I have a question about possible functionality. Is it possible now to group graphics in the "Scalars" tab into dropdown groups as is done in TensorBoard? I'll try to explain what I mean. Let's say that I am solving object detec

Hi SillyGoat67 ,

Hmmm. What if you run these in separate experiments and each experiment reports its own result? This way you could use comparison between experiments to see the different results grouped together.

Also, you can report different scalars for the same series, so you can see something like this:

3 years ago
Hi everyone, I have a problem with my

What are the requirements specified in the experiment UI?

3 years ago
Hey guys, I'm trying to install

DeliciousSeal67 , you need to update the docker image in the container section - like here:

3 years ago
Is there a way to get a list of users from the API?

Hi @<1523701842515595264:profile|PleasantOwl46> , you can use users.get_all to fetch them
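
A minimal sketch using the SDK's APIClient (assuming your clearml.conf credentials are already set up; the printed fields are illustrative):

from clearml.backend_api.session.client import APIClient

client = APIClient()
users = client.users.get_all()  # list of users registered on the server
for user in users:
    print(user.id, user.name)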

one year ago