ThankfulClams64
Moderator
6 Questions, 53 Answers
  Active since 04 July 2024
  Last activity 28 days ago

Reputation: 0

Badges: 1 (53 × Eureka!)
0 Votes 3 Answers 117 Views
How do you get ClearML GPU Compute to show up under Applications or Autoscalers?
2 months ago
0 Votes 7 Answers 458 Views
Hello, are there any resources for trying to reduce the number of API calls? I am trying out Clear ML and with just 20 epochs it says there have been 80k api...
5 months ago
0 Votes 2 Answers 88 Views
Hi from what I can tell trying to follow this documentation None To start an agent I should run clearml-agent daemon --queue test_queue --detached and to sto...
29 days ago
0 Votes 3 Answers 117 Views
I'm trying to use clearml agents. For tensorflow it looks like it does not save the pip package correctly. I need to install it as tensorflow[and-cuda] not j...
one month ago
0 Votes 1 Answer 454 Views
For clearml-agents where does it clone the git repo and can you specify the location somehow?
5 months ago
0 Votes 69 Answers 9K Views
5 months ago
0 I Am Using ClearML Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To ClearML. It Shows The Experiment Running (For Days) And It's Running Fine On The PC But No Scalars Or Debug Samples Are Shown. How Do We Troubleshoot T

So I am only seeing values for the first epoch. It seems like it does not track all of them, so maybe something is happening when it tries to log scalars.
I have seen it only log iterations, but setting task.set_initial_iteration(0) seemed to fix that, so it now seems to be logging the correct epoch.
TensorBoard is correct and works. I have never seen an issue in the TensorBoard logs.
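
For reference, a minimal sketch of that pattern, with placeholder project and task names rather than the real ones:

    from clearml import Task

    # Minimal sketch with placeholder names: after Task.init, reset the
    # reported iteration offset so scalars are logged starting from iteration 0.
    task = Task.init(project_name="example-project",
                     task_name="example-task")
    task.set_initial_iteration(0)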

4 months ago
0 I Am Using ClearML Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To ClearML. It Shows The Experiment Running (For Days) And It's Running Fine On The PC But No Scalars Or Debug Samples Are Shown. How Do We Troubleshoot T

I am on 1.16.2

    # model_config is loaded elsewhere in the script (not shown in this thread)
    from clearml import Task

    task = Task.init(project_name=model_config['ClearML']['project_name'],
                     task_name=model_config['ClearML']['task_name'],
                     continue_last_task=False,
                     auto_connect_streams=True)
4 months ago
0 I Am Using ClearML Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To ClearML. It Shows The Experiment Running (For Days) And It's Running Fine On The PC But No Scalars Or Debug Samples Are Shown. How Do We Troubleshoot T

Correct, so I get something like this:

ClearML Task: created new task id=6ec57dcb007545aebc4ec51eb5b34c67
======> WARNING! Git diff too large to store (2536kb), skipping uncommitted changes <======
ClearML results page: 

but that is all

4 months ago
0 How Do You Get ClearML GPU Compute To Show Up Under Applications Or Autoscalers?

Yea, from all the YouTube videos it is just there with no mention of how to get it. But I don't have it

2 months ago
0 Hi From What I Can Tell Trying To Follow This Documentation

Thank you! I think that is all I need to do

29 days ago
0 For Clearml-Agents Where Does It Clone The Git Repo And Can You Specify The Location Somehow?

It looks like it creates a task_repository folder in the virtual environment folder. There is a way to specify your virtual environment folder, but I haven't found any way to specify the git directory.

5 months ago
0 Hello, Are There Any Resources For Trying To Reduce The Number Of API Calls? I Am Trying Out Clear ML And With Just 20 Epochs It Says There Have Been 80K API Calls

I didn't do a very scientific comparison, but the # of API calls did decrease substantially by turning off auto_connect_streams. It is probably about 100k API calls per day with 1 experiment running, where before it was maybe 300k API calls per day. Still seems like a lot when I only run 20-30 epochs in a day.

5 months ago
0 Hello, Are There Any Resources For Trying To Reduce The Number Of API Calls? I Am Trying Out Clear ML And With Just 20 Epochs It Says There Have Been 80K API Calls

Thanks! It looks like I can set

auto_connect_streams = False

in the task init, at least to try.

We are using Keras, so it is logging progress bars by default, which I think we could turn off. I just wouldn't expect logging text to require so many API calls. Especially since they charge by API calls, I assumed it would be better managed.
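
A minimal sketch of what that Task.init call could look like (placeholder names; as far as I can tell from the SDK, auto_connect_streams also accepts a per-stream dict if only some streams should be disabled):

    from clearml import Task

    # Sketch with placeholder names: disable automatic console/log streaming.
    # A dict such as {"stdout": False, "stderr": True, "logging": False} can
    # reportedly be passed instead of a plain bool for finer-grained control.
    task = Task.init(project_name="example-project",
                     task_name="example-task",
                     auto_connect_streams=False)

In Keras, passing verbose=2 to model.fit() prints one line per epoch instead of a live progress bar, which should also cut down on streamed console output.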

5 months ago
0 Hello, Are There Any Resources For Trying To Reduce The Number Of API Calls? I Am Trying Out Clear ML And With Just 20 Epochs It Says There Have Been 80K API Calls

Will do! It probably won't be until next week. I don't plan on stopping this run to try it, but will definitely follow up with my results.
Yea, I think if we self-hosted I wouldn't have noticed it at all.

5 months ago
0 Hello, Are There Any Resources For Trying To Reduce The Number Of API Calls? I Am Trying Out Clear ML And With Just 20 Epochs It Says There Have Been 80K API Calls

It's possible. Is there a way to just slow down or turn off the log streaming to see how it affects the API calls?

5 months ago
0 I Am Using ClearML Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To ClearML. It Shows The Experiment Running (For Days) And It's Running Fine On The PC But No Scalars Or Debug Samples Are Shown. How Do We Troubleshoot T

When I try to abort an experiment, I get this in the log:

clearml.Task - WARNING - ### TASK STOPPED - USER ABORTED - STATUS CHANGED ###

but it does not stop anything; it just continues to run.
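
As a rough workaround sketch (not a fix for the underlying issue), the training loop could poll the task status itself and bail out once the server marks the task stopped; the helper name here is made up:

    from clearml import Task

    def aborted_from_ui() -> bool:
        # Hypothetical helper: True once the server has marked the current task as stopped.
        task = Task.current_task()
        return task is not None and task.get_status() == "stopped"

    # e.g. check aborted_from_ui() at the end of each epoch and break out of training if it returns True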

4 months ago
0 I'm Trying To Use ClearML Agents. For Tensorflow It Looks Like It Does Not Save The Pip Package Correctly. I Need To Install It As

I guess I don't understand; I am referring to the ClearML configuration file on the agent. The only way I have gotten it to consistently work is to just install the environment beforehand and set that environment variable. Otherwise it seems ClearML is not correctly saving the environment to be able to reproduce it. In my case the issue is installing tensorflow instead of tensorflow[and-cuda], which is what was installed.
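
One possible sketch, assuming the extras spec is passed through to the recorded requirements as-is (project and task names are placeholders):

    from clearml import Task

    # Sketch: record the extras variant explicitly so the agent installs
    # tensorflow[and-cuda] rather than plain tensorflow.
    # add_requirements must be called before Task.init to take effect.
    Task.add_requirements("tensorflow[and-cuda]")
    task = Task.init(project_name="example-project",
                     task_name="example-task")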

one month ago
0 I'm Trying To Use ClearML Agents. For Tensorflow It Looks Like It Does Not Save The Pip Package Correctly. I Need To Install It As

I just used CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL. Can that be put in the clearml.conf? I didn't see a reference to it in the documentation.

one month ago
0 I Am Using ClearML Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To ClearML. It Shows The Experiment Running (For Days) And It's Running Fine On The PC But No Scalars Or Debug Samples Are Shown. How Do We Troubleshoot T

Hi, we are currently having the issue. There is nothing in the console regarding ClearML besides:

ClearML Task: created new task id=0174d5b9d7164f47bd10484fd268e3ff
======> WARNING! Git diff too large to store (3611kb), skipping uncommitted changes <======
ClearML results page: 

The console logs continue to come in, but no scalars or debug images show up.

4 months ago
0 I Am Using ClearML Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To ClearML. It Shows The Experiment Running (For Days) And It's Running Fine On The PC But No Scalars Or Debug Samples Are Shown. How Do We Troubleshoot T

It is not always reproducible. It seems like something that we do not understand happens, and then the machine consistently has this issue. We believe it has something to do with stopping and starting experiments.

4 months ago
0 I Am Using ClearML Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To ClearML. It Shows The Experiment Running (For Days) And It's Running Fine On The PC But No Scalars Or Debug Samples Are Shown. How Do We Troubleshoot T

I just created a new virtual environment and the problem persists. There are only two dependencies: clearml and tensorflow. @<1523701070390366208:profile|CostlyOstrich36> what logs are you referring to?

4 months ago
0 I Am Using ClearML Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To ClearML. It Shows The Experiment Running (For Days) And It's Running Fine On The PC But No Scalars Or Debug Samples Are Shown. How Do We Troubleshoot T

It was working for me. Anyway, I modified the callback. Attached is the script that has the issue for me: whenever I add random_image_logger to the callbacks, it only logs some of the scalars for 1 epoch. It then is stuck and never recovers. When I remove random_image_logger, the scalars are correctly logged. Again, this is only on 1 computer; on other computers we have, logging works perfectly fine.
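
The script itself isn't attached here, but a hypothetical reconstruction of that kind of callback (class name and image shape are assumptions) might look like:

    import numpy as np
    import tensorflow as tf
    from clearml import Task

    class RandomImageLogger(tf.keras.callbacks.Callback):
        # Hypothetical version of the image-logging callback described above.
        def on_epoch_end(self, epoch, logs=None):
            logger = Task.current_task().get_logger()
            # Report a random RGB image as a debug sample for this epoch.
            image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
            logger.report_image(title="debug", series="random", iteration=epoch, image=image)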

4 months ago
0 I Am Using ClearML Pro And Pretty Regularly I Will Restart An Experiment And Nothing Will Get Logged To ClearML. It Shows The Experiment Running (For Days) And It's Running Fine On The PC But No Scalars Or Debug Samples Are Shown. How Do We Troubleshoot T

Okay, I will do another run to capture the console output. We currently set auto_connect_streams to False to reduce the number of API calls, so there isn't really anything in the ClearML task page console section.

4 months ago