StrangePelican34
Moderator
6 Questions, 19 Answers
  Active since 10 January 2023
  Last activity 12 months ago

Reputation: 0

Badges: 1
19 × Eureka!
0 Votes 9 Answers 975 Views, one year ago
0 Votes 5 Answers 982 Views, 3 years ago
0 Votes 10 Answers 974 Views, 3 years ago
0 Votes 7 Answers 840 Views, one year ago
Is there documentation on the ClearML Slurm enterprise integration?
0 Votes 2 Answers 664 Views, 12 months ago
0 Votes 5 Answers 1K Views, 3 years ago
0 I Am Using Opennmt-Tf (2.18.1) And Clearml (1.1.2) For Training And Testing My Translation Models. I Am Wanting To Register The Incremental Bleu Scores And Final Test Data With Clearml (For Plotting, Comparison, Etc.), But It Is Not Working. I Cannot Fi

So, according to the article (and the code, as far as I could tell), OpenNMT-tf automatically enables TensorBoard. That is, it auto-logs the relevant metrics through tf.summary ( https://www.tensorflow.org/api_docs/python/tf/summary ). This shows up on the command line as something like:
` INFO:tensorflow:Evaluation result for step 9000: loss = 1.190986 ; perplexity = 3.290324 ; bleu = 63.569644
INFO:tensorflow:Step = 9100 ; steps/s = 2.17, source words/s = 28293, target words/s = 39388 ; Lea...
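For context, this is roughly how I'd expect ClearML to pick those values up (just a sketch, not taken from OpenNMT-tf itself; the project/task names and numbers are placeholders): Task.init() is called before training starts, and anything written through tf.summary should then appear under the task's scalars.

```python
from clearml import Task
import tensorflow as tf

# Assumption: Task.init() must run before any tf.summary writer is created so
# that ClearML's TensorBoard bindings can hook the summary ops.
task = Task.init(project_name="opennmt-experiments", task_name="en-de baseline")

# Roughly what OpenNMT-tf does internally during evaluation: write metrics
# through tf.summary. ClearML should mirror these into the task's scalars.
writer = tf.summary.create_file_writer("eval_logs")
with writer.as_default():
    tf.summary.scalar("bleu", 63.57, step=9000)
    tf.summary.scalar("perplexity", 3.29, step=9000)
writer.flush()
```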

3 years ago
0 I Am Using Opennmt-Tf (2.18.1) And Clearml (1.1.2) For Training And Testing My Translation Models. I Am Wanting To Register The Incremental Bleu Scores And Final Test Data With Clearml (For Plotting, Comparison, Etc.), But It Is Not Working. I Cannot Fi

In TensorFlow's __init__.py, TensorBoard appears to be initialized (including tf.summary):
` # Hook external TensorFlow modules.

# Import compat before trying to import summary from tensorboard, so that
# reexport_tf_summary can get compat from sys.modules. Only needed if using
# lazy loading.
_current_module.compat.v2  # pylint: disable=pointless-statement
try:
  from tensorboard.summary._tf import summary
  _current_module.__path__ = (
      [_module_util.get_parent_dir(summary)] + _current_m...
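If that automatic hookup doesn't fire, the manual fallback I'd try (again just a sketch; the metric names and values are placeholders) is reporting the BLEU scores directly through ClearML's Logger:

```python
from clearml import Task

task = Task.init(project_name="opennmt-experiments", task_name="en-de baseline")
logger = task.get_logger()

# Placeholder evaluation results; in practice these would come from the
# OpenNMT-tf evaluation loop.
for step, bleu in [(8000, 61.2), (9000, 63.57)]:
    logger.report_scalar(title="eval", series="bleu", value=bleu, iteration=step)
```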

3 years ago
0 After I Finish Training A Model, I Want To Call Logger.Report_Scalars To Help Monitor Inferencing Status (We Do A Lot Of Batch) But After The Model Finishes Training, Scalars Are No Longer Accepted By The Task As It Is Considered Completed. Help!

@<1523701205467926528:profile|AgitatedDove14> - for some reason none of those solutions are working. I am forcing "mark_started", but it doesn't register. Models don't have the report_* endpoints, and even when trying with an artifact, once the job finishes the artifact will no longer update.
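For reference, the pattern I'm attempting looks roughly like this (task ID and metric names are placeholders), based on my reading of the Task API:

```python
from clearml import Task

# Placeholder task ID of the already-completed training task.
task = Task.get_task(task_id="abc123")
task.mark_started(force=True)      # supposed to re-open the completed task

logger = task.get_logger()
logger.report_scalar(title="inference", series="batch_accuracy",
                     value=0.93, iteration=1)
logger.flush()

task.mark_stopped()                # put the task back into a finished state
```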

one year ago
0 Is There Documentation On The Clearml Slurm Enterprise Integration?

Thanks! I'll look forward to it. If there is a draft that gives a general sense of the scope and function of the integration, that would also be helpful for this phase of our project.

12 months ago
0 After I Finish Training A Model, I Want To Call Logger.Report_Scalars To Help Monitor Inferencing Status (We Do A Lot Of Batch) But After The Model Finishes Training, Scalars Are No Longer Accepted By The Task As It Is Considered Completed. Help!

@<1523701205467926528:profile|AgitatedDove14> - after the model_trainer.train step it is marked as complete. This is done using our own repo - None . The extra reporting steps are not added there yet (I am working on that locally), but it is still marking the job complete.

one year ago
0 For Clearml Serving, If I Am Trying To Deploy 100 Models On A Gpu That Can Handle 5 Concurrently, But Each One Will Be Sporadically Used (Fine Tuned Models Trained For Different Customers), Can Clearml-Serving Automatically Load And Unload Models Based Up

I checked Triton and found these references:

  • None
  • None
It appears that "they sell that" as Triton Management Service, part of None . It is possible to do this through their API, but it would need to be explicit (rough sketch below). Moreover, there are likely a few different algorithms that could be us...
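To make the "explicit" part concrete, here is a rough sketch of driving Triton's model control API (model names and server address are placeholders; this assumes the server was started with --model-control-mode=explicit):

```python
import tritonclient.http as httpclient

# Assumes Triton is reachable locally and running in explicit model-control mode.
client = httpclient.InferenceServerClient(url="localhost:8000")

client.load_model("customer_a_model")    # pull the model into GPU memory
# ... serve customer A's requests ...
client.unload_model("customer_a_model")  # free the slot
client.load_model("customer_b_model")    # swap in another customer's model
```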
one year ago
0 For Clearml Serving, If I Am Trying To Deploy 100 Models On A Gpu That Can Handle 5 Concurrently, But Each One Will Be Sporadically Used (Fine Tuned Models Trained For Different Customers), Can Clearml-Serving Automatically Load And Unload Models Based Up

Let's see if I understand:

  • Triton server deployments only support manual, static deployment of models for inference (without the enterprise offering)
  • ClearML can load and unload models based upon usage, but has to do so from the hard drive
  • Triton server does not support saving models off to normal RAM for faster loading/unloading
  • Therefore, currently, we can deploy 100 models when only 5 can be concurrently loaded, but when they are unloaded/loaded (automatically by ClearML), it will take a few sec...
one year ago