Answered

Hi, is there a way to get the quota used by each task? My "metrics" quota is filling up very quickly and I would like to understand what's causing it.

  
  
Posted 11 months ago

Answers 16


Hi @<1570220858075516928:profile|SlipperySheep79>
I think this is more complicated than one would expect, but as a rule of thumb, console logs and metrics are the main ones. I hope that helps. Maybe sort by number of iterations in the experiment table?

BTW: it's probably better to ask in the channel

  
  
Posted 11 months ago

I have the same problem: large metrics usage, with no tool to properly inspect where it is coming from.

  
  
Posted 11 months ago

Like an API to get the tasks that use the most metrics?

  
  
Posted 11 months ago

I can definitely feel you!
(I think the implementation is not trivial: metrics data size is collected and stored as a cumulative value on the account, so going over it per Task is actually quite taxing for the backend. Maybe it should be an async request, like "get me a list of the X largest Tasks"? How would the UI present it? FYI, keeping some sort of bookkeeping per task is not trivial either, hence the main issue.)

  
  
Posted 11 months ago

I tried to export them to JSON and they don't take more than 50KB each, but maybe they take more memory internally?

Ballpark should be the same.

I'm already at 300MB of usage with just 15 tasks

Maybe it was not updated yet? Meaning you had more and deleted some? (I think this is updated asynchronously, with a max of 24h.)

  
  
Posted 11 months ago

I'm already at 300MB of usage with just 15 tasks

Wow, what do you have there? I would try to download the console logs and see what size you are getting; that's the only thing that makes sense, wdyt?

BTW: to get the detailed size for scalars, maximize the plot (otherwise you are getting "subsampled" data)
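
As a rough sketch of that check using the ClearML Python SDK ("TASK_ID" is a placeholder, and serialized JSON size is only a ballpark for what the server actually accounts):

```python
# Rough, client-side check of one task's console log and scalar sizes.
# "TASK_ID" is a placeholder; JSON size is only a ballpark proxy for
# the server-side quota accounting.
import json

from clearml import Task

task = Task.get_task(task_id="TASK_ID")

# Full console output (ask for many reports, not just the last one).
console = task.get_reported_console_output(number_of_reports=10000)
console_kb = sum(len(line.encode("utf-8")) for line in console) / 1024

# max_samples=0 returns the full-resolution scalars rather than the
# subsampled series the UI plots by default.
scalars = task.get_reported_scalars(max_samples=0, x_axis="iter")
scalars_kb = len(json.dumps(scalars).encode("utf-8")) / 1024

print(f"console: ~{console_kb:.0f} KB, scalars: ~{scalars_kb:.0f} KB")
```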

  
  
Posted 11 months ago

So the longest experiment I have takes ~800KB in logs. I have tens of plotly plots logged manually; how are they stored internally? I tried to export them to JSON and they don't take more than 50KB each, but maybe they take more memory internally?

  
  
Posted 11 months ago

Hi @<1523701087100473344:profile|SuccessfulKoala55> , I'm uploading some debug images but they are around 300KB each, and there are fewer than 10 per experiment. Also, aren't debug images counted as artifacts for the quota?

  
  
Posted 11 months ago

I deleted a few experiments, but they had the same kind of plots and metrics, so I don't think they would free up much space.

  
  
Posted 11 months ago

Also, very large git diffs and/or connected configurations might grow fairly large

  
  
Posted 11 months ago

I have some git diffs logged, but they are very small. For the configurations, I saw that the dataset tasks have a fairly large "Dataset Content" config (~2MB), but I only have 5 dataset tasks.
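
One hedged way to check those two suspects from the SDK (again a placeholder "TASK_ID"; raw string length is used as a size proxy, and the diff location under script.diff in the exported dict is an assumption):

```python
# Check the size of a named configuration object and of the git diff.
# "TASK_ID" is a placeholder; raw string length is a rough size proxy,
# and the diff living under script.diff is an assumption.
from clearml import Task

task = Task.get_task(task_id="TASK_ID")

config_text = task.get_configuration_object("Dataset Content")
if config_text:
    print(f"Dataset Content: ~{len(config_text.encode('utf-8')) / 1024:.0f} KB")

exported = task.export_task()  # task fields as a plain dict
diff = (exported.get("script") or {}).get("diff") or ""
print(f"git diff: ~{len(diff.encode('utf-8')) / 1024:.0f} KB")
```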

  
  
Posted 11 months ago

Would just having some Python API be an option? It would be more than enough to check what is causing this, and it would be called infrequently.

  
  
Posted 11 months ago

I already tried to check manually in the web UI for some anomalous file

Notice the metrics are not files/artifacts, just scalars/plots/console

  
  
Posted 11 months ago

Yes, or even just something like task.get_size()
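
Something like task.get_size() doesn't seem to exist in the SDK, but a client-side approximation could look like the sketch below ("my_project" is a placeholder project name; JSON-serialized size is only a ballpark, and this downloads everything it measures):

```python
# Hypothetical stand-in for the wished-for task.get_size(): sum the
# JSON-serialized sizes of scalars, plots, and console output. This is
# a ballpark of the server-side metrics accounting, and it is heavy to
# run because it downloads everything it measures.
import json

from clearml import Task


def estimated_metrics_size(task: Task) -> int:
    """Approximate one task's metrics footprint, in bytes."""
    scalars = task.get_reported_scalars(max_samples=0)
    plots = task.get_reported_plots()
    console = task.get_reported_console_output(number_of_reports=10000)
    return (
        len(json.dumps(scalars).encode("utf-8"))
        + len(json.dumps(plots).encode("utf-8"))
        + sum(len(line.encode("utf-8")) for line in console)
    )


# Rough version of the "X largest Tasks" idea from earlier in the
# thread ("my_project" is a placeholder project name).
tasks = Task.get_tasks(project_name="my_project")
largest = sorted(((estimated_metrics_size(t), t.name) for t in tasks), reverse=True)
for size, name in largest[:10]:
    print(f"{size / 1024 / 1024:.1f} MB  {name}")
```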

  
  
Posted 11 months ago

Hi @<1570220858075516928:profile|SlipperySheep79> , are you by any chance uploading large debug images?

  
  
Posted 11 months ago

Hi @<1523701205467926528:profile|AgitatedDove14> , I already tried to check manually in the web UI for some anomalous file, i.e. by downloading the log files or exporting the metrics plots, but I couldn't find anything that takes more than 100KB, and I'm already at 300MB of usage with just 15 tasks. Isn't it possible to get more info using some Python APIs?

  
  
Posted 11 months ago