Answered
I Want To Run My ClearML Task On An Agent In K8S Together With A Memory Profiler (Maybe Scalene Or Fil-Profiler)

I want to run my ClearML task on an agent in k8s together with a memory profiler (maybe https://github.com/plasma-umass/scalene or https://github.com/pythonspeed/filprofiler ). The problem is that both require you to run the script as scalene my_clearml_task.py or fil-profile run my_clearml_task.py. Any ideas on how to do this?
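
For context, a minimal sketch of what such a task script might look like (the project/task names and the workload below are illustrative placeholders, not taken from this thread):

# my_clearml_task.py - illustrative sketch; names and workload are placeholders
from clearml import Task

# Register the run with the ClearML server so the agent / UI can track it
task = Task.init(project_name="examples", task_name="profiled-training")

def train():
    # stand-in workload whose memory usage we would want to profile
    buffers = [bytearray(1024 * 1024) for _ in range(64)]  # allocate roughly 64 MB
    return len(buffers)

if __name__ == "__main__":
    train()

Locally this would be launched as scalene my_clearml_task.py or fil-profile run my_clearml_task.py; the question is how to make clearml-agent use such a launcher instead of plain python.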

  
  
Posted one year ago

Answers 30


but this will be invoked before fil-profiler starts generating them

I thought it would flush in the background 😞
You can, however, configure the profiler to write to a specific folder, then mount that folder to the host machine:
In the "base docker args" section add -v /host/folder/for/profiler:/inside/container/profile

  
  
Posted one year ago

and I have no way to save those as clearml artifacts

You could do (at the end of the code):
task.upload_artifact('profiler', Path('./fil-result/'))
wdyt?
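
Spelled out as a self-contained sketch (assuming the script already called Task.init earlier, so Task.current_task() returns the running task):

from pathlib import Path
from clearml import Task

# At the very end of the script: upload the folder fil-profiler wrote its
# reports into as a single ClearML artifact named "profiler".
task = Task.current_task()
task.upload_artifact("profiler", artifact_object=Path("./fil-result/"))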

  
  
Posted one year ago

and I have no way to save those as clearml artifacts

  
  
Posted one year ago

I guess that’s the only option, thanks for your help

  
  
Posted one year ago

so probably, my question can be transformed into: “Can I have control over what command is used to start my script on clearml-agent”

  
  
Posted one year ago

btw, you can also run using python -m filprofiler run my_clearml_task.py

  
  
Posted one year ago

Did it have any errors in the local run, up to the task.execute_remotely call?

You can try to hack it: in the UI, under the EXECUTION tab, add this prefix (-m scalene) to the script path, so it reads something like -m scalene my_clearml_task.py. Can you try that? (make sure you install scalene or have it under your installed packages)
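
For reference, task.execute_remotely mentioned above is the call that hands the task over to the agent; a hedged sketch of how it is typically used (the queue name is a placeholder):

from clearml import Task

task = Task.init(project_name="examples", task_name="profiled-training")

# Everything up to this call runs locally, so errors here show up in the local run;
# the call then enqueues the task for a clearml-agent and exits the local process.
task.execute_remotely(queue_name="default", exit_process=True)

# From here on, the code only runs on the agent.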

  
  
Posted one year ago

the task is running, but there is no log output from fil-profiler (when run fully locally, it does some logging at the very beginning)

  
  
Posted one year ago

[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'
doesn't look good

  
  
Posted one year ago

how do you run it locally? the same?

  
  
Posted one year ago

But here you can see why it didn’t succeed

  
  
Posted one year ago

AgitatedDove14 I did exactly that.

  
  
Posted one year ago

[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'

  
  
Posted one year ago

“assuming the “catboost_train.py” is in the working directory” - maybe I get this part wrong?

  
  
Posted one year ago

not a full log yet (will have to inspect it to not have any non-public info), but something potentially interesting

  
  
Posted one year ago

hmm, that is odd.
Can you send the full log?

  
  
Posted one year ago

Adding venv into cache: /root/.clearml/venvs-builds/3.8
Running task id [aa2aca203f6b46b0843699d1da373b25]:
[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'

  
  
Posted one year ago

So maybe the path is related to the fact I have venv caching on?

  
  
Posted one year ago

So maybe the path is related to the fact I have venv caching on?

hmmm, could be...
Can you quickly disable the caching and try?

  
  
Posted one year ago

nope, I need to contact the devops team for that, and that can happen no earlier than Monday

  
  
Posted one year ago

No worries, I'll see if I can replicate it anyhow

  
  
Posted one year ago

I got it working!

  
  
Posted one year ago

for some reason, when I ran it the previous time, the repo, commit, and working dir were all empty

  
  
Posted one year ago

but this time they were all present, and the command was run as expected:

  
  
Posted one year ago

Adding venv into cache: /root/.clearml/venvs-builds/3.8
Running task id [8c65e88253034bd5a8dba923062066c1]:
[pipelines]$ /root/.clearml/venvs-builds/3.8/bin/python -u -m filprofiler run catboost_train.py

  
  
Posted one year ago

=fil-profile= Preparing to write to fil-result/2021-08-19T20:23:30.905
=fil-profile= Wrote memory usage flamegraph to fil-result/2021-08-19T20:23:30.905/out-of-memory.svg
=fil-profile= Wrote memory usage flamegraph to fil-result/2021-08-19T20:23:30.905/out-of-memory-reversed.svg

  
  
Posted one year ago

now the problem is: fil-profiler persists the reports and then exits

  
  
Posted one year ago

yes, same

  
  
Posted one year ago

FiercePenguin76 in the Task's EXECUTION tab, under "script path", change it to "-m filprofiler run catboost_train.py".
It should work (assuming "catboost_train.py" is in the working directory).
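
If you prefer to make the same change from code rather than the UI, recent versions of the clearml SDK expose Task.set_script; a hedged sketch (treat the availability and exact signature as an assumption and check against your SDK version; the task id is a placeholder):

from clearml import Task

# Assumes Task.set_script is available in your clearml SDK version; otherwise
# edit "script path" in the UI EXECUTION tab as described above.
task = Task.get_task(task_id="<your-task-id>")  # placeholder task id
task.set_script(entry_point="-m filprofiler run catboost_train.py")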

  
  
Posted one year ago

but this will be invoked before fil-profiler starts generating them

  
  
Posted one year ago