now the problem is: fil-profiler persists the reports and then exits
so probably my question boils down to: “Can I control what command is used to start my script on clearml-agent?”
Adding venv into cache: /root/.clearml/venvs-builds/3.8
Running task id [aa2aca203f6b46b0843699d1da373b25]:
[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'
but this time they were all present, and the command was run as expected:
not a full log yet (I'll have to check that it doesn't contain any non-public info first), but here is something potentially interesting
hmm that is odd.
Can you send the full log?
nope, I'd need to contact the devops team for that, and that can't happen before Monday
how do you run it locally? the same way?
Did it have any errors in the local run, up to the task.execute_remotely() call?
You can try to hack it: in the UI, under the EXECUTION tab, add this prefix (-m scalene) to the script path, so it looks something like: -m scalene my_clearml_task.py
Can you try it? (make sure you install scalene, or have it listed under your installed packages)
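If you'd rather make sure the profiler package ends up in the agent's venv from code instead of through the UI, here is a minimal sketch (assuming a standard Task.init-based script; the project/task names are placeholders):
from clearml import Task
# Ask the agent to install the profiler package in the remote environment;
# must be called before Task.init(). "scalene" is simply the package suggested above.
Task.add_requirements("scalene")
task = Task.init(project_name="examples", task_name="catboost_train")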
but this will be invoked before fil-profiler starts generating them
So maybe the path is related to the fact I have venv caching on?
btw, you can also run using python -m filprofiler run my_clearml_task.py
and I have no way to save those as clearml artifacts
You could do (at the end of the code): task.upload_artifact('profiler', Path('./fil-result/'))
wdyt?
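For reference, a minimal sketch of that suggestion (assuming the script was started with Task.init(), and using the ./fil-result/ folder name that fil-profiler reports in the logs in this thread):
from pathlib import Path
from clearml import Task
# Grab the task created earlier by Task.init() and upload the whole report
# folder as one artifact (ClearML zips folder artifacts automatically).
task = Task.current_task()
task.upload_artifact(name='profiler', artifact_object=Path('./fil-result/'))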
But here you can see why it didn’t succeed
Adding venv into cache: /root/.clearml/venvs-builds/3.8
Running task id [8c65e88253034bd5a8dba923062066c1]:
[pipelines]$ /root/.clearml/venvs-builds/3.8/bin/python -u -m filprofiler run catboost_train.py
the task is running, but there is no log output from fil-profiler (when run completely locally, it does some logging at the very beginning)
for some reason, when I ran it the previous time, the repo, commit and working dir were all empty
I guess that’s the only option, thanks for your help
=fil-profile= Preparing to write to fil-result/2021-08-19T20:23:30.905
=fil-profile= Wrote memory usage flamegraph to fil-result/2021-08-19T20:23:30.905/out-of-memory.svg
=fil-profile= Wrote memory usage flamegraph to fil-result/2021-08-19T20:23:30.905/out-of-memory-reversed.svg
So maybe the path is related to the fact I have venv caching on?
hmmm could be...
Can you quickly disable the caching and try?
No worries, I'll see if I can replicate it anyhow
FiercePenguin76 in the Task's EXECUTION tab, under "script path", change it to "-m filprofiler run catboost_train.py".
It should work (assuming "catboost_train.py" is in the working directory).
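If you'd rather apply the same hack from code instead of the UI, a rough sketch (assuming your clearml version has Task.set_script() with an entry_point argument; the queue name is a placeholder):
from clearml import Task
task = Task.init(project_name="examples", task_name="catboost_train")
# Same trick as editing "script path" in the UI: prepend the profiler invocation
# to the entry point the agent will execute.
task.set_script(entry_point="-m filprofiler run catboost_train.py")
task.execute_remotely(queue_name="default")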
[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'
doesn’t look good
but this will be invoked before fil-profiler starts generating them
I thought it would flush in the background 😞
You can however configure the profiler to a specific folder, then mount the folder to the host machine:
In the "base docker args" section add -v /host/folder/for/profiler:/inside/container/profile
“assuming the “catboost_train.py” is in the working directory” - maybe I'm getting this part wrong?
and I have no way to save those as clearml artifacts