btw, you can also run using python -m filprofiler run my_clearml_task.py
so probably, my question can be transformed into: “Can I have control over what command is used to start my script on clearml-agent”
Did it have any errors in the local run, up to the task.execute_remotely call?
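For context, a minimal sketch of what I mean by "up to the task.execute_remotely call" (project/queue names here are just placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="catboost_train")

# everything up to this call runs locally; anything after it runs on the agent
task.execute_remotely(queue_name="default")

# training code below only executes on the clearml-agent worker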
You can try to hack it: in the UI, under the EXECUTION tab, add this prefix (-m scalene) to the script path, something like: -m scalene my_clearml_task.py
Can you try it? (make sure you install scalene or have it under your installed packages)
[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'
doesn’t look good
the task is running, but there is no log output from fil-profiler (when run totally locally, it does some logging at the very beginning)
how do you run it locally? the same?
FiercePenguin76 in the Tasks execution tab, under "script path", change to "-m filprofiler run catboost_train.py".
It should work (assuming the "catboost_train.py" is in the working directory).
But here you can see why it didn’t succeed
[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'
“assuming the “catboost_train.py” is in the working directory” - maybe I got this part wrong?
hmm that is odd.
Can you send the full log?
not the full log yet (I'll have to inspect it first to make sure it has no non-public info), but here's something potentially interesting
Adding venv into cache: /root/.clearml/venvs-builds/3.8
Running task id [aa2aca203f6b46b0843699d1da373b25]:
[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'
So maybe the path is related to the fact I have venv caching on?
hmmm could be...
Can you quickly disable the caching and try?
nope, I'd need to contact the devops team for that, and that can't happen earlier than Monday
No worries, I'll see if I can replicate it anyhow
for some reason, when I ran it the previous time, the repo, commit and working dir were all empty
but this time they were all present, and the command was run as expected:
Adding venv into cache: /root/.clearml/venvs-builds/3.8
Running task id [8c65e88253034bd5a8dba923062066c1]:
[pipelines]$ /root/.clearml/venvs-builds/3.8/bin/python -u -m filprofiler run catboost_train.py
now the problem is: fil-profiler persists the reports and then exits
=fil-profile= Preparing to write to fil-result/2021-08-19T20:23:30.905
=fil-profile= Wrote memory usage flamegraph to fil-result/2021-08-19T20:23:30.905/out-of-memory.svg
=fil-profile= Wrote memory usage flamegraph to fil-result/2021-08-19T20:23:30.905/out-of-memory-reversed.svg
and I have no way to save those as clearml artifacts
You could do (at the end of the code): task.upload_artifact('profiler', Path('./fil-result/'))
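Something like this, a rough sketch (assuming you already hold the Task object, e.g. from Task.init or Task.current_task()):

from pathlib import Path
from clearml import Task

task = Task.current_task()  # or the task object returned by Task.init(...)

# at the very end of the script, upload whatever fil-profiler has written
task.upload_artifact('profiler', Path('./fil-result/'))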
wdyt?
but this will be invoked before fil-profiler starts generating them
I thought it would flush in the background 😞
You can however configure the profiler to write to a specific folder, then mount that folder to the host machine:
In the "base docker args" section add -v /host/folder/for/profiler:/inside/container/profile
I guess that’s the only option, thanks for your help