Did it have any errors in the local run, up to the task.execute_remotely
call?
You can try to hack it: in the UI, under the EXECUTION tab, add the -m scalene prefix to the script path, something like: -m scalene my_clearml_task.py
Can you try it? (Make sure you install scalene, or have it under your installed packages.)
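For reference, a minimal sketch of the kind of script being discussed (project, queue, and file names here are placeholders, not from the thread):

```python
# my_clearml_task.py -- hypothetical example script
from clearml import Task

# Register the run with the ClearML server (names are placeholders).
task = Task.init(project_name="examples", task_name="profiled-task")

# Everything above this call runs locally; from here on, the task is
# enqueued and a clearml-agent re-launches the script remotely.
task.execute_remotely(queue_name="default")

# ... the actual workload to profile goes here ...
```

Locally you would run it as python -m scalene my_clearml_task.py; the script-path prefix hack above makes the agent do the equivalent on the remote machine.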
So maybe the path issue is related to the fact that I have venv caching on?
hmmm could be...
Can you quickly disable the caching and try?
=fil-profile= Preparing to write to fil-result/2021-08-19T20:23:30.905
=fil-profile= Wrote memory usage flamegraph to fil-result/2021-08-19T20:23:30.905/out-of-memory.svg
=fil-profile= Wrote memory usage flamegraph to fil-result/2021-08-19T20:23:30.905/out-of-memory-reversed.svg
now the problem is: fil-profiler persists the reports and then exits,
and I have no way to save those as ClearML artifacts
You could do, at the end of the code: task.upload_artifact('profiler', Path('./fil-result/'))
wdyt?
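As a sketch, the suggestion above with the imports it needs (assuming the task was already initialized with Task.init earlier in the script):

```python
from pathlib import Path
from clearml import Task

# Grab the task that was initialized earlier in the script.
task = Task.current_task()

# At the very end of the code: upload the whole fil-result folder
# as a single artifact named "profiler".
task.upload_artifact("profiler", artifact_object=Path("./fil-result/"))
```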
how do you run it locally? the same?
FiercePenguin76, in the task's EXECUTION tab, under "script path", change it to "-m filprofiler run catboost_train.py".
It should work (assuming the "catboost_train.py" is in the working directory).
and I have no way to save those as ClearML artifacts
but this will be invoked before fil-profiler starts generating them
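One conceivable workaround (a sketch, not something proposed in the thread): run the profiled script as a child process from a thin wrapper, so the reports already exist by the time the upload runs:

```python
# wrapper.py -- hypothetical wrapper around the real training script
import subprocess
from pathlib import Path

from clearml import Task

task = Task.init(project_name="examples", task_name="profiled-run")  # placeholders

# fil-profiler writes its reports when this child process exits.
subprocess.run(
    ["python", "-m", "filprofiler", "run", "catboost_train.py"],
    check=True,
)

# By now ./fil-result/ has been written and can be uploaded.
task.upload_artifact("profiler", artifact_object=Path("./fil-result/"))
```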
Not a full log yet (I'll have to inspect it to make sure it doesn't contain any non-public info), but something potentially interesting
but this time they were all present, and the command was run as expected:
[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'
doesn’t look good
[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'
“assuming the “catboost_train.py” is in the working directory” - maybe I'm getting this part wrong?
I guess that’s the only option, thanks for your help
hmm that is odd.
Can you send the full log?
but this will be invoked before fil-profiler starts generating them
I thought it would flush in the background 😞
You can, however, configure the profiler to write to a specific folder, then mount that folder to the host machine:
In the "base docker args" section, add: -v /host/folder/for/profiler:/inside/container/profile
So my question can probably be transformed into: “Can I have control over what command is used to start my script on the clearml-agent?”
The task is running, but there is no log output from fil-profiler (when it runs completely locally, it does some logging at the very beginning).
Nope, I need to contact the devops team for that, and that can't happen earlier than Monday.
No worries, I'll see if I can replicate it anyhow
Adding venv into cache: /root/.clearml/venvs-builds/3.8
Running task id [aa2aca203f6b46b0843699d1da373b25]:
[.]$ /root/.clearml/venvs-builds/3.8/bin/python -u '/root/.clearml/venvs-builds/3.8/code/-m filprofiler run catboost_train.py'
btw, you can also run using python -m filprofiler run my_clearml_task.py
So maybe the path issue is related to the fact that I have venv caching on?
But here you can see why it didn’t succeed:
Adding venv into cache: /root/.clearml/venvs-builds/3.8
Running task id [8c65e88253034bd5a8dba923062066c1]:
[pipelines]$ /root/.clearml/venvs-builds/3.8/bin/python -u -m filprofiler run catboost_train.py
For some reason, when I ran it the previous time, the repo, commit, and working dir were all empty.