btw, you can also run using python -m filprofiler run my_clearml_task.py
nope, the catboost docs suggest manually running tensorboard against the output folder https://catboost.ai/docs/features/visualization_tensorboard.html
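For context, a minimal sketch of where those logs come from (the dataset and parameter values here are just illustrative):
```python
import numpy as np
from catboost import CatBoostClassifier

# Dummy data just to produce some training logs
X = np.random.rand(200, 4)
y = np.random.randint(0, 2, size=200)

# CatBoost writes its metric logs into `train_dir` (default "catboost_info");
# per the linked docs, TensorBoard is then started manually against that folder:
#   tensorboard --logdir=catboost_info
model = CatBoostClassifier(iterations=50, train_dir="catboost_info", verbose=False)
model.fit(X, y)
```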
So maybe the path is related to the fact that I have venv caching on?
not sure - ideally I would like to see these tables (e.g. with series_name, series_dtype, number_of_non_na_values as columns) back to back in the GUI to track the transformations. I don't think it's possible with Dataset. Anyway, this whole scenario is not a must-have, but a nice-to-have.
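For reference, a rough sketch of the kind of per-step summary I mean, pushed to the GUI with Logger.report_table (project/task names and the DataFrame here are made up):
```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="series summary")  # illustrative names
df = pd.DataFrame({"a": [1, 2, None], "b": ["x", None, "z"]})          # stand-in for a real frame

# One summary table per transformation step, reported back to back
summary = pd.DataFrame({
    "series_name": list(df.columns),
    "series_dtype": [str(t) for t in df.dtypes],
    "number_of_non_na_values": df.notna().sum().tolist(),
})
task.get_logger().report_table(
    title="series summary", series="after_step_1", iteration=0, table_plot=summary
)
```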
I guess you can easily reproduce it by cloning any task which has an input model - logs, hyperparams, etc. are reset, but the input model stays.
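Minimal repro sketch (the task ID is a placeholder for any task that has an input model):
```python
from clearml import Task

original = Task.get_task(task_id="<task-id-with-input-model>")  # placeholder ID
cloned = Task.clone(source_task=original, name="clone-repro")

# Logs, scalars, hyperparams are reset on the clone,
# but the input model reference is still there
print(cloned.models["input"])
```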
Basically, my problem is that it returns an empty result. In the same code I can get the dataset by its ID, and I can get the task (which created the dataset) using Task.get_tasks() (without mentioning the ID explicitly)
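Roughly what I mean (IDs and names are placeholders, not my real ones):
```python
from clearml import Dataset, Task

# Both of these work for me:
ds = Dataset.get(dataset_id="<dataset-id>")  # by explicit ID
creator_tasks = Task.get_tasks(project_name="<project>", task_name="<dataset-task-name>")

# ...while the lookup described above comes back empty for the same dataset.
```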
this does not prevent enqueuing and running new tasks, it's more of an eyesore
=fil-profile= Preparing to write to fil-result/2021-08-19T20:23:30.905
=fil-profile= Wrote memory usage flamegraph to fil-result/2021-08-19T20:23:30.905/out-of-memory.svg
=fil-profile= Wrote memory usage flamegraph to fil-result/2021-08-19T20:23:30.905/out-of-memory-reversed.svg
I see that in the end, both query functions are calling Task._query_tasks
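Assuming the two query functions in question are Task.get_tasks and Task.query_tasks, this is the comparison I was doing:
```python
from clearml import Task

# get_tasks returns Task objects, query_tasks returns task IDs,
# but both appear to funnel into Task._query_tasks internally
tasks = Task.get_tasks(project_name="<project>")      # placeholder project name
task_ids = Task.query_tasks(project_name="<project>")
```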
Adding venv into cache: /root/.clearml/venvs-builds/3.8
Running task id [8c65e88253034bd5a8dba923062066c1]:
[pipelines]$ /root/.clearml/venvs-builds/3.8/bin/python -u -m filprofiler run catboost_train.py
and my problem occurred right after I tried to delete ~1.5K tasks from a single subproject
in the far future - automatically. In the near future - more like semi-manually
“VSCode running locally, connected to the remote machine over SSH” - exactly
yeah, I think I’ll go with schedule_function right now, but your proposed idea would make it even clearer.
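A sketch of the schedule_function route I have in mind (the function name and cadence values are just illustrative):
```python
from clearml.automation import TaskScheduler

def retrain():
    # hypothetical function to run on schedule
    print("kicking off retraining")

scheduler = TaskScheduler()
scheduler.add_task(
    schedule_function=retrain,
    name="nightly-retrain",
    hour=2,
    minute=0,
)
scheduler.start()
```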
I guess this is the one https://catboost.ai/docs/concepts/python-reference_catboostipythonwidget.html
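Assuming the widget behind that page is catboost.MetricVisualizer, usage would look roughly like this (meant to run inside a Jupyter notebook, pointed at one or more train_dir folders):
```python
from catboost import MetricVisualizer

# Point the widget at the folder(s) CatBoost wrote its training logs to
MetricVisualizer(["catboost_info"]).start()
```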
we’ll see, thanks for your help!
no, I'm providing the ID of the task which generated the model as a “hyperparam”
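Roughly what this looks like on my side (names and IDs are placeholders; the model-fetching part is just a sketch of the general pattern, not my exact code):
```python
from clearml import Task

task = Task.init(project_name="<project>", task_name="<consumer-task>")  # placeholders

# The producing task's ID comes in as a plain "hyperparam" rather than an InputModel
params = task.connect({"source_task_id": "<task-id>"})
source = Task.get_task(task_id=params["source_task_id"])
weights_path = source.models["output"][-1].get_local_copy()
```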
so I assume it's somehow related to the remote connection from VS Code
restart of clearml-server helped, as expected. Now we see all experiments (except for those that were written into task__trash during the “dark times”)
SmugDolphin23 sorry I don’t get how this will help with my problem
One workaround that I see is to move commonly used code not into a local module, but rather into a separate in-house library.
“supply the local requirements.txt” - this means I have to create a separate requirements.txt for each of my 10+ modules with different clearml tasks
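One programmatic way to point a task at a specific requirements file (a sketch only - not necessarily what was suggested above; the file path and names are illustrative, and as far as I know this has to run before Task.init):
```python
from clearml import Task

# Use this module's requirements file instead of auto-detected imports
Task.force_requirements_env_freeze(requirements_file="module_a/requirements.txt")
task = Task.init(project_name="<project>", task_name="<module-a-task>")
```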
Did a small update: added a workaround and renamed the issue to use the more client-facing condition “limit_execution_time is present” instead of the implementation-detail condition “timeout_jobs are present”
Also added an implementation thought to the issue
The only thing I found is that I need to run flake8, but it fails even without any changes, i.e. it was not enforced before (see my msg in )
For me the workaround is totally acceptable, so the scheduler is once again usable for me.
I’ll make it more visible though
I already added to the task: “Workaround: Remove limit_execution_time from scheduler.add_task”
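i.e., roughly this shape (the task ID, queue name, and cadence are illustrative):
```python
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
scheduler.add_task(
    schedule_task_id="<task-id>",  # placeholder
    queue="default",               # illustrative queue name
    hour=2,
    minute=0,
    # limit_execution_time deliberately omitted - that's the workaround
)
scheduler.start()
```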