I'll make it more visible though
I do see the "Data Processing" type task in the UI together with all other dataset-related features, like the lineage plot
but this will be invoked before fil-profiler starts generating them
for the tasks that are not deleted, the log is different:
[2021-09-09 12:19:07,718] [8] [WARNING] [clearml.service_repo] Returned 400 for tasks.dequeue in 4ms, msg=Invalid task id: status=stopped, expected=queued
log:
[2021-09-09 11:22:09,339] [8] [WARNING] [clearml.service_repo] Returned 400 for tasks.dequeue in 2ms, msg=Invalid task id: id=28d2cf5233fe41399c255950aa8b8c9d,company=d1bd92a3b039400cbafc60a7a5b1e52b
I think they appeared when I had a lot of HPO tasks enqueued and not yet started, and then I decided to either Abort or Archive them - I don't remember anymore
no new unremovable entries have appeared (although I haven't tried)
The workaround is totally acceptable for me, so the scheduler is usable again.
Thanks for the answer! Registering some metadata as a model doesn't feel correct to me. But anyway, this is certainly not a show-stopper. Just wanted to clarify.
mostly the transformation of the pandas DataFrame - how columns are added/removed/change types, NAs are removed, rows are removed, etc.
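To illustrate the kind of per-column summary I'd want to track between transformation steps, here is a minimal sketch in plain pandas (summarize_columns is a hypothetical helper I made up, not a ClearML API):

```python
import pandas as pd

def summarize_columns(df: pd.DataFrame) -> pd.DataFrame:
    # One row per column: name, dtype, and count of non-NA values,
    # so snapshots taken before/after each transformation can be compared.
    return pd.DataFrame({
        "series_name": list(df.columns),
        "series_dtype": [str(t) for t in df.dtypes],
        "number_of_non_na_values": df.notna().sum().to_list(),
    })

# toy example: column "a" becomes float64 because None turns into NaN
df = pd.DataFrame({"a": [1, 2, None], "b": ["x", "y", "z"]})
summary = summarize_columns(df)
```

Logging one such table before and after each step would make the column changes visible back to back.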
slightly related follow-up question: can I add user properties to a scheduler configuration?
I want to have 2 instances of scheduler - 1 starts reporting jobs for staging, another one for prod
not sure I fully get it. Where will the connection between task and scheduler appear?
yes, I'll try it out
Did a small update: added a workaround and renamed the issue to use the more client-facing condition (`limit_execution_time` is present)
instead of an implementation-detail condition (`timeout_jobs` are present)
I tried this, but it didn't help:
```python
input_models = current_task.models["input"]
if len(input_models) == 1:
    input_model_as_input = {"name": input_models[0].name, "type": ModelTypeEnum.input}
    response = current_task.send(DeleteModelsRequest(
        task=current_task.task_id,
        models=[input_model_as_input],
    ))
```
I am not registering a model explicitly in apply_model
I guess it is done automatically when I do this:
```python
output_models = train_task_with_model.models["output"]
model_descriptor = output_models[0]
model_filename = model_descriptor.get_local_copy()
```
not sure - ideally I would like to see these tables (e.g. with series_name, series_dtype, number_of_non_na_values as columns) back to back in the GUI to track the transformations. I think it isn't possible with Dataset
Anyway, this whole scenario is not a must-have, but a nice-to-have.
clearml==1.5.0
WebApp: 1.5.0-192 Server: 1.5.0-192 API: 2.18
or, alternatively, we could centralize the storage of S3 credentials (i.e. on the clearml-server) so that clients can access S3 through the server
also, I don't see an edit button near the input models
SmugDolphin23 sorry, I don't get how this will help with my problem
and my problem occurred right after I tried to delete ~1.5K tasks from a single subproject
do you want a fully reproducible example or just 2 scripts to illustrate?
"supply the local requirements.txt" - this means I have to create a separate requirements.txt for each of my 10+ modules with different clearml tasks