if I provide a PR, I don't see any CI processes in place that will verify the correctness of my code.
well, I first run clearml-session to start everything on the remote machine, then I close the local process (while the interactive session is still running on the remote machine)
also, I don't see an edit button near input models
it certainly does not use the tensorboard python lib
log:
[2021-09-09 11:22:09,339] [8] [WARNING] [clearml.service_repo] Returned 400 for tasks.dequeue in 2ms, msg=Invalid task id: id=28d2cf5233fe41399c255950aa8b8c9d,company=d1bd92a3b039400cbafc60a7a5b1e52b
Thanks for the answer! Registering some metadata as a model doesn't feel correct to me. But anyway this is certainly not a show-stopper. Just wanted to clarify.
docker: nvidia/cuda:11.2.1-cudnn8-runtime-ubuntu20.04
jupyterlab 3.0.11
clearml lib 0.17.5
no warnings:
2021-03-24 17:55:44,672 - clearml.Task - INFO - No repository found, storing script code instead
like replace a model in staging seldon with this model from clearml; push this model to prod seldon, but in shadow mode
it is missing in the CLI, but I was able to set external_ssh_port and external_address in the GUI. It was certainly a step forward, but it still failed
"assuming the 'catboost_train.py' is in the working directory" - maybe I got this part wrong?
Congrats on the release! Are you planning to publish a roadmap so that the community knows what to expect next from ClearML?
Although it is only for model tracking, autologging is yet to be implemented there
I think they appeared when I had a lot of HPO tasks enqueued and not started yet, and then I decided to either Abort or Archive them - I don't remember anymore
I tried this, but didn't help:
```python
input_models = current_task.models["input"]
if len(input_models) == 1:
    input_model_as_input = {"name": input_models[0].name, "type": ModelTypeEnum.input}
    response = current_task.send(DeleteModelsRequest(
        task=current_task.task_id,
        models=[input_model_as_input]
    ))
```
AgitatedDove14 I did exactly that.
I see that the scheduler task UI has the capability to edit user properties, but I don't see how I can read and/or write them through code
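Something along these lines is what I'm after (assuming Task.set_user_properties / get_user_properties are the right SDK calls for this - the task id is just a placeholder):
```python
from clearml import Task

# attach to the scheduler task (placeholder id)
scheduler_task = Task.get_task(task_id="<scheduler_task_id>")

# write a user property programmatically (same key/value pairs the UI edits)
scheduler_task.set_user_properties(last_processed_date="2021-09-09")

# read the properties back as a plain {name: value} dict
props = scheduler_task.get_user_properties(value_only=True)
print(props.get("last_processed_date"))
```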
when I go into Dataset.list_datasets with the debugger and remove system_tags=["dataset"] from the API call params - I get the correct response back
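For context, this is roughly the call I'm stepping through (the project name is just a placeholder):
```python
from clearml import Dataset

# this returns an empty list for me; stepping into it with the debugger and
# dropping the hard-coded system_tags=["dataset"] from the underlying API
# request params gives the expected datasets back
datasets = Dataset.list_datasets(
    dataset_project="my_project",  # placeholder
    only_completed=True,
)
print(len(datasets))
```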
so probably my question can be rephrased as: "Can I have control over what command is used to start my script on clearml-agent?"
"To have the Full pip freeze as 'installed packages'" - that's exactly what I'm trying to prevent. Locally my virtualenv has all the dependencies for all the clearml tasks, which is fine because I don't need to download and install them every time I launch a task. But remotely I want to keep the bare minimum needed for the concrete task, which clearml successfully does, as long as I don't import any local modules.
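In other words, something like this is the behaviour I'm trying to keep (force_requirements_env_freeze with an explicit requirements file is just my guess at the right knob - the file and project names are placeholders):
```python
from clearml import Task

# my guess: point the task at a minimal, hand-written requirements file
# instead of letting it fall back to a full pip freeze of my local virtualenv
Task.force_requirements_env_freeze(force=True, requirements_file="requirements-minimal.txt")

task = Task.init(project_name="my_project", task_name="my_task")
```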
I had a bunch of training tasks, each of which outputted a model. I want to apply each one of them to a specific dataset. I have a clearml task (apply_model) for that, which takes a dataset_id and a model-producing task_id as input. The first time, I initiate apply_model by hardcoding the ids and starting the run from my machine (it then goes into the cloud when it reaches execute_remotely)
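The skeleton of apply_model looks roughly like this (names and ids here are illustrative, not my actual code):
```python
from clearml import Task, Dataset

task = Task.init(project_name="my_project", task_name="apply_model")

# hardcoded only for the very first run from my machine;
# later runs get these overridden per dataset/model pair
params = {
    "dataset_id": "<dataset_id>",
    "model_producing_task_id": "<training_task_id>",
}
task.connect(params)

# from this point on the task continues in the cloud
task.execute_remotely(queue_name="default", exit_process=True)

dataset_path = Dataset.get(dataset_id=params["dataset_id"]).get_local_copy()
training_task = Task.get_task(task_id=params["model_producing_task_id"])
model = training_task.models["output"][-1]  # last output model of the training task
# ... load the model weights and apply it to the dataset ...
```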
I can try, but it's difficult to verify correctness without a publicly available test suite
mostly the transformation of the pandas DataFrame - how columns are added/removed/change types, NAs are removed, rows are removed, etc.
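i.e. something like snapshotting the schema before/after and attaching it to the task (upload_artifact is just how I'd picture reporting it; the file and column names are placeholders):
```python
import pandas as pd
from clearml import Task

task = Task.current_task()

def schema_snapshot(df: pd.DataFrame) -> pd.DataFrame:
    # columns, dtypes and NA counts - the things I'd like to track
    return pd.DataFrame({"dtype": df.dtypes.astype(str), "na_count": df.isna().sum()})

df = pd.read_csv("data.csv")  # placeholder input
task.upload_artifact("schema_before", schema_snapshot(df))

# example transformations: drop NA rows, change a column type
df = df.dropna(subset=["target"]).astype({"id": "int64"})
task.upload_artifact("schema_after", schema_snapshot(df))
```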
not a full log yet (I'll have to inspect it to make sure it doesn't contain any non-public info), but here's something potentially interesting
if the task is of the wrong type (not data_processing) - then it'll get both the correct type and the correct system tag
"VSCode running locally, connected to the remote machine over SSH" - exactly
