haven't tested it within decorator pipelines, but try
Logger.current_logger()
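e.g. something along these lines (untested in decorator pipelines; names/values are just placeholders):
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["accuracy"])
def evaluate_step(threshold: float = 0.5):
    # imports live inside the component body because each component runs as its own task
    from clearml import Logger
    accuracy = 0.9  # placeholder, stands in for a real metric
    Logger.current_logger().report_scalar(
        title="eval", series="accuracy", value=accuracy, iteration=0
    )
    return accuracy

@PipelineDecorator.pipeline(name="demo_pipeline", project="demo", version="0.1")
def run_pipeline():
    evaluate_step(threshold=0.5)

if __name__ == "__main__":
    PipelineDecorator.run_locally()
    run_pipeline()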
so I assume it's somehow related to the remote connection from VS Code
docker: nvidia/cuda:11.2.1-cudnn8-runtime-ubuntu20.04
jupyterlab 3.0.11
clearml lib 0.17.5
no warnings
2021-03-24 17:55:44,672 - clearml.Task - INFO - No repository found, storing script code instead
also, I tried running the notebook directly in remote jupyter - I see correct uncommitted changes
yeah, I missed the fact that I'm running it not by opening remote jupyter in browser, but by connecting to remote jupyter with local VS Code
I think we can live without mass deleting for a while
we'll see, thanks for your help!
and my problem occurred right after I tried to delete ~1.5K tasks from a single subproject
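for context, the deletion was along these lines - a rough sketch via APIClient, not my exact script, and the subproject id is a placeholder:
from clearml.backend_api.session.client import APIClient

client = APIClient()
while True:
    # always request the first page - deleted tasks drop out of the result set
    tasks = client.tasks.get_all(
        project=["<subproject_id>"], only_fields=["id"], page=0, page_size=500
    )
    if not tasks:
        break
    for t in tasks:
        client.tasks.delete(task=t.id, force=True)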
task_trash_trash
is probably irrelevant, as the latest entry there is from Dec 2021
gotcha, thanks!
restart of clearml-server helped, as expected. Now we see all experiments (except for those that were written into task__trash during the "dark times")
I'm rather sure that after the restart everything will be back to normal. Do you want me to invoke something via SDK or REST while the server is still in this state?
SuccessfulKoala55 any ideas or should we restart?
we certainly modified some deployment conf, but let's wait for answers tomorrow
in the far future - automatically. In the near future - more like semi-manually
Also, the installed packages are incorrect (not including the ones that I install from within the notebook using !pip install package_name_here)
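I guess one workaround is declaring those packages explicitly before Task.init - a sketch, names are just examples:
from clearml import Task

# packages installed with "!pip install ..." inside the notebook are not picked up
# by the import analysis, so declare them manually before Task.init()
Task.add_requirements("package_name_here")  # or pin a version: Task.add_requirements("pkg", "1.2.3")
task = Task.init(project_name="my_project", task_name="notebook_task")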
# Python 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]
azure_storage_blob == 12.8.0
boto3 == 1.17.30
clearml == 0.17.5
google_cloud_storage == 1.36.2
ipykernel == 5.5.0
Detailed import analysis
**************************
IMPORT PACKAGE azure_storage_blob
clearml.storage: 0
IMPORT PACKAGE boto3
clearml.storage: 0
IMPORT PACKA...
"VSCode running locally connected to the remote machine over the SSH" - exactly
I would probably like to see a full-blown example with other market-leading technologies covering the parts that are missing from ClearML, e.g. clearml+feast+seldon
Congrats on the release! Are you planning to release a roadmap so that the community would know what to expect next from ClearML?
I guess you can easily reproduce it by cloning any task which has an input model - logs, hyperparams etc. are reset, but the input model stays.
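something along these lines should show it (a sketch - project/task names are placeholders):
from clearml import Task

# clone a task that has an input model: logs and hyperparams get reset on the clone,
# but the input model reference is carried over
source = Task.get_task(project_name="my_project", task_name="train_task_with_model")
clone = Task.clone(source_task=source, name="clone_with_stale_input_model")
print(clone.models["input"])  # not empty, even though everything else was reset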
I am not registering a model explicitly in apply_model. I guess it is done automatically when I do this:
output_models = train_task_with_model.models["output"]
model_descriptor = output_models[0]
model_filename = model_descriptor.get_local_copy()
I am importing a module which is in the same folder as the main one (i.e. in the same package)
exactly what I'm talking about
"To have the Full pip freeze as 'installed packages'" - that's exactly what I'm trying to prevent. Locally my virtualenv has all the dependencies for all the clearml tasks, which is fine because I don't need to download and install them every time I launch a task. But remotely I want to keep the bare minimum needed for the concrete task. Which clearml successfully does, as long as I don't import any local modules.
but we run everything in docker containers. Will it still help?