
For others who haven't heard about ngrok: it exposes local servers behind NATs and firewalls to the public internet over secure tunnels.
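(For instance, running something like ngrok http 8888 prints a public URL that tunnels to a local service on port 8888; the port here is just an illustration.)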
If the task is of the wrong type (not data_processing), then it'll get both the correct type and the correct system tag
and ClearML should strive to be clear, amirite?
not sure I fully get it. Where will the connection between task and scheduler appear?
Ideally, I want to hardcode, e.g., use_staging = True, and enqueue it; then start the second instance via clone → edit user properties → enqueue in the UI
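Something like this sketch is what I have in mind (project and queue names are placeholders, and I'm assuming user-property values come back as strings):

from clearml import Task

task = Task.init(project_name="demo", task_name="apply_model")
# hardcoded default; a cloned task can override it under USER PROPERTIES in the UI
task.set_user_properties(use_staging=True)
use_staging = task.get_user_properties(value_only=True).get("use_staging") == "True"
task.execute_remotely(queue_name="default")  # the first run jumps to the cloud here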
I see that in the end, both query functions are calling Task._query_tasks
Yes, but note that I'm not talking about the VS Code instance set up by clearml-session, but about a local one. I'll do another test to determine whether VS Code from clearml-session suffers from the same problem
it certainly does not use tensorboard python lib
I am not registering a model explicitly in apply_model. I guess it is done automatically when I do this:

# train_task_with_model is the training Task fetched earlier
output_models = train_task_with_model.models["output"]
model_descriptor = output_models[0]
model_filename = model_descriptor.get_local_copy()
SmugDolphin23 sorry I don't get how this will help with my problem
clearml==1.5.0
WebApp: 1.5.0-192 Server: 1.5.0-192 API: 2.18
I had a bunch of training tasks, each of which output a model. I want to apply each one of them to a specific dataset. I have a ClearML task (apply_model) for that, which takes a dataset_id and a model-producing task_id as input. The first time, I initiate apply_model by hardcoding the ids and starting the run from my machine (it then goes into the cloud when it reaches execute_remotely)
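Roughly, the flow looks like this (project, queue, and ids below are placeholders, not the real ones):

from clearml import Task, Dataset

task = Task.init(project_name="demo", task_name="apply_model")
args = {"dataset_id": "<dataset-id>", "model_task_id": "<training-task-id>"}
task.connect(args)  # clone-and-edit in the UI overrides these values
task.execute_remotely(queue_name="default")  # the local run stops here and continues in the cloud

dataset_path = Dataset.get(dataset_id=args["dataset_id"]).get_local_copy()
train_task_with_model = Task.get_task(task_id=args["model_task_id"])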
but I don't get to this line, because my task is already of type data_processing
When I go into Dataset.list_datasets with the debugger and remove system_tags=["dataset"] from the API call params, I get the correct response back
This does not prevent enqueuing and running new tasks; it's more of an eyesore
task_trash_trash is probably irrelevant, as the latest entry there is from Dec 2021
we'll see, thanks for your help!
and my problem occurred right after I tried to delete ~1.5K tasks from a single subproject
Also, line 77, which sets (non-system) tags, is not invoked for me; thus if I define different tags for the task and the dataset, the latter are lost
I do see the "Data Processing" type task in the UI together with all other dataset-related features, like the lineage plot
SuccessfulKoala55 any ideas or should we restart?
Here's my workaround: ignore the fail messages and manually create an SSH connection to the server with the Jupyter port forwarded.
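Concretely, something along the lines of ssh -L 8888:localhost:8888 <user>@<server> (user, host, and ports are placeholders), then opening http://localhost:8888 in a local browser.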
If you call Task.init in your entire repo (serve/train), do you end up with an "installed packages" section that contains all the required packages for both use cases?
Yes, and I thought that it looks at what libraries are installed in the virtualenv, but you explained that it rather does a static analysis over the whole repo.
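In case the repo-wide analysis pulls in too much, I believe something like this lets you pin the section yourself (a sketch; these are called before Task.init, and the package name is just an example):

from clearml import Task

Task.add_requirements("torch")          # force-include a single package in "installed packages"
# Task.force_requirements_env_freeze()  # alternative: record the current virtualenv instead of analyzing the repo
task = Task.init(project_name="demo", task_name="train")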
Basically, my problem is that it returns an empty result. In the same code I can get the dataset by its ID, and I can get the task (which created the dataset) using Task.get_tasks() (without mentioning the ID explicitly)
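For reference, this is the kind of query that does return the task (the project name is a placeholder, and the type filter is my assumption about how dataset-producing tasks are marked):

from clearml import Task

tasks = Task.get_tasks(
    project_name="demo",
    task_filter={"type": ["data_processing"]},  # dataset-producing tasks carry this type
)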
Yeah, I think I'll go with schedule_function right now, but your proposed idea would make it even clearer.
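For anyone finding this later, the schedule_function route looks roughly like this (queue name, time field, and the function body are placeholders):

from clearml.automation import TaskScheduler

def launch_apply_model():
    # clone the template task and enqueue it; actual logic omitted
    pass

scheduler = TaskScheduler()
scheduler.add_task(schedule_function=launch_apply_model, minute=30)
scheduler.start_remotely(queue="services")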