No, not at all. I reckon we started seeing errors around the middle of last week. We are using default settings for everything except some password settings on the server.
Yeah, that makes sense. The only drawback is that you'll get a single point that all lines will go through in the Parallel Coordinates plot when the optimization finishes 🙂
I think that you are absolutely correct. Thanks for the pointer!
@<1590514584836378624:profile|AmiableSeaturtle81> that's the service we are using :-)
How much RAM have you assigned to your elastic service?
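For reference, a sketch of how the heap is usually assigned to the clearml-elastic service in the server's docker-compose file (the service name follows the standard ClearML compose layout, and 4g is just a placeholder value):
services:
  elasticsearch:
    environment:
      # Elasticsearch JVM heap; min and max should match
      ES_JAVA_OPTS: -Xms4g -Xmx4g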
Hi @<1523701087100473344:profile|SuccessfulKoala55>, thanks for responding. I've found out that my first error came from cloning a super old version of the cleanup task in the web UI 🙂
I don't know about the other error. To me it looks like the task gets deleted before the errors are handled; since an error occurred when deleting some artifacts on the task (some 404 stuff, maybe the files actually aren't there), ClearML tries to reload the task and fails with the 400/201 or 400/101. ...
Perfect! Thanks SuccessfulKoala55, that would be an acceptable workaround until setup_upload also supports Azure 🙂
Sorry, I got caught up by other tasks. I might investigate further later, but it's not top of mind right now. Our main issue is getting people to archive their old tasks and models so they can be cleaned up 🙂
@<1523701070390366208:profile|CostlyOstrich36> any thoughts? Are the model files themselves easier to serve?
@<1576381444509405184:profile|ManiacalLizard2> what happens when ES hits the limit? Does it go OOM, or does loading scalars just take a long time in the web UI? And what about tasks putting scalars into the index?
Sure. Really, I'm just using the default client:
# ClearML SDK configuration file
api {
    web_server: http://server.azure.com:8080
    api_server: http://server.azure.com:8008
    files_server: http://server.azure.com:8081
    credentials {
        "access_key" = "..."
        "secret_key" = "..."
    }
}
sdk {
    # ClearML - default SDK configuration
    storage {
        cache {
            # Defaults to system temp folder / cache
            default_base_dir: "~/.clearml/c...
Hi CostlyOstrich36
I have created a base task on which I'm optimizing hyperparameters. With clearml-param-search I could use --params-override to set a static parameter, which should not be optimized, e.g. changing the number of epochs for all experiments. It seems to me that this capability is not present in HyperParameterOptimizer. Does that make sense?
From the example on https://clear.ml/docs/latest/docs/apps/clearml_param_search/:
clearml-param-search {...} --p...
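One workaround I'm aware of (a sketch, not necessarily the thread's final answer): declare the static value as a DiscreteParameterRange with a single value, so every experiment gets the same setting. Parameter names, metric, and queue below are placeholders. This also explains the Parallel Coordinates drawback mentioned above: the single-value "range" is the point that all lines pass through.
from clearml.automation import (
    DiscreteParameterRange,
    HyperParameterOptimizer,
    UniformParameterRange,
)

optimizer = HyperParameterOptimizer(
    base_task_id="<base-task-id>",  # placeholder
    hyper_parameters=[
        # parameter actually being optimized (placeholder name/range)
        UniformParameterRange("General/learning_rate", min_value=1e-5, max_value=1e-2),
        # static override: a "range" with a single value, so all
        # experiments run with the same number of epochs
        DiscreteParameterRange("General/epochs", values=[10]),
    ],
    objective_metric_title="validation",  # placeholder metric
    objective_metric_series="loss",
    objective_metric_sign="min",
    execution_queue="default",  # placeholder queue
)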
but are model files easier to serve?
We are running the latest version (WebApp: 1.7.0-232 • Server: 1.7.0-232 • API: 2.21).
When I run docker logs clearml-elastic I get lots of logs like this one:
{"type": "server", "timestamp": "2022-10-24T08:51:35,003Z", "level": "INFO", "component": "o.e.i.g.DatabaseNodeService", "cluster.name": "clearml", "node
.name": "clearml", "message": "successfully reloaded changed geoip database file [/tmp/elasticsearch-3596639242536548410/geoip-databases/cX7aMqJ4SwCxqM7s
YM-S9Q/GeoLite2-City.mmdb]...
I just tried and the result is the same. The other method only triggers on exceptions.
Hey SweetBadger76, thanks for answering. I'll check it out! Does that correspond to filling out the azure.storage section in the clearml.conf file?
And how do I ensure that the server can access the files from the blob storage?
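If it helps, this is roughly what that section looks like in clearml.conf (account name, key, and container are placeholders):
sdk {
    azure.storage {
        containers: [
            {
                account_name: "<storage-account>"
                account_key: "<account-key>"
                container_name: "<container>"
            }
        ]
    }
}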
This is an example of the console output of a task aborted via the web UI:
Epoch 1/29 [progress bar] 699/16945 0:04:53 • 1:55:25 2.35it/s v_num: 0.000
2024-09-16 12:52:57,263 - clearml.Task - WARNING - ### TASK STOPPING - USER ABORTED - LAUNCHING CALLBACK (timeout 30.0 sec) ###
[2024-09-16 12:52:57,284][core.callbacks.model_checkpoint][INFO] - Marking task as `in_progress`
[2024-09-16 12:52:57,309][core.callbacks.model_checkpoint][INFO] - Saving last checkpoint...
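For context, a minimal sketch of how such an abort callback is registered (project/task names and the callback body are placeholders):
from clearml import Task

task = Task.init(project_name="demo", task_name="abort-demo")  # placeholder names

def on_abort():
    # invoked when the task is aborted from the web UI,
    # bounded by callback_execution_timeout
    print("Saving last checkpoint...")

task.register_abort_callback(on_abort, callback_execution_timeout=30.0)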
Thanks for responding @<1523701087100473344:profile|SuccessfulKoala55>. Good question! One solution could be to create a new open-source project with lightning + clearml integrations and link it to the Lightning ecosystem-ci; I believe most people use the basic tensorboard-logger with ClearML, but the extended use case of a ClearML model checkpoint callback might make it valuable.
I guess one would have to disable auto-logging of p...
Hi CostlyOstrich36 , thanks for answering. We are using compute instances through the Machine Learning Studio in Azure. They basically work by spinning up an instance, loading a docker-image and executing a specific script in a folder that you upload along with the docker-image. Nothing is persisted between runs and there is no clear notion of a "user" (when thinking of ~/.clearml.conf at least).
SuccessfulKoala55 yeah, sorry, should have mentioned that our storage is also Azure (blob sto...
Yes, I tried updating recently; it cost me a full day's work of rolling back versions until I found something that worked 🙂
Well, one solution could be to say that models can only be exported from main/master and then have devops start a new trigger on PR completion. That would require some logic for stopping the existing TriggerScheduler, but that shouldn't be too difficult.
However, the most flexible solution would be to have some way of triggering the execution of a script in the parent task's environment, something along the lines of clearml-agent build .... I just can't wrap my head around triggering that ...
Hi CostlyOstrich36
What I'm seeing is expected behavior:
In my toy example, I have a VAE which is defined by a YAML config file and parsed with the PyTorch Lightning CLI. Part of the config defines the latent dimension (n_latents) and the number of input channels of the decoder (in_channels). These two values need to be the same. When I just use the Lightning CLI, I can use variable interpolation with OmegaConf like this:
class_path: mymodel.VAE
init_args:
  {...}
  bottleneck:
    class_pat...
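To make the truncated snippet concrete, a hypothetical reconstruction (the key names and the exact interpolation path depend on the config root):
class_path: mymodel.VAE
init_args:
  n_latents: 16
  bottleneck:
    class_path: mymodel.Decoder  # hypothetical class
    init_args:
      # OmegaConf interpolation keeps in_channels in sync with n_latents
      in_channels: ${init_args.n_latents}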
The lightning folks won't include new loggers anymore (since mid-2022, see None) 🙂
It's running v7.17.18 @<1722061389024989184:profile|ResponsiveKoala38>
Sorry for the late reply @<1722061389024989184:profile|ResponsiveKoala38> . So this is the diff between my local version (hosted together on a single server with docker-compose). Does anything spring to mind?
Specifically, this is what I get in the console log when the agent spins up a task:
Poetry Enabled: Ignoring requested python packages, using repository poetry lock file!
Creating virtualenv latent-features in /data/clearml/venvs-builds/3.9/task_repository/our-repo/.venv
Installing dependencies from lock file
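For context, the agent only behaves like this when poetry mode is set in its clearml.conf:
agent {
    package_manager {
        # use the repository's poetry.lock instead of pip/conda
        type: poetry
    }
}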
Hi CurvedHedgehog15, thanks for replying!
I guess that one could modify the config with variable interpolation (similar to how it's done in YAML, e.g. ${encoder.layers}); however, it seems quite invasive to specify that in our trainer script 🙂
Well, consider the case where you start the trigger scheduler on commit A, then you do some work that defines a new model and commit it as commit B, train some model, and now you want to export/deploy the model by publishing it and tagging it with some tag that triggers the export, as in your example. The scheduler will then fail, because the model is not implemented at commit A.
Anyways, I think I've solved it, I'll post the workaround when I get around to it 🙂
You can create a task in the t...
Just wanted to share a workaround for using a TriggerScheduler to execute a script using the latest commit of a given branch, without relying on cloning a Task. Don't know if it has been shown before in here 🙂
from clearml import Model, Task
from clearml.automation import TriggerScheduler

def trigger_model_func(model_id: str):
    model = Model(model_id)
    print(f"Triggered model export for model '{model.name}' ({model_id})")
    # NOTE: To execute from the branch of
    # task...
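Since the snippet above is cut off, here is a hedged sketch of how the rest might look: create a fresh task from the latest commit of the branch inside the trigger function instead of cloning an existing task (repo URL, script path, queue, and tag are placeholders):
from clearml import Model, Task
from clearml.automation import TriggerScheduler

def trigger_model_func(model_id: str):
    model = Model(model_id)
    print(f"Triggered model export for model '{model.name}' ({model_id})")
    # build a brand-new task from the current HEAD of the branch,
    # so the trigger never runs against a stale commit
    task = Task.create(
        project_name="model-export",  # placeholder
        task_name=f"export-{model_id}",
        repo="git@github.com:our-org/our-repo.git",  # placeholder
        branch="main",
        script="scripts/export_model.py",  # placeholder
    )
    Task.enqueue(task, queue_name="default")  # placeholder queue

scheduler = TriggerScheduler(pooling_frequency_minutes=1)
scheduler.add_model_trigger(
    schedule_function=trigger_model_func,
    name="export-on-tag",
    trigger_on_tags=["export"],  # placeholder tag
)
scheduler.start()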
I've tried setting the output_uri on Task.init, but that seems to only affect model checkpoints and artifacts.
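For reference, a sketch of the two separate upload destinations involved (URIs and names are placeholders): output_uri covers checkpoints/artifacts, while e.g. debug samples go through the logger's own destination:
from clearml import Task

# output_uri controls where model checkpoints and artifacts end up
task = Task.init(
    project_name="demo",  # placeholder
    task_name="upload-destinations",
    output_uri="azure://account/container",  # placeholder URI
)

# debug samples use the logger's upload destination instead
task.get_logger().set_default_upload_destination("azure://account/container")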