Yeah, I'm starting to lean towards the enterprise solution more and more 😁
Thanks!
@CheekyDolphin49 You should probably use 'General/coupling' and 'General/rep'
To make sure I understand: I need to set up a domain with a cert and it should work, no additional ClearML config required?
The issue was that .ssh wasn't propagated, so the git repository couldn't be cloned.
Additional info:
- Public URL uses HTTPS, internal traffic doesn't.
- clearml.storage fails while trying to fetch None ...
Meaning it just replaced the internal IP with the URL at some point for some reason; it doesn't exist in that form anywhere in any of the configs (neither the http IP nor the public URL).
Neither; a metric is a number you report through the Logger:
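Something along these lines (project/task names and the values are just placeholders):
from clearml import Task

task = Task.init(project_name="My project", task_name="My task")
logger = task.get_logger()

# Report a scalar per iteration; it shows up under the task's Scalars tab in the UI
for iteration, loss in enumerate([0.9, 0.7, 0.5]):
    logger.report_scalar(title="loss", series="train", value=loss, iteration=iteration)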
I just added the secrets/keys to docker-compose.yml and restarted everything, but no change.
Not a ClearML employee (just a recent user), but maybe this will help? None
So after publishing a task (right-click → Publish in the WebUI), one of the models got its ID changed to __DELETED__4be00...
The other one (last_model in the screenshot below) is all good and didn't get deleted in this way.
"best_model" exists on disk and I can access it by taking last_model's URL and just changing the file name, but I cannot access it normally via its ID (which has now changed to __DELETED__4be00...). Any ideas why this might have happened?
When I log images, they appear in the UI with http://<my-ip>, so they are inaccessible (they should be translated to None ). Is there any path_substitution variant for this scenario in the config? I can't seem to find it in the docs. Thanks!
Perfect, exactly what I needed, thanks!
Found this, seems to be exactly this: None
It appears that running Docker with --privileged resolves the issue, which is easier for me than editing all of the instances I've already created. Is there an easy way to add a docker argument in the Python script?
I've tried task.set_base_docker(docker_arguments="--privileged") right after Task.init but it doesn't seem to work.
Thanks!
I'll check the docker command next time this happens, thanks! As for the machines: all of them have GPUs (they are in fact identical/cloned VMs), and if I rerun it and get the same exact machine again it works, so it's some part of "GPU detection" or something. We'll hopefully know more once it happens again, thanks.
Got it. Is there any way to skip a point at some iteration? If I just don't report it at iteration t, I'll get interpolation from t-1 to t+1.
I hacked around it by setting api.files_server for the agent to the public URL, but ideally I'd avoid going through the reverse proxy if there's some path_substitution equivalent for this. Thanks!
I know about clearml.conf, but wanted to avoid SSH-ing into 50 instances to edit it.
task.set_base_docker does the job, but docker_arguments doesn't propagate if I leave docker_image as None (it just uses both the image and the arguments from the agent's clearml.conf). If I explicitly state both docker_image and docker_arguments in task.set_base_docker, it works fine.
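For reference, roughly what ended up working for me (the image name and queue name below are just placeholders, assuming the agent runs in docker mode):
from clearml import Task

task = Task.init(project_name="My project", task_name="My task")
# Setting both the image and the arguments explicitly; leaving docker_image as None
# made the agent fall back to the image/arguments from its own clearml.conf
task.set_base_docker(
    docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04",
    docker_arguments="--privileged",
)
task.execute_remotely(queue_name="default")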
Probably not; I'm trying to access it via the external IP. Could you point me to instructions for that in the docs? I don't remember seeing it anywhere. Thanks!
Once I used the clearml-data add --folder * CLI, everything works correctly (though all files recursively ended up in the root; I was lucky they were all named differently).
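For anyone else hitting this, a rough sketch of the equivalent with the Python Dataset API (dataset/project names and the folder path are hypothetical); adding the parent folder itself, rather than a wildcard, should preserve the relative structure:
from clearml import Dataset

# Hypothetical names, just for illustration
dataset = Dataset.create(dataset_name="my_dataset", dataset_project="My project")
dataset.add_files(path="data/")  # adding the folder itself should keep relative paths
dataset.upload()
dataset.finalize()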
clearml-1.13.1
Task.add_requirements("requirements.txt")
task = Task.init(project_name="My project", task_name="My task")
task.execute_remotely(queue_name="default")
...
Having a bit of trouble with this one (sorry for possibly dumb questions).
Are there any docs on how to add certs to the Docker image? I see this ( None ), which is where Let's Encrypt points me to, but I'm not sure what the proper way to do this is for the webapp Docker container (I'd assume there's a non-hacky way to do it, since others are presumably using the same setup I'm trying to get working).