Can PipelineDecorator.upload_model be used to store models on the clearml fileserver?
I'm asking this because my kwargs show up as an empty dict when printed.
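For reference, a minimal sketch of how that call might sit inside a pipeline step; the argument names (model_name, model_local_path) are my assumption, so double-check them against your installed SDK version:

```python
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["model_path"])
def train_step():
    # ... train and save the weights locally as model.pt ...
    # Mirror the call discussed above; with a default server setup the
    # file ends up on the ClearML fileserver.
    PipelineDecorator.upload_model(
        model_name="my-model",        # hypothetical model name
        model_local_path="model.pt",  # hypothetical local weights file
    )
    return "model.pt"
```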
Hey, is it possible for me to upload a PDF as an artefact?
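For what it's worth, Task.upload_artifact accepts a local file, so a PDF can be uploaded as-is (project/task names below are placeholders):

```python
from pathlib import Path
from clearml import Task

task = Task.init(project_name="demo", task_name="report-upload")  # hypothetical names

# A local file passed as artifact_object is uploaded unchanged, so the PDF
# becomes downloadable from the task's ARTIFACTS tab in the web UI.
task.upload_artifact(name="report", artifact_object=Path("report.pdf"))
```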
This issue was due to a WSL proxy problem; WSL's hostname couldn't be resolved by the server, which became a problem for running agents. It works fine on Linux machines so far, though.
So no worries :D
Yep, that's exactly what's happening.
However, here's what I want to do:
upload the model to ClearML's fileserver and get the model URL in the details for easy download
How do I provide a specific output path to store the model? (Say I want the server to store it in ~/models)
I'm training my model via a remote agent.
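A sketch of one way to get that behaviour through Task.init's output_uri (the destination URI is a placeholder; ~/models would have to be a location the server or agent can actually write to):

```python
from clearml import Task

# output_uri controls where model weights are uploaded: True means the
# ClearML fileserver, which also puts a downloadable URL in the model
# details; an explicit URI (fileserver address, S3 bucket, etc.) redirects
# the upload elsewhere.
task = Task.init(
    project_name="demo",   # hypothetical names
    task_name="train",
    output_uri=True,       # or e.g. "http://<your-server>:8081"
)
```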
Thanks to your suggestion I could log the model as an artefact (using PipelineDecorator.upload_model()), but only the path is reflected; I can't seem to download the model from the server.
So I am able to access it by sending requests to the ClearML fileserver, but is there any way to access it from the dashboard (the main app)?
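For anyone following along, "sending requests to the fileserver" can look like the sketch below (the URL is made up; the real one depends on your project/task names). As far as I know, the dashboard only surfaces files that are registered on a task as artifacts or models, not arbitrary fileserver content:

```python
import requests

# Hypothetical direct fileserver URL for an uploaded model
url = "http://localhost:8081/demo/train.0123abcd/models/model.pt"

response = requests.get(url)
response.raise_for_status()
with open("model.pt", "wb") as f:
    f.write(response.content)
```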
Hey, so I was able to get the local .py files imported by adding the folder to sys.path.
stuff is a package that holds my local modules; I've added it to my path with sys.path.insert, but here it still isn't able to unpickle.
I had initially just pasted the new credentials in place of the existing ones in my conf file;
Running clearml-init now fails at verifying credentials
Yes, I can pull other containers from dockerhub
Yep, no clue why I had two of them either;
It started my pipeline, and a few seconds in, another pipeline showed up.
So I'm trying to run my pipeline script, which runs the pipeline locally and logs metrics and other outputs to the ClearML server.
Alright then, the server has worked as it should so far, thanks 😄
Is there a way to store the return values after each pipeline stage in a format other than pickle?
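One workaround, sketched under the assumption that you control the step's code (this is not a built-in pipeline option as far as I know): serialize the value yourself, upload the file as an artifact, and return only a lightweight reference:

```python
import json
from pathlib import Path
from clearml import Task

def save_stage_output(results: dict) -> str:
    """Store a stage's output as JSON instead of letting it be pickled."""
    path = Path("stage_output.json")  # hypothetical file name
    path.write_text(json.dumps(results))
    # Upload the JSON file itself; downstream steps can fetch and parse it.
    Task.current_task().upload_artifact(name="stage_output", artifact_object=path)
    return str(path)  # return only the reference, not the object
```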
version: "3.6"
services:
  apiserver:
    command:
      - apiserver
    container_name: clearml-apiserver
    image: allegroai/clearml:latest
    restart: unless-stopped
    volumes:
      - /opt/clearml/logs:/var/log/clearml
      - /opt/clearml/config:/opt/clearml/config
      - /opt/clearml/data/fileserver:/mnt/fileserver
    depends_on:
      - redis
      - mongo
      - elasticsearch
      - fileserver
    environment:
      CLEARML_ELASTIC_SERVICE_HOST: elasticsearch
      CLEARML_ELASTIC_SERVICE_PORT: 920...
Hey, we figured out a temporary solution: importing the modules and then reloading the contents of the artefact with pickle. It still gives us a warning, though training works now. Do send an update if you find a better solution.
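For the record, the workaround looks roughly like this (the path and artifact file name are made up); pickle resolves classes by their module path, so the package must be importable before loading:

```python
import sys
import pickle

# Make the local package importable first; pickle looks classes up by
# module path while deserializing.
sys.path.insert(0, "/path/to/parent_of_stuff")  # hypothetical path
import stuff  # noqa: F401

with open("artifact.pkl", "rb") as f:  # hypothetical artifact file
    model = pickle.load(f)
```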