version: "3.6"
services:
  apiserver:
    command:
    - apiserver
    container_name: clearml-apiserver
    image: allegroai/clearml:latest
    restart: unless-stopped
    volumes:
    - /opt/clearml/logs:/var/log/clearml
    - /opt/clearml/config:/opt/clearml/config
    - /opt/clearml/data/fileserver:/mnt/fileserver
    depends_on:
    - redis
    - mongo
    - elasticsearch
    - fileserver
    environment:
      CLEARML_ELASTIC_SERVICE_HOST: elasticsearch
      CLEARML_ELASTIC_SERVICE_PORT: 920...
It's the same file from the raw github link
Yes, I can pull other containers from dockerhub
David,
I haven't provided a monitor_model parameter
A simple StorageManager.download_folder('url')
My minio instance is hosted locally, so I'm providing a URL like 'http://localhost:9000/bucket-name'
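A hedged sketch of the URL question above: ClearML generally addresses a self-hosted MinIO bucket with an "s3://host:port/bucket" style URI (with the endpoint and credentials configured in clearml.conf), rather than the raw "http://localhost:9000/bucket-name" HTTP URL. The `minio_uri` helper below is a hypothetical convenience, not part of the clearml package.

```python
# Hypothetical helper: build the s3-style URI ClearML expects for a
# locally hosted MinIO bucket (endpoint/credentials live in clearml.conf).
def minio_uri(host: str, port: int, bucket: str) -> str:
    return f"s3://{host}:{port}/{bucket}"

# Example usage with ClearML's StorageManager (uncomment with clearml installed):
# from clearml import StorageManager
# local_copy = StorageManager.download_folder(minio_uri("localhost", 9000, "bucket-name"))
print(minio_uri("localhost", 9000, "bucket-name"))  # → s3://localhost:9000/bucket-name
```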
Hey David , I was able to get things uploaded to the fileserver by a change in the conf
Because it is pulling from http://docker.elastic.co , can I replace that one with the image available on docker hub?
How do I provide an output storage destination for that stage of the pipeline?
Hey We figured a temporary solution - by importing the modules and reloading the contents of the artefact by pickle. It still gives us a warning, though training works now. Do send an update if you find a better solution
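The workaround described above can be sketched with stdlib pickle alone: put the local package on the path first, then re-load the pickled artifact so its classes resolve. The package name ("stuff"), the class, and the artifact path below are stand-ins for illustration, not the poster's actual code.

```python
import os
import pickle
import sys
import tempfile

# Simulate a local package "stuff" containing a class the artifact depends on.
pkg_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(pkg_dir, "stuff"), exist_ok=True)
with open(os.path.join(pkg_dir, "stuff", "__init__.py"), "w") as f:
    f.write("class Config:\n    def __init__(self, lr):\n        self.lr = lr\n")

# Step 1: make the package importable *before* unpickling.
sys.path.insert(0, pkg_dir)
import stuff  # noqa: E402

# Create a pickled "artifact" the way an earlier pipeline step might.
artifact_path = os.path.join(pkg_dir, "artifact.pkl")
with open(artifact_path, "wb") as f:
    pickle.dump(stuff.Config(lr=0.01), f)

# Step 2: reload the artifact contents with pickle; stuff.Config now resolves.
with open(artifact_path, "rb") as f:
    cfg = pickle.load(f)
print(cfg.lr)  # → 0.01
```

Without the `sys.path.insert` step, unpickling would raise ModuleNotFoundError because pickle stores only the qualified class name ("stuff.Config"), not the class definition itself.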
So I'd have to make edits to the docker-compose file for clearml-serving; there would not be any issues arising due to that right?
I'm using the clearml-serving repo and running the docker-compose file there to set it up,
but I'm also running the ClearML server self-hosted on my machine
stuff is a package that has my local modules - I've added it to my path by sys.path.insert, though here it isn't able to unpickle
I'm facing the issue during the initial setup of clearml-serving - i.e. the step where you use docker-compose to launch the serving containers
Here's the code. We're trying to make a pipeline using PyTorch, so the first step has the dataset that's created using 'stuff' - a local folder that serves as a package for my code. The issue seems to be in the unpickling stage in the train function.
Umm I suppose that won't work - this package consists of .py scripts that I use for a set of configs and Utils for my model.
However, I use this to create an instance of a (torch) DataLoader that is fed into the next stage in the pipeline - even though I import the local modules and add the folders to the path, it is unable to unpickle the artifact
Hey, so I was able to get the local .py files imported by adding the folder to sys.path
Yep, the pipeline finishes but the status is still at running. Do we need to close a logger that we use for scalars or anything?
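A hedged sketch of the closing question: explicitly flushing the logger and closing the task at the end of a run is one way a stuck "running" status is usually cleared. `Task.current_task()`, `Logger.flush()`, and `Task.close()` are real ClearML calls; the import guard is only so the sketch degrades gracefully where clearml isn't installed.

```python
# Assumption: run inside (or right after) the pipeline script, where a
# ClearML task is active. The guard keeps the sketch importable anywhere.
try:
    from clearml import Task
except ImportError:
    Task = None  # clearml not installed in this environment

def finalize_run() -> str:
    """Flush buffered scalar reports and close the task so its status can leave 'running'."""
    if Task is None:
        return "clearml-not-installed"
    task = Task.current_task()
    if task is None:
        return "no-active-task"
    task.get_logger().flush()  # push any buffered scalar reports
    task.close()               # release the task and finalize its status
    return "closed"

print(finalize_run())
```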