
So, I replaced line 65 in your docker-compose file with image: elasticsearch:7.16.2 so that it pulls the image from the Docker Hub registry rather than the registry at http://docker.elastic.co . I just want to confirm that this is okay for the functioning of ClearML:
version: "3.6"
services:
  apiserver:
    command:
    - apiserver
    container_name: clearml-apiserver
    image: allegroai/clearml:latest
    restart: unless-stopped
    volumes:
    - /opt/clearml/logs:/var/log/clearml
    - /opt/clearml/config:/opt/clearml/config
    - /opt/clearml/data/fileserver:/mnt/fileserver
    depends_on:
      - redis
      - mongo
      - elasticsearch
      - fileserver
    environment:
      CLEARML_ELASTIC_SERVICE_HOST: elasticsearch
      CLEARML_ELASTIC_SERVICE_PORT: 920...
Also,
How do I just submit a pipeline to the server to be executed by an agent?
Currently I am able to use PipelineDecorator.run_locally() to run it;
However, I just want to push it to a queue and let the agent do its trick. Any recommendations?
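For what it's worth, here's a minimal sketch of the queue-based flow as I understand it, assuming a "default" queue with an agent listening on it (the project/pipeline names and the step body are placeholders, not our actual code):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["data"])
def make_data():
    # placeholder step body
    return [1, 2, 3]

@PipelineDecorator.pipeline(name="demo-pipeline", project="demo", version="0.1")
def run_pipeline():
    data = make_data()

if __name__ == "__main__":
    # Instead of PipelineDecorator.run_locally(), set the queue the steps
    # should be enqueued on and just call the pipeline function; agents
    # listening on that queue pick the steps up.
    PipelineDecorator.set_default_execution_queue("default")
    run_pipeline()

From what I understand, once the pipeline has run once and is registered, you can also clone its task in the UI and enqueue it so that an agent runs the controller itself.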
Also, does PipelineDecorator.upload_model store anything on the fileserver? I can't seem to understand the use of PipelineDecorator.upload_model() apart from making a model appear on the pipeline task.
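For reference, this is roughly how I'd expect it to be called - the keyword names are my assumption from the docs, so worth double-checking against your clearml version (the checkpoint path is a placeholder):

# "model.pt" is a placeholder checkpoint path.
# My understanding: this registers the file as an output model on the
# pipeline task; where the weights themselves are uploaded should follow
# the configured default output destination.
PipelineDecorator.upload_model(model_name="trained-model", model_local_path="model.pt")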
Hey, is it possible for me to upload a PDF as an artefact?
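Something like this is what I'm trying - upload_artifact accepts a plain file path, so I'd expect a PDF to work like any other file (project/task/file names here are placeholders):

from clearml import Task

task = Task.init(project_name="demo", task_name="report-upload")
# A file path is uploaded as-is, so the PDF lands on the server like any artifact
task.upload_artifact(name="report", artifact_object="report.pdf")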
It's the same file from the raw GitHub link.
Configuration is complete now; it was a proxy issue on my end.
However, running my pipeline from a different machine still gives me a problem.
Yep, that's exactly what's happening.
However here's what I want to do:
upload the model to ClearML's fileserver and get the model URL in the details for easy download.
So I did exactly that, and the name and path of the model in the local repo are noted;
However, I want to upload it to the fileserver.
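The pattern I'm aiming for, as far as I understand it, is to pass output_uri to Task.init so saved checkpoints get uploaded to the fileserver and the model entry shows a downloadable URL (project/task names and the model are placeholders):

import torch
from clearml import Task

# output_uri=True uploads model checkpoints to the default fileserver
task = Task.init(project_name="demo", task_name="train", output_uri=True)

model = torch.nn.Linear(4, 2)
# torch.save is auto-logged; with output_uri set, the checkpoint itself
# is uploaded and its URL appears in the task's model details
torch.save(model.state_dict(), "model.pt")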
This issue was due to a WSL proxy problem; WSL's hostname couldn't be resolved by the server, and that became a problem for running agents. It works fine on Linux machines so far, however.
So no worries :D
Hey, we figured out a temporary solution - importing the modules and reloading the contents of the artefact with pickle. It still gives us a warning, though training works now. Do send an update if you find a better solution.
Is there a way to store the return values after each pipeline stage in a format other than pickle?
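One workaround we're considering is to skip the automatic return-value serialization and pass files explicitly between stages, e.g. as JSON artifacts (the task id and names below are placeholders):

import json
from clearml import Task

# Producer stage: write the stage output as JSON and attach it as a file artifact
payload = {"classes": ["cat", "dog"], "lr": 0.001}
with open("stage_output.json", "w") as f:
    json.dump(payload, f)
Task.current_task().upload_artifact(name="stage_output", artifact_object="stage_output.json")

# Consumer stage: fetch the artifact from the producer task and parse it back
producer = Task.get_task(task_id="<producer-task-id>")
with open(producer.artifacts["stage_output"].get_local_copy()) as f:
    payload = json.load(f)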
Yes, I can pull other containers from Docker Hub.
We're initialising a task to ensure it appears on the experiments page;
Also, not doing so gave us ‘Missing parent pipeline task’ errors for a set of experiments we had run earlier.
Here's the code; we're trying to make a pipeline using PyTorch, so the first step has the dataset that's created using ‘stuff’ - a local folder that serves as a package for my code. The issue seems to be in the unpickling stage in the train function.
Umm, I suppose that won't work - this package consists of .py scripts that I use for a set of configs and utils for my model.
Because it is pulling from http://docker.elastic.co , can I replace that one with the image available on Docker Hub?
I suppose the issue is only with the Elasticsearch registry.
Also, does ClearML upload models by default if we save them using torch.save?
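My understanding so far: ClearML hooks torch.save and registers each checkpoint as an output model by default, but only uploads the weights when output_uri is set; the hook can also be switched off per framework, e.g. (project/task names are placeholders):

from clearml import Task

# auto_connect_frameworks lets you disable the torch.save hook entirely
task = Task.init(
    project_name="demo",
    task_name="train",
    auto_connect_frameworks={"pytorch": False},
)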