I manually deleted the allegroai/trains:latest image; that didn't help either
Can you lend a few words about how the non-pip-freeze mechanism of detecting packages works?
Not manually. I assume that if I deleted the image and then ran docker-compose up, and I can see the pull working, it should pull the correct one
UptightCoyote42 - How are these images available to all agents? Do you host them on Docker Hub?
But does it disable the agent? Or will the tasks still wait for the agent to dequeue them?
How do I get from the node to the task object?
In my use case I'm running the agent on the same machine I'm developing on, so will pointing this env var to the same venv I'm using for development skip the venv creation process from the task requirements?
why not use my user and group?
I'd go for
` from trains.utilities.pyhocon import ConfigFactory
config = ConfigFactory.parse_file(CONF_FILE_PATH) `
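Just to illustrate (a rough sketch - the file path and the api.api_server key below are placeholders, adjust them to whatever your conf file actually contains):
` from trains.utilities.pyhocon import ConfigFactory

# parse the HOCON file into a ConfigTree
config = ConfigFactory.parse_file("/path/to/trains.conf")

# ConfigTree supports dotted keys and an optional default
api_server = config.get("api.api_server", None)
print(api_server) `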
Anyway I checked the base task, and this is what it has in installed packages (it seems like it doesn't list all the packages actually installed in the environment)
Cool - what kind of objects are returned by `.artifacts.__getitem__`? I want to check their docs
let me try docker-compose down --rmi all
I was referring to the object returned by Task.artifacts['...'] - the one I call .get() on
I understand what I get; I'm asking because I want to see how the object I'm calling .get() on behaves
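Roughly what I mean, as a sketch (the task ID and artifact name are placeholders, and whatever .get() returns obviously depends on what was uploaded):
` from trains import Task

task = Task.get_task(task_id="<task-id>")   # placeholder task ID
artifact = task.artifacts["my_artifact"]    # the object __getitem__ returns
obj = artifact.get()                        # the deserialized payload

# this is the part I'm asking about - how do these two objects behave?
print(type(artifact), type(obj)) `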
CostlyOstrich36 so why 1000:1000? My user and group are not that, and neither are all the other files I have under /opt/clearml
` name: XXXXXXXXXX
on:
  workflow_dispatch:
jobs:
  test-monthly-predictions:
    runs-on: self-hosted
    env:
      DATA_DIR: ${{ secrets.RUNNER_DATA_DIR }}
      GOOGLE_APPLICATION_CREDENTIALS: ${{ secrets.RUNNER_CREDS }}
    steps:
      # Checkout
      - name: Check out repository code
        uses: actions/checkout@v2
      # Set up Python environment
      - name: Set up Python environment using Poetry
        run: |
          /home/elior/.poetry/bin/poetry env use python3.9
          ... `
If you want, we can do a live Zoom call or something so you can see what happens
the working directory is the same relative to the script, but the absolute path is different, as the GitHub Action creates a new environment for it and deletes it afterwards
glad I managed to help back in some way
looks like it did pull the right image
How did it come to this? I didn't configure anything, I'm using the trains AMI, with the suggested instance type
If the credentials don't have access to the autoscale service, obviously it won't work
and the machine I have is CUDA 10.2.
I also tried nvidia/cuda:10.2-base-ubuntu18.04 which is the latest