
CLI doesn’t care about the state of my git repo right?
it finally finished, no worries
should I nuke the .clearml/cache
Dataset.get works fine from a Python script; it pulls the data into the cache. Just the CLI seems broken
Thanks, I guess I need to have a bucket under Cloud Storage?
got it, nice, thanks
this is great… so it looks like it’s best to do it in a new dir
no containers for me 😁
So if I do this in my local repo, will it mess up my git state, or should I do it in a fresh directory?
Oh, I think I know what I missed. When I set --project … --name …, they did not match the names I used when I called task.init() in my code
So net-net, does this mean it’s behaving as expected, or is there something I need to do to enable “full venv cache”? It spends nearly 2 mins, starting from
created virtual environment CPython3.8.10.final.0-64 in 97ms
creator CPython3Posix(dest=/home/pchalasani/.clearml/venvs-builds/3.8, clear=False, global=False)
and then printing several lines like this:
Successfully installed pip-20.1.1
Collecting Cython
Using cached Cython-0.29.30-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86...
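A follow-up note for anyone who lands here: as far as I can tell, the “full venv cache” is opt-in on the agent side, enabled in the agent machine’s ~/clearml.conf. The snippet below is a sketch based on the default config shipped with clearml-agent; the exact keys may differ between versions:
```
# in ~/clearml.conf on the agent machine (sketch; compare with your agent's default config)
agent {
    venvs_cache: {
        # maximum number of cached venvs kept around
        max_entries: 10
        # uncommenting the path is what actually turns the cache on
        path: ~/.clearml/venvs-cache
    }
}
```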
I have a strong attachment to a workflow based on CLI, nice zsh auto-suggestions, Hydra and the like. Hence why I moved away from dvc 🙂
I think I am missing one part — which command do I use on my local machine, to indicate the job needs to be run remotely? I’m imagining something like clearml-remote run python3 my_train.py
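In case it helps anyone searching later: there is no clearml-remote command, but the clearml-task CLI is the closest equivalent for launching a script on a remote agent. The flags below are a sketch (project/name/queue are placeholders); double-check against clearml-task --help:
```
# enqueue a local script for execution by a remote agent
clearml-task --project MyProject --name my-train-run --script my_train.py --queue default
```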
So if I want to train with a remote agent on a remote machine, I have to:
- spin up clearml-agent on the remote
- create a dataset using clearml-data, populate with data…
- from my local machine use clearml-data to upload data to a google gs:// bucket
- modify my code so it accesses data from the dataset as here https://clear.ml/docs/latest/docs/clearml_data/clearml_data_sdk#accessing-datasets (see the sketch right after this list)
Am I understanding right?
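As a minimal sketch of that last step (following the docs page linked above; the project/dataset names are placeholders):
```python
from clearml import Dataset

# fetch the dataset that was registered with clearml-data
dataset = Dataset.get(dataset_project="MyProject", dataset_name="my-dataset")

# download a read-only local copy (served from the cache when already present)
data_dir = dataset.get_local_copy()
print(data_dir)  # point the training code at this directory
```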
I would also be interested in a GCP autoscaler, I did not know it was possible/available yet.
I used task.execute_remotely(queue_name=..., clone=True) and indeed it instantly activates the venv on the remote. I assume clone=True is fine
Yes, after installing, it listed the installed packages in the console, with the version of each
I see, so there’s no way to launch a variant of my last run (with say some config/code tweaks) via CLI, and have it re-use the cached venv?
Thanks for the quick response. Will look into this later, I think I understand
A quick note for others who may visit this… it looks like you have to do:
Task.force_requirements_env_freeze(force=True, requirements_file="requirements.txt")
to ensure any changes in requirements.txt are reflected in the remote venv
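For context, a minimal sketch of where that call goes; my understanding is it has to run before Task.init() so the frozen requirements are attached when the task is created (project/task names are placeholders):
```python
from clearml import Task

# take requirements from requirements.txt instead of auto-detected imports
# (called before Task.init, as far as I can tell)
Task.force_requirements_env_freeze(force=True, requirements_file="requirements.txt")

task = Task.init(project_name="MyProject", task_name="my-train-run")
```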
I use a CLI arg remote=True so depending on that it will run locally or remotely.
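Roughly this pattern, sketched with argparse (the flag name and queue are just my choices):
```python
import argparse
from clearml import Task

parser = argparse.ArgumentParser()
parser.add_argument("--remote", action="store_true", help="enqueue on a remote agent")
args = parser.parse_args()

task = Task.init(project_name="MyProject", task_name="my-train-run")

if args.remote:
    # stops local execution here and enqueues the task for the remote agent
    task.execute_remotely(queue_name="default", clone=False)

# training code below runs locally, or on the agent after the re-launch
```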
But “cloning” via the UI runs an exact copy of the code/config, not a variant, unless I edit those via the UI (which is not ideal). So it looks like the following workflow, which is trivial to do locally, is not possible via remote agents:
- run exp
- tweak code/configs in IDE, or tweak configs via CLI
- have it re-run in the exact same venv (with no install overhead etc.)
So maybe the remote agents are meant more for enqueuing a whole collection of settings (via code) and checking back in a few hours (in which ...