Legit, if you have a cached_file (i.e. it exists and is accessible), you can return it to the caller
I agree, so shouldn't it be `if cached_file: return cached_file` instead of `if not cached_file: return cached_file`?
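A minimal sketch of the corrected guard being discussed (the function and parameter names here are hypothetical, since the full original snippet isn't shown; `fetch` stands in for whatever produces a fresh copy):

```python
def get_file(cached_file, fetch):
    # Return the cached copy only when it actually exists and is accessible
    # (i.e. cached_file is truthy), as suggested above.
    if cached_file:
        return cached_file
    # Otherwise fall back to fetching a fresh copy.
    return fetch()
```

With `if not cached_file: return cached_file` the function would return `None` (or another falsy value) exactly when there is no usable cache, which inverts the intended behavior.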
cluster.routing.allocation.disk.watermark.low:
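For context, this setting lives in `elasticsearch.yml`; the values below are Elasticsearch's documented defaults for the disk watermark family:

```yaml
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
cluster.routing.allocation.disk.watermark.flood_stage: 95%
```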
Thanks Alon
I double-checked the credentials in the configuration, and they have full EC2 access
```yaml
name: XXXXXXXXXX
on:
  workflow_dispatch
jobs:
  test-monthly-predictions:
    runs-on: self-hosted
    env:
      DATA_DIR: ${{ secrets.RUNNER_DATA_DIR }}
      GOOGLE_APPLICATION_CREDENTIALS: ${{ secrets.RUNNER_CREDS }}
    steps:
      # Checkout
      - name: Check out repository code
        uses: actions/checkout@v2
      # Setup python environment
      - name: Set up python environment using Poetry
        run: |
          /home/elior/.poetry/bin/poetry env use python3.9
          ...
```
I set it to true, and I have more packages installed now, but it still fails... here is the log TimelyPenguin76
```
Successfully installed clearml-1.0.5 cloudpickle-1.6.0 cycler-0.10.0 hyperopt-0.2.5 kiwisolver-1.3.2 matplotlib-3.4.3 networkx-2.6.2 pandas-1.3.2 patsy-0.5.1 plotly-5.3.0 python-dateutil-2.8.2 statsmodels-0.12.2 tenacity-8.0.1 tqdm-4.62.2
Adding venv into cache: /home/elior/.clearml/venvs-builds/3.8
Running task id [24a54a473c234b00a126ec805d74046f]:
[.]$ /home/elior/.clearml/venvs...
```
Yep, if communication is both ways, there is no way (that I can think of) it can be solved for offline mode.
But if the calls made from the server to the client can be made redundant in a specific setup (some functionality will not work, but enough valuable functionality remains), then it is possible the manual way
So if I'm collecting from the middle ones, shouldn't the callback be attached to them?
-_- why isn't there a link to the source in the docs?
Depending on where the agent is, the value of DATA_DIR might change
DangerousDragonfly8 but would this work if they are not concurrent but sequential?
a machine that had a previous installation, but I deleted the /opt/trains directory beforehand
will that require restarting the agent again?
inference table is a pandas dataframe
TimelyPenguin76 this fixed it, setting `detect_with_pip_freeze` to true solves the issue
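For anyone landing here later, a sketch of where that flag goes, assuming the standard `clearml.conf` layout with an `sdk.development` section:

```
sdk {
  development {
    # Capture the environment with `pip freeze` instead of
    # analyzing imported packages
    detect_with_pip_freeze: true
  }
}
```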
So dynamic and static are basically the same thing, except that with dynamic, I can edit the artifact while running the experiment?
Second, why would it be overwritten if I run a different run of the same experiment? As I saw, each object is stored under a directory with the task ID which is unique per run, so I assume I won't be overriding artifacts which are saved under the same name in different runs (regardless of static or dynamic)
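On the overwrite question: as described above, each artifact lands under a directory keyed by the task ID, so same-named artifacts from different runs don't collide. A minimal stdlib sketch of that layout (the path scheme, base directory, and function names here are hypothetical illustrations, not ClearML's actual storage code):

```python
# Hypothetical illustration of per-task artifact paths; NOT ClearML's
# actual storage code. Each run gets a unique task ID, so artifacts
# with the same name never share a path across runs.
from pathlib import Path
from uuid import uuid4

def artifact_path(base: str, task_id: str, name: str) -> Path:
    # e.g. <base>/<task_id>/<artifact name>
    return Path(base) / task_id / name

run_a = artifact_path("/fileserver", uuid4().hex, "model.pkl")
run_b = artifact_path("/fileserver", uuid4().hex, "model.pkl")
assert run_a != run_b  # different task IDs -> different paths
```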
Cool - so that means the fileserver which comes with the host will stay empty? Or is there anything else being stored there?
I'm using pipe.start_locally
so I imagine I don't have to .wait()
right?
Good, so if I'm templating something using clearml-task
(without a queue, so the task is in draft mode) it will use this task? Even though it never executed?
sorry I think it trimmed it
what if I want it to use SSH creds?
I never installed trains on this environment