I want that last python program to be executed with the environment that was created by the agent for this specific task
Well basically they all inherit the Python environment that points to the venv they started from, so at least in theory it should be transparent when the agent is spinning up the initial process.
I eventually found a different way of achieving what I needed
Now I'm curious, what did you end up doing ?
in my repo I maintain a bash script to setup a separate python env.
Hmm interesting, now I have to wonder: what is the difference? Meaning, why doesn't the agent build a similar one based on the requirements?
JitteryCoyote63 you mean at runtime, while the agent is installing? I'm not sure I fully understand the use case?!
In the installed packages section it includes
pywin32 == 303
even though that is not in my requirements.txt.
So for some reason it is being detected (meaning your code base actually imports it in code)
But you can just remove it, either by manually editing the cloned Task (right click, reset, then you can edit the section), or via code: `Task.ignore_requirements("pywin32")` before `task = Task.init(...)`
or at least stick to the requirements.txt file rather than the actual environment
You can also force it to log the requirements.txt with: `Task.force_requirements_env_freeze(requirements_file="requirements.txt")` before `task = Task.init(...)`
ohh, could it be a 32bit version of python ?
and pip install clearml-agent
fails?
Hi FiercePenguin76
Artifacts are as you mentioned: you can create as many as you like, but in the end there is no "versioning" on top; it can easily be used this way with name+counter.
In contrast, Models do let you create multiple entries with the same name, with the version implied by order. Wdyt?
Registering some metadata as a model doesn't feel correct to me.
Yes I'm with you
BTW what kind of meta-data would need versions during the life time of a Task ?
Makes sense. BTW: you can manually add a data visualization to a Dataset with `dataset.get_logger().report_table(...)`
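If it helps, a sketch of attaching a table preview to a Dataset; the project/dataset names are made up, and the server-dependent calls are shown commented since they need a configured ClearML server (and pandas):

```python
# A small summary table to attach as a dataset visualization:
rows = [["split", "count"], ["train", 60000], ["test", 10000]]

# With a configured server this would look roughly like:
# import pandas as pd
# from clearml import Dataset
# ds = Dataset.create(dataset_project="demo", dataset_name="mnist-meta")
# ds.get_logger().report_table(
#     title="summary", series="counts", iteration=0,
#     table_plot=pd.DataFrame(rows[1:], columns=rows[0]))
# ds.finalize()
```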
Interesting!
Wouldn't the Dataset class be a good solution?
This is what I just used:
```
import os
from argparse import ArgumentParser

from tensorflow.keras import utils as np_utils
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Activation, Dense, Softmax
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint
from clearml import Task

parser = ArgumentParser()
parser.add_argument('--output-uri', type=str, required=False)
args =...
```
Funny it's the extension "h5" , it is a different execution path inside keras...
Let me see what can be done
Oh my bad, post 0.17.5
RC will be out soon; in the meantime you can install directly from GitHub: `pip install git+`
Hi GrievingTurkey78
I think it is already fixed with 0.17.5, no?
The Cloud Access section is in the Profile page.
Any storage credentials (S3 for example) are only stored on the client side (never the trains-server); this is the reason we need to configure them in the trains.conf. When the browser needs to access those URLs (e.g. downloading an artifact) it also needs the secret/key, so it automatically displays a popup requesting them, and will store them in this section. Notice they are stored in the browser session (as a cookie).
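For reference, those client-side credentials live in the `sdk.aws.s3` section of trains.conf (clearml.conf in later versions); the keys and bucket name below are placeholders:

```
sdk {
    aws {
        s3 {
            # default credentials, used for any bucket without a specific entry
            key: "ACCESS_KEY"
            secret: "SECRET_KEY"
            credentials: [
                {
                    bucket: "my-bucket"
                    key: "BUCKET_ACCESS_KEY"
                    secret: "BUCKET_SECRET_KEY"
                }
            ]
        }
    }
}
```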
For some reason copying over everything and making another file and running it there does not allow it to run
Not sure i follow...
you should only have one ~/clearml.conf, and from wherever you are running your code it will always read the configuration from the same file
yes, looks like. Is it possible?
Sounds odd...
What's the exact project/task name?
And what is the output_uri?
OutrageousGrasshopper93 could you send an example of the two links from the artifacts (one local one remote) ?
Thanks OutrageousGrasshopper93
I will test it with the "!".
By the way the "!" is in the project or the Task name?
actually the issue is that the packages are not being detected
what happens if you do the following? `Task.add_requirements("tensorflow")` before `task = Task.init(...)`
but realized calling that from the extension would be hard, so we opted to have the TypeScript code make calls to the ClearML API server directly, e.g.
POST /tasks.get_all_ex
.
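In Python terms (the server URL and credentials below are placeholders; `tasks.get_all_ex` is the same endpoint the web UI uses, so its exact fields aren't formally documented), such a direct call might be sketched as:

```python
import base64
import json
import urllib.request

api_server = "https://api.clear.ml"       # placeholder: your ClearML API server
access_key, secret_key = "KEY", "SECRET"  # placeholder credentials from ~/clearml.conf

# ClearML API calls are POSTs with a JSON body; auth here is HTTP basic
# using the access/secret key pair.
body = json.dumps({"only_fields": ["id", "name", "status"]}).encode()
token = base64.b64encode(f"{access_key}:{secret_key}".encode()).decode()
req = urllib.request.Request(
    f"{api_server}/tasks.get_all_ex",
    data=body,
    headers={"Content-Type": "application/json",
             "Authorization": f"Basic {token}"},
    method="POST",
)
# resp = json.load(urllib.request.urlopen(req))  # results under resp["data"]["tasks"]
```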
did you manage to get that working?
- To get the credentials, we read the
~/clearml.conf
file. I tried hard, but couldn't get a TypeScript library to work to parse the HOCON config file format... so I eventually resorted to using (likely brittle) regex to grab the ClearML endpoint and API ke...
Hi ElegantCoyote26
what's the clearml version you are using?
Hi FrothyShark37
Can you verify with the latest version?
pip install -U clearml
YEYYYYYYyyyyyyyyyyyyyyyyyy
so for example, if there is an idle GPU and Q3 takes it, and then a task comes into Q2 (for which we specified 3 GPUs) but Q3 has already taken some of those GPUs, what will happen?
This is a standard "race" the first one to come will "grab" the GPU and the other will wait for it.
I'm pretty sure the enterprise edition has preemption support, but this is not currently part of the open source version (btw: the dynamic GPU allocation is also, I think, part of the enterprise tier; in the opensource ...
FlutteringWorm14 any insight on the Task that it fails to delete? Or a way to reproduce?
I try to add it to ClearML Serving, but it calls the
forward
method by default
If this is the case, then the statement above seems odd to me: if this is a custom engine, who exactly is calling "forward"?
(in your code example you specifically call generate, as you should)