Hmm can you run the agent in debug mode, and check the specific console log?
```
clearml-agent --debug daemon --foreground ...
```
If this is GitHub/GitLab/Bitbucket, what I'm thinking is just a link opening an iframe / tab with the exact entry point script / commit.
What do you think?
Agreed. MotionlessCoral18, could you open a feature request on the clearml-agent repo please? (I really do not want this feature to get lost, and I'm with you on the importance; let's make sure we have it configurable from the outside)
DrabCockroach54, notice that there is no aarch64 wheel for anything other than Python 3.5...
(and in both cases only py 3.5/3.6 builds, everything else will be built from code)
https://pypi.org/project/pycryptodome/#files
For .git-credentials remove the git_pass/git_user from the clearml.conf
If you want to use ssh you need to also add: `force_git_ssh_protocol: true`
https://github.com/allegroai/clearml-agent/blob/a2db1f5ab5cbf178840da736afdc370cfff43f0f/docs/clearml.conf#L25
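In case it helps, a sketch of what the relevant agent section of clearml.conf could look like (values and comments here are illustrative, not a verbatim copy of the shipped config):

```
# clearml.conf -- agent section (sketch)
agent {
    # leave git_user / git_pass unset so the agent falls back to the
    # local git credential store (e.g. ~/.git-credentials)
    # git_user: ""
    # git_pass: ""

    # force cloning repositories over SSH instead of HTTPS
    force_git_ssh_protocol: true
}
```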
Is there a way to do this without manually editing the installed packages?
Running your code once with Task.init
should automatically detect all the directly imported packages; then when trains-agent
executes the Task, it will install them into a clean venv and put all the packages back inside the venv.
In order for all the used packages (e.g. bigquery) to appear in the "Installed packages", your code needs to be executed once manually (i.e. not with trains-agent), then the ` tra...
Hi SkinnyPanda43
Do you mean the clearml-agent or the clearml python package (a.k.a. the auto package detection)?
Still, it is a ChatGPT interface, correct?
Actually, no. And we will change the wording on the website so it is more intuitive to understand.
The idea is you actually train your own model (not chatgpt/openai) and use that model internally, which means everything is done inside your organisation, from data through training and ending with deployment. Does that make sense?
BoredHedgehog47, that actually depends on the container; are you running as root inside the container?
if not I think the easiest hack is to always map /etc/hosts as a k8s secret file?
I'm not sure this is configurable from the outside 🙂
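A sketch of the hosts-mapping idea, assuming a Secret named `extra-hosts` holding a hosts file (all names here are illustrative):

```yaml
# kubectl create secret generic extra-hosts --from-file=hosts=./hosts
# Then mount it over /etc/hosts in the pod spec:
spec:
  volumes:
    - name: extra-hosts
      secret:
        secretName: extra-hosts
  containers:
    - name: agent
      volumeMounts:
        - name: extra-hosts
          mountPath: /etc/hosts
          subPath: hosts
```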
Hi OutrageousGrasshopper93
I think that what you are looking for is Task.import_task and Task.export
https://allegro.ai/docs/task.html#trains.task.Task.import_task
https://allegro.ai/docs/task.html#trains.task.Task.export_task
I am running clearml-agent in docker mode btw.
Try -e PYTHONOPTIMIZE=1
in the docker args section, should do the same 🙂
https://docs.python.org/3/using/cmdline.html#envvar-PYTHONOPTIMIZE
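If you want to sanity-check what `PYTHONOPTIMIZE=1` actually does, here is a small stdlib-only sketch: it spawns a child interpreter with the variable set and shows that `assert` statements are stripped and `__debug__` becomes `False`:

```python
import os
import subprocess
import sys

# Spawn a child Python with PYTHONOPTIMIZE=1 (equivalent to python -O):
# assert statements are compiled away and __debug__ evaluates to False.
env = dict(os.environ, PYTHONOPTIMIZE="1")
result = subprocess.run(
    [sys.executable, "-c", "assert False, 'stripped'; print(__debug__)"],
    env=env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # prints "False" - the assert never fired
```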
You can control it with auto_ arguments in the Task.init call
https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
Perhaps it is the imports at the start of the script only being assigned to the first task that is created?
Correct!
However, when I split the experiment task out completely it seems to have built the cloned task correctly.
Nice!!
The remaining problem is that this way, they are visible in the ClearML web UI which is potentially unsafe / bad practice, see screenshot below.
Ohhh, that makes sense now, thank you 🙂
Assuming these are one-time credentials for every agent, you can add these arguments in the "extra_docker_arguments" section in clearml.conf
Then make sure they are also listed in: hide_docker_command_env_vars
which should cover the console log as well
https://github.com/allegroai/clearml-agent/blob/26e6...
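For reference, a sketch of what the masking section in clearml.conf could look like (key names follow the shipped example config, values are illustrative):

```
# clearml.conf -- agent section (sketch)
agent.hide_docker_command_env_vars {
    # mask known secret-like env vars in the printed docker command
    enabled: true
    # extra variable names to mask, in addition to the built-in list
    extra_keys: ["MY_REGISTRY_PASSWORD"]
}
```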
Is gpu_0_utilization also in % then?
Correct 🙂
I was trying to find what the min and max values for the above metrics are.
Oh that makes sense, notice that you can get the values over time, so you can track the usage over the experiment lifetime (you can of course see it in the Scalar tab of the experiment)
and pip install clearml-agent
fails?
No, I mean actually compare using the UI, maybe the arguments are different or the "installed packages"
Yes! Thanks so much for the quick turnaround
My pleasure 🙂
BTW: did you see this (it seems like the same bug?!)
https://github.com/allegroai/clearml-helm-charts/blob/0871e7383130411694482468c228c987b0f47753/charts/clearml-agent/templates/agentk8sglue-configmap.yaml#L14
I want that last python program to be executed with the environment that was created by the agent for this specific task
Well basically they all inherit the Python environment that points to the venv they started from, so at least in theory it should be transparent when the agent is spinning the initial process.
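As a stdlib-only illustration of that inheritance (nothing ClearML-specific): a child process started with the venv's interpreter sees the parent's environment by default, which is why the spawned program runs "inside" the environment the agent prepared:

```python
import os
import subprocess
import sys

# Child processes inherit the parent's environment (including PATH and
# VIRTUAL_ENV set when the venv was activated), so a script launched
# from inside the venv sees the same environment by default.
os.environ["DEMO_FROM_PARENT"] = "inherited"
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_FROM_PARENT'])"],
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # prints "inherited"
```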
I eventually found a different way of achieving what I needed
Now I'm curious, what did you end up doing?
UnevenDolphin73 it seems this is a UI browser limit, this means we will need to move it into the server ...
See here: https://clearml.slack.com/archives/CTK20V944/p1640247879153700?thread_ts=1640135359.125200&cid=CTK20V944
Okay great, so we do have the Args section there.
What do you have in the "Execution" tab?
You can check the Keras example: run it twice, and the second time it will continue from the previous checkpoint, and you will have both an input and an output model.
https://github.com/allegroai/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py
So clearml-init can be skipped, and I provide the users with a template and ask them to append the credentials at the top, is that right?
Correct
What about the "Credential verification" step in the clearml-init command? That won't take place in this pipeline, right? Will that be a problem?
The verification test is basically making sure the credentials were copy-pasted correctly.
You can achieve the same by just running the following in your python console:
` from clearml import Ta...
I basically just mean having a date input like you would in Excel, where it brings up a calendar (and a clock if it's time) and defaults to "now".
I would love that as well, but I kind of suspect the frontend people will say these things tend to start small and grow into a huge effort. At the moment what we do is the UI is basically plain text and the casting is done on the SDK side.
You can however provide type information and help (you can see it when you hover over the arguments on th...
LazyFish41, just making sure: you built a container from the Dockerfile and used it as the base docker image for the Task, is that correct?
Also notice the clearml-agent will not change the entry point of the docker image, meaning if the entry point does not end with plain bash, it will not actually run anything
It would be nice to have some documentation proclaiming how randomness behaves when running tasks (in all their variations). E.g. Should I trust seeds to be reset or should I not assume anything and do my own control over seeds.
That is a good point, I'll make sure we mention it somewhere in the docs. Any thoughts on where?
(also, could you make sure all posts regarding the same question are put in the thread of the first post to the channel?)