Yes exactly, but I guess I could've googled for that 🙂
Copy the uncommitted changes captured by ClearML using the UI, write them to changes.patch, and run git apply changes.patch 🙂
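For reference, a minimal shell sketch of those steps (the repository path is hypothetical; the patch is the uncommitted-changes text copied from the ClearML UI):

```bash
# Save the uncommitted-changes text from the ClearML UI as changes.patch,
# then apply it on top of the checked-out commit:
cd /path/to/repo        # hypothetical repository path
git apply changes.patch
```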
Yes, as I wrote above 🙂
They are set with a .env file - it's a common practice. The .env file is, at the moment, uploaded to a temporary cache (if you remember the discussion regarding the StorageManager), so it's also available remotely (related to issue #395)
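For context, a hedged example of what such a .env file might contain (the variable names are made up for illustration):

```
# .env - example contents (hypothetical variable names)
MY_API_KEY=abc123
MY_DB_URI=postgres://user:pass@db-host/mydb
```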
I'll have a look at 1.1.6 then!
And that sounds great - environment variables should be supported everywhere in the config, or else the docs should probably mention where they are and are not supported 🙂
I'll be happy to test it out if there's any commit available?
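To illustrate what I mean by environment variables in the config, here is a minimal sketch (assuming clearml.conf's HOCON syntax resolves ${VAR} references from the environment; the variable names are made up):

```
# clearml.conf (sketch)
sdk {
    aws {
        s3 {
            key: ${MY_S3_KEY}        # taken from the environment when the config is loaded
            secret: ${MY_S3_SECRET}
        }
    }
}
```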
The thing I don't understand is how come this DOES work on our Linux setups 🤔
AFAIU, something like this happens (oversimplified):
```python
from clearml import Task  # <--- Crash already happens here
import argparse
import dotenv

if __name__ == "__main__":
    # set up argparse with an optional flag for a dotenv file
    parser = argparse.ArgumentParser()
    parser.add_argument("--env-file", default=".env")
    args = parser.parse_args()
    dotenv.load_dotenv(args.env_file)
    # more stuff
```
The agent also uses a different clearml.conf, so it should not matter?
But it does work on Linux 🤔 I'm using it right now and the environment variables are not defined in the terminal, only in the .env file 🤔
So a normal config file with environment variables.
Could you provide a more complete set of instructions, for the less inclined among us?
How would I back up the data in the future, etc.?
- in the second scenario, I might not have changed the results of the step, but my refactoring changed the speed considerably, and this is something I measure.
- in the third scenario, I might not have changed the results of the step and my refactoring just cleaned up the code; besides that, nothing substantial was changed, so I do not want a rerun.
Well, I would say then that in the second scenario it's just rerunning the pipeline, and in the third it's not running it at all 🙂
(I ...
Yeah I will probably end up archiving them for the time being (or deleting if possible?).
Otherwise (regarding the code question), I think it's better if we continue in the original thread, as it has a sample code snippet to illustrate what I'm trying to do.
Yeah, I was basically trying to avoid clutter in the Pipelines page. But see my other thread for the background, maybe you have some good input there? 🙂
See @<1523701087100473344:profile|SuccessfulKoala55>
Dynamic pipelines in a notebook, so I don't have to recreate a pipeline every time a step is changed 🤔
Hey @<1523701435869433856:profile|SmugDolphin23>, thanks for the reply! I'm aware of the caching; that's not the issue I'm trying to resolve 🙂
This is related to my other thread, so I'll provide an example there -->
@<1523701827080556544:profile|JuicyFox94> we have it up and running, hurray 🎉
One thing I noticed in the k8s logs is frequent warnings about Python 3.6… Is the Helm chart built with that Python version?
/usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.utils import int_...
And actually it fails on quite a few tasks for us with this Python 3.6.
I tried to set a different image (agentk8sglue.defaultContainerImage: "ubuntu:20.04"), but that did not change much.
I suspect the culprit is agentk8sglue.image, which is set to tag 1.24-21 of clearml-agent-k8s-base. That image is quite old… Any updates on that? 🤔
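For reference, a hedged sketch of the kind of Helm values override this would involve (assuming the agentk8sglue chart keys mentioned above; the tag is a placeholder, not a known release):

```yaml
# values.override.yaml (sketch)
agentk8sglue:
  image:
    repository: allegroai/clearml-agent-k8s-base
    tag: "<newer-tag>"                 # placeholder - anything newer than 1.24-21
  defaultContainerImage: ubuntu:20.04  # default image for task pods
```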
Oh nono, more like:
- Create a pipeline
- Add N steps to it
- Run the pipeline
- It fails/succeeds, the user does something with the output
- The user would like to add/modify some steps based on the results now (after closer inspection).
I wonder: at step (5), do I have to recreate the pipeline every time? 🤔
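To make that flow concrete, here's a rough sketch of how steps 1-3 might look when scripted from a notebook with PipelineController (the pipeline, project, and step names plus the toy functions are made up):

```python
from clearml import PipelineController

def preprocess(raw: int) -> int:
    # hypothetical step logic
    return raw * 2

def train(data: int) -> int:
    # hypothetical step logic
    return data + 1

# 1. create a pipeline
pipe = PipelineController(name="demo-pipeline", project="examples", version="1.0.0")

# 2. add N steps to it
pipe.add_function_step(name="preprocess", function=preprocess,
                       function_kwargs=dict(raw=3), function_return=["data"])
pipe.add_function_step(name="train", function=train,
                       function_kwargs=dict(data="${preprocess.data}"),
                       function_return=["model"], parents=["preprocess"])

# 3. run the pipeline (locally here, for quick iteration in a notebook)
pipe.start_locally(run_pipeline_steps_locally=True)
```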
I'll also post this on the main channel -->
Thanks! To clarify, all the agent does is then spawn new nodes to cover the tasks?
i.e., it does not process tasks on its own?
I am; it seems like maybe a couple of hours?
Thanks CostlyOstrich36!
Not that I recall
The deferred_init input argument to Task.init is bool by default, so checking type(deferred_init) == int makes no sense to begin with, and is altering the flow.
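A quick illustration of why that check misbehaves when the argument is a bool (plain Python, no ClearML needed):

```python
deferred_init = True                      # the documented default type is bool
print(type(deferred_init) == int)         # False: type() compares exact classes
print(isinstance(deferred_init, int))     # True: bool is a subclass of int
```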
Thanks AgitatedDove14, I'll first have to prove viability with the free version :)
SuccessfulKoala55 could this be related to the monkey patching for logging platform? We have our own logging handlers that we use in this case
I thought so too - so I added flush calls just in case, but nothing's changed.
This is somewhat weird since it always happens in the above scenario (Ray + ClearML), and always in the last task/job from Ray