SuccessfulKoala55 That string was autogenerated by pyhocon and matches their documentation too - https://github.com/lightbend/config/blob/master/HOCON.md#substitutions
The first example won't work (it will treat `${...}` as a string literal and won't replace it). The second does work, but as mentioned, these were not hand-typed anyway; they were generated by pyhocon, so I don't think that's the issue 🤔
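For reference, a minimal sketch of the difference (my own illustration, assuming pyhocon falls back to environment variables for unresolved substitutions as the HOCON spec describes; `MY_SECRET` is a made-up variable):
```python
import os

from pyhocon import ConfigFactory

os.environ["MY_SECRET"] = "s3cr3t"  # hypothetical env var, set here only for the demo

# Unquoted substitution: resolved, falling back to the environment
conf = ConfigFactory.parse_string("api { key = ${MY_SECRET} }")
print(conf["api.key"])  # -> s3cr3t

# Quoted: kept as a plain string literal, never substituted
conf = ConfigFactory.parse_string('api { key = "${MY_SECRET}" }')
print(conf["api.key"])  # -> ${MY_SECRET}
```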
I'm not sure; the setup is not unique to Mac.
Each user has their own `.env` file which is given to the code entry point, and at some point will be loaded with `dotenv.load_dotenv()`.
The environment variables are not set in code anywhere, but the `clearml.conf` uses them directly.
@JuicyFox94 we have it up and running, hurray 🙂
One thing I noticed in the k8s logs is frequent warnings about Python 3.6..? Is the helm chart built with that Python version?
/usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.utils import int_...
And it actually fails on quite a few tasks for us with this Python 3.6.
I tried to set up a different image (`agentk8sglue.defaultContainerImage: "ubuntu:20.04"`) but that did not change much.
I suspect the culprit is `agentk8sglue.image`, which is set to tag `1.24-21` of `clearml-agent-k8s-base`. That image is quite old… Any updates on that? 🤔
I believe that happens natively thanks to pyhocon? No idea why it fails on Mac
i.e. It does not process tasks on its own?
But it does work on Linux 🤔 I'm using it right now, and the environment variables are not defined in the terminal, only in the `.env` 🤔
So a normal config file with environment variables.
AFAIU, something like this happens (oversimplified):
```python
from clearml import Task  # <--- Crash already happens here
import argparse

import dotenv

if __name__ == "__main__":
    # set up argparse with an optional flag for a dotenv file
    parser = argparse.ArgumentParser()
    parser.add_argument("--env-file", default=".env")
    args = parser.parse_args()

    dotenv.load_dotenv(args.env_file)
    # more stuff
```
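If the crash really comes from `clearml.conf` resolving those environment-variable substitutions before the `.env` is loaded, one workaround sketch (my guess, not a confirmed fix) is simply to load the `.env` before clearml is ever imported:
```python
import argparse

import dotenv

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--env-file", default=".env")  # same optional flag as above
    args = parser.parse_args()
    dotenv.load_dotenv(args.env_file)

    # import clearml only once the environment variables are in place
    from clearml import Task
    # more stuff
```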
Here's how it failed for us 😅 `poetry` stores git-related data in `poetry.lock`, so when you `pip list`, you get an internal package of ours with its version but no git reference, i.e. `internal_module==1.2.3` instead of `internal_module @ git+https://....@commit`. Then `pip` actually fails (our internal module is not on PyPI), but `poetry` succeeds
Maybe it's better to approach this the other way: if one uses `Task.force_requirements_env_freeze()`, then the locally updated packages aren't reflected in `poetry` 🤔
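For context, this is roughly the pattern I mean (just a sketch; the project/task names are placeholders):
```python
from clearml import Task

# Must be called before Task.init(); it makes clearml capture a full `pip freeze`
# of the local environment instead of the analyzed/declared requirements.
Task.force_requirements_env_freeze()

task = Task.init(project_name="examples", task_name="freeze-requirements")
```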
Or some users update their `poetry.lock` and some update manually, as they prefer to resolve on their own.
Haha, I've opened so many issues these past few days... Sure, np!
Right so it uses whatever version is available on the agent.
Yeah, it would be nice to either have a `poetry_version` (a-la https://github.com/allegroai/clearml-agent/blob/5afb604e3d53d3f09dd6de81fe0a494dacb2e94d/docs/clearml.conf#L62 ), rename the latter to `manager_version`, or just install from the captured environment, etc.? 🤔
Fair enough 😄
Could be nice to be able to define the fallbacks under `type` maybe? `type: [ poetry, pip ]` (current way under the hood) vs `type: [ pip, poetry ]`
The tl;dr is that some of our users like `poetry` and others prefer `pip`. Since `pip install git+....` stores the git data, it seems trivial to first try installing based on `pip`, and only fall back to `poetry` afterwards, since `pip` would crash in the `poetry` case, as poetry stores the git data elsewhere (in `poetry.lock`)
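A rough sketch of that fallback idea (purely illustrative, not how the agent actually works; the file names are assumptions):
```python
import subprocess


def install_environment(requirements_file: str = "requirements.txt") -> None:
    """Try restoring the env with pip from the captured requirements, fall back to poetry."""
    try:
        subprocess.run(
            ["python", "-m", "pip", "install", "-r", requirements_file],
            check=True,
        )
    except subprocess.CalledProcessError:
        # e.g. an internal package pinned as `internal_module==1.2.3` that is not on PyPI
        subprocess.run(["poetry", "install"], check=True)
```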
Would be nice if the second one was a toggle-able feature (either per use or in the server settings) maybe?
Yeah I figured (2) would be the way to go actually 😄
Local changes are applied before installing requirements, right?
Ah it already exists https://github.com/allegroai/clearml-server/issues/134 , so I commented on it
Ah right, I missed that in the codebase. It just adds the `.dataset` convention to the dataset task.
SmugDolphin23 we've been working with this for 2 weeks now, and it creates a lot of junk in our UI. Is there any way to have better control over this?
Let me test it out real quick.
Those are for specific packages; I'm wondering about the package managers as a whole
No task, no dataset, just an empty container with no reference to the task it's attached to.
It seems to me that it should not move the task if `use_current_task=True`?
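For reference, this is the pattern I have in mind (a sketch; the project/dataset names and the data folder are placeholders):
```python
from clearml import Dataset, Task

task = Task.init(project_name="examples", task_name="build-dataset")

# With use_current_task=True the current task itself becomes the dataset task,
# so I'd expect it to stay where it is rather than being moved to another project.
dataset = Dataset.create(use_current_task=True)
dataset.add_files(path="data/")
dataset.upload()
dataset.finalize()
```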
The agent also uses a different `clearml.conf`, so it should not matter?
Most of these are configurations (specific to a single execution, but one such configuration defines multiple tasks). Some models might be uploaded if the user does not use our built-in link to ClearML model fetching 😄