In which repo? :)
Thanks AgitatedDove14, I'll give it a try. Perhaps additional documentation is needed for that extra_layout.
Some examples of the mess it creates (also posted in the main channel):
- A single project now has multiple subprojects
- The subprojects have the .datasets hidden subproject (with really frustrating project names)
- The subprojects are empty
- To access the original project, I have to go twice into the same project because of these hidden projects
- Because of these hidden subprojects, I cannot delete a project that has 0 experiments
That's exactly what I meant AgitatedDove14! It's just that to access that comparison page, you have to make a comparison first. It would be handy to have a link (in the sidebar?) to an empty comparison.
I don't think there's a PR issue for that yet, at least I haven't created one.
I could have a look at this and maybe make a PR.
Not sure what the recommended flow would look like, though.
CostlyOstrich36 I'm not sure what you mean by "through the apps", but AFAICS any script would expose the values of these environment variables; or what am I missing?
That doesn't make sense?
Maybe I was not clear, but it's a simple part of the config file.
Answering myself for future interested users (at least GrumpySeaurchin29, I think you were interested):
You can "hide" (explained below) secrets directly in the agent π :
- When you start the agent listening to a specific queue (i.e. the services worker), you can specify additional environment variables by prefixing them to the execution, e.g. FOO='bar' clearml-agent daemon ...
- Modify the example AWS autoscaler script: after the driver = AWSDriver.from_config(conf) line, inject ...
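For illustration, a rough Python sketch of the script side (MY_SECRET and extra_env are placeholder names I made up, and where exactly to inject them in the autoscaler script depends on its version):

import os

# Assumes the agent was launched with the secret already in its environment,
# e.g. MY_SECRET='bar' clearml-agent daemon --queue services
# MY_SECRET / extra_env are hypothetical names, not part of the autoscaler script.
extra_env = {"MY_SECRET": os.environ.get("MY_SECRET", "")}
print({name: "***" for name in extra_env})  # log the keys only, never the values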
Also something we are very much interested in (including the logger-based scatter plots etc)
Oh! Nice! I'll have a go at it and report back on the PR if it's in a functional state. Thanks AgitatedDove14!
can I assume these files are reused
A definite maybe: they may or may not be used, but we'd like to keep that option.
Maybe the "old" way Dataset were shown is better suited ?
It was, but it's gone now.
I see your point, this actually might be a "bug"?!
I would say so myself, but it could also be by design...?
Awesome, I'll ask Product to reach out
LMK, happy to help out!
I know our use case is maybe a very different one, but...
task.upload_artifact(..., is_requirement=True), task.connect_configuration(..., is_requirement=True)
It just implies these artifacts/configurations must be downloaded prior to running the code itself; then you also don't have to worry about zipping?
Unfortunately not; each task defines and constructs its own dataset. I want the cloned task to save that link.
For the former (static-ish environment variables), just add:
environment {
VAR1: value1
VAR2: value2
}
to the agent's clearml.conf
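Quick sanity check from inside the task's code, as a sketch, assuming the agent picks up that environment section and exposes the variables to the task process:

import os

# VAR1/VAR2 are the names defined in the agent's clearml.conf environment section above
print("VAR1 =", os.environ.get("VAR1"))
print("VAR2 =", os.environ.get("VAR2"))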
Also, I can't select any tasks from the dashboard search results.
Right, so this is checksum-based? Are there plans to store only delta changes for files (i.e. store only the changed bytes instead of the entire file)?
Hm. Is there a simple way to test tasks, one at a time?
@<1523701827080556544:profile|JuicyFox94> we have it up and running, hurray!
One thing I noticed in the k8s logs is frequent warnings about Python 3.6..? Is the helm chart built with that Python version?
/usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography and will be removed in a future release.
from cryptography.utils import int_...
That was a good idea; unfortunately it did not help too much, but I think I may have found a workaround, thanks!
SuccessfulKoala55 This happens with pip >= 22.3, btw.
Another semi-related issue is that I now encounter these kinds of error messages:
clearml_agent: ERROR: __init__() got an unexpected keyword argument 'types'
Great to hear @<1523701087100473344:profile|SuccessfulKoala55> ! Is there an estimated timeline for these releases?
For example, can't interact with these two tasks from this view (got here from searching in the dashboard view; they're in different projects):
Debugging. It's very useful for us to be able to see the contents of the configuration and understand what is going on and what is meant to be going on. Without a preview (which in our case is the entire content of the configuration file), one has to take the annoying route of downloading the files etc. The configurations are uploaded to a single task and then linked across all tasks to conserve storage space (so the S3 storage point is identical across tasks). Sure, sounds good. I think it's a ...
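For context, a minimal sketch of how we surface a configuration so its content shows up in the task's CONFIGURATION tab (file name and project/task names are placeholders, assuming a configured ClearML client):

from clearml import Task

task = Task.init(project_name="examples", task_name="config-preview")
# connect the file so its full content is visible in the UI; when run remotely,
# connect_configuration returns a local copy of the stored configuration
local_copy = task.connect_configuration("config.yaml", name="run_config")
print(open(local_copy).read())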
Okay trying again without detached
It does, but I don't want to guess the JSON structure (what if ClearML changes it, or the folder structure it uses for offline execution?). If I do this, I'm writing a test that relies on ClearML's implementation of offline mode, which is tangential to the unit test.
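Roughly what I had in mind for such a test, as a sketch, assuming offline mode is acceptable and without asserting anything about ClearML's on-disk JSON layout:

from clearml import Task

# run the task fully offline so the test never talks to a server
Task.set_offline(offline_mode=True)
task = Task.init(project_name="tests", task_name="unit-test-run")
task.connect({"lr": 0.1})  # exercise the code under test
task.close()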
Hey SuccessfulKoala55! Is the configuration file needed for Task.running_locally()? This is tightly related to issue #395, where we need additional files for remote execution but have no way to attach them to the task other than using the StorageManager as a temporary cache.
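For reference, a rough sketch of the StorageManager workaround mentioned above (the S3 URL is just a placeholder):

from clearml import StorageManager

# pull the extra file through StorageManager so it is cached locally for the run
local_path = StorageManager.get_local_copy(remote_url="s3://bucket/extra/config.yaml")
print(local_path)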