But... Which queue does it listen to, which type of instances will it use, etc.?
For now we've monkey-patched it for our use case:
` from clearml import Dataset
# Use "active" instead of the default "hidden" tag for dataset tasks
Dataset._Dataset__hidden_tag = "active"
# Replace the hidden-project name builder so datasets keep the plain project name
def foo(cls, dataset_project, dataset_name):
    dataset_project = dataset_project or "Datasets"
    return dataset_project, dataset_project.rpartition("/")[0]
Dataset._build_hidden_project_name = foo `
That doesn't make sense?
Maybe I was not clear, but it's a simple part of the config file.
Indeed. I'll open an issue, sure!
clearml.backend_api.session.defs.ENV_HOST.get() did not work unfortunately
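For reference, this is roughly what I tried (a minimal sketch; assuming the attribute path from above is still where ENV_HOST lives):
` from clearml.backend_api.session import defs

# Read the API host from the environment-variable definition;
# this may return None if the corresponding env var isn't set
api_host = defs.ENV_HOST.get()
print(api_host) `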
The Task.init is called at a later stage of the process, so I think this relates again to the whole setup process we've been discussing both here and in #340... I promise to try ;)
AgitatedDove14 I will try! I remember there were some issues with it, where I had to resort to this method first, but maybe things have changed since :)
Not sure if ClearML has any built-in support, but we used the above for a similar issue, just with Prefect2 :)
I'm not sure I follow, how would that solution look like?
Ah. Apparently getting a task ID while it's running can cause this behaviour
Let me know if there's any additional information that can help SuccessfulKoala55 !
I can navigate through the projects, but if I select one task in one project and then navigate to another project and select a different task, there is no option to compare the tasks.
In the projects page, if I show all, I just see the projects. If I search for a task with a similar name, I get results, but I can't compare them via the UI.
The only way I managed so far was to create a pseudo-comparison between unrelated tasks in the same project, then remove one task from the comparison, and u...
I'm using 1.1.6 (upgraded from 1.1.6rc0) - should I try 1.1.7rc0 or smth?
You could probably either:
- Start the task first (using Task.init), and then set the parameters if needed (see the sketch below)
- Attach the dataset to the task itself
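Something along these lines, roughly (a minimal sketch, not tested; project/task/parameter/dataset names are placeholders):
` from clearml import Task, Dataset

# Start (or attach to) the task first
task = Task.init(project_name="examples", task_name="my-task")

# Then set/override parameters if needed
params = {"lr": 0.001, "batch_size": 32}
task.connect(params)

# And attach the dataset to the task itself, e.g. by fetching it from within the task
dataset = Dataset.get(dataset_project="Datasets", dataset_name="my-dataset")
local_path = dataset.get_local_copy() `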
Maybe this is part of the paid version, but would be cool if each user (in the web UI) could define their own secrets, and a task could then be assigned to some user and use those secrets during boot?
Same result. This is frustrating, wtf happened 🤯
This is also specifically the services queue worker I'm trying to debug
Debugging. It's very useful for us to be able to see the contents of the configuration and understand what is going on and what is meant to be going on. Without a preview (which in our case is the entire content of the configuration file), one has to take the annoying route of downloading the files etc. The configurations are uploaded to a single task and then linked across all tasks to conserve storage space (so the S3 storage point is identical across tasks). Sure, sounds good. I think it's a ...
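Roughly what our flow looks like, for context (a simplified sketch; project/task names, file name, and task IDs are made up):
` from clearml import Task

# The configuration file is uploaded once, to a single task, so there is a single S3 storage point
holder = Task.init(project_name="examples", task_name="config-holder")
holder.upload_artifact("config", artifact_object="config.yaml", wait_on_upload=True)
config_url = holder.artifacts["config"].url

# The other tasks only store a reference to that same URL instead of re-uploading the file
for task_id in ["task-id-1", "task-id-2"]:
    t = Task.get_task(task_id=task_id)
    t.set_parameter("General/config_url", config_url) `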
AgitatedDove14 yeah I see this now; this was an issue because I later had to "disconnect" the remote task, so it can, itself, create new tasks (using clearml.config.remote.override_current_task_id(None)). I guess you might remember that discussion?
EDIT: It's the discussion we had here, for reference. https://clearml.slack.com/archives/CTK20V944/p1640955599257500?thread_ts=1640867211.238900&cid=CTK20V944
So probably not needed in JitteryCoyote63's case, we still have some...
Maybe it's better to approach this the other way: if one uses Task.force_requirements_env_freeze(), then the locally updated packages aren't reflected in poetry
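For reference, this is the usage I mean (a minimal sketch; it has to be called before Task.init, and the project/task names are placeholders):
` from clearml import Task

# Freeze the full local environment (pip-freeze style) instead of detecting imported packages;
# note this must be called before Task.init
Task.force_requirements_env_freeze(force=True)
task = Task.init(project_name="examples", task_name="env-freeze-demo") `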
Fair enough
Could be nice to be able to define the fallbacks under type maybe? E.g. type: [ poetry, pip ] (the current behaviour under the hood) vs type: [ pip, poetry ]
Here's how it failed for us:
poetry stores git-related data in poetry.lock, so when you pip list, you get our internal package with its version but no git reference, i.e. internal_module==1.2.3 instead of internal_module @ git+https://....@commit.
Then pip actually fails (our internal module is not on PyPI), but poetry succeeds.
Local changes are applied before installing requirements, right?
I'll also post this on the main channel -->
From the traceback ( backend_interface/task/task.py, line 178, in __init__ ), notice it's not Task.init
Removing the PVC is just setting the state to absent AFAIK
This took a long time to resolve since I could not access the MacBook in question to debug it.
It is now resolved and was indeed a user error - CLEARML_CONFIG_FILE had been implicitly set to e.g. /home/username/clearml.conf instead of /Users/username/clearml.conf, as is expected on Mac.
I guess the error message could be made clearer in this case (i.e. CLEARML_CONFIG_FILE='/home/username/clearml.conf' file does not exist). Thanks for the support! ❤
No it doesn't, the agent has its own clearml.conf file.
I'm not too familiar with ClearML on Docker, but I do remember there are config options to pass environment variables into the container.
You can then set your environment variables in any way you'd like before the container starts.
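Something like this in the agent's clearml.conf is what I had in mind (a sketch from memory - double-check the exact key name against the docs; the variable names and values are placeholders):
` agent {
    # Extra arguments appended to the docker run command, e.g. to forward environment variables
    extra_docker_arguments: ["-e", "MY_SECRET=some-value", "-e", "OTHER_VAR=other-value"]
} `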
Coming back to this; ClearML prints a lot of error messages in local tests, supposedly because the output streams are not directly available:
` --- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.10/logging/init.py", line 1103, in emit
stream.write(msg + self.terminator)
ValueError: I/O operation on closed file.
Call stack:
File "/home/idan/CC/git/ds-platform/.venv/lib/python3.10/site-packages/clearml/task.py", line 3504, in _at_exit
self.__shutdown...