Nope, no other config files
Not sure if ClearML has any built-in support, but we used the above for a similar issue with Prefect2 :)
It's missing the repository information, of course, but the 'configuration/Args' were logged. So something weird is going on in identifying the repository
And last but not least, for dictionaries for example, it would be really cool if one could do:
```
my_config = task.connect_configuration(my_config, name=name)
my_other_config = task.connect_configuration(my_other_config, name=other_name)
my_other_config['bar'] = my_config  # Creates the link automatically between the dictionaries
```
Let me know if there's any additional information that can help SuccessfulKoala55 !
This could be relevant SuccessfulKoala55; it might point to a serious bug in ClearML's multiprocessing handling too - https://stackoverflow.com/questions/45665991/multiprocessing-returns-too-many-open-files-but-using-with-as-fixes-it-wh
I'm saying it's a bug
Great, thanks! Any idea about environment variables and/or other files (CSV)? I suppose I could use task.upload_artifact for the CSVs, but I'm still unsure about the environment variables
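For the CSVs, I mean something like this (the artifact and file names are just placeholders):
```
from clearml import Task

task = Task.init(project_name="my_project", task_name="my_task")
# upload_artifact accepts a file path and stores the CSV as a task artifact
task.upload_artifact(name="training_data", artifact_object="data/train.csv")
```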
I will! (once our infra guy comes back from holiday and updates the install; for some reason they set up server 1.1.1???)
Meanwhile wondering where I got a random worker from
I'll have yet another look at both the latest agent RC and at the docker-compose, thanks!
There was no "default" services agent btw, just the queue, I had to launch an agent myself (not sure if it's relevant)
```
StorageManager.download_folder(
    remote_url='s3://some_ip:9000/clearml/my_folder_of_interest',
    local_folder='./',
)
```
yields a new folder structure, ./clearml/my_folder_of_interest, rather than just ./my_folder_of_interest
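A plain-Python workaround would be to download into a temp dir and move the inner folder up, something like:
```
import shutil

from clearml import StorageManager

# the download reproduces the bucket path locally, so it lands under
# ./tmp_dl/clearml/my_folder_of_interest
StorageManager.download_folder(
    remote_url='s3://some_ip:9000/clearml/my_folder_of_interest',
    local_folder='./tmp_dl',
)
# move the inner folder up to where it was actually wanted
shutil.move('./tmp_dl/clearml/my_folder_of_interest', './my_folder_of_interest')
shutil.rmtree('./tmp_dl', ignore_errors=True)
```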
AgitatedDove14 yeah I see this now; this was an issue because I later had to "disconnect" the remote task, so it can itself create new tasks (using clearml.config.remote.override_current_task_id(None)). I guess you might remember that discussion? 😁
EDIT: It's the discussion we had here, for reference. https://clearml.slack.com/archives/CTK20V944/p1640955599257500?thread_ts=1640867211.238900&cid=CTK20V944
So it's probably not needed in JitteryCoyote63's case; we still have some...
I guess it's mixed. If #340 is resolved, then this initializer task will be a no-op: detach, and init-close new tasks as needed.
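i.e. roughly this flow (a sketch only; the project/task names are made up):
```
from clearml import Task
from clearml.config import remote

# the "initializer" task detaches itself right away...
Task.init(project_name="my_project", task_name="initializer")
remote.override_current_task_id(None)  # the "disconnect" mentioned above

# ...and from then on we init/close new tasks as needed
step = Task.init(project_name="my_project", task_name="step_1")
step.close()
```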
It does not 🙂
We started discussing it here - https://clearml.slack.com/archives/CTK20V944/p1640955599257500?thread_ts=1640867211.238900&cid=CTK20V944
You suggested this solution - https://clearml.slack.com/archives/CTK20V944/p1640973263261400?thread_ts=1640867211.238900&cid=CTK20V944
And I eventually found this solution to work - https://clearml.slack.com/archives/CTK20V944/p1641034236266500?thread_ts=1640867211.238900&cid=CTK20V944
None, they're unusable for us.
It failed on some missing files in my remote_execution, but otherwise seems fine now
It is installed on the machine creating the pipeline.
I have no idea why it did not automatically detect it 😞
e.g. a separate, structured user guide with common tips, usability notes, and best practices - https://pandas.pydata.org/pandas-docs/stable/user_guide/index.html
vs the API reference, where each function has its own page, e.g.
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html
Honestly I wouldn't mind building the image myself, but the glue-k8s setup is missing some documentation so I'm not sure how to proceed
Those are cool and very welcome additions (hopefully the additional info in the Info tab will be a link?) 😁
The main issue is the clutter that the forced renaming creates, as shown in the pictures I attached in the other thread.
Why does ClearML hide the dataset task from the main WebUI? Users should have some control over that. If I specified a project for the dataset, I specifically want it there, in that project, not hidden away in some .datasets hidden sub-project. Not...
Thanks CostlyOstrich36 !
And can I make sure the same budget applies to two different queues?
So that, for example, an autoscaler would have a resource budget of 6 instances, and it would listen to the aws and default queues as needed?
We just inherit from logging.Handler and use that in our logging.config.dictConfig; the weird thing is that it still logs most of the tasks, just not the last one?
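Roughly what ours looks like, simplified (the class name and levels here are placeholders):
```
import logging
import logging.config

from clearml import Task

class ClearMLHandler(logging.Handler):
    """Forwards log records to the current ClearML task."""
    def emit(self, record):
        task = Task.current_task()
        if task is not None:
            # report the formatted record to the task's console log
            task.get_logger().report_text(self.format(record))

logging.config.dictConfig({
    "version": 1,
    "handlers": {"clearml": {"()": ClearMLHandler, "level": "INFO"}},
    "root": {"handlers": ["clearml"], "level": "INFO"},
})
```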
Yeah, I managed to work around the former two, mostly by using Task.create instead of Task.init. It's actually the whole bunch of daemons running in the background that takes a long time, not the zipping.
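i.e. roughly (project/task names are placeholders):
```
from clearml import Task

# Task.create only registers a task, without attaching it to the running
# process, so none of the Task.init background daemons are spawned
task = Task.create(project_name="my_project", task_name="my_task")
```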
Regarding the second - I'm not doing anything per se. I'm running in offline mode and I'm trying to create a dataset, and this is the error I get...
There is a data object in it, but there is no script object attached to it (presumably, again, because of pytest?)
Thanks for your help SuccessfulKoala55 ! Appreciate the patience 🙏
I've been answering there as well 🤕
SuccessfulKoala55 That string was autogenerated by pyhocon and matches their documentation too - https://github.com/lightbend/config/blob/master/HOCON.md#substitutions
The first example won't work (it will treat ${...} as a string literal and won't replace it). The second does work, but as mentioned, these weren't hand-typed anyway; they were generated by pyhocon, so I don't think that's the issue 🤔
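Just to make the substitution behavior concrete, a minimal pyhocon example (the keys are made up):
```
from pyhocon import ConfigFactory

# quoted "${...}" stays a literal string; unquoted ${...} is resolved
conf = ConfigFactory.parse_string("""
base_dir = /opt/app
literal = "${base_dir}/logs"
resolved = ${base_dir}/logs
""")
print(conf["literal"])   # ${base_dir}/logs  (kept as a plain string)
print(conf["resolved"])  # /opt/app/logs     (substitution applied)
```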
I tried that, unfortunately it does not help 😞