Ah right, I missed that in the codebase. It just adds the .dataset
convention to the dataset task.
Let me test it out real quick.
It also happens when use_current_task=False, though. So would the current best approach be to not combine the task and the dataset?
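Concretely, something like this minimal sketch is what I'm testing (the project and folder names are just placeholders):
```
from clearml import Dataset

# Sketch: create the dataset without binding it to the currently running task,
# so the task and the dataset stay separate (use_current_task=False).
ds = Dataset.create(
    dataset_name="my-dataset",            # placeholder name
    dataset_project="examples/datasets",  # placeholder project
    use_current_task=False,
)
ds.add_files("data/")  # hypothetical local folder
ds.upload()
ds.finalize()
```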
Actually, it appears some elements (scalars, plots, etc.) were not migrated by moving the MongoDB data.
Where are these stored? Any idea @<1523701827080556544:profile|JuicyFox94> ?
Scaling to zero, copying the mongodb data, and scaling back up worked like a charm.
Thanks @<1523701827080556544:profile|JuicyFox94> !
Also I can't select any tasks from the dashboard search results 😞
-
I guess? 🤔 I mean the same filter option one has for e.g. tags in the table view. In the "all experiments" project I think it would make sense for one to be able to select the projects of interest, or even filter for textual matches.
-
Sorry I meant the cards indeed :)
Hmmm, what 😄
So basically what I'm looking for and what I have now is something like the following:
(Local) I have a well-defined aws_autoscaler.yaml that is used to run the AWS autoscaler. That same autoscaler is also run with CLEARML_CONFIG_FILE=....
(Remotely) The autoscaler launches, listens to the predefined queue, and is able to launch instances as needed. I would then create a remote-execution task and enqueue it to the autoscaler queue. The autoscaler picks it up, launches a new instanc...
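Roughly, the enqueue step on my side looks like the sketch below (the queue name, repo, and script are placeholders, not my actual setup):
```
from clearml import Task

# Sketch: create a task and push it onto the queue the autoscaler listens to;
# the autoscaler then spins up an instance to execute it.
task = Task.create(
    project_name="examples",                   # placeholder project
    task_name="remote-run",                    # placeholder name
    repo="https://github.com/me/my-repo.git",  # hypothetical repo
    script="train.py",                         # hypothetical entry point
)
Task.enqueue(task, queue_name="aws_autoscaler_queue")  # assumed queue name
```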
Ah okay 😄 Was confused by what you quoted haha 👍
That's exactly what I meant AgitatedDove14 🙂 It's just that to access that comparison page, you have to make a comparison first. It would be handy to have a link (in the side bar?) to an empty comparison
Either, honestly, would be great. I meant even just a link to a blank comparison and one can then add the experiments from that view
The results from searching in the "Add Experiment" view (can't resize column widths -> can't see project name ...)
Hey AgitatedDove14 🙂
Finally managed; you keep saying "all projects" but you meant the "All Experiments" project instead. That's a good start 👍 Thanks!
Couple of thoughts from this experience:
Could we add a comparison feature directly from the search results (Dashboard view -> search -> highlight some experiments for comparison)? Could we add a filter on the project name in the "All Experiments" project? Could we add the project for each of the search results? (see above pictur...
Also (sorry for all of these!) - could be nice to have a direct "task comparison" link in the UI somewhere, that would open a comparison with no tasks and the user can add them manually using the "add experiments" button. :)
For example, can't interact with these two tasks from this view (got here from searching in the dashboard view; they're in different projects):
Managed now 🙂 Thank you for your patience!
I edited the previous post with some suggestions/thoughts
Not sure I understand your comment - why not let the user start with an empty comparison page and add them from "Add Experiment" button as well?
I can navigate through the projects, but selecting one task in one project, then navigating to another project and selecting a different task -> there is no suggestion to compare the tasks.
In the projects page if I show all - I just see the projects. If I search for a task of similar name, I get results, but I can't compare them via the UI.
The only way I managed so far was to create a pseudo-comparison between unrelated tasks in the same project, then remove one task from the comparison, and u...
Unfortunately I can't take a photo of not being able to compare tasks by navigating around the WebUI...
Does that clarify the issue CostlyOstrich36 ?
I can't seem to manage the first way around. If I select tasks in different projects, I don't get the bottom bar offering to compare between them
I understand, but then the toml file needs to be parsed to ensure poetry is used. It's just a tool entry in the pyproject.toml.
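For example, a rough sketch of the kind of check I mean, assuming Python 3.11's tomllib (older versions would need the third-party tomli package); the helper name is made up:
```
import tomllib  # Python 3.11+; use the third-party "tomli" package on older versions

def uses_poetry(pyproject_path: str = "pyproject.toml") -> bool:
    """Hypothetical helper: does the project declare poetry via a [tool.poetry] table?"""
    with open(pyproject_path, "rb") as f:
        data = tomllib.load(f)
    return "poetry" in data.get("tool", {})
```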
I just ran into this too recently. Are you also passing these in the extra_clearml_conf for the autoscaler?
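For reference, roughly what I mean (a hypothetical illustration; the section and keys are placeholders, not whatever you're actually passing):
```
# Hypothetical illustration: extra_clearml_conf takes raw clearml.conf (HOCON) text
# that the autoscaler writes into the clearml.conf of every instance it spins up.
extra_clearml_conf = """
sdk {
    # ... whatever extra settings the spawned instances need ...
}
"""
```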
Then perhaps macOS treats missing environment variables as empty and Linux just crashes? Anyway, the config loading should be deferred, shouldn't it?
This one was long-running, since I could not access the MacBook in question to debug it.
It is now resolved and was indeed a user error - they had implicitly defined CLEARML_CONFIG_FILE to e.g. /home/username/clearml.conf instead of /Users/username/clearml.conf, which is what's expected on macOS.
I guess the error message could be made clearer in this case (i.e. CLEARML_CONFIG_FILE='/home/username/clearml.conf' file does not exist). Thanks for the support! ❤
Hm, just a small update - I just verified and it does indeed work on Linux:
```
import clearml
import dotenv

if __name__ == "__main__":
    dotenv.load_dotenv()
    config = clearml.backend_api.Config.load()  # Success, parsed with environment variables
```
Yeah 🤔 🤔 they did. I'll give your suggested fix a go on Monday!
SuccessfulKoala55 That string was autogenerated by pyhocon and matches their documentation too - https://github.com/lightbend/config/blob/master/HOCON.md#substitutions
The first example won't work (it will treat ${...} as a string literal and won't replace it). The second does work, but as mentioned, these were not hand-typed anyway - they were generated by pyhocon - so I don't think that's the issue 🤔
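For completeness, a tiny sketch of the distinction (the variable name is made up, and this assumes pyhocon falls back to environment variables for unresolved substitutions, as the HOCON spec describes):
```
import os
from pyhocon import ConfigFactory

os.environ["MY_SECRET"] = "abc123"  # hypothetical variable, set here just for the demo

# Unquoted ${...} is a HOCON substitution; it gets resolved (here from the environment).
print(ConfigFactory.parse_string("key = ${MY_SECRET}")["key"])    # -> abc123

# Quoted "${...}" is a plain string literal and is not substituted.
print(ConfigFactory.parse_string('key = "${MY_SECRET}"')["key"])  # -> ${MY_SECRET}
```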
I'm not sure; the setup is not unique to Mac.
Each user has their own .env file which is given to the code entry point, and at some point it is loaded with dotenv.load_dotenv().
The environment variables are not set in code anywhere, but the clearml.conf uses them directly.
Yes exactly, but I guess I could've googled for that 😅
Copy the uncommitted changes captured by ClearML using the UI, write them to changes.patch, then run git apply changes.patch 👍