No task, no dataset, just an empty container with no reference to the task it's attached to.
It seems to me that it should not move the task if `use_current_task=True`?
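For reference, a minimal sketch of the call being discussed (project/dataset names and the path are illustrative, not from this thread):

```python
from clearml import Task, Dataset

# Reuse the already-running task as the dataset's backing task instead of
# letting ClearML create a separate (empty-looking) dataset task.
task = Task.init(project_name="examples", task_name="train")  # hypothetical names

dataset = Dataset.create(
    dataset_name="my-dataset",
    dataset_project="examples",
    use_current_task=True,   # back the dataset with the current task
)
dataset.add_files("data/")   # hypothetical local path
dataset.upload()
dataset.finalize()
```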
SmugDolphin23 we've been working with this for 2 weeks now, and it creates a lot of junk in our UI. Is there any way to have better control over this?
Ah right, I missed that in the codebase. It just adds the `.dataset` convention to the dataset task.
Let me test it out real quick.
Actually, it appears some elements (scalars, plots, etc.) were not migrated by moving the MongoDB data.
Where are these stored? Any idea @<1523701827080556544:profile|JuicyFox94> ?
Scaling to zero, copying the MongoDB data, and scaling back up worked like a charm.
Thanks @<1523701827080556544:profile|JuicyFox94> !
Also I can't select any tasks from the dashboard search results.
I guess? I mean the same filter option one has for e.g. tags in the table view. In the "all experiments" project I think it would make sense for one to be able to select the projects of interest, or even filter for textual matches.
Sorry I meant the cards indeed :)
Hmmm, what?
So basically what I'm looking for and what I have now is something like the following:
(Local) I have a well-defined `aws_autoscaler.yaml` that is used to run the AWS autoscaler. That same autoscaler is also run with `CLEARML_CONFIG_FILE=...`.
(Remotely) The autoscaler launches, listens to the predefined queue, and is able to launch instances as needed. I would run a remote execution task object that's appended to the autoscaler queue. The autoscaler picks it up, launches a new instanc...
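To illustrate the remote part, a minimal sketch of enqueuing a task for the autoscaler to pick up (queue and repo names are hypothetical):

```python
from clearml import Task

# Create a task from a repo/script and push it to the queue the autoscaler
# watches; the autoscaler then spins up an EC2 instance to execute it.
task = Task.create(
    project_name="examples",
    task_name="remote-run",
    repo="https://github.com/me/my-repo.git",  # hypothetical repo
    script="train.py",
)
Task.enqueue(task, queue_name="aws_autoscaler_queue")  # hypothetical queue name
```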
Ah okay! Was confused by what you quoted haha.
That's exactly what I meant AgitatedDove14! It's just that to access that comparison page, you have to make a comparison first. It would be handy to have a link (in the sidebar?) to an empty comparison.
Either, honestly, would be great. I meant even just a link to a blank comparison and one can then add the experiments from that view
The results from searching in the "Add Experiment" view (can't resize column widths -> can't see project name ...)
Hey AgitatedDove14!
Finally managed; you keep saying "all projects" but you meant the "All Experiments" project instead. That's a good start. Thanks!
Couple of thoughts from this experience:
- Could we add a comparison feature directly from the search results (Dashboard view -> search -> highlight some experiments for comparison)?
- Could we add a filter on the project name in the "All Experiments" project?
- Could we add the project for each of the search results? (see above pictur...
Also (sorry for all of these!) - could be nice to have a direct "task comparison" link in the UI somewhere, that would open a comparison with no tasks and the user can add them manually using the "add experiments" button. :)
For example, can't interact with these two tasks from this view (got here from searching in the dashboard view; they're in different projects):
Managed now. Thank you for your patience!
I edited the previous post with some suggestions/thoughts
Not sure I understand your comment - why not let the user start with an empty comparison page and add them from "Add Experiment" button as well?
Unfortunately I can't take a photo of not being able to compare tasks by navigating around the WebUI...
Does that clarify the issue CostlyOstrich36 ?
I can't seem to manage the first way around. If I select tasks in different projects, I don't get the bottom bar offering to compare between them
I understand, but then the TOML file needs to be parsed to check whether Poetry is used. It's just a `tool` entry in the `pyproject.toml`.
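A minimal sketch of what that check could look like (`tomllib` is stdlib from Python 3.11+; on older versions the `tomli` package offers the same API):

```python
import tomllib  # use "import tomli as tomllib" on Python < 3.11

with open("pyproject.toml", "rb") as f:  # tomllib requires binary mode
    pyproject = tomllib.load(f)

# Poetry projects declare themselves under the [tool.poetry] table
uses_poetry = "poetry" in pyproject.get("tool", {})
```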
I just ran into this too recently. Are you passing these also in the `extra_clearml_conf` for the autoscaler?
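For anyone hitting the same thing, a sketch of what that might look like in the `aws_autoscaler.yaml` (the section layout and keys shown here are illustrative; check your autoscaler config for the exact structure):

```yaml
configurations:
  extra_clearml_conf: |
    # Appended to the remote instance's clearml.conf; values are examples
    sdk {
      aws {
        s3 {
          key: "${AWS_ACCESS_KEY_ID}"
          secret: "${AWS_SECRET_ACCESS_KEY}"
        }
      }
    }
```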
I'm not sure; the setup is not unique to Mac.
Each user has their own `.env` file which is given to the code entry point, and at some point will be loaded with `dotenv.load_dotenv()`. The environment variables are not set in code anywhere, but the `clearml.conf` uses them directly.
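A minimal sketch of that pattern, assuming `clearml.conf` references the variables HOCON-style (e.g. `key: ${MY_SECRET}`; names are illustrative):

```python
from dotenv import load_dotenv  # pip install python-dotenv
from clearml import Task

# Load the per-user .env before ClearML first reads its config, so the
# ${...} references in clearml.conf resolve from the environment.
load_dotenv()  # reads .env from the current working directory by default

task = Task.init(project_name="examples", task_name="entry-point")  # hypothetical names
```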
Yes exactly, but I guess I could've googled for that.
Copy the uncommitted changes captured by ClearML using the UI, write them to `changes.patch`, then run `git apply changes.patch`.
Yes, as I wrote above.
So a normal config file with environment variables.
Could you provide a more complete set of instructions, for the less inclined? How would I back up the data in the future, etc.?
- in the second scenario, I might have not changed the results of the step, but my refactoring changed the speed considerably, and this is something I measure.
- in the third scenario, I might have not changed the results of the step and my refactoring just cleaned the code, but besides that, nothing substantial was changed. Thus I do not want a rerun.

Well, I would say then that in the second scenario it's just rerunning the pipeline, and in the third it's not running it at all
(I ...
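As a side note, a minimal sketch of how step caching relates to those scenarios (step name and function are placeholders):

```python
from clearml import PipelineController

def preprocess():
    # placeholder for the real step code
    return "done"

pipe = PipelineController(name="example-pipeline", project="examples", version="1.0")

# With cache_executed_step=True, ClearML reuses a previous execution when the
# step's code and inputs are unchanged; note that even a pure refactor changes
# the code hash, so it would still trigger a rerun.
pipe.add_function_step(
    name="preprocess",
    function=preprocess,
    cache_executed_step=True,
)
```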
Yeah I will probably end up archiving them for the time being (or deleting if possible?).
Otherwise (regarding the code question), I think it's better if we continue the original thread, as it has a sample code snippet to illustrate what I'm trying to do.