I'd like to refrain from manually specifying the dependencies, since it adds a lot of overhead to extend.
They are set with a `.env` file - it's a common practice. The `.env` file is, at the moment, uploaded to a temporary cache (if you remember the discussion regarding the `StorageManager`), so it's also available remotely (related to issue #395)
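For context, a minimal sketch of what I mean (the bucket path and file names are placeholders, not our actual setup):
` from clearml import StorageManager

# Upload the local .env so it is reachable from remote workers;
# "s3://my-bucket/configs/.env" is a made-up destination.
remote_url = StorageManager.upload_file(
    local_file=".env",
    remote_url="s3://my-bucket/configs/.env",
)

# On the remote side, pull it back into the local cache
local_copy = StorageManager.get_local_copy(remote_url) `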
Let me verify a hypothesis...
I cannot, the instance is long gone... But it's no different from any other scaled instance; it seems it just took a while to register in ClearML
CostlyOstrich36 I'm not sure what kept it from spinning down. Unfortunately I was not around when this happened. Maybe it was AWS taking a while to terminate, or maybe it was just taking a while to register in the autoscaler.
The logs looked like this:
1. Recognizing an idle worker and spinning down.
` 2022-09-19 12:27:33,197 - clearml.auto_scaler - INFO - Spin down instance cloud id 'i-058730639c72f91e1' `
2. Recognizing a new task is available, but the worker is still idle.
` 2022-09...
For example, can't interact with these two tasks from this view (got here from searching in the dashboard view; they're in different projects):
Managed now 🙂 Thank you for your patience!
I edited the previous post with some suggestions/thoughts
Not sure I understand your comment - why not let the user start with an empty comparison page and add them from "Add Experiment" button as well?
The results from searching in the "Add Experiment" view (can't resize column widths -> can't see project name ...)
Hmmm, what 🙂
Either, honestly, would be great. I meant even just a link to a blank comparison and one can then add the experiments from that view
Ah okay 🙂 Was confused by what you quoted haha 🙂
Pinging about this still, unresolved 🤔
ClearML does not capture our internal libraries and so our functions (pipeline steps) crash with missing modules.
So a missing bit of information that I see I forgot to mention is that we named our packages as `foo-mod` in `pyproject.toml`. That hyphen then gets rewritten as `foo_mod.x.y.z-distinfo`.
foo-mod @ git+
We'd be happy if ClearML captured that (since it uses e.g. pip, we'd then have the git URL + commit hash for reproducibility), as it claims it would 🙂
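In the meantime, a sketch of the manual-pinning fallback I'd rather avoid, using `Task.add_requirements` (it has to run before `Task.init`; the project/task names are placeholders):
` from clearml import Task

# Manually declare the internal package so the remote run installs it;
# must be called before Task.init()
Task.add_requirements("foo-mod")
task = Task.init(project_name="example", task_name="pipeline-debug") `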
Any thoughts CostlyOstrich36 ?
… And it's failing on type hints for functions passed in `pipe.add_function_step(…, helper_function=[…])`
… I guess those aren't being removed like the wrapped function step?
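To illustrate, a stripped-down version of the failing pattern (names are placeholders; I believe the actual keyword is `helper_functions`):
` from clearml import PipelineController

def my_helper(x: "SomeInternalType") -> int:
    # A type hint referencing an internal class, like the ones that trip it up
    return 0

def my_step():
    return my_helper(None)

pipe = PipelineController(name="demo", project="example", version="0.1")
pipe.add_function_step(
    name="step_one",
    function=my_step,
    helper_functions=[my_helper],
) `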
PricklyRaven28 That would be my fallback, but it would make development much slower (having to rebuild containers with every small change)
How or why is this the issue? I guess something is getting lost in translation :D
On the local machine, we have all the packages needed. The code gets sent for remote execution, and all the local packages are frozen correctly with pip.
The pipeline controller task is then generated and executed remotely, and it has all the relevant packages.
Each component it launches, however, is missing the internal packages available earlier :(
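Reusing the sketch above, the per-component workaround I'm testing is the documented `packages` argument of `add_function_step` (the requirement line is a placeholder; the real git URL is deliberately elided):
` pipe.add_function_step(
    name="step_two",
    function=my_step,
    # Explicitly list what this component must install
    packages=["foo-mod @ git+<internal-repo-url>"],
) `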
It is installed on the pipeline-creating machine.
I have no idea why it did not automatically detect it 🙂
Exactly, it should have auto-detected the package.
I opened a GH issue shortly after posting here. FrothyDog40 replied (hoping I tagged the right person).
We need to close the task. This is part of our unittests for a framework built on top of ClearML, so every test creates and closes a task.
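Roughly, every test does something like this (simplified sketch; names are placeholders):
` from clearml import Task

def test_framework_feature():
    # Each test creates its own task...
    task = Task.init(project_name="unittests", task_name="feature-x")
    # ...exercises the framework built on top of ClearML...
    # ...and must close the task so the next test starts clean.
    task.close() `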
That's exactly what I meant AgitatedDove14 🙂 It's just that to access that comparison page, you have to make a comparison first. It would be handy to have a link (in the sidebar?) to an empty comparison
Unfortunately I can't take a photo of not being able to compare tasks by navigating around the WebUI...
Does that clarify the issue CostlyOstrich36 ?
Will try later today TimelyPenguin76 and report back, thanks! Does this revert the behavior to the 1.3.x one?
One more UI question TimelyPenguin76, if I may -- it seems one cannot simply report single integers. The `report_scalar` feature creates a plot of a single data point (or single iteration).
For example, if I want to report a scalar "final MAE" for easier comparison, it's kinda impossible 🙂
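For reference, the kind of call I mean (sketch; `task` is an initialized Task). I've seen `report_single_value` mentioned for newer SDK versions, but I'm not sure ours has it:
` logger = task.get_logger()

# This renders as a plot with one point at iteration 0:
logger.report_scalar(title="final MAE", series="val", value=0.123, iteration=0)

# What I'd want, if the installed version supports it:
# logger.report_single_value(name="final MAE", value=0.123) `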
I realized it might work too, but I'm looking for a more definitive answer 🙂 Has no-one attempted this? 🤔
I will! (once our infra guy comes back from holiday and updates the install; for some reason they set up server 1.1.1???)
Meanwhile wondering where I got a random worker from
AFAIU, something like this happens (oversimplified):
` from clearml import Task  # <--- Crash already happens here
import argparse
import dotenv

if __name__ == "__main__":
    # set up argparse with an optional flag for a dotenv file
    parser = argparse.ArgumentParser()
    parser.add_argument("--env-file", default=".env")
    args = parser.parse_args()
    dotenv.load_dotenv(args.env_file)
    # more stuff `
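Presumably reordering would sidestep the import-time crash; a sketch (same placeholder flag as above):
` import argparse
import dotenv

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--env-file", default=".env")
    args = parser.parse_args()
    dotenv.load_dotenv(args.env_file)

    # Import only once the environment variables are in place
    from clearml import Task
    # more stuff `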