Sure! It looks like this
… And it's failing on typing hints for functions passed in pipe.add_function_step(…, helper_function=[…]) … I guess those aren't being removed like the wrapped function step?
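For reference, roughly the shape of what I mean (names are illustrative, and I'm assuming the helper_functions argument of PipelineController.add_function_step as documented):

```
# Illustrative sketch only (hypothetical names); assumes the
# helper_functions argument of PipelineController.add_function_step.
from clearml import PipelineController

def preprocess(value: int) -> int:  # hypothetical typed helper
    return value * 2

def step_one(x: int = 1) -> int:  # hypothetical step function
    return preprocess(x)

pipe = PipelineController(name="example-pipeline", project="examples", version="0.0.1")
pipe.add_function_step(
    name="step_one",
    function=step_one,
    helper_functions=[preprocess],  # the typing hints on these seem to be the problem
    function_return=["result"],
)
```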
CostlyOstrich36 That looks promising, but I don't see any documentation on the returned schema (i.e. workers.worker_stats is not specified anywhere?)
Right so it uses whatever version is available on the agent.
Yeah, it would be nice to either have a poetry_version (à la https://github.com/allegroai/clearml-agent/blob/5afb604e3d53d3f09dd6de81fe0a494dacb2e94d/docs/clearml.conf#L62), rename the latter to manager_version, or just install from the captured environment, etc.? 🤔
Could also be related to K8s, so pinging JuicyFox94 just in case 🙂
Not sure if ClearML has any built in support, but we used the above for a similar issue but with Prefect2 :)
So a missing bit of information that I forgot to mention is that we named our package foo-mod in pyproject.toml. That hyphen then gets rewritten as foo_mod.x.y.z-distinfo.
foo-mod @ git+
Either one would be nice to have. I kinda like the instant search option, but could live with an ENTER to search.
In the meantime, I opened this: https://github.com/allegroai/clearml-server/issues/138
Generally, it would also be good if the pop-up presented some hints about what went wrong with fetching the experiments. Here, I know the pattern is incomplete and invalid. A less advanced user might not understand what's up.
I realized it might work too, but I'm looking for a more definitive answer 🙂 Has no one attempted this? 🤔
We're using the example autoscaler, nothing modified
I'm saying it's a bug
It is. In what format should I specify it? Would this enforce that package on various components? Would it then no longer capture import statements?
There's a specific fig[1].set_title(title) call.
Those are for specific packages, I'm wondering about the package managers as a whole
I see that the GUI AutoScaler is only in the paid version; I wonder why the GCP driver is not open source?
Can I query where the worker is running (IP)?
Should this be under the clearml or clearml-agent repo?
No it does not show up. The instance spins up and then does nothing.
CostlyOstrich36 I added None btw
Thanks CostlyOstrich36!
And can I make sure the same budget applies to two different queues?
So that, for example, an autoscaler would have a resource budget of 6 instances, and it would listen to the aws and default queues as needed?
So now we need to pass Task.init(deferred_init=0) because the default Task.init(deferred_init=False) is wrong
IIRC, get_local_copy() downloads a local copy and returns the path to the downloaded file. So you might be interested in e.g. local_csv = pd.read_csv(a_task.artifacts['train_data'].get_local_copy())
With the models, you're looking for get_weights(). It acts the same as get_local_copy(), so it returns a path.
EDIT: I think get_local_copy() should also work for a model 🙂
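To make that concrete, something along these lines (the task ID and artifact/model names are placeholders):

```
# Placeholder task ID / artifact name; rough sketch of both retrieval paths.
import pandas as pd
from clearml import Task

a_task = Task.get_task(task_id="<your-task-id>")

# Artifacts: get_local_copy() downloads the file and returns the local path
local_csv = pd.read_csv(a_task.artifacts["train_data"].get_local_copy())

# Models: get_weights() behaves the same way and also returns a local path
weights_path = a_task.models["output"][-1].get_weights()
```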
I mean, I see these are defined here https://github.com/allegroai/clearml-agent/blob/master/clearml_agent/definitions.py
But I do not see where an EnvironmentConfig.set() is called...
I mean, it makes sense to have it in a time-series plot when one is logging iterations and such. But that's not always the case... Anyway, I opened an issue about that too! 🙂
My suspicion is that this relates to https://clearml.slack.com/archives/CTK20V944/p1643277475287779, where the config file is loaded prematurely (upon import), so our dotenv.load_dotenv() call has not yet registered.
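In case it helps, the ordering I mean is roughly this (a hypothetical minimal example, not our actual code):

```
# Hypothetical minimal example of the ordering issue: load the .env
# before clearml is imported, since the config seems to be read at import time.
from dotenv import load_dotenv

load_dotenv()  # must run before the first `import clearml`

from clearml import Task  # noqa: E402  (import only after the env is populated)

task = Task.init(project_name="example", task_name="env-ordering")
```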
Basically when running remotely, the first argument to any configuration (whether object or string, or whatever) is ignored, right?
Or if it wasn't clear, that chunk of code is from clearml's dataset.py
Well the individual tasks do not seem to have the expected environment.
We're using 1.1.5 at the moment -- I'll make sure everyone updates to 1.1.6 on Monday.
That solution does not work for us unfortunately -- the .env is an argument from argparse, and because we cannot attach non-git files to a remote task (again issue #395), we have to first download CLI arguments for remote execution and ensure they exist on the remote agent.