I just ran into this too recently. Are you passing these also in the extra_clearml_conf for the autoscaler?
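For reference, a minimal sketch of what passing storage credentials through extra_clearml_conf might look like (the key/secret values are placeholders, and the exact keys you need depend on your storage backend):

```
# hypothetical extra_clearml_conf snippet handed to the autoscaler,
# so spawned workers inherit the storage credentials
sdk.aws.s3 {
    key: "<ACCESS_KEY>"
    secret: "<SECRET_KEY>"
    region: "us-east-1"
}
```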
Say I have Task A that works with some dataset (which is not hard-coded, but perhaps e.g. self-defined by the task itself).
I'd now like to clone Task A and modify some stuff, but still use the same dataset (no need to recreate it; since it's not hard-coded, I have to maintain a reference to the dataset ID somewhere).
Since the Dataset SDK offers use_current_task, I would have also expected there to be something like dataset.link(task) or task.register_dataset(ds)...
Unfortunately not, each task defines and constructs its own dataset. I want the cloned task to save that link 🤔
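One workaround sketch: record the dataset ID in the task's connected parameters, so a clone carries (and can override) the reference. The helper below is a plain-Python stand-in for a dict you would pass to task.connect(); the names are made up for illustration:

```python
def resolve_dataset_id(params, create_dataset):
    """Reuse the dataset ID recorded in the task's parameters, or create
    a new dataset and record its ID on the first run.

    `params` stands in for a dict connected via task.connect(), so a
    cloned task sees (and may edit) the recorded ID in the UI.
    """
    if params.get("dataset_id"):
        return params["dataset_id"]          # cloned task: reuse the linked dataset
    params["dataset_id"] = create_dataset()  # first run: create and record it
    return params["dataset_id"]

# first run creates and records the ID
params = {"dataset_id": ""}
assert resolve_dataset_id(params, lambda: "ds-123") == "ds-123"

# a clone starting from the recorded ID skips creation entirely
assert resolve_dataset_id({"dataset_id": "ds-123"}, lambda: "ds-999") == "ds-123"
```

In the real task you would call Dataset.get(dataset_id=...) with the recorded ID instead of creating a new dataset.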
On an unrelated note, when cloning an experiment via the WebUI, shouldn't the cloned experiment have the original experiment as a parent? It seems to be empty
@<1539780258050347008:profile|CheerfulKoala77> you may also need to define subnet or security groups.
Personally I do not see the point in Docker over EC2 instances for CPU instances (virtualization on top of virtualization).
Finally, just to make sure, you only ever need one autoscaler. You can monitor multiple queues with multiple instance types with one autoscaler.
SmugDolphin23 I think you can simply change not (type(deferred_init) == int and deferred_init == 0) to deferred_init is True ?
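To make the difference concrete, a quick sketch of both guards (assuming this is roughly the check inside Task.init):

```python
def old_check(deferred_init):
    # current guard: defers unless the value is literally the int 0
    return not (type(deferred_init) == int and deferred_init == 0)

def new_check(deferred_init):
    # suggested guard: defer only on an explicit True
    return deferred_init is True

# bool is a subclass of int, but type(False) == int compares types exactly,
# so the old guard treats the default False as "please defer":
assert old_check(False) is True   # surprising: False still defers
assert old_check(0) is False      # only the int 0 disables deferral
assert new_check(False) is False  # explicit and unsurprising
```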
Follow up on this btw, from the WebUI/Server POV, I see there's an "Admin" role, etc. Do those have additional views available, such as users etc?
So now we need to pass Task.init(deferred_init=0), because the default Task.init(deferred_init=False) is wrong
Dynamic pipelines in a notebook, so I don't have to recreate a pipeline every time a step is changed 🤔
So some UI that shows the contents of users.get_all ?
So no direct page to see e.g. how many people have registered and/or if someone accidentally made two (or more) accounts, or somewhere to just delete users, etc
It's a small snippet that ensures identically named projects are still unique'd with a running number.
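Roughly what such a snippet might look like (a hypothetical sketch, not the actual code):

```python
def uniquify(name, existing_names):
    """Append a running number until the name is not in existing_names."""
    if name not in existing_names:
        return name
    i = 1
    while f"{name}_{i}" in existing_names:
        i += 1
    return f"{name}_{i}"

assert uniquify("my_project", set()) == "my_project"
assert uniquify("my_project", {"my_project"}) == "my_project_1"
assert uniquify("my_project", {"my_project", "my_project_1"}) == "my_project_2"
```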
Thanks SuccessfulKoala55 ! Could I change this during runtime, so for example, only the very first task goes through this process?
It's of course not an MLOps issue so I understand it's not high on the priority list, but would be kinda cool to just have a simple view presenting the content of users.get_all 🙂
Uhhh, not really unfortunately ☹️. I have ~20 tasks happening in a single file, and it's quite random if/when this happens. I just noticed this tends to happen with the shorter tasks
Any leads TimelyPenguin76 ? I've also tried setting up a MinIO S3 bucket, but I'm not sure if the remote agent has copied the credentials and host 🤔
Hey @<1523701435869433856:profile|SmugDolphin23> , thanks for the reply! I'm aware of the caching; that's not the issue I'm trying to resolve 🙂
I can see the task in the UI, it is not archived, and that's pretty much the snippet, but in full I do e.g.
Thanks for the reply @<1523701827080556544:profile|JuicyFox94> ! I'll debug more and let you know
It does (root in a Docker container); it shouldn't touch /run/systemd/generator/systemd-networkd.service anyway though
Sorry, not necessarily RBAC (although that is tempting 🙂), but for now I was just wondering if an average joe user has access to see the list of "registered users"?
Not sure if @<1523701087100473344:profile|SuccessfulKoala55> or @<1523701827080556544:profile|JuicyFox94> maybe knows?
Scaling to zero, copying the mongodb data, and scaling back up worked like a charm.
Thanks @<1523701827080556544:profile|JuicyFox94> !