@<1539780258050347008:profile|CheerfulKoala77> you may also need to define a subnet or security groups.
Personally, I don't see the point of Docker on top of EC2 for CPU-only instances (virtualization on top of virtualization).
Finally, just to make sure, you only ever need one autoscaler. You can monitor multiple queues with multiple instance types with one autoscaler.
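Just as a rough sketch of what that can look like (key names borrowed from the clearml aws_autoscaler example config, so the exact schema may differ between versions, and all IDs here are placeholders): one autoscaler watching two queues with two different instance types.

```python
# Hedged sketch: a single autoscaler serving two queues with two instance types.
# Key names follow the clearml aws_autoscaler example; the AMI, subnet, and
# security-group IDs are illustrative placeholders.
resource_configurations = {
    "cpu_machine": {
        "instance_type": "m5.xlarge",
        "ami_id": "ami-0123456789abcdef0",
        "availability_zone": "us-east-1a",
        "subnet_id": "subnet-0123",          # often needed in a custom VPC
        "security_group_ids": ["sg-0123"],   # likewise
    },
    "gpu_machine": {
        "instance_type": "g4dn.xlarge",
        "ami_id": "ami-0123456789abcdef0",
        "availability_zone": "us-east-1a",
        "subnet_id": "subnet-0123",
        "security_group_ids": ["sg-0123"],
    },
}

# Each queue maps to (resource name, max number of instances).
queues = {
    "cpu_queue": [("cpu_machine", 2)],
    "gpu_queue": [("gpu_machine", 1)],
}
```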
Follow-up on this btw: from the WebUI/server POV, I see there's an "Admin" role, etc. Do those have additional views available, such as a users page?
So some UI that shows the contents of users.get_all?
So no direct page to see e.g. how many people have registered, whether someone accidentally made two (or more) accounts, or somewhere to just delete users, etc.?
It's a small snippet that ensures identically named projects are still made unique with a running number.
Thanks SuccessfulKoala55 ! Could I change this during runtime, so for example, only the very first task goes through this process?
It's of course not an MLOps issue so I understand it's not high on the priority list, but would be kinda cool to just have a simple view presenting the content of users.get_all
😄
I can see the task in the UI and it is not archived. That's pretty much the snippet, but in full I do e.g.
Thanks for the reply @<1523701827080556544:profile|JuicyFox94> ! I'll debug more and let you know
Not sure if @<1523701087100473344:profile|SuccessfulKoala55> or @<1523701827080556544:profile|JuicyFox94> maybe knows?
Scaling to zero, copying the mongodb data, and scaling back up worked like a charm.
Thanks @<1523701827080556544:profile|JuicyFox94> !
Ah, the API server /users.get_all, I see!
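Something like this should do it. A hedged sketch: the host/port, basic auth with API credentials, and the response layout are assumptions about your deployment, so double-check against your server.

```python
# Sketch: query the API server's users.get_all endpoint directly.
# Host, credentials, and response shape are assumptions.
import requests

resp = requests.post(
    "http://<api-server>:8008/users.get_all",
    auth=("<access_key>", "<secret_key>"),  # API credentials from the WebUI
    json={},
)
resp.raise_for_status()
for user in resp.json()["data"]["users"]:
    print(user.get("id"), user.get("name"))
```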
True, and we plan to migrate to pipelines once we have some time for it :) but anyway that condition is flawed I believe
The logs are on the bucket, yes.
The default file server is also set to s3://ip:9000/clearml
proj_suffix = "" i = 2 while Task.get_project_id(f"{proj_name}{proj_suffix}") is not None: tasks = Task.get_tasks(project_name=f"{proj_name}{proj_suffix}") if not [task for task in tasks if not task.get_archived()]: # Empty project, we can use this one... break proj_suffix = f"_{i}" i += 1
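The Task.init call itself isn't shown in the snippet; presumably the computed name then feeds into it, roughly:

```python
# Assumed continuation of the snippet above (not part of the original):
task = Task.init(project_name=f"{proj_name}{proj_suffix}", task_name=...)
```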
1.8.3; what about when calling task.close()? We suddenly have a need to set up our logging after every task.close() call.
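For what it's worth, a minimal sketch of what we mean; setup_logging is our own illustrative helper, not a ClearML API, and the assumption is that closing the task disturbs the handlers we had in place.

```python
# Sketch: re-attach our own logging handlers after task.close(),
# since closing the task apparently disturbs our logging setup.
import logging
from clearml import Task

def setup_logging():
    # Illustrative helper, not part of ClearML.
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(logging.INFO)

task = Task.init(project_name="demo", task_name="run-1")
# ... actual work ...
task.close()
setup_logging()  # has to be repeated after every close
```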
- The api.files_server is set to the MinIO endpoint s3://ip:9000/clearml (both locally and remotely)
- The sdk.development.default_output_uri is set to the MinIO endpoint (both locally and remotely)
- When we call Task.init I do not set the output_uri at all
- I get the logger directly with task.get_logger()
Don't even need to specify json=... 😉 Thanks!
clearml.backend_api.session.defs.ENV_HOST.get() did not work unfortunately 🤔
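If it helps, the backend_api Session class also seems to expose host getters; the method names below are taken from the clearml source, so worth double-checking on your version.

```python
# Assumed alternative: ask the backend_api Session for the configured hosts.
from clearml.backend_api import Session

print(Session.get_api_server_host())
print(Session.get_files_server_host())
```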
On it! Should I include the additional user filters described above?
After setting the sdk.development.default_output_uri in the configs, my code kinda looks like:

```python
task = Task.init(project_name=..., task_name=..., tags=...)
logger = task.get_logger()
# report with logger freely
```
But it is strictly that if condition in Task.init; see the issue I opened about it.
I couldn't find it directly in the SDK at least (in the APIClient)... 🤔