WickedGoat98 I gave you a slight Twitter push; if I were you I would make sure that the app credentials you put in your screenshot are revoked
SubstantialElk6 this is a three-parter:
1. getting workers on your cluster; again, because of the rebrand I would go to the repo itself for the doc: https://github.com/allegroai/clearml-agent#kubernetes-integration-optional
2. integrating any code with clearml (2 lines of code)
3. executing that from the web ui
If you need any help with any of the three, the community is here for you
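For reference, the "2 lines of code" in point 2 are roughly the following; the project and task names here are just placeholders:
```python
from clearml import Task

# These two lines are the whole integration: ClearML starts tracking
# the run (git info, packages, console output, framework outputs) from here on.
task = Task.init(project_name="examples", task_name="my experiment")
```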
The colors are controlled by the front-end, so they can't be set programmatically, but it is possible (and in my opinion convenient) to change them manually by clicking the appropriate position on the legend. Does this meet your expectations?
Shh AgitatedDove14, you're dating yourself
Oh hey, did someone mention my name on this thread? MiniatureCrocodile39 did you manage to create a pycharm virtual env with clearml installed?
Looks like an incomplete build of PyTorch. What are we looking at? And who's Christine?
Difficult without a reproducer, but I'll try: how did you get the logger? Maybe you forgot the parentheses in task.get_logger()?
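In case it helps, here is a minimal sketch of getting a logger the intended way (project/task names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="logger check")

# get_logger is a method, so the parentheses matter
logger = task.get_logger()
logger.report_scalar(title="loss", series="train", value=0.1, iteration=1)
```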
Hmm... For a quick and dirty integration that would probably do the trick; you could very well issue clearml-task commands on each Kubeflow vertex (is that what they are called?)
What do you think AgitatedDove14 ?
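Something along these lines on each vertex, assuming a repo/script layout like the placeholders below and a queue named "default":
```
clearml-task --project kubeflow-demo --name step-1 \
    --repo https://github.com/your-org/your-repo.git \
    --script train.py \
    --queue default
```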
BTW, if anyone from the future is reading this, try the docs again
WackyRabbit7 It is conceptually different from actually training, etc.
The services agent is typically one without a GPU; it runs several tasks, each in its own container, for example the autoscaler, or the orchestrators for our hyperparameter optimization and/or pipelines. I think it even runs (by default?) on the same hardware as the trains-server.
Also, if I'm not mistaken, some people are using it (or planning to?) to push models to production.
I wonder if anyone else can share their view since this is a relati...
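For context, spinning up such an agent is usually a one-liner along these lines (the queue name "services" is the common convention, adjust to your setup):
```
clearml-agent daemon --services-mode --queue services --create-queue --docker --cpu-only
```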
Honestly, it looks like the TensorBoard representation is the wrong one. Only one way to find out: you need to plot the histogram on your own
From what I remember the bins in TensorBoard are wider. And the tapering off around zero cannot be real, since this happens in super sparse models. Overall, if you are sure, then this is a nice issue to open on GitHub.
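A minimal sketch of plotting it yourself, assuming you still have the raw array you logged (the data below is just a stand-in):
```python
import numpy as np
import matplotlib.pyplot as plt

# stand-in for the raw values you logged to TensorBoard
values = np.random.randn(10_000)

counts, edges = np.histogram(values, bins=50)
plt.stairs(counts, edges, fill=True)
plt.title("Manual histogram (compare against the TensorBoard plot)")
plt.show()
```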
Thanks @<1523701205467926528:profile|AgitatedDove14>, also I think you're missing a few pronouns there
Hooray for the new channel! You are all invited
Which parser are you using? argparse should be logged automatically.
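For example, nothing extra is needed beyond the usual Task.init; once the task exists, the parsed arguments show up under the task's hyperparameters (names below are placeholders):
```python
import argparse
from clearml import Task

task = Task.init(project_name="examples", task_name="argparse demo")

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.001)
args = parser.parse_args()  # captured automatically by ClearML
```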
OddAlligator72 can you link to the wandb docs? Looks like you want a custom entry point. I'm thinking "maybe", but probably the answer is that we do it a little differently here.
Totally within ClearML :the_horns: :the_horns:
Hi, it is under construction, but it is going to be there.
We need SEO for our docs
It depends on what you mean by deployment, and what kind of inference you plan to do (i.e. real-time vs. batched, etc.)
But overall currently serving itself is not handled by the open source offering, mainly because there are so many variables and frameworks to consider.
Can you share some more details about the capabilities you are looking for? Some essentials like staging and model versioning are handled very well...
The log storage can be configured if you spin up your own clearml-server, but it won't have a repository structure. And it shouldn't, btw. If you need a secondary backup of everything, it is possible to set something up as well.
Hi BattyLion34 , could you clarify a little? If I understand correctly, you wish to use a code repository to store artifacts and ClearML logs?
ClearML Free or self-hosted?
I'm not sure I can help with the technicality, but here is a basic question you'll be asked: are you able to download anything from your MinIO using ClearML?
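A quick way to sanity-check that is something like the snippet below, assuming your MinIO credentials are already configured in clearml.conf and using placeholder host/bucket/key values:
```python
from clearml import StorageManager

# MinIO is addressed through ClearML's S3 support: s3://<host>:<port>/<bucket>/<key>
local_path = StorageManager.get_local_copy(
    remote_url="s3://my-minio-host:9000/my-bucket/some_file.bin"
)
print(local_path)
```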
I need to check something for you EnviousStarfish54 , I think one of our upcoming versions should have something to "write home about" in that regard
Sure we do! BTW MiniatureCrocodile39, IIRC I answered one of your threads with a recording of a webinar of mine