Okay, I'll make a channel here today, and the sticky post on it would be a list of other channels
For now here is one of my favorites:
Looks like it is still running, DeliciousSeaanemone40. Are you suggesting it is slower than usual? There are some messages there that I've never seen before
There are examples, but nothing well fleshed out comes to mind for BERT etc. Maybe someone here can correct me
Hi ManiacalPuppy53, glad to see that you got over the mongo problem by advancing to a Pi 4
Hi HugePelican43, glad that you are interested in our Feature Store. The screenshots in that portion of the site (we should make that clearer, probably) reflect our enterprise offering.
If you are looking for open-source solutions to incorporate online feature stores in your deployment, I'm happy to discuss how to do it in a way that works best with the ClearML ecosystem.
If, however, you are looking for "offline" feature management capabilities, this is 100% doable with your ow...
There are an astounding number of such channels, actually. It probably depends on your style. Would you like me to recommend some?
Of course we can always create a channel here as well... One more can't hurt
We need SEO for our docs
ClearML Free or self-hosted?
Hi Elron, I think the easiest way is to print the results of !nvidia-smi, or use the framework interface to get these and log them as a ClearML artifact. For example:
https://pytorch.org/docs/stable/cuda.html
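Roughly something like this minimal sketch (project/task names are just placeholders, and it assumes clearml and torch are installed with a visible GPU):
```python
import subprocess

import torch
from clearml import Task

task = Task.init(project_name="examples", task_name="gpu-info")  # hypothetical names

# raw nvidia-smi output, stored as a text artifact on the task
smi_output = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
task.upload_artifact(name="nvidia-smi", artifact_object=smi_output)

# framework-level view via torch.cuda, stored as a dict artifact
gpu_info = {
    "device_count": torch.cuda.device_count(),
    "device_name": torch.cuda.get_device_name(0) if torch.cuda.is_available() else None,
    "memory_allocated_bytes": torch.cuda.memory_allocated(0) if torch.cuda.is_available() else 0,
}
task.upload_artifact(name="gpu-info", artifact_object=gpu_info)
```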
If she does not push, Trains has a commit ID for the task that does not exist on the git server. If she does not commit, Trains will hold all the diff from the last commit on the server.
I guess the product offering is not so clear yet (pun intended): the self-deployed option is completely free and open source, while the enterprise offering is something entirely different
https://clear.ml/pricing/
The "feature store" you see in the free tier is what I am alluding to
EnviousStarfish54 I recognize this table. I'm glad you are already talking with the right person. I hope you will get all your questions answered.
I need to check something for you EnviousStarfish54 , I think one of our upcoming versions should have something to "write home about" in that regard
EnviousStarfish54 that is the intention, it is cached. But you might need to manage your cache settings if you have many of those, since the cache size has a sane default limit. Hope this helps.
Removed the yays, DeliciousBluewhale87
https://www.youtube.com/watch?v=XpXLMKhnV5k
It depends on what you mean by deployment, and what kind of inference you plan to do (i.e. real-time vs. batched, etc.)
But overall currently serving itself is not handled by the open source offering, mainly because there are so many variables and frameworks to consider.
Can you share some more details about the capabilities you are looking for? Some essentials like staging and model versioning are handled very well...
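As a rough sketch of what I mean by model versioning with a staging tag (names here are illustrative, not a prescribed workflow):
```python
from clearml import Task, OutputModel, Model

task = Task.init(project_name="examples", task_name="train-model")  # hypothetical names

# register the trained weights as a versioned model, tagged as "staging"
output_model = OutputModel(task=task, name="my-classifier", framework="PyTorch", tags=["staging"])
output_model.update_weights(weights_filename="model.pt")  # assumes model.pt exists locally

# elsewhere: look up models tagged "staging" in the project and pull a local copy
candidates = Model.query_models(project_name="examples", tags=["staging"])
if candidates:
    local_path = candidates[0].get_local_copy()
```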
JealousParrot68 Some usability comments: since ClearML is opinionated, there are several pipeline workflow behaviors that make sense if you use Datasets and Artefacts interchangeably, e.g. the step caching AgitatedDove14 mentioned. Also for Datasets, if you combine them with a dedicated subproject like I did on my show, then you have the pattern where asking for the dataset of that subproject will always give you the most up-to-date dataset. Thus you can reuse your pipelines without havin...
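For reference, the pattern I mean looks roughly like this (project/dataset names are just placeholders): asking for a dataset by project and name, without an explicit ID, resolves to the latest version, so the pipeline code never has to change.
```python
from clearml import Dataset

# resolves to the most recent dataset version under that subproject/name
dataset = Dataset.get(
    dataset_project="my-project/datasets",  # hypothetical dedicated subproject
    dataset_name="training-data",
)
local_copy = dataset.get_local_copy()  # cached local copy of the latest version
```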
EnviousStarfish54 let's refine the discussion: are you looking at structured data (tables etc.) or unstructured (audio, images, etc.)?
That's always nice to hear. Remember that many of these improvements came from the community and you can always submit a feature request on our github repo https://github.com/allegroai/clearml/issues
If you can recreate the same problem with the original repo...
This is definitely getting into an example soon! Props for building something cool
Sounds odd, I bet there is a way to make it work without an implicit logging statement. Is this with TF2 or PyTorch? Which Trains version are you using?
Honestly, it looks like the TensorBoard representation is the wrong one. Only one way to find out: you need to plot the histogram on your own
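Something along these lines would do it (a minimal sketch, assuming numpy is available; project/task names are placeholders):
```python
import numpy as np
from clearml import Task, Logger

task = Task.init(project_name="examples", task_name="histogram-check")  # hypothetical names

values = np.random.randn(10_000)  # stand-in for the tensor you are inspecting
counts, bin_edges = np.histogram(values, bins=50)

# report the manually computed histogram so it can be compared with TensorBoard's
Logger.current_logger().report_histogram(
    title="weights",
    series="manual",
    iteration=0,
    values=counts,
    xlabels=[f"{edge:.2f}" for edge in bin_edges[:-1]],
)
```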
Shh AgitatedDove14, you're dating yourself
Hi TrickySheep9, ClearML Evangelist here, this question is the one I live for! Are you specifically asking "how do people usually do it with ClearML" or really the "general" answer?