I'm specifically interested in the model-first queries you would like to do (experiment-first queries are fully featured, so we want to understand the best way to bring that into models).
Thanks for your interest in the enterprise offering! I would much rather we kept this Slack workspace for the open-source solution we all know and love. You can email me at ariel@clear.ml for more info. The short answer: the data lineage is about an order of magnitude cooler, and Hyper-Datasets can be thought of as "beyond feature stores for unstructured data". Does this help?
Looks like it is still running, DeliciousSeaanemone40. You're suggesting it is slower than usual? There are some messages there that I've never seen before.
There are examples, but nothing well fleshed out comes to mind for BERT etc. Maybe someone here can correct me.
All should work. Again, is it much slower than without Trains?
What's your code situation? Is it open enough to allow you to create an issue for this on our GitHub?
If you can recreate the same problem with the original repo...
Not so relevant, since it can be seen from your task, but it would be interesting to find out if Trains made something much slower, and if so, how.
Are you doing imshow or savefig? Is this the matplotlib OOP interface or the original pyplot one? Any warning message that could be relevant?
@<1523714910930866176:profile|MiniatureStarfish88> if you are here, make sure to vote for the next presenter of my show ^
WackyRabbit7 It is conceptually different than actually training, etc.
The services agent is typically one without a GPU that runs several tasks, each in its own container: for example, the autoscaler, or the orchestrators for our hyperparameter optimization and/or pipelines. I think by default it even uses the same hardware as the trains-server.
Also, if I'm not mistaken some people are using it (planning to?) to push models to production.
I wonder if anyone else can share their view since this is a relati...
It's built in, and it's for... "Services"
https://github.com/allegroai/trains-server#trains-agent-services--
Hi SubstantialBaldeagle49 ,
Certainly, if you upload all the training images or even all the test images, it will have a huge bandwidth/storage cost (I believe bandwidth does not matter, e.g. if you are using S3 from EC2). If you need to store all the detection results (for example, for QA or regression testing), you can always save the detections JSON as an artifact and view them later in your dev environment when you need them. The best option would be to only upload "control" images and "interesting" im...
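A minimal sketch of the "save detections as a JSON artifact" idea; the detection schema and file name are made up for illustration, and the commented ClearML calls assume a configured clearml.conf:

```python
import json

def save_detections_json(detections, path):
    """Serialize detection results so they can be attached to a task as an artifact."""
    with open(path, "w") as f:
        json.dump(detections, f)
    return path

# With ClearML/Trains (sketch, assuming the task is already initialized):
# from clearml import Task
# task = Task.init(project_name="detection", task_name="eval")
# task.upload_artifact("detections", artifact_object=save_detections_json(dets, "dets.json"))
```

This keeps only a small JSON per run on the server instead of every rendered image.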
SubstantialBaldeagle49
Hopefully you can reuse the same code you used to render the images until now, just not inside a training loop. I would recommend against integrating with Trains, but you can query the trains-server from any app; just make sure you serve it with the appropriate trains.conf and manage the security. You can even manage the visualization server from within Trains using trains-agent. Open source is so much fun!
SubstantialBaldeagle49 not at the moment, but it is just a matter of implementing an APIClient call. You can open a feature request for a demo on GitHub; it will help make it happen sooner rather than later.
So basically export a webapp view as csv?
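A hedged sketch of what that export could look like: the CSV helper below is plain Python, while the commented ClearML APIClient calls are an assumption about how you would pull the view's rows (they require a configured clearml.conf and a project id):

```python
import csv

def rows_to_csv(rows, path):
    """Write a list of dicts (e.g. task id/name/status pulled from the API) as CSV."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)

# Sketch of fetching the rows (assumes a reachable server):
# from clearml.backend_api.session.client import APIClient
# client = APIClient()
# tasks = client.tasks.get_all(project=["<project-id>"])
# rows_to_csv([{"id": t.id, "name": t.name, "status": t.status} for t in tasks], "view.csv")
```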
I guess the product offering is not so clear yet (pun intended). The self-deployed option is completely free and open source; the enterprise offering is something entirely different.
https://clear.ml/pricing/
The "feature store" you see in the free tier is what I am alluding to
First of all I'd like to thank you for pointing out that our messaging is confusing. We'll fix that.
To the point: Nice interface and optimized db access for feature stores is part of our paid, enterprise offering.
Managing data/features as part of your pipelines and getting version-controlled features == offline feature store
The latter is doable with the current ClearML open source tools, and I intend to show it very soon. But right now you won't have a different pane for DataOps, it'll...
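A minimal sketch of the "version-controlled features == offline feature store" idea with the open-source tools. The pure helper just fingerprints a feature file so versions are reproducible; the commented ClearML Data calls are a sketch assuming a configured server, and the file name is hypothetical:

```python
import hashlib

def feature_version(path):
    """Content hash used as a reproducible version tag for a feature file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()[:12]

# With ClearML Data (sketch, assuming a reachable server):
# from clearml import Dataset
# ds = Dataset.create(dataset_name=f"features-{feature_version('features.parquet')}",
#                     dataset_project="feature-store")
# ds.add_files("features.parquet")
# ds.upload()
# ds.finalize()
```

Each finalized dataset version then acts as an immutable, queryable snapshot of your features.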
#goodfirstissue AgitatedDove14
nbdev is "neat", but it's ultimately another framework that you have to enforce.
Re: maturity models - you will find no love for them here, mainly because they don't drive research to production.
Your described setup can easily be outshined by a ClearML deployment, but SageMaker instances are cheaper. If you have a limited number of model architectures, you can get the added benefit of tracking your S3 models with ClearML with very few code changes. As for deployment - that's anoth...
Well, in general there is no one answer; I could talk about it for days. In ClearML the question is really a non-issue, since if you build a pipeline from notebooks on your R&D dev machine, it is automatically converted to Python scripts running inside containers. Where shall we begin? Maybe describe your typical workload and intended deployment, with latency constraints?
Hi TrickySheep9, ClearML Evangelist here; this question is the one I live for. Are you specifically asking "how do people usually do it with ClearML", or really for the "general" answer?
same name == same path, assuming no upload is taking place? *just making sure
Okay, I'll make a channel here today, and the sticky post on it would be a list of other channels.
For now here is one of my favorites:
Okay, so it sounds like two bugs stacked together? I wonder if this is GitLab-specific. Could you provide a list of steps to reproduce?
ClearML Free or self-hosted?
but hey, UnevenDolphin73, nice idea; maybe we should have a clearml-around that can report who is using which GPU
HighOtter69 so if you manually change the color of one of them, the others change for you as well?!