We need SEO for our docs 🙂
There are an astounding number of such channels actually. It probably depends on your style. Would you like me to recommend some?
Of course we can always create a channel here as well... One more can't hurt 🙂
that's how it is supposed to work 🙂 let us know if it does not.
submodules == git submodules
This looks like a genuine git fetch issue. Trains would have problems figuring out the diff if git cannot find the base commit...
Do you have submodules in the repo? Did the DS push his/her commits?
Okay, I'll make a channel here today, and the sticky post on it will be a list of other channels 💪
For now here is one of my favorites:
If she does not push, Trains has a commit ID for the task that does not exist on the git server. If she does not commit, Trains will hold the entire diff from the last commit that is on the server.
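If you want a quick way to check the push part, something like this could help (an illustrative helper of mine, not part of trains):

```python
# Hypothetical helper: verify that the current HEAD commit exists on a
# remote-tracking branch, i.e. it has been pushed and a trains/ClearML
# worker would be able to fetch it.
import subprocess

def head_is_pushed() -> bool:
    head = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    # 'git branch -r --contains <sha>' lists remote branches containing the commit;
    # empty output means the commit only exists locally.
    remotes = subprocess.run(
        ["git", "branch", "-r", "--contains", head],
        capture_output=True, text=True,
    )
    return remotes.returncode == 0 and bool(remotes.stdout.strip())

if not head_is_pushed():
    print("HEAD is not on the git server yet - push before enqueuing the task")
```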
BattyLion34, this is up to the discretion of the meetup organizers. In any case, I am going to use the same demos to create several of my stuffed animal videos (we can also upload the same videos without the stuffed animals if there is demand for that)
What's your code situation? Is it open enough to allow you to create an issue for this on our GitHub?
Aren't the two lines enough for you? BTW, why Lightning and not Ignite?
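For reference, by "the two lines" I mean the standard integration, roughly like this (project/task names are just placeholders):

```python
# The standard two-line integration: everything below Task.init()
# (your existing Lightning / Ignite training loop) stays unchanged,
# and the run is picked up and tracked automatically.
from clearml import Task  # on older versions: from trains import Task

task = Task.init(project_name="examples", task_name="lightning training")

# ... your existing training code goes here, untouched ...
```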
Nbdev is "neat", but it's ultimately another framework that you have to enforce.
Re: maturity models - you will find no love for them here 🙂 mainly because they don't drive research to production
The setup you described can easily be outshined by a ClearML deployment, but SageMaker instances are cheaper. If you have a limited number of model architectures, you can get the added benefit of tracking your S3 models with ClearML with very few code changes. As for deployment - that's anoth...
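To give an idea of the "very few code changes" part, a rough sketch (the bucket name and paths are made up):

```python
# Rough sketch: point the task's default output destination at your S3 bucket,
# so model checkpoints saved by your framework are registered and uploaded there.
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="s3 model tracking",
    output_uri="s3://my-bucket/models",  # hypothetical bucket
)

# ... train as usual; checkpoints saved by your framework (e.g. torch.save)
# are picked up automatically and tracked as output models of this task ...
```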
Not all that relevant, since it can be seen from your task 🙂 but it would be interesting to find out whether trains made something much slower, and if so - how
Well, in general there is no single answer - I can talk about it for days. In ClearML the question is really a non-issue, since if you build a pipeline from notebooks on your dev machine in R&D, it is automatically converted to Python scripts running inside containers. Where shall we begin? Maybe describe your typical workload and intended deployment with its latency constraints?
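Just to make the pipeline part concrete, a minimal sketch (task/project names are placeholders, and it assumes the steps already exist as ClearML tasks):

```python
# Minimal pipeline sketch: each step is an existing ClearML task (e.g. one that
# started life as a notebook); the controller clones the steps and runs them
# on agents, typically inside containers.
from clearml import PipelineController

pipe = PipelineController(name="demo pipeline", project="examples", version="0.1")

pipe.add_step(
    name="preprocess",
    base_task_project="examples",
    base_task_name="preprocess data",   # placeholder task
)
pipe.add_step(
    name="train",
    parents=["preprocess"],
    base_task_project="examples",
    base_task_name="train model",       # placeholder task
)

pipe.start()  # enqueue the controller; use start_locally() to debug on your machine
```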
Hi ManiacalPuppy53, glad to see that you got past the mongo problem by moving up to a Pi4 🙂
Looks like it is still running, DeliciousSeaanemone40 - are you suggesting it is slower than usual? There are some messages there that I've never seen before
There are examples, but nothing well fleshed out for BERT etc. comes to mind. Maybe someone here can correct me
If you can recreate the same problem with the original repo... 🤯 🤩
All should work, again - is it much slower than without trains?
Hi TrickySheep9, ClearML Evangelist here, this question is the one I live for 🙂 are you specifically asking "how do people usually do it with ClearML", or really the "general" answer?
Sure we do! BTW MiniatureCrocodile39, IIRC I answered one of your threads with a recording of a webinar of mine
Hi Dan, please take a look at this answer, the webapp interface mimics this. Does this click for you?
Hmm, anything that -m (the container memory limit) would solve? https://docs.docker.com/config/containers/resource_constraints/
or is it a segfault inside the container because ulimit isn't set to -s unlimited?
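If it helps, here's roughly what those two knobs look like via the docker Python SDK (image and command are placeholders; -1 is commonly used to mean "unlimited"):

```python
# Rough equivalent of 'docker run -m 8g --ulimit stack=-1:-1 ...':
# cap the container's memory and lift the stack size limit.
import docker

client = docker.from_env()
client.containers.run(
    "my-training-image:latest",   # placeholder image
    "python train.py",            # placeholder command
    mem_limit="8g",               # the -m / --memory flag
    ulimits=[docker.types.Ulimit(name="stack", soft=-1, hard=-1)],  # ~ ulimit -s unlimited
    detach=True,
)
```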
Are you doing imshow or savefig? Is this the matplotlib OOP interface or the original subplot style? Any warning messages relevant to this?
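In case the terminology is unclear, this is the difference I mean (dummy data, purely illustrative):

```python
# Illustration of the two matplotlib styles the question refers to.
import numpy as np
import matplotlib.pyplot as plt

img = np.random.rand(16, 16)  # dummy image

# "original" pyplot / subplot style
plt.subplot(1, 2, 1)
plt.imshow(img)

# object-oriented (OOP) style
fig, ax = plt.subplots()
ax.imshow(img)
fig.savefig("debug.png")  # vs. the plt.show() / display path
```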