Hi SoggyFrog26, welcome to ClearML!
The way you found definitely works very well, especially since you can use it to change the input model from the UI in case you use the task as a template for orchestrated inference.
Note that you can wrap metadata around the model as well, such as the labels it was trained on and the network structure; you can also use a model package to ... well, package whatever you need with the model. If you want a concrete example I think we need a little more detail here on the fr...
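In the meantime, a rough sketch of the metadata part (the project/task names, label map, design text and file name are just placeholders, adjust to your model):
```
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="model metadata")  # placeholder names

# attach an output model to the task and wrap metadata around it
output_model = OutputModel(task=task, framework="PyTorch")
output_model.update_labels({"background": 0, "cat": 1, "dog": 2})     # labels the model was trained on
output_model.update_design(config_text="resnet18, 3 output classes")  # network structure / free-form design
output_model.update_weights(weights_filename="model.pt")              # register the actual weights file
```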
Which parser are you using? argparse should be logged automatically.
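i.e. with plain argparse nothing special is needed, a minimal sketch (project/task names and arguments are placeholders):
```
import argparse
from clearml import Task

# calling Task.init() is enough - argparse arguments are captured automatically
task = Task.init(project_name="examples", task_name="argparse logging")  # placeholder names

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.001)
parser.add_argument("--epochs", type=int, default=10)
args = parser.parse_args()
# args will show up under the task's hyperparameters (Args section) in the UI
```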
Isn't it better if it just updates the timestamp on the old models?
Looks like it is still running, DeliciousSeaanemone40. Are you suggesting it is slower than usual? There are some messages there that I've never seen before.
Difficult without a reproducer, but I'll try: how did you get the logger? Maybe you forgot the parentheses on task.get_logger()?
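For reference, something like this should work (project/task names and the scalar values are just placeholders):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="logger check")  # placeholder names
logger = task.get_logger()  # note the parentheses - get_logger is a method
logger.report_scalar(title="loss", series="train", value=0.42, iteration=1)
```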
Oh hey, did someone mention my name on this thread? MiniatureCrocodile39 did you manage to create a pycharm virtual env with clearml installed?
Thanks @<1523701205467926528:profile|AgitatedDove14>, also I think you're missing a few pronouns there
Totally within ClearML :the_horns: :the_horns:
Hi! Looks like all the processes are calling torch.save so it's probably reflecting what Lightning did behind the curtain. Definitely not a feature though. Do you mind reporting this to our github repo? Also, are you also getting duplicate experiments?
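As a possible workaround until then, a minimal sketch assuming DDP with Lightning (the helper name and path are just for illustration), guarding the manual save so only rank zero writes the file:
```
import torch
from pytorch_lightning.utilities import rank_zero_only

@rank_zero_only
def save_checkpoint(model, path="model.pt"):  # illustrative path
    # only the rank-0 process writes the file, so a single output model is registered
    torch.save(model.state_dict(), path)
```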
EnviousStarfish54 let's refine the discussion - are you looking at structured data (tables, etc.) or unstructured (audio, images, etc.)?
Shh AgitatedDove14, you're dating yourself
WackyRabbit7 It is conceptually different from actually training, etc.
The services agent is usually one without a GPU; it runs several tasks, each in its own container, for example the autoscaler and the orchestrators for our hyperparameter optimization and/or pipelines. I think it even uses the same hardware (by default?) as the trains-server.
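If it helps, spinning one up usually looks something like `clearml-agent daemon --queue services --docker --cpu-only --services-mode` (the queue name and flags here are just the common defaults, adjust to your setup).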
Also, if I'm not mistaken some people are using it (planning to?) to push models to production.
I wonder if anyone else can share their view since this is a relati...
https://github.com/allegroai/trains/issues/193 for future reference (I will update later)
As I wrote before, these are more geared towards unstructured data, and since this is a community channel I would feel more comfortable if you continue your conversation with the enterprise rep. If you wish to take this thread to a more private channel, I'm more than willing.
Also, while we are at it, EnviousStarfish54, can I just make sure - you meant this page, right?
https://allegro.ai/enterprise/
I will dig around to see how all of this could be accomplished.
Right now I see it done in two ways:
- a function that you must remember to call each time, which would do the upkeep
- a script that you can run once in a while to do the cleanup
Which one would you prefer that I pursue?
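For the second option, a rough sketch of such a script (the project name and the "archived" filter are assumptions, you would plug in your own criteria):
```
from clearml import Task

# fetch candidate tasks - here, archived tasks under a placeholder project
old_tasks = Task.get_tasks(
    project_name="examples",
    task_filter={"system_tags": ["archived"]},
)
for t in old_tasks:
    t.delete()  # removes the task from the server
```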
Fine. Can I open a feature request on our GitHub for you, referencing this conversation?
EnviousStarfish54 first of all, thanks for taking the time to explore our enterprise offering.
- Indeed Trains is completely standalone. The enterprise offering adds the necessary infrastructure for end-to-end integration etc. with a huge emphasis on computer vision related R&D.
- The data versioning is actually more than just data versioning, because it adds an additional abstraction over the "dataset" concept. Well, this is something that the marketing guys should talk about... unless you ...
Well, we had a nice video from TWIMLcon but it is not up yet on our site. I recently gave a very long demo on both basic and semi-advanced ClearML usage - you can watch it here
https://youtu.be/VJJsVJiWnYY?t=1774
the slides are here:
https://docs.google.com/presentation/d/1PFPTQkHVGxugruTRFDnuVmS85ziSbNOTixCVQwPMFDI/edit?usp=sharing
code is here:
https://github.com/abiller/events/tree/webinars/webinars/flower_detection_rnd
Hmm... For quick and dirty integration that would probably do the trick, you could very well issue clearml-task commands on each Kubeflow vertex (is that what they're called?)
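Each vertex would then boil down to something like `clearml-task --project examples --name kubeflow_step --script train.py --queue default` (project, name, script and queue here are placeholders).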
What do you think AgitatedDove14 ?
Removed the yay's DeliciousBluewhale87
https://www.youtube.com/watch?v=XpXLMKhnV5k
@<1523714910930866176:profile|MiniatureStarfish88> if you are here, make sure to vote for the next presenter of my show ^
JealousParrot68 Some usability comments - Since ClearML is opinionated, there are several pipeline workflow behaviors that make sense if you use Datasets and Artefacts interchangeably, e.g. the step caching AgitatedDove14 mentioned. Also for Datasets, if you combine them with a dedicated subproject like I did on my show, then you have the pattern where asking for the dataset of that subproject will always give you the most up-to-date dataset. Thus you can reuse your pipelines without havin...
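The "most up-to-date dataset" pattern I mentioned is roughly this (project and dataset names are placeholders):
```
from clearml import Dataset

# without an explicit id/version this returns the latest dataset in that subproject
ds = Dataset.get(dataset_project="my_project/datasets", dataset_name="training_data")
local_copy = ds.get_local_copy()  # cached, read-only local copy
```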
with upload I would strongly recommend against doing this
There are several ways of doing what you need, but none of them are 'magical' like we pride ourselves for. For that, we would need user input like yours in order to find the commonalities.
Hi TrickySheep9, ClearML Evangelist here, this question is the one I live for. Are you specifically asking "how do people usually do it with ClearML" or really the "general" answer?
Nbdev is "neat" but it's ultimately another framework that you have to enforce.
Re: maturity models - you will find no love for them here, mainly because they don't drive research to production
Your described setup can easily be outshined by a ClearML deployment, but SageMaker instances are cheaper. If you have a limited number of model architectures you can get the added benefit of tracking your S3 models with ClearML with very little code changes. As for deployment - that's anoth...
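The "very little code changes" part is roughly this, assuming your training script already saves models locally (the bucket path is a placeholder):
```
from clearml import Task

# output_uri makes ClearML upload and register every model your framework saves
task = Task.init(
    project_name="examples",
    task_name="s3 model tracking",
    output_uri="s3://my-bucket/models",  # placeholder bucket path
)
```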
There are an astounding number of such channels actually. It probably depends on your style. Would you like me to recommend some?
Of course we can always create a channel here as well... One more can't hurt