As for your question, yes, our effort was diverted into other avenues and not a lot of public progress has been made.
That said, what is your plan for integrating the tools? Automatically promote models to be served from within ClearML?
Are you using the OSS version or the hosted one (app.clear.ml)? The ClearML enterprise offering has a built-in annotator. Please note that this was meant more for correcting annotations during the development process rather than mass annotating lots of images.
BTW, for new questions I suggest just asking in the clearml-community. I'm really happy to help but I almost missed this message 🙂
Hmm, seems like there is a problem. Let me check 🙂
Hi SquareFish25 , we also have a few webinars discussing these topics (more theoretical, and what can be achieved using pipelines), check out https://youtu.be/_5Re2GpcRp8 and https://youtu.be/yGg-exQHUfE !
Can you check again? It works for me. If you're still not able to reach it, can you send an image of the error you're getting?
Hi TenseOstrich47
You can also check this video out on our youtube channel:
https://youtu.be/gPBuqYx_c6k
It's still branded as Trains (our old brand) but it applies to ClearML just the same!
How to Supercharge Your Team's Productivity with MLOps [S31250]
ML-Ops Workshop: Demonstrating an End-to-End Pipeline for ML/DL Leveraging GPUs [S32056]
Best Practices in Handling Machine Learning Pipelines on DGX Clusters [E32375]
Hey there Jamie! I'm Erez from the ClearML team and I'd be happy to touch on some points that you mentioned.
First and foremost, I agree with the first answer that was given to you on reddit. There's no "right" tool. Most tools are right for the right people, and if a tool is too much of a burden, then maybe it isn't right!
Second, I have to say the use of SVN is a "bit" of a hassle. The MLOps space HEAVILY leans towards git. We interface with git and so does every other tool I know of. That ...
JitteryCoyote63 As for h2o, it actually looks WAY cool. But some people might see us as competitors, so I'm not 100% sure we'd be glad to use their tool, but let's see!
Hi Anton, the self-hosted ClearML provides all the features that you get from the hosted version, so you're not missing out on anything. You can deploy it either with docker-compose or on a K8s cluster with Helm charts.
Did you try with function_kwargs?
Just to make sure, if you change the title to "mean top four accuracy" it should work OK
It's decorators. It's functions as steps. It's the ability to add metric tracking for pipelines. It's a better way to move artifacts between steps (way less boilerplate code). It's pipeline instance versioning (for the upcoming UI changes). It's better step parallelization (if steps are not dependent on each other, they will automatically be parallelized). And I've probably missed some features...
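To make the decorator and functions-as-steps points concrete, here is a minimal sketch of that style (project, step names and return values are placeholders, and the exact arguments may differ slightly in the released version):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=['data'], cache=True)
def load_data():
    # runs as its own task; the return value is handed to the next step for you
    return [1, 2, 3]

@PipelineDecorator.component(return_values=['total'])
def summarize(data):
    return sum(data)

@PipelineDecorator.pipeline(name='demo pipeline', project='examples', version='0.1')
def run_pipeline():
    # independent steps are parallelized automatically; dependent ones wait on their inputs
    data = load_data()
    print(summarize(data))

if __name__ == '__main__':
    PipelineDecorator.run_locally()  # debug locally; drop this line to run on agents
    run_pipeline()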
EnviousStarfish54 Yes, self hosted is still available! We're only adding options, not taking anything away! 🙂
Hi Doron, as a matter of fact, yup 🙂 The next version will include a similar feature. The plan is to have it released middle of December, so stay tuned 🙂
Hi TenseOstrich47 What you can do is report the metric to ClearML, then use the TaskScheduler to listen on a specific project. If a task in this project reports a metric below/above a certain threshold (or, I think, if it's the highest/lowest as well) you can trigger an event (a task or a function). That's how you do it with the TaskScheduler object.
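As a rough sketch of that listen-and-trigger flow, using the TriggerScheduler companion of the scheduler (the task id, project, metric and threshold here are illustrative, and the exact parameter names may differ in your clearml version):

from clearml.automation import TriggerScheduler

scheduler = TriggerScheduler(pooling_frequency_minutes=3)  # how often to poll the server
scheduler.add_task_trigger(
    schedule_task_id='TEMPLATE_TASK_ID',  # task to clone & enqueue when the trigger fires
    schedule_queue='default',
    trigger_project='my project',         # project to listen on
    trigger_on_metric='accuracy',         # metric title as reported to ClearML
    trigger_on_variant='top1',            # metric series / variant
    trigger_on_threshold=0.95,
    trigger_on_sign='max',                # 'max' -> fire when above, 'min' -> when below
)
scheduler.start()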
JitteryCoyote63 I'm not sure we can get to it fast enough, unfortunately 🙁 (It only means we have cooler stuff that we're working on 🙂 )
OutrageousSheep60 The python package is in testing. Hopefully it will be out Sunday/Monday :)
In ClearML open source, a dataset is represented by a task (or experiment, in UI terms). You can add datasets to projects to indicate that the dataset is related to the project, but it's purely a logical entity, i.e. you can have a dataset (or datasets) per project, or a single project with all your datasets.
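In code, that looks roughly like this (the dataset and project names are just placeholders):

from clearml import Dataset

# Creating a dataset registers it as a task under the given project
ds = Dataset.create(dataset_name='raw-images', dataset_project='my project')
ds.add_files(path='./data/raw')  # register local files
ds.upload()                      # push the files to the configured storage
ds.finalize()                    # close this dataset version

# Anywhere else, fetch a local copy by name
local_path = Dataset.get(dataset_name='raw-images', dataset_project='my project').get_local_copy()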
JitteryCoyote63 Welcome to the wonderful world of coding where some stuff doesn't work and you don't know why and some stuff works and you don't know why 🙂
VivaciousPenguin66 This is very true! We are trying to explain the benefits of this method. Some people like it and some people like the flexibility. We do have our philosophy in mind when we create "best practices" and obviously features to ClearML but ultimately people should do what makes them the most productive!
If we are getting philosophical, I think it's the state of the industry and as it progresses, these standard methods would become more prominent.
Also, to add to what you wrote,...
Hi ResponsiveHedgehong88 , let me see if I get it straight: you have a my_config.yaml in your repo. Then you do something like task.add_configuration(my_config.yaml) and have it logged as a config object, then you modify the config object and rerun (so now it takes the configuration from the configuration object you modified in the UI, rather than from the file in the repo). Am I understanding the setup correctly?
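In code terms, I'd expect the flow to look roughly like this sketch using connect_configuration (the file, project and section names are illustrative):

from clearml import Task

task = Task.init(project_name='my project', task_name='train')

# Log the repo YAML as a configuration object. When the task is cloned and the
# object is edited in the UI, the returned path points at the edited copy
# instead of the original file in the repo.
config_path = task.connect_configuration('my_config.yaml', name='my_config')
with open(config_path) as f:
    config = f.read()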
Oh!!! Sorry 🙂
So...basically it's none of them.
All of these are hosted tiers. The self-hosted version is our open source offering, which you can find at https://github.com/allegroai/clearml-server
It has an explanation on how to install it and some of the options available for you.
Looking at our pricing page, I can see how it's not trivial to get from there to the GitHub page... I'll try to improve that! 🙂
EcstaticBaldeagle77 , actually these scalars and configurations are not saved locally to a file, but they can be retrieved and saved manually. If you want to get metrics you can call task.get_reported_scalars(), and if you want a configuration, call task.get_configuration_object() with the configuration section name as it appears in the web application.
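For example, something along these lines (the task id and the 'General' section name are placeholders):

from clearml import Task
import json

task = Task.get_task(task_id='TASK_ID')

# All reported scalars, as a nested dict, dumped to a local file
scalars = task.get_reported_scalars()
with open('scalars.json', 'w') as f:
    json.dump(scalars, f, indent=2)

# A configuration object, fetched by the section name shown in the web UI
config_text = task.get_configuration_object('General')
with open('config.txt', 'w') as f:
    f.write(config_text or '')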
You can use:
task = Task.get_task(task_id='ID')
task.artifacts['name'].get_local_copy()
Yeah, that makes lots of sense!