The upload method (which has an SDK counterpart) allows you to specify where to upload the dataset to.
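For example, a minimal sketch of the SDK counterpart (the project, path, and bucket URL below are placeholders):

from clearml import Dataset

# create a dataset version and attach local files (path is a placeholder)
ds = Dataset.create(dataset_name="my_dataset", dataset_project="examples")
ds.add_files(path="./data")

# output_url controls where the dataset content is uploaded,
# e.g. an S3 bucket instead of the default file server
ds.upload(output_url="s3://my-bucket/datasets")
ds.finalize()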
It's decorators. It's functions as steps. It's the ability to add metric tracking for pipelines. It's a better way to move artifacts between steps (way less boilerplate code). It's pipeline instance versioning (for the upcoming UI changes). It's better step parallelization (if steps are not dependent on each other, they will automatically be parallelized). And I've probably missed some features....
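To give a feel for the decorator syntax, here's a minimal sketch using clearml's PipelineDecorator (names and values are placeholders):

from clearml import PipelineDecorator

# each decorated function becomes a pipeline step; steps that don't
# depend on each other's outputs are parallelized automatically
@PipelineDecorator.component(return_values=["data"])
def load_data():
    return [1, 2, 3]

@PipelineDecorator.component(return_values=["result"])
def process(data):
    return sum(data)

@PipelineDecorator.pipeline(name="demo pipeline", project="examples", version="0.1")
def pipeline_logic():
    data = load_data()       # step outputs move between steps without boilerplate
    print(process(data))

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # execute the steps locally for a quick test
    pipeline_logic()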
Let me circle back with the UI folks and see if I can get some sort of date attached to this 🙂
If you can open a GitHub issue to help with tracking and improve visibility, that would be awesome!
Hadrien, just making sure I get the terminology: a stopped instance means you don't pay for it, just for its storage, right? Or is it up and idling (in which case Martin's suggestion is valid)? Do you get stopped instances instantly when you ask for them?
Just randomly check if there's a new version...every day 😉
ReassuredTiger98 that's great to hear 🙂
Yes definitely. As I said, if you like kedro continue using it. Both tools live happily side by side.
Let me know if this still doesn't work, and I'll try to reproduce your issue 🙂
As for experimenting, I'd say (and this community can be my witness 🙂 ) that managing your own experiments isn't a great idea. First, you have to maintain the infra (whatever it is, a tool you wrote yourself, or an Excel sheet), which isn't fun and consumes time. From what I've heard, it usually takes at least 50% more time than you initially think. And since there are so many tools out there that do it for free, the only reason I can imagine for doing it on your own would be if y...
A parent task in a dataset is basically an indication of lineage + shared content.
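For example, creating a child dataset that inherits the parent's content might look like this (a minimal sketch; names and paths are placeholders):

from clearml import Dataset

# the child records the parent as lineage and shares its content
parent = Dataset.get(dataset_name="base_dataset", dataset_project="examples")
child = Dataset.create(
    dataset_name="base_dataset_v2",
    dataset_project="examples",
    parent_datasets=[parent],
)
child.add_files(path="./new_files")  # only the delta on top of the parent is added
child.upload()
child.finalize()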
Hi SubstantialElk6 For monitoring and production labeling, what we found is that there's no "one size fits all", so we tried to design ClearML to be easily integrable. The enterprise solution does include a labeling tool, but it's meant for R&D label fixes rather than production labeling. We have customers that integrated 3rd-party annotation services with ClearML.
Hmm, that's not fun
I'm checking 🙂
can you please run:
nslookup app.clear.ml
Sorry, not of the script, of the Task. I just added --extra-index-url to the "Installed Packages" section, and it worked.
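For reference, the edited "Installed Packages" section would look something like this (pip requirements syntax; the index URL and package pins are placeholders):

--extra-index-url https://my.private.pypi/simple
my-private-package==1.2.3
torch==1.12.0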
Hi Alejandro, could you elaborate on the use case? Do you want to basically save models and some "info" on them, but remove all experiments? You remove experiments to remove clutter? Or any other reason?
Will you later use the models for something (retraining / deployment)?
GiganticTurtle0 Got it, makes a lot of sense!
I...Think it's a UI bug? I'll confirm 🙂
Well...I'll make sure we do something about it 🙂
JitteryCoyote63 As for H2O, it actually looks WAY cool. But some people might see us as competitors, so I'm not 100% sure we'd be glad to use their tool, but let's see!
Hi GentleSwallow91 let me try and answer your questions 😄
The serving service controller is basically the main Task that controls the serving functionality itself. AFAIK:
- clearml-serving-alertmanager - a container that runs Prometheus Alertmanager ( https://prometheus.io/docs/alerting/latest/alertmanager/ )
- clearml-serving-inference - the container that runs the inference code
- clearml-serving-statistics - I believe it runs the software that reports to the Prometheus reporting ...
I actually don't think that it's supported at the moment...I'll talk to the devs and see if that's something we can add to a future release
Hmm, I'm not 100% sure I follow. You have multiple models doing predictions. Is there a single data source that feeds all of them, so they run in parallel? Or is one's output another's input, so they run serially?
We plan to expand our model object and have searchable key:value dicts associated with it, and maybe metric graphs. What you're asking is for us to also add artifacts to it. Are these artifacts going to be datasets (or something else)? If I understand correctly, a key:value would be enough, as you're not saving the data itself, only links to where it is. Am I right?
You can use:
task = Task.get_task(task_id='ID')
task.artifacts['name'].get_local_copy()