You can use: `task = Task.get_task(task_id='ID')` and then `task.artifacts['name'].get_local_copy()`
Hey, AFAIK, SDK version 1.1.0 disabled the demo server by default (still accessible by setting an envvar).
https://github.com/allegroai/clearml/releases/tag/1.1.0
Is this still an issue even in this version?
Hi SillySealion58 , yeah, in that case we only look at the filename and not the full path. Let me see what we can do internally! Thanks, and happy you found a workaround 🙂
Hi JitteryParrot8
Do you mean Task? If you create a dataset with ClearML Data, the Task's icon will indicate it's a dataset task. Same goes for experiments. You are in luck 🙂 The new SDK (about to be released any day now) will log the dataset used every time you call Dataset.get().
Regardless, we are in the final execution phases of a major overhaul to the dataset UI, so stay tuned for our next server release, which will, hopefully, make your life easier 🙂
Hi! So I looked at the example code and you don't have to use joblib 🙂
If you do `model.save_model('xgb.02.model')` it will work 🙂 Sorry for the confusion!
To upload labels, after `task.init()` call `task.set_model_label_enumeration({'label': 0, 'another_label': 1})`. The models will inherit this enumeration 🙂
Makes sense?
Cool and impressive are two adjectives we like to hear 🙂
So I'm looking at the example in the GitHub repo, this is step 1:
```python
def step_one(pickle_data_url):
    # make sure we have scikit-learn for this step, we need it to unpickle the object
    import sklearn  # noqa
    import pickle
    import pandas as pd
    from clearml import StorageManager

    pickle_data_url = \
        pickle_data_url or \
        '...'  # URL truncated in the original message
    local_iris_pkl = StorageManager.get_local_copy(remote_url=pickle_data_url)
    with open(local_iris_pkl, 'rb') as f:
        iris ...
```
Hmm, that's not fun
I'm checking 🙂
Yeah, I get what you're saying, but when developing ClearML we did not view it like that. When we run it locally or debug it, we thought of it more as "this is running just on my local computer, without an agent, to make sure everything works before I use an agent to run the parts". Sorry that it confuses you 🙂
Am I doing something differently from you?
Hi OutrageousSheep60 , we have good news and great news for you! (JK, it's all great 🙂 ). In the coming week or two we'll release the ability to also add links to clearml-data, so you can bring your s3 (or any other cloud) and local files as links (instead of uploading to the server). 🙂
Hi GentleSwallow91 let me try and answer your questions 🙂
The serving service controller is basically the main Task that controls the serving functionality itself. AFAIK:
- clearml-serving-alertmanager - a container that runs the Alertmanager by Prometheus ( https://prometheus.io/docs/alerting/latest/alertmanager/ )
- clearml-serving-inference - the container that runs the inference code
- clearml-serving-statistics - I believe it runs software that reports to the Prometheus reporting ...
Just to make sure, if you change the title to "mean top four accuracy" it should work OK
Hi OutrageousSheep60 , the plan is to release this week / early next week a version that solves this.
```python
# ClearML - Example of PyTorch MNIST training integration
from __future__ import print_function
import argparse
import os
from tempfile import gettempdir

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

from clearml import OutputModel
from clearml import Task, Logger
import time


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, ...
```
JitteryCoyote63 I'm not sure we can get to it fast enough, unfortunately 🙁 (It only means we have cooler stuff that we're working on 🙂 )
Once you integrate ClearML, it'll automatically report resource utilization (GPU / CPU / memory / network / disk IO)
This is an sklearn example, but AFAIK it should also work with XGBoost. Makes sense?
What you can do is run it in offline mode though 🙂
The new welcome screen for pipelines, and our fancy new icon on the left sidebar 🙂
EcstaticBaldeagle77 , can you share an example of the `self.log("key_name", value)` call that you save? (not 100% sure what `self` is 🙂 )
What the automagic integration provides is that all the parameters of your PL trainer are automatically fetched and populated, as well as when you call this function:
```python
def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)
    loss = F.cross_entropy(y_hat, y)
    self.log('valid_loss', loss)
```
The call to `self.log()` is fetched and r...
OutrageousSheep60 took a bit longer, but SDK 1.4.0 is out 🙂 please check the links feature in clearml-data 🙂
Can you elaborate on the use-case a bit more? Why not report directly to the server?
The ClearML team appreciates bitching anywhere you feel like it (especially the memes section).
In the absence of a dedicated UI/UX channel, I suggest you just write here. I can promise you the people whose responsibility it is to fix/improve the UI are roaming here and will see the request 🙂
How to Supercharge Your Team's Productivity with MLOps [S31250]
ML-Ops Workshop: Demonstrating an End-to-End Pipeline for ML/DL Leveraging GPUs [S32056]
Best Practices in Handling Machine Learning Pipelines on DGX Clusters [E32375]