No problem, I tried with this code:
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs
from joblib import dump
from clearml import Task, OutputModel
task = Task.init(project_name="serving examples", task_name="train sklearn model", output_uri=True)
# generate 2d classification dataset
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
# fit final model
model = LogisticRegression()
model.fit(X, y)
#dump(model, filename="...
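As a minimal sketch of how the script could continue (the filename "model.pkl" here is a placeholder I'm assuming, not the original one): since Task.init was called with output_uri=True, ClearML should pick up the dumped file and register it as an output model of the task.
# dump the fitted model to disk; hypothetical filename, ClearML auto-uploads it via output_uri
dump(model, filename="model.pkl", compress=9)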
With my team we found a solution: to execute tasks with the agent, we use clearml-task in the CLI and add the argument --output-uri ***:1234,
where *** is the address of our self-hosted server. The pickled models are then automatically uploaded to the server instead of being left on the agent's local path.
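As a rough illustration of such an invocation (the project name, script name, queue and server address below are placeholders, not the real values), it could look like:
clearml-task --project my-project --name train-sklearn-model --script train_sklearn.py --queue default --output-uri http://<clearml-server>:1234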
Hi @<1523701205467926528:profile|AgitatedDove14> , sorry for the delay, I have a better understanding of workers and agents now, thank you 😁
Hi @<1523701087100473344:profile|SuccessfulKoala55> , Sorry for the delay, thank you for your answer 😉
I'm trying to delete projects, datasets and pipelines from the web UI of my local server. For example, to delete a dataset, I archive it and then delete it by clicking in the web UI (not with Python).
When my team leader looks at the disk space usage of the server (in Docker), he can still access the dataset's files, even though I deleted it from the web UI.
Hi @<1523701118159294464:profile|ExasperatedCrab78> !
Here they are (left: locally, right: remotely)
Hi @<1523701070390366208:profile|CostlyOstrich36> , sorry for the delay
I just found out I could reveal the hidden projects in the settings, I think that was why I couldn't delete everything I wanted 😉
Hi @<1523701070390366208:profile|CostlyOstrich36> @<1537605940121964544:profile|EnthusiasticShrimp49> , thank you for your interest, I was wondering if you had time to quickly check my issue
Hi John, I'm waiting for the approval of my superior before I can share it
Here we go @<1523701070390366208:profile|CostlyOstrich36>
Hey @<1537605940121964544:profile|EnthusiasticShrimp49> , yes I can download it and open it with pickle, here is how I do it:
from clearml import StorageManager
import pickle

pickle_data_url = ' None '
local_iris_pkl = StorageManager.get_local_copy(remote_url=pickle_data_url)
with open(local_iris_pkl, 'rb') as f:
    iris = pickle.load(f)
I set up my agent from my machine but my open-source server is not running on my machine. I can share my agent conf...
Hi @<1523701087100473344:profile|SuccessfulKoala55> , thank you for your answer, I will look into it 👍
Hi @<1523701435869433856:profile|SmugDolphin23> ! I enqueued my task and sadly I got an error 😞. I put the logs here
Hi @<1523701070390366208:profile|CostlyOstrich36> , I have version 1.9.2. When I use the clearml-task command like this one: clearml-task --project test_tag_git --name sklearn --repo http://***.git --script sklearn.py --requirements requirements.txt --branch test_tag --output-uri http://***
using the script from here: None . 'test_tag' is the name of my git tag. When executi...
Oh okay, that could explain a lot. Thank you for your answer 👍 My server isn't on 0.0.0.0, so would I need to set up a new one to solve this problem, or is there an alternative?
I checked the logs as you suggested, I didn't find any error of this type (maybe I missed an important parameter). My agent is set up as a Docker container. Here are the logs.
Hi @<1523701087100473344:profile|SuccessfulKoala55> , here is an example :
On the picture of the Dataset 'DS_Master', the versions 1.0.1, 1.0.2, 1.0.3 and 1.0.4 are all children of version 1.0.0. When I go to one specific version, I can see that version 1.0.0 is the parent of the version I'm looking at. But when I go to version 1.0.4, for example, I don't know that versions 1.0.1, 1.0.2 and 1.0.3 are also children of version 1.0.0. And I would like to see that on a graph, like t...
Hi @<1523701205467926528:profile|AgitatedDove14> , yes the pipeline is created via the clearml-task CLI. I find it less constraining to launch a pipeline via the CLI. I'm opening a GitHub issue right now, hoping it will be fixed soon. Thank you for your answer 😁
Hi @<1523701087100473344:profile|SuccessfulKoala55> , I see. With my team we are wondering what the best practice should be to train and make predictions with machine learning models: do we retrieve models from the task's artifacts to make predictions, or is it a better approach to retrieve them from the "Models" section? 🤔
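To illustrate the two options being compared (the task ID and artifact name below are hypothetical placeholders, not values from our project), a rough sketch could look like:
from clearml import Task

task = Task.get_task(task_id="<task_id>")  # hypothetical task ID

# option 1: load the model that was uploaded as a task artifact (hypothetical artifact name "model")
artifact_path = task.artifacts["model"].get_local_copy()

# option 2: load the latest registered output model from the task's Models section
model_path = task.models["output"][-1].get_local_copy()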
Hi @<1523701070390366208:profile|CostlyOstrich36> , you can reproduce it with the iris dataset pipeline from the GitHub repo: None
I have a GitLab repo, and I run this command to launch the pipeline: clearml-task --project test-iris --name pipeline-iris --repo ***.git --script pipeline/pipeline_from_tasks.py --queue services --requirements requirements.txt --task-type controller --branch main
My agent is setup as a docke...
Hi @<1523701070390366208:profile|CostlyOstrich36> , thank you for your answer. Sadly it "only" adds tags to the steps of the pipeline, not to the pipeline itself, and that's the part I'm still missing.
Hi @<1523701205467926528:profile|AgitatedDove14> , I added pipeline._task.add_tags(tags) and it works, thank you very much 👍
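For context, a minimal sketch of where such a call could sit (the controller setup and the tag value are assumptions for illustration; note that _task is an internal, non-public attribute of PipelineController):
from clearml import PipelineController

pipeline = PipelineController(name="pipeline-iris", project="test-iris", version="1.0.0")
# ... pipeline.add_step(...) calls go here ...

# tag the pipeline controller task itself, not its steps (uses the internal _task attribute)
pipeline._task.add_tags(["my-tag"])  # hypothetical tag

pipeline.start(queue="services")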
Hi @<1523701435869433856:profile|SmugDolphin23> ! Thank you for your answer, I will try both your suggestions 😉