Yes! Thanks so much for the quick turnaround
My pleasure 🙂
BTW: did you see this (it seems like the same bug?!)
https://github.com/allegroai/clearml-helm-charts/blob/0871e7383130411694482468c228c987b0f47753/charts/clearml-agent/templates/agentk8sglue-configmap.yaml#L14
Hi CheerfulGorilla72
I guess this is a documentation bug, is there a stable link for the latest docker-compose ?
If you have idea on where to start looking for a quick win, I'm open to suggestions 🙂
how can I start up the clearml agent using the clearml-agent image instead of SDK?
Not sure I follow, what do you mean instead of the SDK? and what is the "clearml-agent image" ?
Hi CrookedWalrus33
I think this happens if you are already logged in and you pressed the "signup" tab instead of the "login" tab (the frontend team is working on a solution)
In the meantime just make sure you are clicking on the "login" tab
But in credentials creation it still shows 8008. Are there any other places in docker-compose.yml where port from 8008 to 8011 should be replaced?
I think there is a way to "tell" it what to put there, not sure:
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_config#configuration-files
In the docker bash startup script: `apt-get install poppler-utils`
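If the goal is getting the package into the agent's docker before the task starts, one way is via the agent's `clearml.conf`. A sketch, assuming the `agent.extra_docker_shell_script` option in your agent version:

```
agent {
    # shell lines executed inside the docker container before the task starts
    extra_docker_shell_script: ["apt-get update", "apt-get install -y poppler-utils"]
}
```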
OddAlligator72 what you are saying is, take the repository / packages from the runtime, aka the python code calling the "Task.create(start_task_func)" ?
Is that correct ?
BTW: notice that the execution itself will be launched on other remote machines, not on this local machine
We suddenly have a need to setup our logging after every
task.close()
Hmm that gives me a handle on things, any chance it is easily reproducible ?
I have the agent configured to force install requirements.txt
what do you mean by that?
I also have task_override that adds a version which changes each run
It's just a tag, so no real difference
Has anyone done this exact use case - updates to datasets triggering pipelines?
Hi TrickySheep9 seems like this is following a diff thread, am I missing something ?
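That said, for datasets triggering pipelines there is a TriggerScheduler in clearml's automation module; a rough sketch (ids/names here are placeholders, double-check the signature against your clearml version):

```python
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=3)
# launch a copy of the given task whenever a dataset in the project changes
trigger.add_dataset_trigger(
    schedule_task_id="<pipeline-task-id>",
    schedule_queue="default",
    trigger_project="datasets-project",
    name="retrain-on-new-data",
)
trigger.start()  # blocks and polls for dataset events
```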
Do you think this is better ? (the API documentation is coming directly from the python doc-string, so the code will always have the latest documentation)
https://github.com/allegroai/clearml/blob/c58e8a4c6a1294f8acec6ed9cba81c3b91aa2abd/clearml/datasets/dataset.py#L633
Do people use ClearML with huggingface transformers? The code is std transformers code.
I believe they do 🙂
There is no real way to differentiate between, "storing model" using torch.save and storing configuration ...
from what I gather there is a lightly documented concept
Yes ... 😞 the reason for it is that actually one could do:
```
@PipelineDecorator.pipeline(...)
def pipeline(i):
    ...

if __name__ == '__main__':
    pipeline(0)
    pipeline(1)
    pipeline(2)
```
Basically rerunning the pipeline 3 times
This support was added as some users found a use case for it, but I think this would be a rare one
Hi RoughTiger69
Interesting question, maybe something like:
```
@PipelineDecorator.component(...)
def process_sub_list(things_to_do=[0, 1, 2]):
    r = []
    for i in things_to_do:
        print("doing", i)
        r.append("done{}".format(i))
    return r

@PipelineDecorator.pipeline(...)
def pipeline():
    # create some stuff to do:
    results = []
    for step in range(10):
        r = process_sub_list(list(range(step*10, (step+1)*10)))
        results.append(r)
    # push into one list with all result, this will ac...
```
HI BurlyRaccoon64
Yes, the latest clearml-agent solves the issue, please try:
`pip3 install -U --pre clearml-agent`
EnviousStarfish54
oh, this is a bit different from my expectation. I thought I can use artifact for dataset or model version control.
You totally can use artifacts as a way to version data (actually we will have it built in in the next versions)
Getting an artifact programmatically:
`Task.get_task(task_id='aabb').artifacts['artifactname'].get()`
Models are logged automatically. No need to log manually
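A minimal sketch of both directions (project, task id, and artifact name are placeholders):

```python
from clearml import Task

# producer side: attach an object to the current task
task = Task.init(project_name="examples", task_name="artifact demo")
task.upload_artifact("artifactname", artifact_object={"split": "train"})

# consumer side: fetch it from any other process
other = Task.get_task(task_id="aabb")
obj = other.artifacts["artifactname"].get()
```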
Hi @<1577106212921544704:profile|WickedSquirrel54>
We are self hosting it using Docker Swarm
Nice!
and were wondering if this is something that the community would be interested in.
Always!
what did you have in mind? I have to admit I'm not familiar with the latest in Docker Swarm, but we all love Docker the product and the company
DeliciousBluewhale87 and is it working?
Hi ThoughtfulElephant4
I was trying to build an image using the clearml server dockerfile,
Are you saying you are rebuilding the docker image for the clearml-server and it fails ?
Can you provide the full console log?
Hi ContemplativeGoat37
is it a good idea to use ClearML Agent Services for such things?
Yes! it is exactly the kind of thing it was designed to do 🙂
when you clone the Task, it might be before it is done syncing git / packages.
Also, since you are using 0.16 you have to have a section name (Args or General etc.)
How will task b use the parameters ? (argparser / connect dict?)
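For the connect-dict route, a minimal sketch (project/task names and the parameter dict are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="task_b")

# local defaults; when the task runs via an agent, values edited
# in the UI override these at runtime
params = {"learning_rate": 0.001, "batch_size": 32}
task.connect(params)  # registered under the "General" section

print(params["learning_rate"])  # may differ from 0.001 when cloned + edited
```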
Hi GreasyPenguin14
This is what I did, but I could not reproduce the hang, how is this different from your code?
```
from multiprocessing import Process
import numpy as np
from matplotlib import pyplot as plt
from clearml import Task, StorageManager

class MyProcess(Process):
    def run(self):
        # in another process
        global logger
        # Create a plot
        N = 50
        x = np.random.rand(N)
        y = np.random.rand(N)
        colors = np.random.rand(N)
        area = ...
```
CurvedHedgehog15 the agent has two modes of operation:
- single script file (or jupyter notebook), where the Task stores the entire file on the Task itself
- multiple files, which is only supported if you are working inside a git repository (basically the Task stores a reference to the git repository and the agent pulls it from the git repo)
Seems you are missing the git repo, could that be?
Hmm let me rerun (offline mode right ?)
So if I do this in my local repo, will it mess up my git state, or should I do it in a fresh directory?
It will install everything fresh into the target folder (including venv and code + uncommitted changes)
And as far as I can see there is no mechanism installed to load other objects than the model file inside the Preprocess class, right?
Well actually this is possible, let's assume you have another Model that is part of the preprocessing, then you could have:
```
def preprocess(self, ...):
    if not getattr(self, "_preprocess_model", None):
        self._preprocess_model = joblib.load(Model(model_id).get_weights())
```
something like that should work