
Thank you very much, didn't know about that 🙂
Or does MinIO delay deletion somehow? Deleting a task via the web interface also does not result in deletion of debug samples on MinIO.
Or better some cache option. Otherwise the cron job is what I will use 🙂 Thanks again
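For reference, the cron job I have in mind would look roughly like this. This is just a sketch using boto3 against MinIO's S3 API; the endpoint, credentials, bucket name, and 30-day retention are all placeholders, not anything ClearML prescribes:
```
# cleanup_debug_samples.py -- rough sketch of the cron cleanup job.
# Endpoint, credentials, bucket, and retention window are placeholders.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.resource(
    "s3",
    endpoint_url="http://minio:9000",
    aws_access_key_id="minio-access-key",
    aws_secret_access_key="minio-secret-key",
)
bucket = s3.Bucket("clearml-debug-samples")
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

for obj in bucket.objects.all():
    # obj.last_modified is timezone-aware, so compare against an aware cutoff
    if obj.last_modified < cutoff:
        obj.delete()
```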
But this seems like something that is not related to ClearML 🙂 Anyways, thanks again for the explanations!
It is not explained there, but do you mean CLEARML_API_ACCESS_KEY: ${CLEARML_API_ACCESS_KEY:-} and CLEARML_API_SECRET_KEY: ${CLEARML_API_SECRET_KEY:-}?
I use fixed users!
The default behavior mimics Python's assert statement: validation is on by default, but is disabled if Python is run in optimized mode (via python -O). Validation may be expensive, so you may want to disable it once a model is working.
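Concretely, I understand that to mean behavior like this (a minimal illustration; validate_batch is a hypothetical stand-in for the expensive validation):
```
# assert_demo.py -- compare "python assert_demo.py" vs "python -O assert_demo.py"
def validate_batch(batch):
    # Disappears entirely under "python -O", just like any assert
    assert len(batch) > 0, "empty batch"
    return batch

# __debug__ is True by default and False in optimized mode, so expensive
# checks can be gated the same way the validation described above is
if __debug__:
    print("validation enabled")
else:
    print("validation disabled (running with -O)")

validate_batch([1, 2, 3])
```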
Thank you very much. I am going to try that.
Also, is max_workers about compression threads or upload threads or both?
Thank you very much!
@<1523701087100473344:profile|SuccessfulKoala55> I just did the following (everything locally, not with clearml-agent)
- Set my credentials and S3 endpoint to A
- Run a task with Task.init() and save a debug sample to S3
- Abort the task
- Change my credentials and S3 endpoint to B
- Restart the task
The result is lingering files in A that seem not to be associated with the task. I would expect ClearML to instead error the task or to track the lingering files somewhere, so they can ma...
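For reference, a minimal sketch of what I ran (project/task names and the S3 URI are placeholders; the endpoint and credentials come from clearml.conf):
```
# repro.py
import numpy as np
from clearml import Logger, Task

# output_uri points at endpoint A via the credentials in clearml.conf
task = Task.init(
    project_name="repro",
    task_name="lingering-debug-samples",
    output_uri="s3://bucket-on-endpoint-a",
)

# The debug sample is uploaded to whichever endpoint is configured right now
Logger.current_logger().report_image(
    title="debug",
    series="sample",
    iteration=0,
    image=np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8),
)

# Abort here, switch clearml.conf to endpoint B, then restart the task:
# the sample already uploaded to A is no longer referenced anywhere.
```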
Nvm. I forgot to start my agent with --docker. So here comes my follow-up question: it seems like there is no way to define that a Task requires docker support from an agent, right?
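(The closest thing I found is Task.set_base_docker, which requests an image when an agent runs in --docker mode but, as far as I can tell, is simply ignored by agents running without docker. Sketch below; the project, task, and image names are just examples.)
```
from clearml import Task

task = Task.init(project_name="my_project", task_name="needs-docker")
# Only honored by agents started with --docker; there seems to be no flag
# that makes docker support a hard requirement for the task
task.set_base_docker("nvidia/cuda:11.8.0-runtime-ubuntu22.04")
```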
I am also wondering how I integrate my (preexisting) main task in the pipeline. I start my main task like this: python my_script.py --myarg "myargs". How are the arguments captured? I am very confused about how to integrate this correctly...
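My current understanding, as a sketch (only --myarg comes from my actual script; the project and task names are assumed):
```
# my_script.py
import argparse

from clearml import Task

parser = argparse.ArgumentParser()
parser.add_argument("--myarg", type=str)

# Task.init() patches argparse, so the values parsed below are captured
# automatically and show up in the web UI under the task's hyperparameters
task = Task.init(project_name="my_project", task_name="my_experiment")
args = parser.parse_args()

print(args.myarg)
```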
- solves it. I did not know this was possible.
Okay, great! I just want to run the cleanup services; however, I am running into SSH issues, so I wanted to restart it to try to debug.
Maybe something like this is how it is intended to be used?
```
# run_with_clearml.py
from clearml import PipelineController, Task

def get_main_task():
    # Task.create takes project_name/task_name, not project/name
    task = Task.create(
        project_name="my_project",
        task_name="my_experiment",
        script="main_script.py",
    )
    return task

def run_standalone(task_factory):
    Task.enqueue(task_factory(), queue_name="default")

def run_in_pipeline(task_factory):
    pipe = PipelineController(...)
    pipe.add_step(preprocess, ...)
    pipe.add_step(base_task_factory=task_factory, ...)
    pipe.add_step(postprocess, ...)
    pipe.start()

if...
```
I mean if I do CLEARML_DOCKER_IMAGE=my_image clearml-task something something, it will not work, right?
With remote_execution it is command="[...]", but locally it is command='train', like it is supposed to be.
Hi @<1523701087100473344:profile|SuccessfulKoala55> Thank you very much.
Is there some way to verify the server uses the correct configuration files? (E.g. see it in the logs/web UI.) I just tried; it does not work.
At least I can see that the async_delete service complains about a missing secret, so I can start debugging there. I am using the same config as for my agents, but somehow it does not work for async_delete...
Now the pip packages seem to ship with CUDA, so this does not seem to be a problem anymore.
Is there a way to see the contents of /tmp/conda_envaz1ne897.yml? It seems to be deleted after the task is finished.
When is the base_task_factory called? At runtime or definition time?
Thank you. Yes, we need to wait for CARLA to spin up.
I mean, could my hard drive not become full at some point? Can clearml-agent currently detect this?
What I get for args when I print it locally is not the same as what ClearML extracts from args.
Is there a way to specify this on a per-task basis? I am running clearml-agent in docker mode, btw.
Locally it works fine.
Perfect, just what I always wanted. Looking forward to the MinIO version. Thank you :)