And in the web UI, the artifacts section is still empty.
Give me 5 min and I'll send the full log.
Is there a simple way to get the response of the MinIO instance? Then I could verify whether the problem is the MinIO instance or my client.
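A minimal sketch of one way to check this directly, assuming boto3 is installed; the endpoint and credentials below are placeholders for your own deployment:
` import boto3

# Placeholder endpoint and credentials for the MinIO deployment.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

# If this call returns, the MinIO instance itself is responding;
# an exception here points at the server or the connection, not the client code.
print(s3.list_buckets()["Buckets"]) `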
Yep, I will add this as an issue. Btw: Should I rather post the kind of questions I am asking as an issue or do they fit better here?
What I get for `args` when I print it locally is not the same as what ClearML extracts from `args`.
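A minimal sketch of the comparison, assuming an argparse-based script (as far as I understand, ClearML auto-connects argparse once `Task.init` runs); the argument and names are placeholders:
` import argparse
from clearml import Task

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.001)  # placeholder argument

task = Task.init(project_name="debug", task_name="args-check")  # placeholder names
args = parser.parse_args()

# Compare this local output against the Args section ClearML shows in the UI.
print(args) `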
Hard to answer now. I just wiped everything and reinstalled. If I encounter this problem again, I will investigate further.
` apiserver:
    command:
      - apiserver
    container_name: clearml-apiserver
    image: allegroai/clearml:latest
    restart: unless-stopped
    volumes:
      - /opt/clearml/logs:/var/log/clearml
      - /opt/clearml/config:/opt/clearml/config
      - /opt/clearml/data/fileserver:/mnt/fileserver
    depends_on:
      - redis
      - mongo
      - elasticsearch
      - fileserver
      - fileserver_datasets
    environment:
      CLEARML_ELASTIC_SERVICE_HOST: elasticsearch
      CLEARML_...
Maybe let's put it in a different way:
Pipeline: Preprocess Task → Main Task → Postprocess Task
My main task is my experiment, i.e. my training code. When I ran the main task standalone, I just used `Task.init` and set the project name, task name, etc.
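For reference, a minimal sketch of that standalone setup, with placeholder names:
` from clearml import Task

# Placeholder project/task names; the real ones come from my setup.
task = Task.init(project_name="rlad/experiments", task_name="main-task")

# ... training code ... `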
Now what I could do is push this task to the server, then just reference the task by its task-ID and run the pipeline. However, I do not want to push the main task to the server before running it. Instead I want to push the whole pipeline, but st...
AgitatedDove14 I have the problem that "debug samples" are no longer shown after running many iterations. What's appropriate to use here? A colleague told me that increasing `task_log_buffer_capacity` worked. Is this the right way? What is the difference to `file_history_size`?
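For context, I'm assuming both keys live under `sdk` in clearml.conf as in the default config; a minimal sketch with illustrative values:
` sdk {
    development {
        # how many log records are buffered client-side before flushing
        task_log_buffer_capacity: 512
    }
    metrics {
        # how many debug-sample files are kept per title/series before
        # older ones start being overwritten
        file_history_size: 100
    }
} `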
So with pipeline decorators can I implement this logic?
Or there should be an early error when trying to run conda-based tasks on pip agents.
I mean, could my hard drive not become full at some point? Can clearml-agent currently detect this?
When you say it is an SDK parameter, this means that I only have to specify it on the computer where I start the task from, right? So a clearml-agent would read this parameter from the task itself.
As in if it was not empty it would work?
The problem is that ClearML installs cudatoolkit=11.0, but cudatoolkit=11.1 is needed. By setting agent.cuda_version=11.1 in clearml.conf, the correct version is used and everything installs fine. With version 11.0, conda resolves the conflicts by installing the CPU version of PyTorch.
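Concretely, this is the override I set; a minimal clearml.conf sketch (the key is the one named above, the exact value format may differ in your config):
` agent {
    # pin the CUDA version used for package resolution instead of the
    # auto-detected 11.0
    cuda_version: "11.1"
} `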
Locally it works fine.
I have a carla.egg file on my local machine and on the worker, which I include with `sys.path.append` before I can do `import carla`. It is the same procedure on my local machine and on the clearml-agent worker.
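A minimal sketch of that procedure, with a placeholder path for the egg:
` import sys

# Placeholder location; the real path differs between my machine and the worker.
sys.path.append("/path/to/carla.egg")

import carla  # resolved from the appended egg `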
Is this really working for you guys? I have no clue what's wrong. Seems so unlikely that my code works with artifacts, datasets, but not logging...
I will create a minimal example.
@<1576381444509405184:profile|ManiacalLizard2> Yea, that makes sense. However, my problem is that I do not want to set it on the remote clearml-agent, since every user may have a different storage backend. E.g. one user pushes to Azure, while another one pushes to S3.
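What I'd like instead is to set it per task from the SDK side; a minimal sketch, assuming `output_uri` on `Task.init` is the parameter in question and with placeholder URIs:
` from clearml import Task

task = Task.init(
    project_name="rlad/experiments",  # placeholder names
    task_name="main-task",
    # each user points to their own storage, e.g. S3 ...
    output_uri="s3://my-bucket/clearml",
    # ... or Azure: output_uri="azure://my-container/clearml"
) `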
Artifact Size: 74.62 MB
Okay, I see. Unfortunately, I don't get how ClearML tasks are intended to be used. Could you help me with that? (see code)
` from clearml import PipelineController

def start_carla_factory():
    task = ...  # How do I create this task?
    long_blocking_call_to_start_carla()
    return task

pipe = PipelineController(
    name="carla-autostart",
    project="rlad/carla-servers",
    version="0.0.1",
    add_pipeline_tags=False,
)
pipe.add_step(name="start-carla", base_task_factory=start_carla_factory)
pipe.start() `
I created an issue on using conda as package manager: https://github.com/allegroai/clearml-agent/issues/44
Is this not something completely different?
This will just change the way the local repository is analyzed, but nothing about the agent.
I have no idea whether it is a user error or because of the clearml-server update...