I am running the ClearML server on GCP, but I didn't expose the ports; instead I SSH into the machine and forward the ports to localhost. The problem is that localhost on my machine is not the same as localhost inside Docker on the worker. If I check a dataset, the files are registered as stored on localhost, but they actually aren't. I haven't found a solution yet for how to properly set the hostname for the fileserver. Any ideas?
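The piece I am trying to set correctly is the host section of clearml.conf; something along these lines (a sketch only: the address is a placeholder, the ports are the server defaults):
api {
    web_server: "http://<server-address>:8080"
    api_server: "http://<server-address>:8008"
    files_server: "http://<server-address>:8081"
}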
I checked the web UI: in the pipeline's execution section there are repo_url, commit id, etc., but in the task's execution section the repo_url field is blank.
Ok, thanks for the explanation. So the pipeline controller is in the Running state, while task 1 is in the Pending state. Would the solution be to add one more agent?
Thanks a lot! I don't have a problem executing the pipeline remotely, I have a problem executing it locally.
I have a GCP instance with the official ClearML image.
from clearml import StorageManager, Dataset

dataset = Dataset.create(
    dataset_project="Project", dataset_name="Dataset_name"
)

files = [
    'file.csv',
    'file1.csv',
]

for file in files:
    csv_file = StorageManager.get_local_copy(remote_url=file)
    dataset.add_files(path=csv_file)

# Upload dataset to ClearML server (customizable)
dataset.upload()
# commit dataset changes
dataset.finalize()
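If the dataset still registers its files under localhost, a variant I would try (an assumption, not a verified fix: it presumes the fileserver is reachable at an explicit address, and the URL below is a placeholder) is passing the destination to upload():

# point the upload at an explicitly reachable fileserver instead of the default
dataset.upload(output_url="http://<server-address>:8081")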
Hey, I had a similar problem. Take a look here: None
As I understood it, the pipeline controller is itself a task, and it blocks the queue. I solved the problem by adding one more agent.
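For reference, the extra agent is just another daemon listening on the same queue (the queue name "default" here is an assumption):

clearml-agent daemon --queue default --detached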
Here is a basic example:
from clearml import PipelineController


def step_function(param):
    print("Hello from function!")
    print("Param:", param)


if __name__ == '__main__':
    repo = ''
    repo_branch = 'main'
    working_dir = 'pipelines'

    pipe = PipelineController(
        name='Test',
        project='Test',
        version='0.0.1',
        add_pipeline_tags=False,
        repo=repo,
        repo_branch=repo_branch,
        working_dir=working_dir
    )
    p...
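The snippet is truncated above; it presumably continues by adding steps and starting the controller. A minimal sketch of that part (not the original code, just reusing the names defined above):

    pipe.add_function_step(
        name='step_one',
        function=step_function,
        function_kwargs=dict(param='test'),
    )
    # run everything locally for debugging; pipe.start() would enqueue it instead
    pipe.start_locally(run_pipeline_steps_locally=True)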
@CostlyOstrich36 any idea? 🙂
Is there a possibility that it was using Elastic before (through some logging driver) but that it has defaulted to the json-file (default) logging driver now?
I was digging around a bit; it seems that the worker containers use the default logging driver, i.e. they write JSON log files stored in /var/lib/docker/containers/<hash> folders. When I do up/down of docker compose, these container folders are purged, and with them my console log is gone.
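If it helps to double-check, the driver and log path of a running container can be read with docker inspect (the container name below is a placeholder):

docker inspect --format '{{.HostConfig.LogConfig.Type}}' <container_name>
docker inspect --format '{{.LogPath}}' <container_name>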
Looks like there was a problem with the Elasticsearch Docker container. Everything was fine after restarting the machine.
Looks like docker-compose down && docker-compose up flushes the console output. I upgraded the server to 1.16.2-502; I didn't have that problem before. Any idea?
Yes, there is one agent. As I said, I am able to execute a task, but have a problem with the pipeline.
One more question 🙂
How can I force ClearML not to install requirements before running a task? (I already have everything installed on the Docker machine.)
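One direction I'd look at (an assumption based on the agent options, not a verified recipe) is telling the agent to reuse the interpreter's existing packages via clearml.conf:

agent {
    package_manager {
        # reuse the packages already installed in the docker image's python
        system_site_packages: true
    }
}

As far as I know there is also a CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 environment variable that skips the environment setup entirely.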
I solved the problem by adding the container argument:
--network host
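In case someone wants to set that from code rather than the UI, a sketch (assuming a recent SDK where Task.set_base_docker accepts these keywords; the image name is a placeholder):

from clearml import Task

task = Task.init(project_name='Project', task_name='Task')
# ask the agent to run this task's container with host networking
task.set_base_docker(docker_image='python:3.10', docker_arguments='--network host')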
Didn't have that problem, sorry I can't help you. 😢