
So I'm trying to run my pipeline file, which runs a pipeline locally and logs metrics and other outputs to the ClearML server
Also,
How do I just submit a pipeline to the server to be executed by an agent?
Currently I am able to use PipelineDecorator.run_locally() to run it;
However, I just want to push it to a queue and let the agent do its thing. Any recommendations?
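One option (a sketch, not a verified recipe — the queue name and pipeline function are placeholders) is to drop the run_locally() call and route the component tasks to a queue that a clearml-agent is consuming:

```python
def launch_pipeline():
    # Sketch: run a decorator-based pipeline through agents instead of locally.
    # Assumes `my_pipeline` is a function decorated with @PipelineDecorator.pipeline
    # and "default" is whatever queue your clearml-agent is listening on.
    from clearml.automation.controller import PipelineDecorator

    # Send component tasks to the queue, instead of calling
    # PipelineDecorator.run_locally() before invoking the pipeline.
    PipelineDecorator.set_default_execution_queue("default")
    # my_pipeline()  # calling the decorated function starts the controller
```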
Hey, thanks for the reply
I have another question ;
Are kwargs supported in functions decorated as pipeline components?
So I am able to access it by sending requests to the ClearML fileserver, but is there any way to access it from the dashboard (the main app)?
So the issue is that the model URL points to the file location on my machine;
Is there a way for me to point the model URL somewhere else?
I'm asking because my kwargs shows up as an empty dict when printed
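One possible reason (an illustration, not ClearML-specific behavior I've verified): frameworks that build a component's parameter set typically introspect the function signature, and **kwargs has no concrete parameter names to enumerate, whereas explicitly named keyword parameters do. A quick demonstration with the standard inspect module:

```python
import inspect

def with_kwargs(**kwargs):
    return kwargs

def with_named(lr=0.01, epochs=10):
    return {"lr": lr, "epochs": epochs}

# **kwargs appears as a single VAR_KEYWORD entry with no concrete names,
# so there is nothing for signature-based serialization to pick up.
print([p.kind.name for p in inspect.signature(with_kwargs).parameters.values()])
# → ['VAR_KEYWORD']

# Named parameters with defaults are fully enumerable.
print({n: p.default for n, p in inspect.signature(with_named).parameters.items()})
# → {'lr': 0.01, 'epochs': 10}
```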
Yep, the pipeline finishes but the status is still "running". Do we need to close a logger that we use for scalars, or anything like that?
I had initially just pasted the new credentials in place of the existing ones in my conf file;
Running clearml-init now fails at verifying credentials
A simple StorageManager.download_folder('url')
My MinIO instance is hosted locally on port 9000, so I'm providing a URL like http://localhost:9000/bucket-name
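For a locally hosted MinIO, the usual route is to register the endpoint in clearml.conf and then address it with an s3:// URL. A sketch of the relevant section (host, key, and secret values are placeholders):

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "localhost:9000"       # MinIO endpoint
                    key: "minio-access-key"      # placeholder
                    secret: "minio-secret-key"   # placeholder
                    multipart: false
                    secure: false                # plain http
                }
            ]
        }
    }
}
```

With that in place, something like StorageManager.download_folder('s3://localhost:9000/bucket-name/folder') should resolve against the local MinIO rather than AWS.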
Configuration completed now; it was a proxy issue from my end
However, running my pipeline from a different machine still gives me a problem
2022-07-12 13:41:39,309 - clearml.Task - ERROR - Action failed <400/12: tasks.create/v1.0 (Validation error (error for field 'name'. field is required!))> (name=custom pipeline logic, system_tags=['development'], type=controller, comment=Auto-generated at 2022-07-12 08:11:38 UTC by mbz1kor@BMH1125053, project=2ecfc7efcda448a6b6e7de61c8553ba1, input={'view': {}})
Traceback (most recent call last):
File "main.py", line 189, in <module>
executing_pipeline(
File "/home/mbz1kor...
We're initialising a task to ensure it appears on the experiments page;
Not doing so also gave us "Missing parent pipeline task" issues for a set of experiments we had done earlier
This issue was due to a wsl proxy problem; wsl’s host name couldn't be resolved by the server and that became a problem for running agents. It works fine on Linux machines so far, however.
So no worries :D
How do I provide a specific output path to store the model? (Say I want the server to store it in ~/models)
I'm training my model via a remote agent.
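A sketch of one way to control where the weights end up, assuming Task.init's output_uri argument (project name, task name, and the URI itself are placeholders):

```python
def init_training_task():
    # Sketch: direct model/artifact uploads to a chosen destination instead of
    # leaving URLs pointing at local paths on the training machine.
    from clearml import Task

    return Task.init(
        project_name="demo",      # placeholder
        task_name="train-model",  # placeholder
        # Any storage URI should work here, e.g. an s3://... bucket or a
        # shared filesystem path the server/agents can reach.
        output_uri="s3://localhost:9000/models",
    )
```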
Thanks to your suggestion I could log the model as an artefact (using PipelineDecorator.upload_model()) - but only the path is reflected; I can't seem to download the model from the server
Thanks for actively replying, David
Any update on the example for saving a model from within a pipeline (specifically in .pth or .h5 formats)?
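For reference, a rough sketch of the kind of thing being asked about, assuming a PyTorch model and ClearML's OutputModel (function and file names here are illustrative, not the official example):

```python
def save_and_register_model(model, task):
    # Sketch, assuming `model` is a torch.nn.Module and `task` is a clearml.Task.
    import torch
    from clearml import OutputModel

    weights_path = "model.pth"
    torch.save(model.state_dict(), weights_path)  # serialize weights to .pth

    # Register the file as an output model so the server stores a downloadable
    # copy instead of just a path on the training machine.
    output_model = OutputModel(task=task)
    output_model.update_weights(weights_filename=weights_path)
    return weights_path
```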
Hey, so I was able to get the local .py files imported by adding their folder to sys.path
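For anyone hitting the same import problem, this is the pattern that was meant ("pipeline_utils" is a placeholder for the actual folder holding the local modules):

```python
import os
import sys

# Prepend the folder that holds the local .py files so `import` can find them.
module_dir = os.path.abspath("pipeline_utils")
if module_dir not in sys.path:
    sys.path.insert(0, module_dir)

print(module_dir in sys.path)  # → True
```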