Answered
Hi Team, I am trying to run a pipeline remotely using ClearML Pipeline and I'm encountering some issues. Could anyone please assist me in resolving them?

Hi Team,

I am trying to run a pipeline remotely using ClearML pipeline and I’m encountering some issues. Could anyone please assist me in resolving them?

Issue 1: After executing the code, the pipeline is initiated on the “queue_remote_start” queue and the tasks of the pipeline are initiated on the “queue_remote” queue. However, the creation of the dataset failed because it couldn’t find the Python modules from the current directory.

Issue 2: I also attempted to use the same queue for both pipe.start and pipe.set_default_execution_queue. However, the tasks of the pipeline remained in the pending/queued state and didn’t proceed to the next step.

To run the pipeline remotely, I have created two different queues and assigned a worker to each using the following commands:

clearml-agent daemon --detached --create-queue --queue queue_remote
clearml-agent daemon --detached --create-queue --queue queue_remote_start

I then executed the following command to run the pipeline remotely:

python3 pipeline.py

The code for the Pipeline from Functions is as follows:

# Imports assumed by the snippet below: PipelineController and Task come from the clearml package,
# and constants.py sits in the same directory as pipeline.py. step_one, project_name,
# create_dataset_task_name and use_dummy_model_dataset are defined elsewhere in the original script.
from clearml import PipelineController, Task

import constants

# Create the PipelineController object
pipe = PipelineController(
    name="pipeline",
    project=project_name,
    version="0.0.2",
    add_pipeline_tags=True,
)

pipe.set_default_execution_queue('queue_remote')

pipe.add_function_step(
    name='step_one',
    function=step_one,
    function_kwargs={
        "train_file": constants.TRAINING_DATASET_PATH,
        "validation_file": constants.VALIDATAION_DATASET_PATH,
        "s3_output_uri": constants.CLEARML_DATASET_OUTPUT_URI,
        "dataset_project": project_name,
        "dataset_name": constants.CLEARML_TASK_NAME,
        "use_dummy_dataset": use_dummy_model_dataset,
    },
    project_name=project_name,
    task_name=create_dataset_task_name,
    task_type=Task.TaskTypes.data_processing,
)

pipe.start(queue="queue_remote_start")

Could anyone please provide a solution on how to successfully run the pipeline remotely? Any help would be greatly appreciated.

  
  
Posted 11 months ago

Answers 39


@<1626028578648887296:profile|FreshFly37> can you also share the logs of the task? They may give an idea.

  
  
Posted 11 months ago

I attached a screenshot of the logs earlier.

  
  
Posted 11 months ago

@<1626028578648887296:profile|FreshFly37> how are you running this locally in the first place?
If you are running pipeline.py with the cwd set to ev_xx_detection/clearml, then I would not expect from ev_xx_detection.clearml import constants to work (for example), but import constants directly would, as constants.py is in the same directory as pipeline.py. The reason your remote run doesn't work is basically this:
the cwd is ev_xx_detection/clearml and ev_xx_detection.clearml.constants is imported, but the module that should actually be imported is constants.
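
To make that concrete, here is a minimal sketch of the layout being discussed (only the two files named in the thread; everything else is assumed):

ev_xx_detection/
    clearml/
        constants.py
        pipeline.py

# Inside pipeline.py, run with the cwd at ev_xx_detection/clearml:
import constants                                  # works: constants.py sits next to pipeline.py
# from ev_xx_detection.clearml import constants   # would only work if the repository root were on sys.path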

  
  
Posted 10 months ago

@<1523701435869433856:profile|SmugDolphin23> Sure, thank you for the suggestion. I'll adjust the imports as you mentioned, execute the pipeline and check the functionality.

Locally I'm running it with python3 pipeline.py and using pipe.start_locally(run_pipeline_steps_locally=True) in the pipeline to start it, and it's working fine.
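
For reference, a minimal sketch of the two entry points being compared (the queue name is taken from the thread, everything else is assumed):

# Local debug run: the controller and all steps execute in the current process and environment.
pipe.start_locally(run_pipeline_steps_locally=True)

# Remote run: the controller is enqueued and each step is executed by a clearml-agent worker,
# which clones the repository and rebuilds the environment before running the step.
# pipe.start(queue="queue_remote_start")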

  
  
Posted 10 months ago

@<1523701435869433856:profile|SmugDolphin23> I have tried another approach by placing pipeline.py in the root directory of the code and executing “python3 pipeline.py”, but I still faced the same issue.

  
  
Posted 10 months ago

@<1523701435869433856:profile|SmugDolphin23> I have tried the method you suggested and the pipeline still failed, as it couldn't find the "modules". Could you please help me here?

I would like to describe the process I was following again:

  • I created a queue and assigned 2 workers to it (a sketch of this setup is shown after the list).
  • In the pipeline.py file, I used pipe.start(queue="queue_remote") to start the pipeline and pipe.set_default_execution_queue('queue_remote') for the tasks.
  • With the working directory set to ev_xxxx_xxtion/clearml, I executed the code using python3 pipeline.py.
  • The pipeline was initiated on the "queue_remote" queue on worker 01, and the next tasks were initiated on the "queue_remote" queue on worker 02, where they failed because worker 02 couldn't find the modules.
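
For clarity, the single-queue setup described above can be reproduced roughly like this (the first command is the one from earlier in the thread; the second simply attaches another worker to the existing queue):

clearml-agent daemon --detached --create-queue --queue queue_remote   # worker 01 (creates the queue)
clearml-agent daemon --detached --queue queue_remote                  # worker 02 (attaches to the same queue)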
  
  
Posted 10 months ago

@<1626028578648887296:profile|FreshFly37> I see that create_dataset doesn't have a repo set. Can you try setting it manually via the repo, repo_branch and repo_commit arguments of the add_function_step method?
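
For reference, a minimal sketch of that suggestion applied to the step from the original snippet (the repository URL, branch and the kwargs variable are placeholders, not values from the thread):

pipe.add_function_step(
    name='step_one',
    function=step_one,
    function_kwargs=step_one_kwargs,   # placeholder name for the same kwargs dict shown in the original snippet
    project_name=project_name,
    task_name=create_dataset_task_name,
    task_type=Task.TaskTypes.data_processing,
    # Tell the agent which repository to clone before executing the step:
    repo="https://example.com/org/ev_xx_detection.git",   # placeholder URL
    repo_branch="main",                                    # placeholder branch
    repo_commit=None,                                      # or pin a specific commit hash
)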

  
  
Posted 10 months ago

sure, I'll add those details & check. Thank you

  
  
Posted 10 months ago

Thank you @<1523701435869433856:profile|SmugDolphin23>, it is working now after adding the repo details to each task. It seems we need to specify the repo details in each task so that the worker can pull the code and execute the tasks.

  
  
Posted 10 months ago