Answered
Hello ClearML, I Am Curious To Know How The ClearML PipelineController Knows The Server IP

Hello ClearML,

I am curious to know how the ClearML PipelineController knows the IP of the ClearML server. It seems to know the right IP of the task server when it registers a pipeline, but I am not sure how.

```python
from clearml import PipelineController

# create the pipeline controller
pipe = PipelineController(
    project='rudolf',
    name='Pipeline functions as pipelines',
    version='1.1',
    add_pipeline_tags=False,
)

# set the default execution queue to be used (per step we can override the execution)
pipe.set_default_execution_queue('rudolf')

# add pipeline components
pipe.add_parameter(
    name='url',
    description='url to pickle file',
    default=' '
)
pipe.add_function_step(
    name='step_one',
    function=step_one,
    function_kwargs=dict(pickle_data_url='${pipeline.url}'),
    function_return=['data_frame'],
    cache_executed_step=True,
)
pipe.add_function_step(
    name='step_two',
    # parents=['step_one'],  # the pipeline will automatically detect the dependencies based on the kwargs inputs
    function=step_two,
    function_kwargs=dict(data_frame='${step_one.data_frame}'),
    function_return=['processed_data'],
    cache_executed_step=False,
)
pipe.add_function_step(
    name='step_three',
    # parents=['step_two'],  # the pipeline will automatically detect the dependencies based on the kwargs inputs
    function=step_three,
    function_kwargs=dict(data='${step_two.processed_data}'),
    function_return=['model'],
    cache_executed_step=True,
)

# For debugging purposes, run the pipeline on the current machine;
# use run_pipeline_steps_locally=True to also execute the pipeline component Tasks as subprocesses.
pipe.start_locally(run_pipeline_steps_locally=False)

# Start the pipeline on the services queue (remote machine, default on the clearml-server)
pipe.start(queue="rudolf")

print('pipeline completed')
```
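The `step_one`/`step_two`/`step_three` functions referenced by `add_function_step()` are not shown in the post. A minimal sketch of what such functions might look like — the names and keyword-argument names match the pipeline calls above, but the bodies are hypothetical placeholders, not the poster's actual code:

```python
# Hypothetical stand-ins for the pipeline step functions.
# Each returns the value named in the corresponding function_return list.

def step_one(pickle_data_url: str):
    # would normally download and unpickle a dataset from the URL
    data_frame = {"url": pickle_data_url, "rows": [1, 2, 3]}
    return data_frame

def step_two(data_frame):
    # would normally clean / transform the data
    processed_data = [r * 2 for r in data_frame["rows"]]
    return processed_data

def step_three(data):
    # would normally train a model on the processed data
    model = {"trained_on": len(data)}
    return model
```

When the controller runs, each step executes as its own Task and the `${step_one.data_frame}`-style references are resolved to the previous step's returned artifacts.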

Posted one year ago

Answers 8


so it's not auto-generated. What's the spec of this conf file?

Posted one year ago

you are amazing guys! thank yall

Posted one year ago

Hi AverageRabbit65,
Any task (including pipeline steps) is always executed either on a machine with a clearml.conf file, or by an agent which already has the ClearML server address
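For reference, the server addresses live in the `api` section of that clearml.conf file. A minimal sketch (hosts and keys here are placeholders for a default local deployment; adjust to your own server):

```
api {
    web_server: http://localhost:8080
    api_server: http://localhost:8008
    files_server: http://localhost:8081
    credentials {
        "access_key" = "YOUR_ACCESS_KEY"
        "secret_key" = "YOUR_SECRET_KEY"
    }
}
```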

Posted one year ago

😄

Posted one year ago

it is basically auto-generated when you run clearml-init
there are a bunch of optional configuration settings that are not in the auto-generated file though.
Have a look here, it is pretty detailed: https://clear.ml/docs/latest/docs/configs/clearml_conf
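By default the SDK looks for the file at `~/clearml.conf`, and the `CLEARML_CONFIG_FILE` environment variable can point it elsewhere (both are covered in the linked configuration page). A quick sketch to see which path would be picked up:

```python
import os

# Resolve the clearml.conf path the way the docs describe it:
# CLEARML_CONFIG_FILE wins if set, otherwise ~/clearml.conf is used.
conf_path = os.environ.get(
    "CLEARML_CONFIG_FILE",
    os.path.expanduser("~/clearml.conf"),
)
print(conf_path)
print("exists:", os.path.isfile(conf_path))
```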

Posted one year ago

I see! SuccessfulKoala55 what is the right way to configure it? Via vim, or is there any command-line tool?

Posted one year ago

AverageRabbit65 you can see the full process and how to create the configuration file here: https://clear.ml/docs/latest/docs/clearml_agent

Posted one year ago

AverageRabbit65
Any tool that lets you edit a text file will do; I personally use nano. Note that the indentation is not crucial, so any tool, GUI or CLI, will be fine

Posted one year ago
572 Views