Hi, we are using ClearML for our experiment tracking, but we are now also investigating the pipeline functionality for scheduling. We also want to be able to trigger a pipeline run when there is new data in an external database. Is this possible? From wh…


@<1523701070390366208:profile|CostlyOstrich36> , a quick follow-up: I've been looking at the ClearML API documentation to see how to trigger a pipeline via the API. Do you use queues and add_task, as specified here: None ?
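For reference, one way the trigger flow could look with the SDK: a minimal sketch, assuming a template pipeline task already exists on the server and a queue (here "services", an assumed name) is being watched by a ClearML agent. None of these names come from the original post.

```python
def trigger_pipeline(template_task_id: str, queue_name: str = "services") -> str:
    """Clone a template pipeline task and enqueue the clone for execution.

    Sketch only: `template_task_id` and `queue_name` are assumptions, and the
    call requires a reachable ClearML server plus an agent on the queue.
    """
    # Imported inside the function so the sketch can be inspected
    # without a configured ClearML connection.
    from clearml import Task

    # Clone the template so the original task stays reusable as a template.
    cloned = Task.clone(source_task=template_task_id, name="triggered pipeline run")

    # Enqueue the clone; the agent listening on the queue will execute it.
    Task.enqueue(cloned, queue_name=queue_name)
    return cloned.id
```

An external system could call `trigger_pipeline(...)` whenever it detects new data, instead of going through the raw REST API.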

Here is an example of the pipeline code, simplified:

"""Forecasting Pipeline"""

from clearml.automation.controller import PipelineDecorator
from clearml import TaskTypes

@PipelineDecorator.component(cache=True, task_type=TaskTypes.data_processing)
def project_pipeline(config_path: str):
    """
    Pipeline steps

    Args:
        config_path (str): Path to config file
    """

    from clearml_pipeline.modeling_utils import generate_predictions
    from loguru import logger

    try:
        results = generate_predictions(config_path)

    except Exception as e:
        logger.error(f"{e}")


@PipelineDecorator.pipeline(
    name="pipeline", project="project_name", version="0.0.1"
)
def executing_pipeline(config_path: str):
    """Decorator for executing the pipeline"""

    project_pipeline(config_path)


if __name__ == "__main__":

    PipelineDecorator.run_locally()

    executing_pipeline("clearml_pipeline/config/ml_config.yaml")
  
  
Posted 11 months ago
0 Answers