More of Pushing ClearML to Its Data Engineering Limits


The dark theme you have

It's this Chrome extension! I forget it's even on sometimes. It gives you a keyboard shortcut to toggle dark mode on any website. I love it.

Success! Wow, so this means I can use ClearML training/inference pipelines as part of AWS Step Functions!

My plan is to have an AWS Step Functions state machine (DAG) that treats running a ClearML job as one step (task) in the DAG.
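One way to wire that ClearML step into the DAG is the SQS sendMessage.waitForTaskToken service integration, which is what makes the aws_sfn_task_token callback flow possible. Here's a rough sketch of the state-machine side; the queue URL, role ARN, state name, and timeout are placeholders I made up, not values from a real setup:

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# The ClearML step is a Task state using the SQS "wait for task token"
# integration: Step Functions puts the message (including the task token from
# the execution context) on the queue, then pauses that branch of the
# execution until something calls SendTaskSuccess or SendTaskFailure
# with that token.
definition = {
    "StartAt": "RunClearMLPipeline",
    "States": {
        "RunClearMLPipeline": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sqs:sendMessage.waitForTaskToken",
            "Parameters": {
                "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/clearml-jobs",
                "MessageBody": {
                    "pipeline_id.$": "$.pipeline_id",
                    "s3_paths.$": "$.s3_paths",
                    "aws_sfn_task_token.$": "$$.Task.Token",
                },
            },
            "TimeoutSeconds": 86400,  # give up if the pipeline never reports back
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="clearml-pipeline-dag",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/placeholder-sfn-role",
)
```

The "place the inputs onto a Retry queue" idea from the failure case below could then hang off a Catch on that state.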

So I'd:

  • Create an SQS queue with a lambda worker
  • Place JSON events onto the queue, whose contents are the parameters that the training/serving pipeline should take, plus the ID of the pipeline. Something like {"pipeline_id": "...", "aws_sfn_task_token": "....", "s3_paths": [...]}. That aws_sfn_task_token can be sent back via boto3 to signal to AWS whether the task succeeded or failed; boto3 has API calls called SendTaskSuccess and SendTaskFailure.
  • Trigger the lambda to read off a message and use the ClearML SDK to remotely start the job, passing the JSON event as one or more parameters (see the sketch after this list).
  • (Failure case): If the pipeline task fails, have the trigger react to that and use boto3 to emit a SendTaskFailure API call with the aws_sfn_task_token. The state machine could react to that and place the inputs onto a Retry queue.
  • (Success case): If the pipeline task succeeds, have the trigger react to that and use boto3 to emit a SendTaskSuccess call, with the appropriate celebratory handling of that 🎉
    Note: The event on the queue could include a UUID that acts as a "Trace ID", too. That way, if I can figure out how to use the awslogs driver for the docker containers run by the ClearML workers, then we can correlate all logs in the greater DAG (assuming all logs contain the Trace ID)
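To make the Lambda/trigger side concrete, here's a minimal sketch under a few assumptions: the pipeline controller task exposes Args/s3_paths and Args/aws_sfn_task_token parameters, and the ClearML queue is called "default". None of those names come from a real setup; they're just illustrative.

```python
import json

import boto3
from clearml import Task

sfn = boto3.client("stepfunctions")


def lambda_handler(event, context):
    """SQS-triggered Lambda: start one ClearML pipeline run per message."""
    for record in event["Records"]:
        body = json.loads(record["body"])

        # Clone the pipeline controller task and inject the run parameters,
        # including the Step Functions task token, so whatever reacts to the
        # finished run can report back to the state machine.
        template = Task.get_task(task_id=body["pipeline_id"])
        run = Task.clone(source_task=template, name="sfn-triggered pipeline run")
        run.set_parameters({
            "Args/s3_paths": json.dumps(body["s3_paths"]),
            "Args/aws_sfn_task_token": body["aws_sfn_task_token"],
        })
        Task.enqueue(run, queue_name="default")


def report_result(task_token: str, succeeded: bool, detail: str = "") -> None:
    """Callback for the success/failure cases above (e.g. from a ClearML trigger)."""
    if succeeded:
        sfn.send_task_success(taskToken=task_token, output=json.dumps({"detail": detail}))
    else:
        sfn.send_task_failure(taskToken=task_token, error="PipelineFailed", cause=detail)
```

The split into lambda_handler and report_result mirrors the list above: starting the run and reporting its outcome are two separate reactions, tied together only by the task token carried in the message.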
  
  