Answered
Hey folks, just wanted to highlight something weird in our ClearML pipeline usage:

  • When we run the pipeline locally using pipe.start_locally(), it picks up the tasks and executes them without any issues. The working directory is set to wherever the Python script containing the pipeline was invoked, and the entrypoint is picked up correctly.
  • When I run the same script using pipe.start(), the pipeline reads the entrypoint as the script and the working directory as the root of the git repo, as per the documentation. However, the first task remains in a queued state and doesn't run at all. We are using pipelines from functions (we noticed that ClearML treats each function as a separate task by creating a different file and running that particular file). What is going wrong here?
  
  
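For context, a minimal sketch of the two launch modes described above (project, pipeline, step, and queue names here are illustrative assumptions, not taken from the post; running it requires a configured ClearML server):

```python
def make_dataset():
    # Each function step becomes its own ClearML task when built from functions.
    return list(range(5))


def sum_dataset(data):
    return sum(data)


def build_and_run_pipeline():
    """Sketch only: builds the controller and launches it.
    Requires clearml to be installed and configured against a server."""
    from clearml import PipelineController

    pipe = PipelineController(
        name="demo-pipeline", project="examples", version="1.0.0"
    )
    pipe.add_function_step(
        name="make_dataset",
        function=make_dataset,
        function_return=["data"],
        execution_queue="my_queue",  # hypothetical queue name
    )
    pipe.add_function_step(
        name="sum_dataset",
        function=sum_dataset,
        function_kwargs={"data": "${make_dataset.data}"},
        execution_queue="my_queue",
    )

    # Local debugging: steps run on this machine as subprocesses.
    pipe.start_locally(run_pipeline_steps_locally=True)

    # Remote execution (instead of the line above): the controller is
    # enqueued, and each step is pushed to its execution_queue, where a
    # clearml-agent must be listening for anything to leave the queued state.
    # pipe.start(queue="services")
```

Call `build_and_run_pipeline()` from the entrypoint script once ClearML is configured; with `pipe.start()`, a worker must be consuming each step's queue or the steps stay queued.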
Posted 7 months ago

Answers 3


  • Yes, in this scenario both the agent and the code were on the same machine.
  • Yes, assigning the queue to something other than default was a change we made after some debugging.
  • We verified in the ClearML UI that the queue the task was being assigned to wasn't default.
  • The pipeline only worked through remote execution when the entrypoint script was at the root of the git repo (which kept getting picked up as the working directory).
  
  
Posted 7 months ago

Hi @<1523701132025663488:profile|SlimyElephant79> , are you running both from the same machine? Can you share the execution tab of both pipeline controllers?

Also, the reason they are in a queued state is that no worker is picking them up. You can control the queue each step is pushed to; I think by default they are sent to the 'default' queue.
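As a sketch of controlling the queues (queue, project, and step names below are hypothetical; a real run needs a configured ClearML server):

```python
def train():
    # Placeholder step body for illustration.
    return "done"


def launch_remotely():
    """Sketch only: set an explicit queue per step and a fallback queue,
    then enqueue the controller itself. Requires a ClearML server."""
    from clearml import PipelineController

    pipe = PipelineController(name="demo", project="examples", version="1.0.0")
    pipe.set_default_execution_queue("default")  # fallback for steps
    pipe.add_function_step(
        name="train",
        function=train,
        execution_queue="gpu_queue",  # a clearml-agent must listen on this queue
    )
    # The controller task is enqueued too -- commonly to 'services'.
    pipe.start(queue="services")
```

A worker has to be consuming each of those queues, e.g. by starting `clearml-agent daemon --queue gpu_queue` on the machine that should run the step.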

  
  
Posted 7 months ago

@<1523701070390366208:profile|CostlyOstrich36> any thoughts?

  
  
Posted 7 months ago