Answered
Hello, I have a basic question about the purpose of ClearML. I want it to receive tasks from the front end, like this:

Hello, I have a basic question about the purpose of ClearML.
I want it to receive tasks from the front end, like this:

from fastapi import FastAPI
import uvicorn
from clearml import PipelineController

app = FastAPI()

@app.post("/create_task")
def create_task():
    pipe = PipelineController(...)
    ...
    pipe.add_function_step(name="number_one")

    # pipe.start_locally(run_pipeline_steps_locally=True)
    pipe.start(queue='my-worker-queue')

    return {"message": "Task created successfully"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)

It does run with pipe.start_locally(...),
but with pipe.start(queue='my-worker-queue') I have problems.

Is it the case that ClearML was not designed for integration with back-end servers that receive tasks?

  
  
Posted 3 months ago

Answers 10


For now I can do that by spawning a subprocess like this:

import os
from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn

class InputData(BaseModel):
    ...

app = FastAPI()

@app.post("/create_task")
def create_task(input_data: InputData):
    # blocks until the pipeline script exits
    os.system("python main-pipeline.py")

    return {"message": "Task created successfully"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
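A note on the snippet above: os.system blocks the request handler until the whole pipeline script finishes and silently ignores failures. A minimal non-blocking sketch using subprocess.Popen instead (the script path main-pipeline.py is taken from the snippet; the helper name is hypothetical):

```python
import subprocess
import sys

def launch_pipeline(script: str = "main-pipeline.py") -> int:
    # Popen returns immediately, so the HTTP handler can respond while the
    # pipeline script keeps running in the background
    proc = subprocess.Popen([sys.executable, script])
    return proc.pid
```

In the endpoint, launch_pipeline() would replace the os.system call, and the returned PID (or a ClearML task ID reported by the script itself) could be sent back to the client.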
  
  
Posted 3 months ago

@<1523701070390366208:profile|CostlyOstrich36>
[image attachment]

  
  
Posted 3 months ago

I'm trying to build a back-end app that receives JSONs with inference tasks.

  
  
Posted 3 months ago

It is starting the uvicorn server on the agent :)

  
  
Posted 3 months ago

I'm expecting that a JSON POST request will spawn a task on the ClearML server/worker.

  
  
Posted 3 months ago

@<1736556867255013376:profile|ImpressionableElk3> , What issues are you having exactly? Can you attach logs?

  
  
Posted 3 months ago

I see. Can you please elaborate on your use case a bit? What are you trying to achieve? Are the servers supposed to be persistent until aborted?

  
  
Posted 3 months ago

In that case you might be interested in the serving module of ClearML.
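For the "JSON in, inference out" use case, clearml-serving exposes deployed models behind HTTP endpoints, so the front end would POST directly to them. A hedged client-side sketch using only the standard library; the host, port, and /serve/<endpoint> URL pattern are assumptions based on a typical clearml-serving deployment, and the endpoint name is a placeholder:

```python
import json
import urllib.request

def build_inference_request(host: str, endpoint_name: str, payload: dict) -> urllib.request.Request:
    # clearml-serving typically exposes models under /serve/<endpoint>;
    # host, port, and endpoint name are placeholders for your deployment
    url = f"http://{host}:8080/serve/{endpoint_name}"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending it would then be urllib.request.urlopen(build_inference_request(...)), with the model's prediction coming back as JSON.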

  
  
Posted 3 months ago

@<1523701070390366208:profile|CostlyOstrich36> thanks a lot, I'll check it out right now.

  
  
Posted 3 months ago

Might make life easier 🙂

  
  
Posted 3 months ago