Answered
What is the difference between deploying trains-agents on different machines listening to queues, and the Trains Agent Services mode?

I've been working a bit with trains-agent, having agents deployed on different machines listening to queues (docker mode), and it's been working well so far.

My question is: what is the difference between that setup (creating agents on different machines and attaching them to queues) and the Trains Agent Services mode?
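For context, the setup described above can be started with something like the following (the queue name and base image here are illustrative placeholders; check `trains-agent daemon --help` for the exact flags):

```
# On each machine: start an agent in docker mode, attached to a queue.
# Queue name and docker image below are placeholders for illustration.
trains-agent daemon --queue my_gpu_queue --docker nvidia/cuda:10.1-runtime-ubuntu18.04
```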

  
  
Posted 4 years ago

Answers 10


Or is it the same place in the config file that is used for configuring the docker-mode agent base image?

  
  
Posted 4 years ago

Yeah, but I don't get what it's for. For now I have two agents, each listening to some queues; I've actually ignored the "services" queue until now.

I don't get the difference between how I'm using my agents now (just starting them on machines and making them listen to queues) and using the "services" mode.

  
  
Posted 4 years ago

WackyRabbit7 It is conceptually different from actually training, etc.

The services agent is mostly one without a GPU; it runs several Tasks, each in its own container, for example the autoscaler, or the orchestrators for our hyperparameter optimization and/or pipelines. I think it even runs (by default?) on the same hardware as the trains-server.

Also, if I'm not mistaken, some people are using it (or planning to) to push models to production.

I wonder if anyone else can share their view since this is a relatively new feature (AHEM)

  
  
Posted 4 years ago

It's just another flag when running trains-agent.
You can have multiple services-mode instances; there is no actual limit 🙂
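A sketch of the two invocations, with flag names as I recall them from the trains-agent docs (verify against `trains-agent daemon --help`):

```
# Regular docker-mode agent: 1-1, one Task at a time
trains-agent daemon --queue default --docker

# Services-mode agent: 1-N, each Task runs in its own container
# while the agent immediately pulls the next one; typically CPU-only
trains-agent daemon --services-mode --queue services --docker --cpu-only
```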

  
  
Posted 4 years ago

Sorry, I still don't get it. When I'm launching an agent with the --docker flag or with the --services-mode flag, what is the difference? Can I use both flags together? What does each one mean? 🤔

  
  
Posted 4 years ago

👍

  
  
Posted 4 years ago

It's built in 🙂 and it's for... "Services"
https://github.com/allegroai/trains-server#trains-agent-services--

  
  
Posted 4 years ago

Oh, I get it. That also makes sense with the docs directing this at inference jobs and avoiding GPUs, because of the 1-N thing.

  
  
Posted 4 years ago

WackyRabbit7
The regular trains-agent modus operandi is one job at a time (i.e. until the Task is done, no other Tasks will be pulled from the queue).

When adding --services-mode, it is not 1-1 but 1-N, meaning a single trains-agent will launch as many Tasks as it can.
The trains-agent pulls a job from the queue, spins up a docker container (only dockers are supported for the time being), and lets the job run in the background (the job itself is registered as another "worker" in the system). Then the trains-agent pulls the next job from the queue.
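The 1-1 vs 1-N difference can be mimicked in plain shell (no trains-agent involved; `sleep` stands in for a Task, and `&` stands in for spinning the container into the background):

```shell
start=$(date +%s%N)
# 1-1: each task must finish before the next is pulled
for t in a b c; do sleep 0.1; done
serial=$(( $(date +%s%N) - start ))

start=$(date +%s%N)
# 1-N: each task is launched into the background immediately,
# and the loop moves straight on to the next one
for t in a b c; do sleep 0.1 & done
wait
parallel=$(( $(date +%s%N) - start ))

echo "serial=${serial}ns parallel=${parallel}ns"
```

The three background sleeps overlap, so the second loop finishes in roughly a third of the time of the first.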

  
  
Posted 4 years ago

Does the services mode have a separate configuration option for the base image?
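For reference, the default base image can usually be set in the agent's config file; an illustrative trains.conf fragment (key names as I recall them, verify against your own config file):

```
agent {
    default_docker {
        # base image used when the Task does not specify one
        image: "nvidia/cuda:10.1-runtime-ubuntu18.04"
    }
}
```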

  
  
Posted 4 years ago