Hi there, maybe this was already asked but I don't remember:
Would it be possible to have the clearml-agent switch between docker mode and virtualenv mode at runtime, depending on the experiment's Image property? This seems intuitive:
- If the property is empty, the user wants to run with virtualenv
- If the property is not empty, the user wants to run with a specific docker image

For now I had to adapt the autoscaler and keep duplicate queues, with and without docker, to support both.
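Roughly what my current workaround looks like, as a sketch (the project, task, queue names and the image below are made up, and it assumes two agents are running, one started with --docker and one without):

```python
# Sketch of the duplicate-queue workaround (hypothetical names throughout).
# Two agents are running, e.g.:
#   clearml-agent daemon --queue training_docker --docker   # docker mode
#   clearml-agent daemon --queue training_venv              # virtualenv mode
from clearml import Task

task = Task.init(project_name="examples", task_name="train")

docker_image = "python:3.9-bullseye"  # empty / None -> run in virtualenv

if docker_image:
    task.set_base_docker(docker_image)  # fills the experiment's Image property
    task.execute_remotely(queue_name="training_docker")
else:
    task.execute_remotely(queue_name="training_venv")
```

What I'd like instead is for a single agent to pick docker vs virtualenv on its own, based on whether that Image property is set.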

  
  
Posted one year ago

Answers 6


Hi JitteryCoyote63, I don't believe this is possible. You might want to open a GitHub feature request for this.

I'm curious, what is the use case? Why not set some default Python docker image at the agent level, and then, when you need a specific image, put it in the experiment configuration?
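Something along these lines (just a sketch; the image names and project/task names are examples):

```python
# Sketch: the agent is started with a default image, e.g.
#   clearml-agent daemon --queue default --docker python:3.9-bullseye
# and only experiments that need something special override it.
from clearml import Task

task = Task.init(project_name="examples", task_name="train-with-custom-image")

# Per-experiment override of the agent's default image
# (hypothetical image, use whatever your training needs):
task.set_base_docker("nvcr.io/nvidia/pytorch:22.12-py3")
```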

  
  
Posted one year ago

If you use a lightweight image like python:3.9-bullseye, the setup time is really negligible compared to how long the training takes, especially if the image is already on the machine.

  
  
Posted one year ago

Yeah, so I assume that training my models inside docker will be slightly slower, so I'd like to avoid it. For everything else, using docker is convenient.

  
  
Posted one year ago

I guess that's a good point, but it's really only applicable if your training is CPU intensive. If your training is GPU intensive, I guess most of the load goes to the GPU, so running on a VM (EC2 instances, for example) shouldn't make much of a difference. But this is worth testing.

I found this article talking about performance
https://blog.equinix.com/blog/2022/01/04/3-reasons-why-you-should-consider-running-containers-on-bare-metal/

But it doesn't really say what the difference in performance is.
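One quick way to test it yourself (just a sketch; it assumes PyTorch, a CUDA GPU, and made-up script and image names): time a fixed GPU workload once on the bare VM and once inside a container, then compare the wall-clock numbers.

```python
# bench.py - hypothetical micro-benchmark to compare bare VM vs docker.
# Run it once directly and once inside a container (an image with PyTorch + CUDA), e.g.:
#   python bench.py
#   docker run --gpus all --rm -v "$PWD":/work -w /work <your-training-image> python bench.py
import time
import torch

def main():
    x = torch.randn(4096, 4096, device="cuda")
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(200):
        _ = x @ x  # result is discarded; we only care about wall-clock time
    torch.cuda.synchronize()
    print(f"elapsed: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    main()
```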

Maybe SuccessfulKoala55, JuicyFox94 or AgitatedDove14 might have some input on this interesting point.

  
  
Posted one year ago

JitteryCoyote63, let me just add that while this is indeed an interesting feature (and completely possible to add, needless to say), I think your description is a bit too simplistic. I've come across many people who want to run their experiments in docker but don't want to specify an image on each experiment, usually because they always use the same docker image and set it as the agent's default image 🙂

  
  
Posted one year ago

What about the overhead of running the training in docker on a VM?

  
  
Posted one year ago