I Have Set

I have set

export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=true
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=true

in my entrypoint.sh (which runs clearml-agent daemon --queue $QUEUES --create-queue --cpu-only --foreground)

but it appears that tasks still take a long time to set up environments. I expected the whole process to be skipped and the preinstalled python deps in the docker image (which is running this entrypoint script) to be used.

From task pickup to actually running the python file can take several minutes... which is longer than some of the tasks themselves take.
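
For reference, the relevant part of my entrypoint.sh looks roughly like this ($QUEUES comes from the container environment):

#!/bin/bash
# skip venv creation / pip install; use the image's preinstalled python deps
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=true
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=true
# start a cpu-only worker on the given queue(s)
clearml-agent daemon --queue $QUEUES --create-queue --cpu-only --foreground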

  
  
Posted 4 months ago

Answers 54


I'm just working on speeding up the time from "queue experiment" to "my code actually runs remotely" - as of yesterday things would sit for many minutes at a time. Trying to see if the venv setup is the culprit.

  
  
Posted 4 months ago

oh yes. From "Using env ..." until the next message is 2 minutes.

  
  
Posted 4 months ago

the timestamps were all that mattered in those.

  
  
Posted 4 months ago

Normally, when a new package needs to be installed, it shows up in the Console tab.

  
  
Posted 4 months ago

Hi guys, just curious here, what was the final issue?
Also out of curiosity, what does this mean: "1.12.2 because some bug that make fastai lag 2x"?

  
  
Posted 4 months ago

@<1523701205467926528:profile|AgitatedDove14> About why we stay on 1.12.2 : None

  
  
Posted 4 months ago

thank you!
I'll take that design into consideration.

re: CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL in "docker venv mode" I'm still not quite sure I understand correctly - since the agent is running in a container, as far as it is concerned it may as well be on bare metal.

is it just that there's no way for that worker to avoid a venv? (i.e. the only way to bypass the venv is to use docker mode?)

  
  
Posted 4 months ago

So "Using env ..." take minutes without any output ?

  
  
Posted 4 months ago

There is almost zero overhead if your docker container already has everything (including the agent) preinstalled and you set CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1;
it should then basically just run the code.
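
As a minimal sketch of that setup (the image name my-image:latest and queue name are hypothetical; it assumes python, git, the task dependencies, and clearml-agent are all baked into the image, and that server credentials are supplied via the usual CLEARML_API_* variables or a mounted clearml.conf):

# run the agent in a container whose system python already has everything
docker run --rm \
  -e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 \
  my-image:latest \
  clearml-agent daemon --queue default --cpu-only --foreground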

  
  
Posted 4 months ago

in my case, using a self-hosted server and the agent inside a docker container:
47:45 : task foo pulled
[ git clone, pip install, check that all requirements are satisfied, nothing is downloaded ]
48:16 : training starts

  
  
Posted 4 months ago

im not running in docker mode though

hmmm, that might be the first issue. It cannot skip venv creation; it can, however, use a pre-existing venv (but it will change it every time it installs a missing package).
So setting CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 in non-docker mode has no effect.

  
  
Posted 4 months ago

  • Try with the latest RC, 1.8.1rc2

it feels like after git clone, it spends minutes without outputting anything

Yeah, that is odd. Can you run the agent with --debug (add it before the daemon command), and then at the end of the command add --foreground? The full command is sketched below.
Now launch the same task on that queue; you will have a verbose log in the console.
Let us know what you see.
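
With the daemon command from earlier in the thread, that would look roughly like:

clearml-agent --debug daemon --queue $QUEUES --create-queue --cpu-only --foreground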

  
  
Posted 4 months ago

We need to focus first on why it is taking minutes to reach "Using env".
In our case, we have a container that has all packages installed straight in the system, no venv in the container. Thus we don't use CLEARML_AGENT_SKIP_PIP_VENV_INSTALL.
But then when a task is pulled, I can see all the steps like git clone, a bunch of "Requirement already satisfied" .... There may be some odd package that needs to be installed because one of our DS is experimenting ... but at least we can see what is happening.
In @<1689446563463565312:profile|SmallTurkey79>'s case, are you saying the log doesn't show anything at all? After it pulls the task: 5 minutes pass with no explanation of what those 5 minutes were used for?

  
  
Posted 4 months ago

Starting to. Thanks for your explanation.

would those containers best be started from something in services mode? or is it possible to get no overhead with my approach of worker-inside-docker?

I designed my tasks as different functions, based mostly on what metrics to report and artifacts that are best cached (and how to best leverage comparisons of tasks). They do require CPU, but not a ton.

I'm now experimenting with lumping a lot of stuff into one big task instead, to see how that goes. I have to be more selective in the reporting of metrics and plots though.

  
  
Posted 4 months ago

"Regular" workers will run one job at a time; a services-mode worker will spin up multiple tasks at the same time, but their setup (i.e. before running the actual task) happens one at a time.

  
  
Posted 4 months ago

would those containers best be started from something in services mode?

Yes, as long as the machine has enough cpu/ram.
Notice that services mode will only start a second parallel Task after the first one is done setting up its env; if running with CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL, with containers that have git/python/clearml-agent preinstalled, that setup step should be minimal (see the sketch below).
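
A sketch of how such a services-mode worker is typically launched (queue name is illustrative; services mode is usually combined with --docker so each task gets its own container):

clearml-agent daemon --services-mode --queue services --create-queue --docker --cpu-only --foreground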

or is it possible to get no-overhead with my approach of worker-inside-docker?

No, do not do that; see the explanation above on why CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL does not work in "docker venv mode".

i designed my tasks as different functions, based mostly on what metrics to report and artifacts that are best cached (and how to best leverage comparisons of tasks). they do require cpu, but not a ton.

Just report a single Task with multiple "titles"; each title is its own step, and inside each "title" there are different series (e.g. via Logger.report_scalar(title=..., series=..., value=..., iteration=...)).

is there a way for me to toggle CLEARML's log level?

Try setting the Python master (root) logger's base logging level, e.g. logging.getLogger().setLevel(logging.DEBUG).

  
  
Posted 4 months ago

is there a way for me to toggle CLEARML's log level? I'm doing some manual task-debugging in IPython and think it would be helpful to see network requests and timeouts if they're occurring.

  
  
Posted 4 months ago

clearml==1.12.2
clearml_agent v1.8.1rc2

  
  
Posted 4 months ago

yeah, still noticing that it can be multiple minutes before something starts...
like... what is happening in this time (besides a git clone), now that I have set both

export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=true
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=$(which python)

update: it's now been six minutes and the task still isn't done. This should have run through in like a minute total, end to end.
(screenshot attached)

  
  
Posted 4 months ago

what if the preexisting venv is just the system python? My base image is python:3.10.10 and I just pip install all requirements into that image. Does that not avoid the venv still?

it's good to know that in theory there's a path forward with almost zero overhead. That's what I want.

is it reasonable to expect that with sufficient workers, I can get 50 tasks to run in the same time it takes to run a single one? I can't imagine the apiserver being a noticeable bottleneck.

  
  
Posted 4 months ago

okay that's a similar setup to mine... that's interesting.
much more in line with my expectation.

  
  
Posted 4 months ago

from task pick-up to "git clone" is now ~30s, much better.

This is "spent" calling apt update && update install && pip install clearml-agent
if you have those preinstalled it should be quick

though as far as I understand, the recommendation is still to not run workers-in-docker like this:

if you do not want it to install anything and just use the existing venv (leaving the venv as is), and if something is missing then so be it, then yes, sure, that's the way to go

  
  
Posted 4 months ago

apologies - just trying to keep sensitive data out of the screenshot

  
  
Posted 4 months ago

I can see all the steps like git clone,

git clone has nothing to do with "env setup"; this is bringing in the code, and you cannot skip that one. That said, this is why the git repo itself is cached on the host machine, so it is fast.

... There may be some odd package that need to be installed because one of our DS is experimenting ... But all that we can see what is happening.

Even if everything is preinstalled, it verifies that the packages match, and this might take a long time. It's just pip being pip (if you want the extreme, try to do the same with conda; that one is even slower).
The output of that verification stage is that no new packages are installed (otherwise, good thing we checked 🙂).
Bottom line: if you want to skip the pip verification/installation, pass CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1.
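
For example, setting it just for the worker process (queue name is illustrative):

# skip the pip verification/installation step entirely
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 clearml-agent daemon --queue default --cpu-only --foreground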

btw: I'm checking regarding the GH issue

  
  
Posted 4 months ago

A minute of silence between the first two messages, and then two more minutes until a flood of logs. Basically 3 minutes total before this task (which does almost nothing - just using it for testing) starts.
(screenshots attached)

  
  
Posted 4 months ago

ah I see. thank you very much!

trying export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=$(which python)
but I still see Environment setup completed successfully
(it is printed after Running task id)

it still takes a full 3 minutes from "task pulled by worker" until Running task id.
is this normal? What is happening in these few minutes (besides a git pull / switch)?

  
  
Posted 4 months ago

yeah... still seeing runtimes vary from 1m to 10m for the same task. Been testing parallel execution for hours.

  
  
Posted 4 months ago

I just ran a pipeline that took about 2h (more than half of this time was just the DAG), with about a hundred tasks. I'm taking a look at them now to see what the logs show for runtimes.

  
  
Posted 4 months ago

Definitely seeing some that took 7-8 minutes whereas others took 2-3...

  
  
Posted 4 months ago

this bug: None

  
  
Posted 4 months ago