Answered
I Have Set

I have set

export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=true
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=true

in my entrypoint.sh (which runs clearml-agent daemon --queue $QUEUES --create-queue --cpu-only --foreground )
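
i.e. the entrypoint is basically this (a sketch of the setup just described, nothing beyond it):

#!/bin/bash
# the image already has all python deps preinstalled, so tell the agent to skip env setup
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=true
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=true
exec clearml-agent daemon --queue "$QUEUES" --create-queue --cpu-only --foreground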

but it appears that tasks still take a long time to set up environments. I expected the whole process to be skipped and for the preinstalled python deps in the docker image (which is running this entrypoint script) to be used.

From task pickup to "run python file" can be several minutes... which is longer than some of the tasks themselves take.

  
  
Posted 11 months ago

Answers 54


sometimes I get "lucky" and see something more like what I expect... total experiment time < 1 min (and I have evidence of this happening: logs start-to-finish in under a minute). But then other times the same task will take 5-10 minutes.

same worker, same queue, just one worker serving it... I am utterly perplexed by the variation in how long things take. My ClearML API server is running on a beefy 32-core machine and not much else is happening right now...
[image]

  
  
Posted 11 months ago

I just need to understand what I should be expecting. I thought going from putting a task into the queue in the UI to "running my code remotely" (especially with packages preloaded) should be a fairly fast turnaround - certainly not three minutes... (I'll have to change my whole pipeline design if that's the case.)

  
  
Posted 11 months ago

are you on clearml agent 1.8.0?

(I'm noticing that sometimes I'm just missing logs such as "Running task id..." entirely)

  
  
Posted 11 months ago

I know that the git clone and the pip "verify all installed" steps are normal. But for some reason, in Michael's screenshot I don't see those steps...

  
  
Posted 11 months ago

FWIW - I'm starting to wonder if there's a difference between "resetting the task" vs cloning it.

  
  
Posted 11 months ago

oh it's there, right before the task runs.

from task pick-up to "git clone" is now ~30s, much better.

though as far as I understand, the recommendation is still to not run workers-in-docker like this:

export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=$(which python)

(and fwiw I have this in my entrypoint.sh )

cat <<EOF > ~/clearml.conf
agent {
    vcs_cache {
        enabled: true
    }

    package_manager: {
        type: pip,
        system_site_packages: true,
    }

}
EOF
  
  
Posted 11 months ago

there is almost zero overhead if your docker container already has everything (including the agent) preinstalled and you set CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
it should then basically just run the code.
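
e.g. something along these lines (just a sketch - the image and queue names are placeholders, and the image is assumed to already contain clearml-agent plus all the Task's packages):

# launch the preinstalled worker container with env setup skipped
docker run -d \
  -e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 \
  -e QUEUES=my_queue \
  my-preinstalled-worker:latest   # entrypoint runs: clearml-agent daemon --queue $QUEUES ...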

  
  
Posted 11 months ago

I really don't see how this provides any additional context that the timestamps + crops don't, but okay.

  
  
Posted 11 months ago

okay, that's a similar setup to mine... that's interesting.
Much more in line with my expectation.

  
  
Posted 11 months ago

what if the preexisting venv is just the system python? My base image is python:3.10.10 and I just pip install all requirements into that image. Does that still not avoid venv creation?

It will basically create a new venv inside the container, forking the existing preinstalled stuff (i.e. the new venv already has everything the system python has preinstalled).
Then it will call "pip install" on all the "installed packages" of the Task,
which should just check that everything is there and install nothing.

If you set " CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1" it will do checks and just use the existing system python environment as is.

... I can get 50 tasks to run in the same time it takes to run a single one? I can't imagine the apiserver being a noticeable bottleneck.

50 containers on a single machine would be fine if you have enough RAM/CPU, and yes, they would run concurrently.
Regarding the time itself, again, the spin-up time of a Task should be negligible.
Pipeline tasks are not meant to be "threads"; they are meant as different functions you want to run on different machines.
This means that if your pipeline is just a set of simple functions that require no cpu/gpu or IO, I'm not sure pipeline steps are the right way to go.

Does that make sense?

  
  
Posted 11 months ago

BTW: you can also just add -e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 to the docker args (under the Execution tab) to override the setting for that docker container.
You can also add "export;" to the docker startup bash script section (do not add "#!/bin/bash", just the actual script) to get a list of all the environment variables inside the docker, just in case.
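
for example (a sketch - the exact field labels may differ between UI versions):

# docker args field on the Task (Execution tab): pass the override as a container env var
#   -e CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
# startup/setup shell script field (no "#!/bin/bash" line), e.g. just dump the environment:
export;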

  
  
Posted 11 months ago

normally when a new package needs to be installed, it shows up in the Console tab

  
  
Posted 11 months ago

ha! yup. that was it exactly. I posted about it too None lol

  
  
Posted 11 months ago

So "Using env ..." take minutes without any output ?

  
  
Posted 11 months ago

I'm not running in docker mode though - I'm running a clearml worker in a docker container (and then multiplying the container)

  
  
Posted 11 months ago

A minute of silence between the first two messages, and then two more minutes until a flood of logs. Basically 3 minutes total before this task (which does almost nothing - I'm just using it for testing) starts.
[images: console log screenshots]

  
  
Posted 11 months ago

hard to see with your crop-outs here and there...

  
  
Posted 11 months ago

clearml==1.12.2
clearml_agent v1.8.1rc2

  
  
Posted 11 months ago

I would love some advice on that though - should I be using services mode + docker and some max # of instances to spin up multiple tasks instead?

My thinking was to avoid some of the docker overhead, but I did try this approach previously and found that the container limit wasn't exactly respected.

  
  
Posted 11 months ago

in my case, using a self-hosted server and the agent inside a docker container:
47:45 : task foo pulled
[git clone, pip install, check that all requirements are satisfied, nothing is downloaded]
48:16 : training starts

  
  
Posted 11 months ago

Hi guys, just curious here, what was the final issue?
Also out of curiosity, what does this mean: "1.12.2 because some bug that make fastai lag 2x"?

  
  
Posted 11 months ago

definitely seeing some that took 7-8 mins whereas others took 2-3...

  
  
Posted 11 months ago

I'm not running in docker mode though

hmmm, that might be the first issue. It cannot skip venv creation; it can however use a pre-existing venv (but it will change it every time it installs a missing package),
so setting CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 in non-docker mode has no effect.
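
i.e. to actually skip it, the worker would need to run in docker mode, something like this (a sketch - the queue name and image are placeholders):

clearml-agent daemon \
  --queue my_queue \
  --docker python:3.10.10 \
  --cpu-only \
  --foreground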

  
  
Posted 11 months ago

oooh thank you, I was hoping for some debugging tips like that. Will do.

From a speed-of-clearing-a-queue perspective, is a services-mode queue better or worse than having many workers "always up"?

  
  
Posted 11 months ago

but pretty reliably some proportion of tasks still just take much longer. 1m - 10m is a variance I'd really like to understand.

  
  
Posted 11 months ago

From the logs, it feels like after git clone, it spends minutes without outputting anything. AgitatedDove14 Do you know what the agent is supposed to do after git clone?
I guess a check that all packages are installed? But then with CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1, what is the agent doing??

  
  
Posted 11 months ago

I think a proper screenshot of the full log with some information redacted is the way to go. Otherwise we are just guessing in the dark

  
  
Posted 11 months ago

oh yes. From "Using env ..." until the next message is 2 minutes.

  
  
Posted 11 months ago

SmallTurkey79 could you attach the full log of the Task?
Also, I would recommend "export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1" (not true).
Usually binary env vars are 0/1.
(I can see that the docs here: None
never mention it; I'll ask them to add that.)

  
  
Posted 11 months ago

is there a way for me to toggle CLEARML's log level? I'm doing some manual task-debugging in ipython and think it would be helpful to see network requests and timeouts if they're occurring.
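
(for reference, a sketch of what I'd try - CLEARML_LOG_LEVEL is an assumption on my part, so verify it against your SDK version; standard Python logging should work regardless):

# assumed env var; alternatively, inside ipython:
#   import logging; logging.getLogger("clearml").setLevel(logging.DEBUG)
export CLEARML_LOG_LEVEL=DEBUG
ipython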

  
  
Posted 11 months ago