Hi All, I Currently Have A Pipeline With Multiple Steps Using The Functional API


Hi again @<1523701435869433856:profile|SmugDolphin23>,

I was able to run the pipeline remotely on an agent, but I am still facing the same problem: the code breaks on the exact same step that requires the Docker container. Is there a way to debug what is happening? Currently there is no indication in the logs that the code is actually running inside the Docker container. Here are the Docker-related logs (below them is a quick runtime check I am considering adding to the step itself):

    agent.docker_pip_cache = /home/amerii/.clearml/pip-cache
    agent.docker_apt_cache = /home/amerii/.clearml/apt-cache.1
    agent.docker_force_pull = false
    agent.default_docker.image = nvidia/cuda:11.0.3-cudnn8-runtime-ubuntu20.04
    agent.enable_task_env = false
    agent.hide_docker_command_env_vars.enabled = true
    agent.hide_docker_command_env_vars.parse_embedded_urls = true
    agent.abort_callback_max_timeout = 1800
    agent.docker_internal_mounts.sdk_cache = /clearml_agent_cache
    agent.docker_internal_mounts.apt_cache = /var/cache/apt/archives
    agent.docker_internal_mounts.ssh_folder = ~/.ssh
    agent.docker_internal_mounts.ssh_ro_folder = /.ssh
    agent.docker_internal_mounts.pip_cache = /root/.cache/pip
    agent.docker_internal_mounts.poetry_cache = /root/.cache/pypoetry
    agent.docker_internal_mounts.vcs_cache = /root/.clearml/vcs-cache
    agent.docker_internal_mounts.venv_build = ~/.clearml/venvs-builds
    agent.docker_internal_mounts.pip_download = /root/.clearml/pip-download-cache
    docker_cmd = 084736541379.dkr.ecr.eu-central-1.amazonaws.com/ap_pipeline:latest
    entry_point = pipeline_get_alignments.py
    working_dir = .
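
As mentioned above, one thing I am considering is temporarily adding a small check at the top of the step function to confirm where it is actually executing. This is just a sketch on my side, nothing ClearML-specific; the function name report_runtime_environment is mine and it only inspects the local environment:

    # Temporary check I could drop into pipeline_get_alignments to confirm
    # whether the step is executing inside a container or on the bare agent.
    import os
    import platform


    def report_runtime_environment():
        # /.dockerenv exists inside most Docker containers.
        in_docker = os.path.exists("/.dockerenv")
        print(f"hostname          : {platform.node()}")
        print(f"running in docker : {in_docker}")
        # The cgroup of PID 1 usually mentions docker/containerd when containerized.
        try:
            with open("/proc/1/cgroup") as f:
                print("cgroup (pid 1)    :", f.read().strip()[:200])
        except OSError:
            print("cgroup (pid 1)    : not readable")

If the hostname and the docker flag printed in the step's console log don't match the container, that would at least confirm the step is not running where I expect.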

Here is my pipeline function step:

    pipe.add_function_step(
        name="align_sequences",
        function=pipeline_get_alignments,
        function_kwargs={
            "X_train": "${map_sequences.X_train}",
            "X_test": "${map_sequences.X_test}",
        },
        function_return=["X_train", "X_test"],
        cache_executed_step=True,
        tags=["intaRNA"],
        docker="aws_account_id.dkr.ecr.eu-central-1.amazonaws.com/ap_pipeline:latest",
        docker_bash_setup_script="./docker_setup_script.sh",
        packages=packages,
        execution_queue=QUEUE,
    )
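
Separately, once the pipeline has been launched, I was thinking of pulling the step's Task from the server and checking which container image actually ended up configured on it. Again just a rough sketch: the project name below is a placeholder, and I'm assuming get_base_docker() is the right call for reading the container setting:

    from clearml import Task

    # Fetch the task that the pipeline created for this step.
    step_task = Task.get_task(
        project_name="my_pipeline_project",  # placeholder, whatever the pipeline project is
        task_name="align_sequences",         # step name from add_function_step
    )
    # Check the container image and status actually recorded on the task.
    print("base docker:", step_task.get_base_docker())
    print("status     :", step_task.get_status())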

What are some steps I can take to debug what is happening?

Posted 9 months ago
103 Views
0 Answers