Answered

Hi all, I've successfully run a Task locally, and now I'm trying to clone it and send it to a Queue. It looks like the environment is built successfully, but it hangs here:

Environment setup completed successfully
Starting Task Execution:

Is there any way of figuring out why the remote Task hangs and how would I go about debugging it?

WebApp: 1.15.1-478 β€’ Server: 1.15.1-478 β€’ API: 2.29

  
  
Posted one year ago

Answers 46


Thank you! Although it's still really weird how it was failing silently - would it be worth changing the logging level for that error somewhere?

  
  
Posted one year ago

Will try non-root and get back to you. I’m also trying to reproduce on a different machine.

  
  
Posted one year ago

  • Agent on laptop, Server on Kube - Fail

So I'm 100% sure there is something wrong with our ClearML Server deployment on Kube

Yeah that feels like a network config issue...

Is there a verbose setting in the agent that could help us diagnose,

Yes, running with debug turned on.
Since you managed to reproduce on your laptop, you can try to run the agent with --debug to test, specifically:

clearml-agent --debug daemon ....

If you are running it in venv mode (which I think is your setup), you can also just specify the Task ID and test that directly (no daemon, just execution):

clearml-agent --debug execute --id <task_id_here>
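
On the SDK side you can also bump the Python logging level right before Task.init in your script, to see more of what the SDK itself is doing. A rough sketch, assuming clearml routes its messages through the standard logging module (the logger name here is my guess):

import logging

# Turn on DEBUG output before Task.init so the SDK's session/API messages
# show up in the console and in the Task log.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("clearml").setLevel(logging.DEBUG)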
  
  
Posted one year ago

Our server is deployed on a Kube cluster. I'm not too clear on how the Helm charts etc. work.

The only thing I can think of is that something is not right with the load balancer on the server, so maybe some requests coming from an instance on the cluster are blocked...
Hmm, saying that out loud, it actually could be! Try adding the following line to the end of the clearml.conf on the machine running the agent:

api.http.default_method: "put"
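
If you want to sanity-check the load-balancer theory first, a quick probe like this, run from the machine where the agent hangs, would show whether different HTTP methods are treated differently. Just a sketch: the URL is a placeholder for your api_server, and debug.ping is the apiserver health-check endpoint as far as I remember:

import requests

API_SERVER = "http://clearml-apiserver.example.com:8008"  # placeholder, use your api_server URL

for method in ("GET", "POST", "PUT"):
    try:
        resp = requests.request(method, f"{API_SERVER}/debug.ping", timeout=10)
        print(method, resp.status_code, resp.text[:120])
    except requests.RequestException as exc:
        print(method, "failed:", exc)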
  
  
Posted one year ago

Although it's still really weird how it was failing silently

Totally agree. I think the main issue was that the agent had the correct configuration, but the container/env the agent was spinning up was missing it.
I'll double-check why it did not print anything.

  
  
Posted one year ago

@<1724960464275771392:profile|DepravedBee82> I just realized the agent is not running in docker mode, correct? (i.e. venv mode)
If that's the case, how come it is running as root? (Could it be it is running inside a container? How was that container spun up?)

  
  
Posted one year ago

This is exactly my problem too, which I described above! If you find any solution, I'd be glad if you could share. 🙂 Of course, I'll also share mine when I get one.

  
  
Posted one year ago

Yes the agent is running in venv mode afaik. As for why it’s running as root - I’ll ask our engineer …

  
  
Posted one year ago

I've added that flag, removed all PL loggers & callbacks and all references to Hydra, but no luck 😞

  
  
Posted one year ago

My understanding is that on remote execution, Task.init is supposed to be a no-op, right?
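
For context, this is roughly the behaviour I'm expecting (my understanding of the SDK, so treat the exact calls as assumptions on my part): when the agent launches the script, Task.init should just re-attach to the Task the agent created instead of registering a new one.

from clearml import Task

# Under an agent this should attach to the already-created Task (the arguments are
# effectively ignored); locally it registers a brand new Task.
task = Task.init(project_name="ClearML Testing", task_name="FMNIST")

# running_locally() should be False when an agent is executing the Task.
print("task id:", task.id, "running locally:", task.running_locally())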

  
  
Posted one year ago

confirmed that the change had been added by

Make sure you see them in the Task log in the UI (the agent prints them when it starts)

Any insight on how we can reproduce the issue?

Can this be reproduced using a simple script that we can also run?

  
  
Posted one year ago

Can this be reproduced using a simple script that we can also run?

Not really unfortunately - happy to share my code, but I've managed to reproduce this with different codebases.

As a summary of what I've tried:

  • Agent on the H100 machine, Server on Kube - Fail
  • Agent on laptop, Server on Kube - Fail
  • Agent on laptop, Server on Docker Desktop - Pass
So I'm 100% sure there is something wrong with our ClearML Server deployment on Kube rather than an issue with the agents or code. As for which of the 7 containers could be at fault... 🤷 I'm not seeing anything out of the ordinary in the logs. Is there a verbose setting in the agent that could help us diagnose, i.e. each step of what goes on in Task.init?
  
  
Posted one year ago

Nope - confirmed to be running on the OS's Python environment.

Okay, so bare-metal root is definitely not recommended.
I'm not sure how/why it gets stuck though 😞
Any chance you can run the agent as non-root?
Docker mode might also be preferable, so it is easier for you to control the environment of the Task.

  
  
Posted one year ago

Hi @<1724960464275771392:profile|DepravedBee82>
After

Starting Task Execution:

It will literally start the process running your code.
Can you send the full log of the Task? What is the code doing? Which system is running the agent (i.e. Windows/Mac/Linux, docker, etc.)?

  
  
Posted one year ago

THAT WORKED! πŸŽ‰

  
  
Posted one year ago

Ok so my train.py now looks like this:

print("Before import")

from pathlib import Path

import hydra
import lightning as L
import torch
from coolname import generate_slug
from omegaconf import DictConfig

from src.datasets import JobDataModule
from src.models import JobModel
from src.utils import LogSummaryCallback, get_num_steps, prepare_loggers_and_callbacks

from clearml import Task

for i in range(torch.cuda.device_count()):
    print(torch.cuda.get_device_properties(i).name)

print("Before task")

task = Task.init(project_name="ClearML Testing", task_name="FMNIST")
task.set_repo(
    repo="git@ssh.dev.azure.com:v3/mclarenracing/Application%20Engineering/ml-queue-test"
)
task.set_packages("requirements.txt")

print("After task")

And the log looks like this:

Starting Task Execution:
Before import
2024-07-19 09:06:09
NVIDIA H100 80GB HBM3
NVIDIA H100 80GB HBM3
NVIDIA H100 80GB HBM3
NVIDIA H100 80GB HBM3
NVIDIA H100 80GB HBM3
NVIDIA H100 80GB HBM3
NVIDIA H100 80GB HBM3
NVIDIA H100 80GB HBM3
Before task

So it looks like it's getting stuck at Task.init
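
To try and pin down exactly where it blocks, I'm thinking of adding something like this right before Task.init (plain standard library, nothing ClearML-specific):

import faulthandler

from clearml import Task

# If the process is still running after 120s, dump a traceback of every thread to
# stderr (without killing the process), so we can see which call inside Task.init hangs.
faulthandler.dump_traceback_later(timeout=120, exit=False)

task = Task.init(project_name="ClearML Testing", task_name="FMNIST")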

  
  
Posted one year ago