in my case using self-hosted and agent inside a docker container:
47:45 : task foo pulled
[ git clone, pip install, check that all requirements satisfied, and nothing is downloaded]
48:16 : start training
Found the issue: my bad practice for import 😛
You need to import clearml before creating the argument parser. Bad way:
import argparse

def handleArgs():
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--config-file', type=str, default='train_config.yaml',
                        help='train config file')
    parser.add_argument('--device', type=int, default=0,
                        help='cuda device index to run the training')
    args = parser.parse_args()
I also have the same issue. Default arguments are fine, but all arguments supplied on the command line become duplicated!
Solved @<1533620191232004096:profile|NuttyLobster9> . In my case:
I need to do from clearml import Task
very early in the code (like the first line), before importing argparse,
and not call task.connect(parser)
Like for dataset_dir: I would expect a single path, not an array with the same path duplicated twice
Without clearml-session, how could one set this up? I cannot find any documentation/guide on how to do this ... The official doc seems to say: you start a code server that then connects to vscode.dev. Then from your laptop, you go to vscode.dev in order to access your code server. Is there any way to do this without going through vscode.dev?
do you have a video showing the use case for clearml-session? I struggle a bit to understand what it is used for
you should know where your latest model is located, then just call task.upload_artifact on that file
You are using CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL the wrong way
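For context, a minimal config sketch of how that variable is typically set: it belongs in the agent's environment (not inside the task), and it assumes the execution image already contains the full python environment:

```shell
# Tell the agent to skip creating a venv / installing requirements;
# it will use the python environment already present in the container.
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
```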
if you want plots, you can simply generate them with matplotlib and clearml can upload them to the Plots or Debug Samples section
yup, you have the flexibility and options, that's what's so nice about ClearML
you can upload the df as artifact.
Or the statistics as a DataFrame and upload as artifact ?
with
import pandas as pd

df = pd.DataFrame({'num_legs': [2, 4, 8, 0],
                   'num_wings': [2, 0, 0, 0],
                   'num_specimen_seen': [10, 2, 1, 8]},
                  index=['falcon', 'dog', 'spider', 'fish'])
import clearml
task = clearml.Task.current_task()
task.get_logger().report_table(title='table example', series='pandas DataFrame', iteration=0, table_plot=df)
Are you running within a zero-trust environment like ZScaler ?
Feels like your issue is not ClearML itself, but an issue with https/SSL and the certificate from your zero-trust system
Based on this: it feels like S3 is supported
@<1523701087100473344:profile|SuccessfulKoala55> Yes, I am aware of that one. It builds a docker container ... I wanted to build without docker. Like when clearml-agent runs in non-docker mode, it already builds the running env inside its caching folder structure. I was wondering if there was a way to stop that process just before it executes the task's .py
nice !! That is exactly what I am looking for !!
interesting, the issue happens with a mamba venv. Now I use a native Python venv and it is detected correctly