Hi BitterLeopard33,
Do you want to have the data section in the dataset task URI?
And the package versions don't match the Python version? Can you install Python on the system (not in a venv)?
Hi PompousParrot44, do you mean deleting the experiment?
Trying to understand what reset the task.
Hi ColossalDeer61,
Not from the UI, but you can run a simple script to do that (assuming you can parse your configuration file). Here is an example:
` from trains import Task

configuration_file = {
    "stage_1": {
        "batch_size": 32,
        "epochs": 10,
    },
    "stage_2": {
        "batch_size": 64,
        "epochs": 20,
    },
}

template_task = Task.get_task(task_id="<YOUR TEMPLATE TASK>")
for name, params in configuration_file.items():
    # clone the template task into a new draft task
    cloned_task = Task.clone(source_task=template_task, name=name)
    # override the stage's parameters and enqueue the clone for execution
    cloned_task.set_parameters(params)
    Task.enqueue(cloned_task, queue_name="default") `
and it should be the default for docker mode
Hi CleanPigeon16,
Do you get anything in the UI regarding this failure (in the RESULTS -> CONSOLE section)?
Hi ObliviousCrocodile95,
Trying to understand: do you want to run the clearml-agent in docker mode with a pre-installed virtual environment?
Currently this setup means that I clone my repository in the docker image - so the commit & changes aren’t reflected in this environment. Any way to remedy this?
The clearml-agent should clone your repository and apply the changes from the parent task.
The PipelineController task? If so, you can get the task with pipeline_task = Task.get_task(task_id="<your pipeline task id>") and then call pipeline_task.get_output_destination(). Can this do the trick?
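For example, a minimal sketch (the task id is a placeholder for your PipelineController task's id):
` from clearml import Task

# "<your pipeline task id>" is a placeholder - use your PipelineController task's id
pipeline_task = Task.get_task(task_id="<your pipeline task id>")
print(pipeline_task.get_output_destination()) `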
Can you try installing the package on the docker's Python, but not in the venv?
Hi JitteryCoyote63,
You can get some stats (for the last month) under the Workers section in your app; clicking a specific worker will give you some more options.
Those don't include stats per training, only per worker.
PanickyMoth78, are you getting this from the app or from one of the tasks?
The state folder is not affected
Is this the /mnt/machine_learning/datasets folder?
Hi WackyRabbit7,
It should take the latest one (sorted by date/time).
Hey SubstantialElk6,
You can try adding environment vars with that info:
os.environ["CLEARML_API_HOST"] = api_server os.environ["CLEARML_WEB_HOST"] = web_server os.environ["CLEARML_FILES_HOST"] = files_server os.environ["CLEARML_API_ACCESS_KEY"] = access_key os.environ["CLEARML_API_SECRET_KEY"] = secret_key
We can certainly add a trains.conf brief, thanks for the feedback 🙂
The controller task? Same as here - https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_controller.py
Yup, you're right about the math part. You can add columns from metrics and hyperparameters too, but currently we don't have total duration as a column.
Let me check about the duration and what we can do
python invoked oom-killer
Out of memory. CloudySwallow27, in the scaler app task, can you check if you have scalars reporting?
Hi LethalCentipede31,
You can report plotly with task.get_logger().report_plotly, like in https://github.com/allegroai/clearml/blob/master/examples/reporting/plotly_reporting.py
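For example, a minimal sketch (project/task names and the figure contents are just placeholders):
` import plotly.express as px
from clearml import Task

task = Task.init(project_name="examples", task_name="plotly reporting")
# any plotly figure can be reported
fig = px.scatter(x=[0, 1, 2, 3], y=[0, 1, 4, 9])
task.get_logger().report_plotly(title="scatter", series="demo", iteration=0, figure=fig) `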
For seaborn, once you use plt.show() it will be in the UI (example: https://github.com/allegroai/clearml/blob/master/examples/frameworks/matplotlib/matplotlib_example.py#L48 ).
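Something like this minimal sketch (the plot itself is arbitrary):
` import matplotlib.pyplot as plt
import seaborn as sns
from clearml import Task

task = Task.init(project_name="examples", task_name="seaborn reporting")
sns.histplot(data=[1, 2, 2, 3, 3, 3])
# plt.show() is intercepted and the figure is reported to the UI
plt.show() `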
Hi CooperativeSealion8,
trains is configured according to the ~/trains.conf file; in this file you should define the app, api and files servers.
You can do it with our great wizard: just type trains-init in your CLI and follow the instructions,
` ❯ trains-init
TRAINS SDK setup process
Please create new trains credentials through the profile page in your trains web app (e.g. )
In the profile page, press "Create new credentials", then press "Copy to clipboard".
Paste cop...
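For reference, the api section of ~/trains.conf ends up looking roughly like this (a sketch assuming a default local trains-server; your URLs and credentials will differ):
` api {
    web_server: "http://localhost:8080"
    api_server: "http://localhost:8008"
    files_server: "http://localhost:8081"
    credentials {
        access_key: "<your access key>"
        secret_key: "<your secret key>"
    }
} `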
logger.report_matplotlib_figure(title="some title", series="some series", figure=fig, iteration=1, report_interactive=False)
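In context, a minimal sketch (project/task names are placeholders):
` import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="matplotlib reporting")
logger = task.get_logger()
fig = plt.figure()
plt.plot([0, 1, 2], [0, 1, 4])
# report_interactive=False reports the figure as a static image
logger.report_matplotlib_figure(title="some title", series="some series", figure=fig, iteration=1, report_interactive=False) `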
Hi TightElk12,
You can view the Started column, and if you'd like only running experiments, you can filter the Status column to Running. Can this do the trick?