Hi PompousBeetle71,
Can you please share some more information? Where do you see the tags on the server? Do you mean in the web app? Do you see the tags under the task or under the model?
PompousBeetle71, can you try this and tell me if it's still empty?
` from trains import InputModel
print(InputModel("<model id copied from the UI>").tags) `
I can't reproduce this issue, and I just want to be sure it's not a new model.
The model id can be found as in the picture, after clicking the ID mark.
- can you share the tag name?
Hi PompousBeetle71, did you upgrade only trains, or trains-server as well?
"how can I check if it is loaded?" - when a task starts, the configuration is printed first.
"it worked with trains-agent init" - do you have 2 configuration files, ~/trains.conf and ~/clearml.conf?
You can loop over the tasks you want to delete, based on the cleanup service:
` import logging
from trains.backend_api.session.client import APIClient

client = APIClient()

# You can get the tasks you want to delete with client.tasks.get_all.
# In this example we get all the tasks in a project, but there are other filters too.
tasks = client.tasks.get_all(project=[<your project id>])
for task in tasks:
    try:
        # try to delete a task from the system
        client.tasks.delete(task=ta...
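A minimal, self-contained sketch of the same cleanup loop, assuming `client` is an APIClient whose `tasks.get_all` returns task objects with an `id` field and whose `tasks.delete` accepts that id (the helper name is made up):

```python
import logging

def delete_project_tasks(client, project_id):
    """Try to delete every task in the given project; return how many succeeded.

    `client` is assumed to behave like trains' APIClient (tasks.get_all / tasks.delete).
    """
    deleted = 0
    for task in client.tasks.get_all(project=[project_id]):
        try:
            client.tasks.delete(task=task.id)
            deleted += 1
        except Exception as exc:
            # keep going even if one task fails to delete
            logging.warning("Could not delete task %s: %s", task.id, exc)
    return deleted
```

You could run this per project id, or swap the `get_all` filter for any other query the API supports.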
Hi FierceHamster54,
I think Task.force_store_standalone_script() is causing the issue: you are storing the entire script as a standalone script without any git info, so it fails once you try to import other parts of the git repo. BTW, any specific reason for using it in your pipeline?
👍 What do you get in the UI under EXECUTION -> SOURCE CODE?
Hi GiganticTurtle0 ,
Does it also happen when you change the parameters from the UI, or only from code? Same flow as in https://github.com/allegroai/clearml/blob/master/examples/automation/manual_random_param_search_example.py#L47 ?
Hi MoodyCentipede68,
"I specify the repo for each step by using the 'repo' argument from PipelineDecorator.component. Here is my reference" - do you see the repo under the EXECUTION tab?
With this scenario, your data should be updated when running the pipeline. If you are using add_function_step, you can pass packages=["protobuf<=3.20.1"] (there is an example in the SDK docs: https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#add_function_step-1 ).
Hi PompousParrot44, do you mean deleting an experiment?
DepressedChimpanzee34, how do you generate the task that runs remotely? Once the agent pulls the task, this is your running configuration (it pulls the same configuration from the server, as you see it in the UI).
And under UNCOMMITTED CHANGES, do you have the entire script?
Hi UnevenDolphin73,
Try to re-run it; a new instance will be created. Under this specific instance's Actions you have "Monitoring and troubleshoot", where you can select "Get system logs".
I want to verify your scaler doesn't have any failures in this log.
Can you share the local run log?
When connecting a nested dict, the keys will have the structure period/start and period/end, so those are the keys you need to change, in addition to the section name (General if no name is given).
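Not ClearML code, just a small illustration of how a nested dict maps to those slash-separated keys (the helper name is made up):

```python
# Illustration only: mimic how a nested dict connected to a task ends up
# as slash-separated parameter keys like "General/period/start".
def flatten_params(d, prefix=""):
    flat = {}
    for key, value in d.items():
        full_key = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_params(value, full_key))
        else:
            flat[full_key] = value
    return flat

period = {"start": "2020-01-01 00:00", "end": "2020-12-31 23:00"}
print(flatten_params({"period": period}, prefix="General"))
# {'General/period/start': '2020-01-01 00:00', 'General/period/end': '2020-12-31 23:00'}
```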
For example, this should work in your case:
` cloned_task = Task.clone(source_task=template_task,
                         name=template_task.name + ' for params',
                         parent=template_task.id)

# put the parameters back into the new cloned task
cloned_task.set_parameters({"General/period/start": "...
I just tried it and everything works.
I ran this for the template task:
` from clearml import Task
task = Task.init(project_name="Examples", task_name="task with connected dict")
period = {"start": "2020-01-01 00:00", "end": "2020-12-31 23:00"}
task.connect(period, name="period") `
and this for the cloned one:
` from clearml import Task

template_task = Task.get_task(task_id="<Your template task id>")
cloned_task = Task.clone(source_task=template_task,
                         name=templat...
Hi BattyLizard6 ,
Do you have a toy example so I can check this issue on my side?
Hi LethalCentipede31,
You can report plotly figures with task.get_logger().report_plotly, as in https://github.com/allegroai/clearml/blob/master/examples/reporting/plotly_reporting.py
For seaborn, once you call plt.show, it will show up in the UI (example: https://github.com/allegroai/clearml/blob/master/examples/frameworks/matplotlib/matplotlib_example.py#L48 ).
You can also use the ClearML logger for media reporting, as in this example: https://github.com/allegroai/clearml/blob/master/examples/reporting/media_reporting.py
Hi TeenyFly97 ,
With task.close(), the task performs a full shutdown process. This includes repo detection; logs, metrics, and artifacts flush; and more. The task will no longer be the running task, and you can start a new task.
With task.mark_stopped(), the task logs will be flushed and the task will mark itself as stopped, but it will not perform the full shutdown process, so current_task will still be this task.
For example:
` from trains import Task
task = Task.in...
You are definitely right! We will fix this issue, Thanks 🙂
Hi GiganticTurtle0 ,
All the packages you are using should appear under the INSTALLED PACKAGES section in your task (in the UI). ClearML analyzes them, and the full report should be under this section.
You can add any package you like with Task.add_requirements('tensorflow', '2.4.0') for tensorflow version 2.4.0 (or Task.add_requirements('tensorflow', '') for no version limit).
If you don't want the package analyzer, you can configure in your ~/clearml.conf file: ` sdk.development.detect_with_...
You can configure env vars in your docker-compose file, but what is your scenario? Maybe there are other solutions.
Hi SmarmySeaurchin8,
You can set the TRAINS_CONFIG_FILE env var to the conf file you want to run with. Can this do the trick?
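A quick sketch of pointing the SDK at an alternate config file (the path and script name are placeholders):

```shell
# Point trains at a non-default config file for this shell session;
# the path below is just a placeholder.
export TRAINS_CONFIG_FILE="$HOME/alt-trains.conf"

# Any script started from this shell will then pick up that config,
# e.g. (illustrative command, substitute your own entry point):
# python my_training_script.py
```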