oops yes, you are right. output_uri is used for the artifacts
for the logger it is https://clear.ml/docs/latest/docs/references/sdk/logger#set_default_upload_destination
btw what do you get when you do task.get_logger().get_default_upload_destination() ?
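For reference, here is a minimal sketch of setting it and reading it back (the bucket URI and the project/task names are placeholders):
from clearml import Task

task = Task.init(project_name='examples', task_name='logger destination')
logger = task.get_logger()
# debug samples reported through this logger will be uploaded there
logger.set_default_upload_destination('s3://my-bucket/debug-samples')  # placeholder bucket
print(logger.get_default_upload_destination())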
Hi BeefyHippopotamus73
did you manage to get rid of your issue?
can you try to create an empty text file and provide its path to Task.force_requirements_env_freeze(requirements_file=your_empty_txt_file)?
Hi,
ClearML indeed has TensorBoard auto reporting. I suggest you have a look here, where you can find links to some examples: https://clear.ml/docs/latest/docs/fundamentals/logger#automatic-reporting-examples
You could also have a look at the example of pytorch-lightning integration here:
https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch-lightning/pytorch_lightning_example.py
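If it helps, here is a minimal sketch of the auto reporting (project/task names are placeholders): anything written with TensorBoard after Task.init is captured automatically.
from clearml import Task
from torch.utils.tensorboard import SummaryWriter

# initializing the task enables ClearML's automatic TensorBoard capture
task = Task.init(project_name='examples', task_name='tensorboard auto report')

writer = SummaryWriter(log_dir='runs')
for step in range(10):
    # this scalar shows up in the ClearML WebUI, no extra reporting code needed
    writer.add_scalar('loss', 1.0 / (step + 1), step)
writer.close()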
Hi,
It would be great if you could also send your clearml package version 🙂
Just to keep you updated, as promised 🙂
we have found the bug and will release a fix ASAP. I will keep you updated on that too 🙂
You can force the agent to install only the packages you need using a requirements.txt file. Type in what you want the agent to install (torch, and possibly clearml), with versions if needed. Then call this function before Task.init:
Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')
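Put together, a minimal sketch (the path and the project/task names are placeholders):
from clearml import Task

# must be called before Task.init so the agent installs only these packages
Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')
task = Task.init(project_name='examples', task_name='pinned requirements')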
Hey Igor
I am not the expert on this topic. Someone who knows it better will get back to you right after his meeting 🙂
check that your tasks are enqueued in the queue the agent is listening to.
from the WebUI, in your step's task, check the default_queue in the configuration section.
when you fire up the agent, you should see a log line that specifies which queue the agent is assigned to (see the example command below).
finally, in the WebApp, you can check the Workers & Queues section. There you can see the agent(s), the queues they are listening to, and which tasks are enqueued in which queue.
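For example, an agent fired up like this (the queue name is a placeholder) will print the queue it is assigned to in its startup log:
clearml-agent daemon --queue default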
hey TenderCoyote78
Here is an example of how to dump the plots to jpeg files:
from clearml.backend_api.session.client import APIClient
from clearml import Task
import plotly.io as plio

task = Task.get_task(task_id='xxxxxx')
client = APIClient()
t = client.events.get_task_plots(task=task.id)
for i, plot in enumerate(t.plots):
    # each plot is stored as a plotly JSON string
    fig = plio.from_json(plot['plot_str'])
    # writing jpeg files requires the kaleido package (pip install kaleido)
    plio.write_image(fig=fig, file=f'./my_plot_{i}.jpeg')
hi WickedElephant66
you can log your models as artifacts on the pipeline task, from any pipeline step. Have a look here:
https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#models-artifacts-and-metrics
I am trying to find you an example, hold on 🙂
To provide an upload destination for the artifacts, you can:
add the parameter output_uri to Task.init ( https://clear.ml/docs/latest/docs/references/sdk/task#taskinit )
or set the destination in clearml.conf: sdk.development.default_output_uri ( https://clear.ml/docs/latest/docs/configs/clearml_conf#sdkdevelopment )
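For instance, a sketch with a placeholder bucket (any URI reachable with the credentials in your clearml.conf works):
from clearml import Task

task = Task.init(
    project_name='examples',
    task_name='artifact upload',
    output_uri='s3://my-bucket/artifacts',  # placeholder destination
)
# the artifact itself is uploaded to output_uri
task.upload_artifact(name='stats', artifact_object={'accuracy': 0.9})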
To enqueue the pipeline, you simply start it, without calling run_locally or debug_pipeline.
You will have to provide the parameter execution_queue to your steps, or defau...
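As a sketch, assuming a step-based pipeline built with PipelineController (names and queues are placeholders):
from clearml import PipelineController

pipe = PipelineController(name='my pipeline', project='examples', version='1.0.0')
pipe.add_step(
    name='step_1',
    base_task_project='examples',
    base_task_name='step 1 base task',
    execution_queue='default',  # the queue your agent listens to
)
# start() enqueues the controller instead of running it locally
pipe.start(queue='services')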
Hi
could you please share the logs for that issue (without the credentials 🙂)?
this is because the server itself is treated as a bucket too (the root, to be precise). Thus you will always have at least one subfolder created in local_folder, corresponding to the bucket found at the server root
Hi UnevenDolphin73
I am going to try to reproduce this issue, thanks for the details. I'll keep you updated
btw can you send a screenshot of your clearml-agent list output and of the UI please?
you can specify the upload destination like this:
when you initiate a task, you can set the parameter output_uri. If you set it to True, then the model will be uploaded to the URI specified in your conf file. You can also directly specify a URL, or use OutputModel.set_default_upload_uri or set_upload_destination ( https://clear.ml/docs/latest/docs/references/sdk/model_outputmodel#outputmodelset_default_upload_uri or https://clear.ml/docs/latest/docs/references/sdk/model_...
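As a quick sketch combining the two options (bucket URIs and names are placeholders):
from clearml import Task, OutputModel

# True means: upload to the default_output_uri from your clearml.conf
task = Task.init(project_name='examples', task_name='model upload', output_uri=True)

# or set the destination explicitly on an output model
model = OutputModel(task=task)
model.set_upload_destination(uri='s3://my-bucket/models')  # placeholder bucket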
I don't know if it will help, but here is what I would test:
temporarily remove the Task.init call in the controller
use name and project parameters that don't contain spaces
don't use services as the default queue
In the meantime, it is also possible to create a figure that contains two or more histograms and then report it to the logger using report_plotly.
You can have a look there :
https://plotly.com/python/histograms/#overlaid-histogram
https://plotly.com/python/histograms/#stacked-histograms
import numpy as np
import plotly.graph_objects as go

log = task.get_logger()
x0 = np.random.randn(1500)
x1 = np.random.randn(1500) - 1
fig = go.Figure()
fig.add_trace(go.Histogram(y=x0))
fig.add_trace(go.Histogram(y=x1))
# overlay the histograms instead of drawing them side by side
fig.update_layout(barmode='overlay')
log.report_plotly(title='histograms', series='overlaid', figure=fig)
you can freeze your local env to capture all the installed packages. With pip (on Linux) it would be something like this:
pip freeze > requirements.txt
(doc here https://pip.pypa.io/en/stable/cli/pip_freeze/ )
hi AbruptHedgehog21
which S3 service provider will you use?
do you have a precise list of the variables you need to add to the configuration to access your bucket? 🙂
hey ApprehensiveSeahorse83
can you please check that the trigger is correctly added? Simply retrieve the return value of add_task_trigger:
res = trigger.add_task_trigger( ..... )
print(f'Trigger correctly added ? {res}')
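For context, a rough sketch of a full trigger setup. I am assuming TriggerScheduler from clearml.automation here, and the task id, project and queue names are placeholders:
from clearml.automation import TriggerScheduler

# poll the server every few minutes for matching events
trigger = TriggerScheduler(pooling_frequency_minutes=3)

# assumption: launch task 'aabbcc' whenever a task in project 'examples' completes
res = trigger.add_task_trigger(
    schedule_task_id='aabbcc',
    schedule_queue='default',
    trigger_project='examples',
    trigger_on_status=['completed'],
)
print(f'Trigger correctly added ? {res}')

trigger.start()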
it is a bit old. I recommend you test again with the latest version, 1.4.1
can you please give me some more details about what you intend to do? It would then be easier to reproduce the issue
the fact that the minio server is called "bucket" in the doc is for sure confusing. I will check the reason for this choice, and also why we don't begin to build the structure from the bucket (the real one).
I'll keep you updated
hey H4dr1en
you just specify the packages that you want to be installed (no need to specify the dependencies), and the version if needed.
Something like:
torch==1.10.0
Also, change this line of the conf file to false:
development {
# Development-mode options
# dev task reuse window
task_reuse_time_window_in_hours: 72.0
# Run VCS repository detection asynchronously
vcs_repo_detect_async: true <== change to false