Hi WickedElephant66
When you are in the Projects section of the WebApp (second icon on the left), enter either "All Experiments" or any project you want to access. At the top center is the Models section. You can find the URL the model can be downloaded from in the details section.
hey Maximilian,
which version of clearml are you using?
Hi Alek
It should be auto-logged. Could you please give me some details about your environment?
hi MoodySheep3
I think you are using ParameterSet the way it is supposed to be used 🙂
When I run my examples, I also get this warning, which is weird, because it is just a warning: the script continues anyway (and reaches the end without issue). Those hyperparameters exist, and all the sub-tasks corresponding to a given parameter set find them!
Hi MotionlessCoral18
Have these threads been useful to solve your issue? Do you still need some support? 🙂
yes, but it is supposed to be logged in the task corresponding to the step the model is being saved from. monitor_model makes the logging go to the main pipeline task.
hey RoughTiger69
Can you describe how you are setting up the environment variable, please?
Setting that flag will skip the virtual env installation: the agent will use your environment and the packages installed into it.
Using Task.add_requirements('requirements.txt') allows you to add specific packages at will. Note that this function will be executed even with the flag CLEARML_AGENT_SKIP_PIP_VENV_INSTALL set.
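For example, here is a minimal sketch of how add_requirements can be used; the file path, package name, and task names are placeholders:
```python
from clearml import Task

# Must be called before Task.init()
Task.add_requirements("requirements.txt")   # add everything from a requirements file
Task.add_requirements("pandas", "1.5.3")    # or pin a single package

task = Task.init(project_name="examples", task_name="requirements demo")
```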
you can run a clearml agent on your machine, in a way that it is dedicated to a certain queue. You can then clone the experiment you are interested in (either belonging to your workspace or to your partner's), and enqueue it into the queue you assigned your worker to.
clearml-agent daemon --queue 'my_queue'
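Cloning and enqueuing can also be done from the SDK; here is a minimal sketch, with the task ID and queue name as placeholders:
```python
from clearml import Task

template = Task.get_task(task_id="<experiment_id>")          # the experiment to clone
cloned = Task.clone(source_task=template, name="my clone")   # creates a draft copy
Task.enqueue(task=cloned, queue_name="my_queue")             # send it to your agent's queue
```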
Hi PanickyMoth78
There is indeed a versioning mechanism available for the open source version 🎉
The datasets keep track of their "genealogy" so you can easily access the version that you need through its ID
In order to create a child dataset, you simply have to use the parameter "parent_datasets" when you create your dataset: have a look at
https://clear.ml/docs/latest/docs/clearml_data/clearml_data_sdk#datasetcreate
You can also alternatively squash datasets together to create a c...
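Here is a minimal sketch of creating a child dataset; the project, name, parent ID, and folder are placeholders:
```python
from clearml import Dataset

child = Dataset.create(
    dataset_name="my_dataset_v2",
    dataset_project="datasets_project",
    parent_datasets=["<parent_dataset_id>"],  # the child inherits the parent's files
)
child.add_files("new_data/")  # add or override files on top of the parent version
child.upload()
child.finalize()
```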
can you please provide the apiserver log and the elasticsearch log?
Hi MotionlessCoral18
You need to run some scripts when migrating, to update your old experiments. I am going to try to find you some examples.
Hi MoodySparrow34
We have a user who wrote this example: https://github.com/marekcygan/clearml-slurm-workers
It is simple glue code to spin up SLURM workers when tasks are enqueued. Hope it will help!
hey GiganticMole91
you can set the logger to use your bucket as the default upload destination:
task.get_logger().set_default_upload_destination('s3://xxxxx')
Hi CourageousKoala93
Yes, you can use Google Cloud Storage. You can have a look at the docs: https://clear.ml/docs/latest/docs/integrations/storage/#configuring-google-storage
Basically, this part of the doc shows you how to set the credentials in the configuration file.
You will also have to specify the destination URI, by adding output_uri="path to my bucket" to Task.init().
Do not hesitate to ask for more details if needed.
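As an illustration, a minimal sketch with placeholder project, task, and bucket names:
```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="gcs output demo",
    output_uri="gs://my_bucket/clearml",  # models and artifacts are uploaded here
)
```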
hi ObedientToad56
the API will return raw objects, not dictionaries
you can use the SDK. For example, if task_id is your pipeline main task id, then you can retrieve the configuration objects this way:
```python
from clearml import Task

task = Task.get_task(task_id=task_id)
config = task.get_configuration_object_as_dict('Pipeline')
for k in list(config.keys()):
    print(f'Step {k} has job id {config[k]["job_id"]}')
```
Hey LuckyKangaroo60
So far there isn't a CLI command to check the conf file format: if there is an error, it is detected at the beginning of the execution and the program fails. Here is what I use as a conf for accessing my local docker-based minio:
```
s3 {
    # S3 credentials, used for read/write access by various SDK elements
    # Default, used for any bucket not specified below
    region: ""
    # Specify explicit keys
    key: "david"
    ...
```
you also have to set your agent to use your partner's credentials
simply run:
clearml-agent init
and paste your partner's credentials
Hey Atalya 🙂
Thanks for your feedback. This is indeed a good feature to think about.
So far there is no other ordering than alphabetical. Could you please create a feature request on GitHub?
Thanks
In the meantime, it is also possible to create a figure containing two or more histograms, and then report it to the logger using report_plotly.
You can have a look there :
https://plotly.com/python/histograms/#overlaid-histogram
https://plotly.com/python/histograms/#stacked-histograms
```python
import numpy as np
import plotly.graph_objects as go

log = task.get_logger()
x0 = np.random.randn(1500)
x1 = np.random.randn(1500) - 1

fig = go.Figure()
fig.add_trace(go.Histogram(y=x0))
fig.add_trace(go.Histogram(y=x1))
fig.update_layout(barmode='overlay') ...
```
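The snippet above is cut off; a hedged sketch of what the final reporting call could look like (title and series are example names):
```python
# send the plotly figure to the ClearML plots section
log.report_plotly(title="histograms", series="overlaid", figure=fig, iteration=0)
```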
hey
"when cloning an experiment via the WebUI, shouldn't the cloned experiment have the original experiment as a parent? It seems to be empty"
you are right, I think there is a bug here. We will release a fix asap 🙂
report_scalar permits manually reporting a scalar series; it is the dedicated function. There could be other ways to report a scalar, for example through TensorBoard - in that case you would report to TensorBoard, and ClearML would automatically pick up the values.
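A minimal sketch of manual scalar reporting (project and task names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="scalar demo")
logger = task.get_logger()
for i in range(10):
    # each call adds one point to the "train" series of the "loss" plot
    logger.report_scalar(title="loss", series="train", value=1.0 / (i + 1), iteration=i)
```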
Hi HandsomeGiraffe70
There is a way: the API. You can use it this way:
- retrieve the task the model belongs to
- retrieve the model you want (from the list of input and output models)
- create the metadata
- inject it into the model
Here is an example:
```python
from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import models
from clearml.backend_api.services.v2_13.models import MetadataItem

task = Task.get_task(project_name=project_name, task_name=...
```
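The example is cut off; a hedged sketch of how the continuation could look, assuming your server exposes the models.add_or_update_metadata endpoint (the model selection and metadata values are examples):
```python
# pick the model to update, e.g. the last output model of the task
model_id = task.models["output"][-1].id
metadata = [MetadataItem(key="epoch", type="int", value="42")]

# send the request through an API session
res = Session().send(models.AddOrUpdateMetadataRequest(model=model_id, metadata=metadata))
```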
Can you try to add the flag auto_create=True when you call Dataset.get?
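For example (project and dataset names are placeholders):
```python
from clearml import Dataset

ds = Dataset.get(
    dataset_project="datasets_project",
    dataset_name="my_dataset",
    auto_create=True,  # create a new dataset if none exists yet
)
```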
No, it is supposed to have its status updated automatically. We may have a bug. Can you share some example code with me, so that I can try to figure out what is happening here?
hey ApprehensiveSeahorse83
can you please check that the trigger is correctly added? Simply retrieve the return value of add_task_trigger:
```python
res = trigger.add_task_trigger( ...
print(f'Trigger correctly added? {res}')
```