Hi HandsomeGiraffe70
There is a way, via the API. You can use it this way:
- retrieve the task the model belongs to
- retrieve the model you want (from the list of input and output models)
- create the metadata
- inject it into the model
Here is an example :
```
from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import models
from clearml.backend_api.services.v2_13.models import MetadataItem

task = Task.get_task(project_name=project_name, task_name=task_name)
# sketch of the rest (the original snippet was truncated): grab the model and attach the metadata
model = task.models["output"][-1]  # or task.models["input"], depending on the model you want
metadata = MetadataItem(key="my_key", type="str", value="my_value")
Session().send(models.AddOrUpdateMetadataRequest(model=model.id, metadata=[metadata]))
```
report_scalar permits you to manually report a scalar series; this is the dedicated function. There are other ways to report a scalar, for example through TensorBoard: in that case you report to TensorBoard, and ClearML will automatically pick up the values.
Hi Alek
It should be auto-logged. Could you please give me some details about your environment?
Hi CrookedMonkey33
Have a look at the SDK docs. You could use a Model function such as get_local_copy:
https://clear.ml/docs/latest/docs/references/sdk/model_model#get_local_copy
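For example (the model id here is a placeholder, take it from the UI or from `task.models`; `get_local_copy` downloads the weights file and returns its local path):

```python
from clearml import Model

model = Model(model_id="<your_model_id>")  # placeholder id
local_weights_path = model.get_local_copy()
print(local_weights_path)
```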
Hi Igor
we are working on your issue and will update you asap
hi DizzyHippopotamus13
Yes you can generate a link to the experiments using this format.
However I would suggest using the SDK instead, for more safety:

```
task = Task.get_task(project_name=xxx, task_name=xxx)
url = task.get_output_log_web_page()
```

Or in one line:

```
url = Task.get_task(project_name=xxx, task_name=xxx).get_output_log_web_page()
```
Hi WittyOwl57 ,
The function is :
task.get_configuration_object_as_dict(name="name")
with task being your Task object.
You can find a bunch of pretty similar functions in the docs. Have a look here: https://clear.ml/docs/latest/docs/references/sdk/task#get_configuration_object_as_dict
hi ScaryBluewhale66
Can you please elaborate? I am not sure that I get your question. What do you need to compare?
Hi,
We are going to try to reproduce this issue and will update you asap
Can you also check that you can access the servers? Try to do:
curl http://<my server>:port
for your different servers, and share the results 🙂
Hey
Is this issue solved?
🙂 thanks !
hi ReassuredTiger98
Can you give some details on which function you are calling for deleting, please?
hi ObedientToad56
the API will return raw objects, not dictionaries
you can use the SDK. For example, if task_id is your pipeline's main task id, then you can retrieve the configuration objects this way:

```
from clearml import Task

task = Task.get_task(task_id=task_id)
config = task.get_configuration_object_as_dict('Pipeline')
for k in config:
    print(f'Step {k} has job id {config[k]["job_id"]}')
```
hey UnevenDolphin73
you can mount your S3 bucket in a local folder and point your clearml.conf file to that folder.
I used s3fs to mount my S3 bucket as a folder, then modified agent.venvs_dir and agent.venvs_cache
(as mentioned here https://clear.ml/docs/latest/docs/clearml_agent#environment-caching )
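Roughly, assuming the bucket is mounted under /mnt/clearml-cache (the bucket name and mount point here are made up):

```
# mount the bucket as a local folder with s3fs
s3fs my-bucket /mnt/clearml-cache

# then in clearml.conf:
agent {
    venvs_dir: /mnt/clearml-cache/venvs-builds
    venvs_cache: {
        path: /mnt/clearml-cache/venvs-cache
    }
}
```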
It works locally but not on remote execution: can you check that the machine the agent is executed on is correctly configured? The agent there needs to be provided with the correct credentials.
The autologger uses the file extension to determine what it is reporting. Can you try to use the regular .pt extension?
hi NervousFrog58
Can you share some more details with us please ?
Do you mean that when you have a failing experiment, you would like a snippet that resets and relaunches it, the way you do through the UI?
Your ClearML package versions and your logs would be very useful too 🙂
Hi NonsensicalWoodpecker96
you can use the SDK 🙂

```
task = Task.init(project_name=project_name, task_name=task_name)
task.set_comment('Hi there')
```
Hi TeenyBeetle18
If the dataset can basically be built from a local machine, you could use sync_folder (SDK https://clear.ml/docs/latest/docs/references/sdk/dataset#sync_folder or CLI https://clear.ml/docs/latest/docs/clearml_data/data_management_examples/data_man_folder_sync#syncing-a-folder ). Then you would be able to modify any part of the dataset and create a new version with only the items that changed.
There is also an option to download only parts of the dataset, have a look.
If the data is updated into the same local/network folder structure, which serves as the dataset's single point of truth, you can schedule a script that uses the dataset sync functionality to update the dataset based on the modifications made to the folder.
You can then modify precisely what you need in that structure, and get a new updated dataset version.
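Such a scheduled script could look roughly like this (the project name, dataset name, and folder path are placeholders):

```python
from clearml import Dataset

# get the latest version and create a new child version on top of it
parent = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
dataset = Dataset.create(
    dataset_project="my_project",
    dataset_name="my_dataset",
    parent_datasets=[parent.id],
)

# only the files that were added / modified / removed in the folder are updated
dataset.sync_folder(local_path="/path/to/dataset_folder")

dataset.upload()
dataset.finalize()
```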
In the meantime, it is also possible to create a figure that contains two or more histograms, and then report it to the logger using report_plotly.
You can have a look there :
https://plotly.com/python/histograms/#overlaid-histogram
https://plotly.com/python/histograms/#stacked-histograms
```
import numpy as np
import plotly.graph_objects as go

log = task.get_logger()
x0 = np.random.randn(1500)
x1 = np.random.randn(1500) - 1

fig = go.Figure()
fig.add_trace(go.Histogram(y=x0))
fig.add_trace(go.Histogram(y=x1))
fig.update_layout(barmode='overlay')
# sketch of the truncated ending: send the figure to ClearML
log.report_plotly(title='histograms', series='overlaid', iteration=0, figure=fig)
```
You can force the agent to install only the packages that you need, using a requirements.txt file. Type in what you want the agent to install (pytorch, and eventually clearml). Then call this function before Task.init:

```
Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')
```
you can freeze your local env and thus get all the packages installed. With pip (on Linux) it would be something like this:
pip freeze > requirements.txt
(doc here https://pip.pypa.io/en/stable/cli/pip_freeze/ )
hey H4dr1en
you just specify the packages that you want to be installed (no need to specify the dependencies), and the version if needed.
Something like :
pytorch==1.10.0
Hi
could you please share the logs for that issue (without the credentials 🙂 )?
Hi RobustRat47
Is your issue solved? 🙂
for instance
export CLEARML_AGENT__AGENT__PACKAGE_MANAGER__TYPE=conda && clearml-agent daemon --queue "my queue"
Have you tried setting your agent to conda mode ( https://clear.ml/docs/latest/docs/clearml_agent#conda-mode )?