An agent is a process that pulls tasks from a queue and assigns resources (workers) to them. In a pipeline, when it is not run locally, the steps are enqueued as tasks.
You are in a regular execution - I mean not a local one. So the different pipeline tasks have been enqueued. You simply need to fire up an agent to pull the enqueued tasks. I would advise you to specify the queue in the steps (parameter execution_queue ).
You then fire up your agent:
clearml-agent daemon --queue my_queue
Hi SillySealion58
you can discriminate between your output models when you instantiate them. Parameters like name, tags or comment all belong to the OutputModel constructor.
You could thus use the same filename for all the checkpoints, and still have them differentiated in the task. Does it make sense?
I have found some threads that deal with your issue and propose interesting solutions. Can you have a look at these?
Hi MotionlessCoral18
You need to run some scripts when migrating, to update your old experiments. I am going to try to find you some examples.
Hi MotionlessCoral18
Have these threads been useful to solve your issue ? Do you still need some support ? 🙂
Have you tried setting your agent to conda mode ( https://clear.ml/docs/latest/docs/clearml_agent#conda-mode ) ?
hi DizzyHippopotamus13
Yes you can generate a link to the experiments using this format.
However, I would suggest you use the SDK for more safety:
task = Task.get_task(project_name=xxx, task_name=xxx)
url = task.get_output_log_web_page()
Or in one line:
url = Task.get_task(project_name=xxx, task_name=xxx).get_output_log_web_page()
it is a bit old - I recommend you test again with the latest version, 1.4.1
Can you please give me some more details about what you intend to do? It would then be easier to reproduce the issue.
Can you try to add the flag auto_create=True when you call Dataset.get ?
what versions of the clearml packages do you have?
If you face an issue, can you send me a snippet, so that I can better understand what is happening? thanks
Hi TeenyBeetle18
If the dataset can basically be built from a local machine, you could use sync_folder (SDK https://clear.ml/docs/latest/docs/references/sdk/dataset#sync_folder or CLI https://clear.ml/docs/latest/docs/clearml_data/data_management_examples/data_man_folder_sync#syncing-a-folder ). Then you would be able to modify any part of the dataset and create a new version, with only the items that changed.
There is also an option to download only parts of the dataset, have a l...
If the data is updated in the same local / network folder structure, which serves as the dataset's single point of truth, you can schedule a script that uses the dataset sync functionality to update the dataset based on the modifications made to the folder.
You can then modify precisely what you need in that structure, and get a new, updated dataset version.
hey Ofir
did you try to put the repo in the decorator where you need the import?
if you can send me some code illustrating what you are doing, it would help me reproduce the issue
for instance:
export CLEARML_AGENT__AGENT__PACKAGE_MANAGER__TYPE=conda && clearml-agent daemon --queue my_queue
Hey Atalya 🙂
Thanks for your feedback. This is indeed a good feature to think about.
So far there is no other ordering than the alphabetical. Could you please create a feature request on github ?
Thanks
hi RattyLouse61
here is a code example, I hope it will help you to better understand the backend_api.
` from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import events

task = Task.get_task(project_name='xxx', task_name='xxx')
session = Session()

res = session.send(events.GetDebugImageSampleRequest(
    task=task.id,
    metric='my_title',    # the title of the debug image
    variant='my_series',  # the series of the debug image
))

print(res.response_data) `
Hi CrookedMonkey33
Have a look at the SDK doc. You could use a Model function such as get_local_copy
https://clear.ml/docs/latest/docs/references/sdk/model_model#get_local_copy
DepravedSheep68 you could also try to add the port to your URI:
output_uri: "s3://...:port"
Last (very) little thing : could you please open a Github issue for this irrelevant warning 🙏 ? It makes sense to register on GH those bugs, because our code and releases are hosted there.
Thank you !
http://github.com/allegroai/clearml/issues
Hi NonsensicalWoodpecker96
you can use the SDK 🙂
task = Task.init(project_name=project_name, task_name=task_name)
task.set_comment('Hi there')
hi NervousFrog58
Can you share some more details with us please ?
Do you mean that when an experiment fails, you would like a snippet that resets and relaunches it, the way you do through the UI?
Your ClearML package versions and your logs would be very useful too 🙂
Hey ReassuredTiger98
Is there any update from your side ?
I confirm that you need to put your key and secret in the credentials section of the configuration file. Like Idan, I left my policy configuration untouched.
Hi PanickyMoth78
There is indeed a versioning mechanism available for the open source version 🎉
The datasets keep track of their "genealogy" so you can easily access the version that you need through its ID
In order to create a child dataset, you simply have to use the parameter "parent_datasets" when you create your dataset: have a look at
https://clear.ml/docs/latest/docs/clearml_data/clearml_data_sdk#datasetcreate
You can also, alternatively, squash datasets together to create a c...
Hi SparklingElephant70
The function doesn't seem to find any dataset whose project_name matches your request.
Some more detailed code on how you create your dataset, and how you try to retrieve it, could help me to better understand the issue 🙂
but that still doesn't explain why it was working 2 days ago and not now!
I am investigating, and will keep you updated
Hope it helps 🤞 . Do not hesitate to ask if the error persists