Hi Jax
A worker is the resources allocated to one agent. An active worker is one that is working on an enqueued job.
This graph shows that some of your agents are working on tasks, and some are idling. The idle ones might be assigned to empty queues. Does that sound logical?
hi ObedientToad56
the API will return raw objects, not dictionaries
you can use the SDK. For example, if task_id is the id of your pipeline's main task, then you can retrieve the configuration objects this way:
`
from clearml import Task

task = Task.get_task(task_id=task_id)
config = task.get_configuration_object_as_dict('Pipeline')
for k in config:
    print(f'Step {k} has job id {config[k]["job_id"]}')
`
hi RattyLouse61
here is a code example, I hope it will help you better understand the backend_api.
`
from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import events

task = Task.get_task('xxxxx', 'xxxx')  # placeholders: task_id, project_name
title = 'my_title'    # metric title of the debug image sample
series = 'my_series'  # series (variant) of the debug image sample

session = Session()
res = session.send(events.GetDebugImageSampleRequest(
    task=task.id,
    metric=title,
    variant=series,
))
print(res.response_data)
`
If you face an issue, can you send me a snippet so that I can better understand what is happening? Thanks
hey Martin,
DefiantHippopotamus88 joined the thread. He faced the same issue in the thread you sent
https://clearml.slack.com/archives/CTK20V944/p1656537337804619?thread_ts=1656446563.854059&cid=CTK20V944
Can you try to add the flag auto_create=True when you call Dataset.get?
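Something like this (a minimal sketch, the project and dataset names are placeholders):
`
from clearml import Dataset

# auto_create=True creates the dataset if it does not exist yet
ds = Dataset.get(
    dataset_project='my_project',
    dataset_name='my_dataset',
    auto_create=True,
)
`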
Hey TartSeagull57
We have released a version that fixes the bug. It is an RC, but it is stable. The version number is 1.4.2rc1
Hi TeenyBeetle18
If the dataset can basically be built from a local machine, you could use sync_folder (sdk https://clear.ml/docs/latest/docs/references/sdk/dataset#sync_folder or cli https://clear.ml/docs/latest/docs/clearml_data/data_management_examples/data_man_folder_sync#syncing-a-folder ). You would then be able to modify any part of the dataset and create a new version containing only the items that changed.
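As a minimal sketch of that flow (project, name, id and path are placeholders):
`
from clearml import Dataset

# new version based on a previous one; only changed files are added
ds = Dataset.create(
    dataset_project='my_project',
    dataset_name='my_dataset',
    parent_datasets=['<parent_dataset_id>'],
)
ds.sync_folder(local_path='/path/to/dataset')  # registers added/modified/removed files
ds.upload()    # uploads the new/changed files
ds.finalize()  # closes this dataset version
`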
There is also an option to download only parts of the dataset, have a l...
Hello DepravedSheep68 ,
In order to store your info into the S3 bucket you will need two things:
- specify the uri where you want to store your data when you initialize the task (see the output_uri parameter of the Task.init function https://clear.ml/docs/latest/docs/references/sdk/task#taskinit )
- specify your S3 credentials in the clearml.conf file (what you did)
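For the first point, it would look something like this (the bucket name is a placeholder):
`
from clearml import Task

task = Task.init(
    project_name='my_project',
    task_name='my_task',
    output_uri='s3://my_bucket/clearml',  # artifacts and models will be uploaded here
)
`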
Yes, but it is supposed to be logged in the task corresponding to the step the model is saved from. monitor_model logs it to the main pipeline task instead.
Thanks for all those details. I will try to reproduce and keep you updated 🙂
You are in a regular execution, i.e. not a local one, so the different pipeline tasks have been enqueued. You simply need to fire an agent to pull the enqueued tasks. I would advise you to specify the queue in the steps (parameter execution_queue, see the sketch below).
You then fire your agent:
clearml-agent daemon --queue my_queue
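For example, with a controller-based pipeline it would look something like this (names are placeholders):
`
from clearml import PipelineController

pipe = PipelineController(name='my_pipeline', project='my_project', version='1.0')
pipe.add_step(
    name='step_1',
    base_task_project='my_project',
    base_task_name='step_1_task',
    execution_queue='my_queue',  # the agent listening on this queue runs the step
)
pipe.start()
`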
Hi SteepDeer88
I wrote this script to try to reproduce the error. I am passing 50+ parameters there and so far everything works fine. Could you please give me some more details about your issue, so that we can reproduce it?
`
from clearml import Task
import argparse

'''
COMMAND LINE:
python -m my_script --project_name my_project --task_name my_task --execute_remotely true --remote_queue default --param_1 parameter...
'''
`
hi SoggyBeetle95
I reproduced the issue, could you confirm that this is it?
Here is what I did:
- I declared some secret env vars in the agent section of clearml.conf
- I used extra_keys to have them hidden in the console; they are indeed hidden there, but in the Execution -> Container section they appear in clear text
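For reference, the relevant clearml.conf section looks something like this (MY_SECRET is a placeholder):
`
agent {
    hide_docker_command_env_vars {
        enabled: true
        # extra env var names to mask in the console output
        extra_keys: ["MY_SECRET"]
    }
}
`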
Could you please give me some details about what you need to achieve? It would also help if you could explain what you mean by: 'When I use Task.create it works'?
A screenshot would be welcome here 🙂
you can run a clearml agent on your machine, dedicated to a certain queue. You can then clone the experiment you are interested in (either belonging to your workspace or to your partner's), and enqueue it into the queue you assigned your worker to.
clearml-agent daemon --queue 'my_queue'
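You can also do the clone + enqueue from the SDK (a minimal sketch, the task id is a placeholder):
`
from clearml import Task

source = Task.get_task(task_id='<experiment_id>')
cloned = Task.clone(source_task=source, name='cloned experiment')
Task.enqueue(task=cloned, queue_name='my_queue')  # the dedicated agent will pick it up
`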
Hey
Is this issue solved?
🙂 thanks!
hi VexedKoala41
Your agent is running inside a docker container that may have a different version of Python installed. It tries to install a version of the package that doesn't exist for that Python version.
Try specifying the latest matching version: Task.add_requirements('ipython', '7.16.3')
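Note that Task.add_requirements has to be called before Task.init, e.g.:
`
from clearml import Task

Task.add_requirements('ipython', '7.16.3')  # pin the version before the task is created
task = Task.init(project_name='my_project', task_name='my_task')
`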
Hey Ofir
We would need to know a bit more about the issue, especially what you are trying to import. Is it a file in your repo, or does the issue occur when you try to import a standard PyPI package?
you might have a proxy error or a firewall blocking somewhere
Hi Alek
It should be auto-logged. Could you please give me some details about your environment?
hey SmugSnake6
Can you give some more details about your configuration please? (clearml, agent, and server versions)
Also, if you have some example code to share it could help us reproduce the issue and thus help you a lot faster 🙂 (script, command line for firing your agent)
Last (very) little thing: could you please open a GitHub issue for this irrelevant warning 🙂? It makes sense to register those bugs on GH, because our code and releases are hosted there.
Thank you!
http://github.com/allegroai/clearml/issues
hey Ofir
did you try to put the repo in the decorator where you need the import?
if you can send me some code to illustrate what you are doing, it could help me to reproduce the issue
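If you are using pipeline decorators, it would look something like this (a minimal sketch, the repo URL and module names are placeholders):
`
from clearml import PipelineDecorator

@PipelineDecorator.component(
    repo='https://github.com/user/repo.git',  # the repo that contains the module you import
    repo_branch='main',
    execution_queue='my_queue',
)
def my_step():
    # the import is resolved inside the cloned repo when the step runs
    from my_repo_module import helper
    return helper()
`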
Hi UnevenDolphin73
Let me summarize, so that I'll be sure I got it 🙂
I have a minio server somewhere like some_ip on port 9000, that contains a clearml bucket.
If I do StorageManager.download_folder(remote_url='s3://some_ip:9000/clearml', local_folder='./', overwrite=True)
then I'll have a clearml bucket directory created in ./ (local_folder), containing the bucket files.
have you tried adding the requirements using Task.add_requirements(local_packages) in your main file?
Yep, sorry, I had not pasted the import line. You should add something like this:
from clearml.backend_api.services import events
🙂

