Answered
Is there a limit on the page_size input of clearml/backend_api/services/v2_13/tasks.py GetAllRequest?

Hi all,
I would like to know about the page_size input of clearml/backend_api/services/v2_13/tasks.py GetAllRequest.
I have confirmed that the number of rows in the response changes depending on the given page_size.
However, even when I pass a page_size greater than 500, the response only contains 500 rows.
Is there any limit on page_size?

  
  
Posted 2 years ago

Answers 8


Thank you for the additional comment.
But when I call with scroll_id="" I get the error message "Validation error (Integer size parameter greater than 1 should be provided when working with scroll)".
So I tried calling with scroll_id=1, but I still got the same error.

If incrementing the page is the correct way, I will use it, following the example in https://github.com/allegroai/clearml/blob/master/examples/services/cleanup/cleanup_service.py
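For reference, a page-increment loop along those lines might look like the following minimal sketch (it uses the send_request() call recommended further down in this thread; stopping when a page comes back empty is an assumption, not something confirmed here):

` # Sketch: page-based pagination over tasks.get_all (page_size is capped at 500 server-side).
import clearml.backend_api

session = clearml.backend_api.Session()
page = 0
all_tasks = []
while True:
    payload = {
        'only_fields': ["id", "name", "project", "completed", "user"],
        'page': page,
        'page_size': 500,
        'order_by': ['completed'],
    }
    data = session.send_request("tasks", "get_all", json=payload).json().get('data', {})
    tasks = data.get('tasks', [])
    if not tasks:
        break  # assumption: an empty page means there is nothing left to fetch
    all_tasks.extend(tasks)
    page += 1 `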

  
  
Posted 2 years ago

Here is my code.
session = clearml.backend_api.Session()
url = r"{}/{}".format(session.get_api_server_host(), "tasks.get_all")
payload = {
    'only_fields': ["id", "name", "project", "completed", "user"],
    'page': 0,
    'page_size': 5,
    'scroll_id': 1,
    'order_by': ['completed']
}
response = requests.post(url, data=json.dumps(payload), auth=(session.access_key, session.secret_key))
response.json()
and response.json() shows:
{'meta': {'id': 'd61c02a39a394d0a933c34a871747313', 'trx': 'd61c02a39a394d0a933c34a871747313', 'endpoint': {'name': 'tasks.get_all', 'requested_version': '2.18', 'actual_version': '1.0'}, 'result_code': 400, 'result_subcode': 12, 'result_msg': 'Validation error (Integer size parameter greater than 1 should be provided when working with scroll)', 'error_stack': None, 'error_data': {}}, 'data': {}}

  
  
Posted 2 years ago

Hi StraightParrot3, page increment is indeed correct in this case.
To use scroll_id, you will need to start the call by specifying scroll_id="" and size=<some int>.
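For illustration, a minimal sketch of what that first scroll call could look like, using the send_request() style shown elsewhere in this thread (the 'size' parameter name follows the validation error quoted above and is an assumption about the exact request field):

` # Sketch: start a scroll with an empty scroll_id and an integer size > 1.
import clearml.backend_api

session = clearml.backend_api.Session()
payload = {
    'only_fields': ["id", "name", "project", "completed", "user"],
    'scroll_id': "",   # empty string starts a new scroll
    'size': 500,       # 'size' rather than 'page_size', per the validation error
    'order_by': ['completed'],
}
response = session.send_request("tasks", "get_all", json=payload).json()
next_scroll_id = response.get('data', {}).get('scroll_id')  # reuse this in the next call `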

  
  
Posted 2 years ago

CostlyOstrich36
Thank you for your reply.
I tried the following code; however, I couldn't find a scroll_id in the response.
` session = clearml.backend_api.Session()
url = r"{}/{}".format(session.get_api_server_host(), "tasks.get_all")
payload = {
    'only_fields': ["id", "name", "project", "completed", "user"],
    'page': 0,
    'page_size': 10,
    'order_by': ['completed']
}
response = requests.post(url, data=json.dumps(payload), auth=(session.access_key, session.secret_key))
response.json()

{'meta': {'id': '9ab21d7d66f84da6bdb8aa37141fe83b',
'trx': '9ab21d7d66f84da6bdb8aa37141fe83b',
'endpoint': {'name': 'tasks.get_all',
'requested_version': '2.18',
'actual_version': '1.0'},
'result_code': 200,
'result_subcode': 0,
'result_msg': 'OK',
'error_stack': '',
'error_data': {}},
'data': {'tasks': [{'completed': '2021-02-14T17:10:24.514000+00:00',
'id': '2a619936b4204f5ebfeefefd66bddf03',
'name': 'Example data',
'project': 'db23ea641a6d4482b9d9b52bdb67dcd9',
'user': 'allegroai'},
:
{'completed': '2021-04-24T10:59:43.710000+00:00',
'id': 'aea81aea27014b98a92287da60672575',
'name': 'hyper-parameters example',
'project': '57f9a10d127f44ffa192cb36cd6de8fe',
'user': 'allegroai'}]}}

10 results `

I also requested 5 results at a time and repeated the call with the page incremented:
payload = { 'only_fields' : ["id", "name", "project", "completed", "user"], 'page' : 0, 'page_size' : 5, 'order_by' : [ 'completed' ] }
payload = { 'only_fields' : ["id", "name", "project", "completed", "user"], 'page' : 1, 'page_size' : 5, 'order_by' : [ 'completed' ] }
and I got the same 10 results.
How can I get a scroll_id?
Is incrementing the page as above the correct way?

  
  
Posted 2 years ago

I'm not sure what's wrong, but you should simply use the session's send_request() method to send this request instead of calling requests.post yourself.

  
  
Posted 2 years ago

Thanks.

I changed my code to use session.send_request() instead of requests.post():
session = clearml.backend_api.Session()
payload = {
    'only_fields': ["id", "name", "project", "completed", "user"],
    'page': 0,
    'page_size': 5,
    'scroll_id': 1,
    'order_by': ['completed']
}
response = session.send_request("tasks", "get_all", json=payload)
response.json()
but I still got the same error:
{'meta': {'id': 'ede50394c3554b94a2726630a5c40d0b', 'trx': 'ede50394c3554b94a2726630a5c40d0b', 'endpoint': {'name': 'tasks.get_all', 'requested_version': '2.18', 'actual_version': '1.0'}, 'result_code': 400, 'result_subcode': 12, 'result_msg': 'Validation error (Integer size parameter greater than 1 should be provided when working with scroll)', 'error_stack': None, 'error_data': {}}, 'data': {}}

I also tried APIClient:
from clearml.backend_api.session.client import APIClient
client = APIClient()
tasks = client.tasks.get_all(page=0, page_size=10)
tasks
It gives the following response and it works:
{'id': 'e8e09741583b4a0499baf588eb3f8bdb', 'name': '2D plots reporting'}
{'id': 'b18427cc76e34c19aa475acef1ff8bfa', 'name': '3D plot reporting'}
{'id': 'd0bf83f3a7854c4585e0db19d891a4cf', 'name': 'Abseil example'}
{'id': '8dc205923f3d4674ad40482391454f8e', 'name': 'Automatic Hyper-Parameter Optimization'}
{'id': '9751c847f6664f52a096e1264b258fad', 'name': 'Example Dataset'}
{'id': '2a619936b4204f5ebfeefefd66bddf03', 'name': 'Example data'}
{'id': '6008e7fbd2594037b2bb4d4aec6c3bf7', 'name': 'Export models to Artifacts'}
{'id': '33534ab12ea646a3bab6326a5efbc261', 'name': 'Keras HP optimization base'}
{'id': 'cc65eb033cce43b3a20e2174d1dbe686', 'name': 'Keras HP optimization base: General/batch_size=128 General/epochs=30 General/layer_1=128 General/layer_2=128'}
{'id': 'd8efdaef91814392995157fcc0a56cf0', 'name': 'Keras HP optimization base: General/batch_size=128 General/epochs=30 General/layer_1=128 General/layer_2=384'}
When I call it with scroll_id, the call fails with an error about the unsupported argument scroll_id:
from clearml.backend_api.session.client import APIClient
client = APIClient()
tasks = client.tasks.get_all(page=0, page_size=10, scroll_id=1)
tasks
` ---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [47], line 3
1 from clearml.backend_api.session.client import APIClient
2 client = APIClient()
----> 3 tasks = client.tasks.get_all(page=0,page_size=10, scroll_id=1)
4 tasks

File .venv\lib\site-packages\clearml\backend_api\session\client\client.py:422, in make_action.<locals>.get(self, *args, **kwargs)
417 @wrap
418 def get(self, *args, **kwargs):
419 return TableResponse(
420 service=self,
421 entity=entity,
--> 422 result=self.session.send(request_cls(*args, **kwargs)),
423 dest=dest,
424 fields=kwargs.pop("only_fields", None),
425 )

File .venv\lib\site-packages\clearml\backend_api\services\v2_13\tasks.py:7617, in GetAllRequest.__init__(self, id, name, user, project, page, page_size, order_by, type, tags, system_tags, status, only_fields, parent, status_changed, search_text, all, any, **kwargs)
7596 def __init__(
7597 self,
7598 id=None,
(...)
7615 **kwargs
...
---> 24 raise ValueError('Unsupported keyword arguments: %s' % ', '.join(kwargs.keys()))
25 elif allow_extra_fields and kwargs:
26 self._extra_fields = kwargs

ValueError: Unsupported keyword arguments: scroll_id `

  
  
Posted 2 years ago

StraightParrot3, how exactly did you call it?

  
  
Posted 2 years ago

Hi StraightParrot3, page_size is indeed limited to 500, from my understanding. You need to scroll through the tasks: the first tasks.get_all response will return a scroll_id, and you need to pass that scroll_id in your following call. Every call afterwards returns a new scroll_id, which you always need to use in your next call to continue scrolling through the tasks. Makes sense?
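For illustration, a scroll loop based on that description could look roughly like this (field names follow the snippets earlier in the thread; treating an empty task list as the end of the scroll is an assumption):

` # Sketch: scroll through all tasks, passing the scroll_id returned by each call into the next one.
import clearml.backend_api

session = clearml.backend_api.Session()
scroll_id = ""        # an empty scroll_id together with an integer size starts the scroll
all_tasks = []
while True:
    payload = {
        'only_fields': ["id", "name", "project", "completed", "user"],
        'scroll_id': scroll_id,
        'size': 500,  # integer size > 1 is required when scrolling
        'order_by': ['completed'],
    }
    data = session.send_request("tasks", "get_all", json=payload).json().get('data', {})
    tasks = data.get('tasks', [])
    if not tasks:
        break          # assumption: an empty batch means the scroll is exhausted
    all_tasks.extend(tasks)
    scroll_id = data.get('scroll_id')  # always use the latest scroll_id in the next call `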

  
  
Posted 2 years ago