Hi @<1523701070390366208:profile|CostlyOstrich36> , update for you here. I had noticed that the issue was not present for smaller datasets, which led us to discover that the problem was caused by some nginx (I think) settings in the new server deployment. These settings were blocking the upload of the "dataset content" object. Our devops team was able to resolve the issue. Thanks very much for your help.
I believe you should be able to set the queue_name parameter to None to accomplish this.
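If it helps, here is a minimal sketch of what I mean. I'm guessing the call in question is Task.execute_remotely (an assumption on my part about your context); with queue_name=None the task is not auto-enqueued:

from clearml import Task

# Assumption: execute_remotely() is the call taking queue_name.
# With queue_name=None, the task is not enqueued automatically,
# so you can enqueue it yourself later (or not at all).
task = Task.init(project_name="examples", task_name="demo")
task.execute_remotely(queue_name=None)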
Hi @<1523701435869433856:profile|SmugDolphin23> , so I need to call the pipeline function again in the remote context? I guess I thought that when I start it up, my local session parses the pipeline and transmits it to the server to run, but it sounds like it just copies the code, and then I need to effectively call it again in the agent?
Hi @<1523701070390366208:profile|CostlyOstrich36> , I would expect the loss_func parameter to be FocalLoss instead of ['FocalLoss', 'FocalLoss', 'FocalLoss', 'FocalLoss'] (and the same for the validation_split_name parameter). I will try to put together an example, though it might take a little time before I can do it.
I actually have a question about your original code snippet, @<1556450111259676672:profile|PlainSeaurchin97> . I have been trying to figure out a way to access the task object when running remotely so that I can instantiate the logger, but when I tried task_id = os.getenv("CLEARML_TASK_ID"), it returned None. I also tried Task.current_task() and got None back as well. What is the recommended way to access the Task object from within the remote agent?
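For concreteness, here is a minimal sketch of the two approaches I tried, both of which returned None inside the agent:

import os
from clearml import Task

# Attempt 1: the environment variable I expected the agent to set
task_id = os.getenv("CLEARML_TASK_ID")
print(task_id)  # -> None in my case

# Attempt 2: asking the SDK for the currently running task
task = Task.current_task()
print(task)  # -> also None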
@<1523701225533476864:profile|ObedientDolphin41> , I was searching for anyone having an issue like mine and found this thread. I have created a simple pipeline using decorators, and when I try to clone it in the UI, I get that "base_task_id is empty" error. It works fine when triggered programmatically from my machine. I'm wondering if you could elaborate on how you used the get_configuration_object and set_configuration_object methods to solve this? In my case, I'm not setting a...
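For context, this is the general shape of those two Task methods as I understand them (the "pipeline" section name and the task id are placeholders of mine):

from clearml import Task

task = Task.get_task(task_id="<pipeline-task-id>")  # placeholder id

# Read a named configuration object back as a string
config_text = task.get_configuration_object(name="pipeline")

# Write it (or an edited copy) back under the same name
task.set_configuration_object(name="pipeline", config_text=config_text)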
I don't see any console errors
I’m using SDK version 1.10.2 and yes, it’s self-hosted. Here is the version info for the server:
WebApp: 1.9.1-312 • Server: 1.9.1-312 • API: 2.23
Thanks!
I think this is what you're looking for but let me know if you meant something different:
{
    "meta": {
        "id": "76fffdf3b04247fa8f0c3fc0743b3ccb",
        "trx": "76fffdf3b04247fa8f0c3fc0743b3ccb",
        "endpoint": {
            "name": "tasks.get_by_id_ex",
            "requested_version": "2.30",
            "actual_version": "1.0"
        },
        "result_code": 200,
        "result_subcode": 0,
        "result_msg": "OK",
        "error_stack": "",
        "error_data"...
Hi @<1523701070390366208:profile|CostlyOstrich36> , this is what our devops engineer said:
The proxy-body-size limitation was the problem for the ClearML API: for the web server and fileserver I had set it to unlimited, but for the API I didn't change it.
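For reference, this is the kind of nginx setting involved, a sketch only (the exact file and scope depend on how the server is deployed):

# Plain nginx: client_max_body_size caps the request body size; 0 means unlimited.
client_max_body_size 0;

# Kubernetes ingress-nginx equivalent (annotation on the Ingress resource):
#   nginx.ingress.kubernetes.io/proxy-body-size: "0"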
Hi @<1523701205467926528:profile|AgitatedDove14> , sure. I just need to scrub them for any sensitive info, then I'll post to this thread. Thanks for your reply.
No, I'm not seeing that "Dataset Content" section. We have some older datasets, copied from a prior server deployment, that do have the section, and it appears in the UI for those.
Hi Max, thanks very much for your message! I understand what you’re saying now, though I suppose this is not my issue since I’m not setting any of the decorator values with variables. I’ll post a query in the main channel with code snippets to see if anyone has ideas. Thank you!
Hi Martin, I see. That makes sense, though I would have expected the behavior to be the same when running remotely the first time as well. In any case, this solved the issue for me. Thanks for looking at it.
Hi @<1523701205467926528:profile|AgitatedDove14> , sorry for the delayed reply. So what you're saying is to first kick off a new run and then rename the underlying pipeline Task, which will make that particular run show up under a new pipeline name? And this has to be done only after the run has started.
What would be most ideal is to be able to right-click on a pipeline run and have a "clone" option, like you can with a task, where you can start a new run with a new name in a single ...
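Just to check my understanding, a sketch of the rename step (Task.set_name is from the SDK; the task id is a placeholder of mine):

from clearml import Task

# Placeholder id: the controller task behind the newly started run
pipeline_task = Task.get_task(task_id="<new-run-task-id>")

# Renaming it after the run has started should file that run
# under the new pipeline name
pipeline_task.set_name("my_new_pipeline_name")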
@<1523701205467926528:profile|AgitatedDove14> : FYI here it is None