Hi @<1523701295830011904:profile|CluelessFlamingo93> , when running remotely the agent assumes it will be a different machine. I think the best way to solve this is to add utils to your repository and import it from there during code execution.
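A minimal sketch of what I mean (all file and function names here are hypothetical):

```python
# Hypothetical repo layout the agent would clone on the remote machine:
#   my_repo/
#   |-- train.py
#   `-- utils/
#       |-- __init__.py
#       `-- helpers.py
#
# train.py -- the import resolves remotely because the agent checks out the whole repo
try:
    from utils.helpers import load_config
except ImportError:
    # Running this snippet outside the repo; in the real setup the import succeeds.
    load_config = None
```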
What do you think?
Hi @<1533257278776414208:profile|SuperiorCockroach75> , can you please expand on what you mean / what you're expecting? I'm not sure I understand your issue
What do you get when you call get_configuration_objects() now?
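For reference, a quick way to dump what's there (a sketch; assumes a reachable server and a valid task ID):

```python
def dump_configuration_objects(task_id: str) -> dict:
    """Print every configuration section stored on a task (untested sketch)."""
    from clearml import Task  # imported lazily so the snippet loads without a server

    task = Task.get_task(task_id=task_id)
    configs = task.get_configuration_objects()  # {section name: configuration text}
    for name, text in configs.items():
        print(f"{name}: {len(text)} chars")
    return configs
```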
SubstantialElk6 , interesting. What metrics are you looking for?
Hi @<1668427950573228032:profile|TeenyShells80> , can you please elaborate on the process? Exactly what steps you took and which CLI commands you ran. Also, what happens when you say it's not working? Are there console logs? Please add some information 🙂
Is there a vital reason why you want to keep the two accounts separate when they run on the same machine?
Also, what if you try aligning all the cache folders for both configuration files to use the same folders?
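For reference, the cache location is set in clearml.conf; pointing both configuration files at the same base dir would look roughly like this (the path is just an example):

```
sdk {
  storage {
    cache {
      default_base_dir: "/shared/clearml-cache"
    }
  }
}
```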
Hi @<1702130048917573632:profile|BlushingHedgehong95> , I would suggest the following few tests:
- Run some mock task that uploads an artifact to the files server. Once done, verify you can download the artifact via the web UI - there should be a link to it. Save that link. Then delete the task and mark it to delete all artifacts. Test the link again to verify the artifact was actually removed
- Please repeat the same with a dataset
In that case you have the "packages" parameter for both the controller and the steps
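Something along these lines (a sketch only; names are examples and a recent clearml version is assumed):

```python
def build_pipeline():
    """Sketch: the 'packages' parameter on both the controller and a step."""
    from clearml import PipelineController

    pipe = PipelineController(
        name="demo-pipeline",
        project="demo",
        packages=["pandas==2.2.*"],  # packages for the controller task
    )
    pipe.add_function_step(
        name="step_one",
        function=step_one,
        packages=["scikit-learn"],  # packages installed for this step only
    )
    return pipe


def step_one():
    # trivial placeholder step
    return 42
```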
@<1544853695869489152:profile|NonchalantOx99> , it seems like the server isn't reachable. Is it self-hosted?
I think the tasks.get_all API call should have you covered for extracting all the information you need.
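A sketch of calling it through the Python APIClient (needs a reachable server; field names follow the request body below):

```python
def list_recent_tasks(project_id=None, limit=100):
    """Query tasks.get_all via the APIClient (untested sketch)."""
    from clearml.backend_api.session.client import APIClient

    client = APIClient()
    tasks = client.tasks.get_all(
        project=[project_id] if project_id else None,
        only_fields=["id", "name", "status", "last_update"],
        order_by=["-last_update"],
        page_size=limit,
    )
    return [(t.id, t.name, t.status) for t in tasks]
```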
None
The request body should look something like this:
{
"id": [],
"scroll_id": "b77a32d585604b098f685b00f30ba2c2",
"refresh_scroll": true,
"size": 15,
"order_by": [
"-last_update"
],
"type": [
"__$not",
"annotation_manual",
"__$not",
"annotation",
"__$not",
"dataset_i...
Hi SteepDeer88 , I think this is the second case. Each artifact URL is simply saved as a string in the DB.
I think you can write a very short migration script to rectify this directly in MongoDB, OR manipulate it via the API using the tasks.edit endpoint
I think this is what you're looking for - None
I think you're right. But it looks like an infrastructure issue related to YOLO
UnevenDolphin73 , can you verify that the process is not running on the machine? For example, with htop or top
Does it save the code in the uncommitted changes?
Aw you deleted your response fast
Yeah I misread the part where it's not in ps aux ^^
Hi GrievingDeer61 , you need to create the queue yourself or change the queue that is being used to something you created 🙂
Hi DullPeacock33 , I think what you're looking for is this:
https://clear.ml/docs/latest/docs/references/sdk/task#execute_remotely
This will initialize all the automagical stuff but won't require running the script locally.
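A minimal sketch of the flow (project and queue names are examples):

```python
def main():
    """Sketch of the execute_remotely flow (untested)."""
    from clearml import Task

    task = Task.init(project_name="demo", task_name="remote-run")
    # Everything above runs locally: the repo, uncommitted changes and
    # installed packages are captured automatically.
    task.execute_remotely(queue_name="default", exit_process=True)
    # From here on, the code only runs on the agent that pulls the task.
```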
What do you think?
Hi Juan, can you please elaborate? What is pac? What fails when cloning the repo? Can you provide an error message?
Can you add a log?
#git_host=" http://bitbucket.org "
If you run an agent in docker mode ( --docker ) the agent will run a docker run command and the task will be executed inside a container. In that scenario, I think, if you kill the daemon then the container will stay up and finish the job (I think, haven't tested)
or add requirements manually via code
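For example, something like this (a sketch; package and project names are examples):

```python
def main():
    """Sketch: record requirements on the task from code (untested)."""
    from clearml import Task

    # Must be called before Task.init() so the requirement is recorded on the task
    Task.add_requirements("pandas", "2.2.2")
    task = Task.init(project_name="demo", task_name="manual-requirements")
```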
Hi @<1547028079333871616:profile|IdealElephant83> , this is what the community channel is for, support, news & discussions related to ClearML OS 🙂
It seems that the SDK can't reach the API server. Are you seeing anything in the API server logs? Is it possible you're being blocked by an internal firewall?
I don't think so. However you can use the API as well 🙂