Shards that I can see are using a lot of disk space are:
- events-training_stats_scalar
- events-log
- And then various worker_stats_*
Thanks @<1523701087100473344:profile|SuccessfulKoala55>, I’ve taken a look; is it force merging you’re referring to? Do you know how often ES is configured to merge in the ClearML server?
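For reference, this is roughly what I had in mind; a rough sketch using the standard Elasticsearch REST API (the host/port and index pattern are assumptions for a default docker-compose deployment):

```python
import requests

# Assumes the clearml-server Elasticsearch is reachable on localhost:9200
# (may require exposing the port from the docker-compose network)
ES = "http://localhost:9200"

# List indices sorted by on-disk size, to see which ones are eating space
print(requests.get(f"{ES}/_cat/indices", params={"v": "true", "s": "store.size:desc"}).text)

# Force merge an events index pattern down to a single segment to reclaim space
# from deleted documents (I/O heavy, so probably best run during a quiet period)
resp = requests.post(f"{ES}/events-training_stats_scalar-*/_forcemerge",
                     params={"max_num_segments": 1})
print(resp.json())
```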
From my limited understanding, I think it's the client that does the saving and communicating with the fileserver, not the server, whereas deletion is done by the GUI/server, which I guess could have different permissions somehow?
Is that deletion when deleting a task in the GUI?
CostlyOstrich36 thanks for getting back to me!
yes!
That's great! Please can you let me know how to do it/how to set the default files server?
However it would be advisable to also add the following argument to your code:
That's useful thanks, I didn't know about this kwarg
Has the GCP disk image for it been released? I get access denied with this link: https://storage.googleapis.com/allegro-files/clearml-server/clearml-server-1-3-0.tar.gz
Cheers!
it might be an issue in the UI due to this unconventional address or network settings
I think this is related to https://github.com/allegroai/clearml-server/issues/112#issue-1149080358 , which seems to be a recurring issue across many different setups
Yes please that would be great 👍
I realise I made a mistake and hadn't actually used connect_configuration!
I think the issue is the bandwidth yeah, for example when I doubled the number of CPUs (which doubles the allowed egress) the time taken to upload halved. It is puzzling because as you say it's not that much to upload.
For now I've whittled down the number of entries to a more select but useful few and that has solved the issue. If it crops up again I will try connect_configuration properly.
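For reference, a minimal sketch of what using it properly would look like in my case (project/task names and the dictionary contents are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="config demo")  # placeholder names

# A few thousand path -> hash entries; far too many to connect as hyperparameters
file_hashes = {"data/train.csv": "ab12cd34", "data/val.csv": "ef56ab78"}

# Attach the dict as a configuration object (shows up in the CONFIGURATION tab).
# When an agent runs the task remotely, the returned dict is the server-side copy,
# so any edits made in the UI are picked up by the code.
file_hashes = task.connect_configuration(file_hashes, name="file_hashes")
```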
Thanks for ...
I think a note about the fileserver should be added to the https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_security page!
Maybe it was the load on the server? Meaning dealing with multiple requests at the same time delayed the requests?!
Possibly, but I think the server was fine, as I could run the same task locally and it took a few seconds (rather than 75) to upload. The egress limit on the agent was 32 Gbps, which seems much larger than what I thought I was sending, but I don't have a good idea of what that limit actually means in practice!
I think you should open a github feature request since there is currently no way to do this via UI
Will do. Is there a way to do it not via the UI? E.g. in the server configuration (I'm running a self-hosted server)?
And regarding the first question - edit your ~/clearml.conf
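(For anyone reading later: the setting in question is the files_server entry in the api section of ~/clearml.conf; the hosts and ports below are placeholders for a typical self-hosted deployment.)

```
api {
    web_server: http://my-clearml-server:8080
    api_server: http://my-clearml-server:8008
    files_server: http://my-clearml-server:8081
}
```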
That would change which file server is used by me locally or by an agent, yes, but I want to change what is shown by the GUI, so that would need to be a setting on the server itself?
Ah apologies for getting the wrong end of the stick a bit!
Not sure if it helps you or not, but when the link to an artifact didn't work for me it was because the URL being used was internal to the server (I had an agent that had access to internal endpoints). In my case, setting the agent fileserver URL to the public domain solved my issue.
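A sketch of the kind of setting I mean (the domain is a placeholder, and the exact mechanism is an assumption; I set it in the agent machine's config, and I believe an environment variable works too):

```
# in the agent machine's ~/clearml.conf (placeholder domain)
api {
    files_server: https://files.my-public-domain.example
}

# or, I believe, via an environment variable before starting the agent
export CLEARML_FILES_HOST=https://files.my-public-domain.example
```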
That said, maybe the connect dict is not the best solution for a thousand-key dictionary
Seems like it isn't haha!
What is the difference with connect_configuration? The nice thing about it not being an artifact is that we can use the GUI to see which hashes have changed (which admittedly when there are a few thousand is tricky anyway)
Ah right, nice! I didn’t think it was, as I couldn’t see it in the Task reference; should it be there too?
OK that's great, thanks for the info SuccessfulKoala55 👍
And what is the difference in behaviour between Task.init(..., output_uri=True) and Task.init(..., output_uri=None)?
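To make the question concrete, here's my current understanding of the options, which may well be wrong (hence the question); project/task names are placeholders:

```python
from clearml import Task

# My (unconfirmed) understanding of the kwarg:
#   output_uri=None  -> default; model files stay wherever they are written and
#                       only their local path is recorded on the task
#   output_uri=True  -> model files/artifacts are uploaded to the default files server
#   output_uri="gs://my-bucket/clearml"  -> uploaded to that storage target instead
task = Task.init(
    project_name="examples",      # placeholder names
    task_name="output_uri demo",
    output_uri=True,
)
```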
Yep, GCP. I wonder if it's something to do with Container-Optimized OS, which is how I'm running the agents
Hi CostlyOstrich36, thanks for the response, that makes sense.
What sort of problems could happen? Would it just be corruption of the data being written, or could it be something more breaking?
For context, I’m currently backing up the server (spinning it down) every night, but now need to run tasks overnight and don’t want to have any missed logs/artifacts when the server is shut down.
Hi CostlyOstrich36 , thanks for getting back to me!
I want to launch multiple tasks from one Python process to be run by multiple agents simultaneously.
My current process for launching one task remotely is to use task.execute_remotely, and then I separately spin up a VM and execute a ClearML agent on that VM with the task ID.
Ideally, I would like to create multiple tasks in this way - so do Task.init(…), set up some configuration, and then task.execute_remotely in a l...
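Roughly what I'm trying to do, as a sketch; the queue name, parameters and the clone/exit_process arguments are my assumptions about how this could work, not a confirmed pattern:

```python
from clearml import Task

# Placeholder sweep over a few configurations
for i, params in enumerate([{"lr": 0.1}, {"lr": 0.01}, {"lr": 0.001}]):
    task = Task.init(project_name="examples", task_name=f"remote-{i}")
    task.connect(params)

    # clone=True enqueues a copy of this task for an agent to pick up, and
    # exit_process=False lets this launcher process continue to the next iteration
    task.execute_remotely(queue_name="default", clone=True, exit_process=False)
    task.close()
```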
CostlyOstrich36 I use the GCP disk image to launch a Compute Engine instance which sits behind an HTTP load balancer