CostlyOstrich36 I use the GCP disk image to launch a Compute Engine instance which sits behind an HTTP load balancer
Thanks CumbersomeCormorant74
It might be an issue in the UI due to this unconventional address or the network settings
I think this is related to https://github.com/allegroai/clearml-server/issues/112#issue-1149080358 , which seems to be a recurring issue across many different setups
The shards that I can see using a lot of disk space are (sizes can be checked as in the sketch below):
- events-training_stats_scalar
- events-log
- and then various worker_stats_*
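For reference, a minimal sketch of checking per-index disk usage against the server's Elasticsearch (assuming it is reachable on localhost:9200, which is an assumption about the deployment):

```python
import requests

# List all indices sorted by store size (largest first); sizes reported in MB.
resp = requests.get(
    "http://localhost:9200/_cat/indices",
    params={"v": "true", "h": "index,docs.count,store.size", "s": "store.size:desc", "bytes": "mb"},
)
print(resp.text)
```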
Thanks SuccessfulKoala55, I’ve taken a look; is it the force merging you’re referring to? Do you know how often ES is configured to merge in the ClearML server?
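For what it's worth, a minimal sketch of triggering a force merge manually through the Elasticsearch REST API (the index name and localhost:9200 are illustrative assumptions; real ClearML index names usually carry a suffix):

```python
import requests

# Force-merge the index down to a single segment so deleted documents
# are expunged and their disk space can be reclaimed.
resp = requests.post(
    "http://localhost:9200/events-log/_forcemerge",
    params={"max_num_segments": 1},
)
print(resp.json())
```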
OK that's great, thanks for the info SuccessfulKoala55 👍
Hi CostlyOstrich36, thanks for the response, that makes sense.
What sort of problems could happen? Would it just be corruption of the data being written, or could something break more seriously?
For context, I’m currently backing up the server (spinning it down) every night, but now I need to run tasks overnight and don’t want to miss any logs/artifacts while the server is shut down.
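The nightly backup itself is just a sketch along these lines (paths, the compose file location, and the use of `docker compose` are assumptions about a default self-hosted deployment):

```python
import subprocess
import tarfile
from datetime import date

COMPOSE = ["docker", "compose", "-f", "/opt/clearml/docker-compose.yml"]

# Spin the server down so the data on disk is consistent.
subprocess.run(COMPOSE + ["down"], check=True)

# Archive the data and config directories.
with tarfile.open(f"/backups/clearml-{date.today()}.tar.gz", "w:gz") as tar:
    tar.add("/opt/clearml/data")
    tar.add("/opt/clearml/config")

# Bring the server back up.
subprocess.run(COMPOSE + ["up", "-d"], check=True)
```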
Ok, thanks Jake!
task.get_parameters and task.get_parameters_as_dict have the keyword argument cast, which attempts to convert values back to their original type, but interestingly this doesn't seem to work for properties:
```python
task = Task.init()
task.set_user_properties(x=5)
task.connect({"a": 5})
task.get_parameters_as_dict(cast=True)
# {'General': {'a': 5}, 'properties': {'x': '5'}}
```
Hopefully this would be a relatively easy extension of get_user_properties!
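In the meantime, a minimal workaround sketch (ast.literal_eval is my own suggestion here, not part of the ClearML API, and the project/task names are placeholders):

```python
import ast

from clearml import Task


def cast_value(value):
    # Best-effort cast of a string back to a Python literal; leave it
    # unchanged if it isn't a valid literal (e.g. a plain word).
    if not isinstance(value, str):
        return value
    try:
        return ast.literal_eval(value)
    except (ValueError, SyntaxError):
        return value


task = Task.init(project_name="examples", task_name="cast properties")
task.set_user_properties(x=5)
task.connect({"a": 5})

params = task.get_parameters_as_dict(cast=True)
properties = {k: cast_value(v) for k, v in params.get("properties", {}).items()}
print(properties)  # expected: {'x': 5}
```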
Yep, GCP. I wonder if it's something to do with Container-Optimized OS, which is how I'm running the agents
I think you should open a GitHub feature request, since there is currently no way to do this via the UI
Will do. Is there a way to do it not via the UI? E.g. in the server configuration (I'm running a self-hosted server)?
Ah right, nice! I didn’t think it was, as I couldn’t see it in the Task reference; should it be there too?
And here is a PR for the other part.
connect_configuration seems to take about the same amount of time unfortunately!
Maybe it was the load on the server? Meaning that handling multiple requests at the same time delayed them?!
Possibly, but I think the server was fine, as I could run the same task locally and it took a few seconds (rather than 75) to upload. The egress limit on the agent was 32 Gbps, which seems much larger than what I thought I was sending, but I don't have a good idea of what that limit actually means in practice!
I realise I made a mistake and hadn't actually used connect_configuration!
I think the issue is the bandwidth, yeah; for example, when I doubled the number of CPUs (which doubles the allowed egress) the time taken to upload halved. It is puzzling because, as you say, it's not that much to upload.
For now I've whittled down the number of entries to a more select but useful few and that has solved the issue. If it crops up again I will try connect_configuration properly.
Thanks for ...
That said, maybe the connect dict is not the best solution for a thousand-key dictionary
Seems like it isn't haha!
What is the difference with connect_configuration? The nice thing about it not being an artifact is that we can use the GUI to see which hashes have changed (which is admittedly tricky anyway when there are a few thousand)
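For comparison, a minimal sketch of the two approaches as I understand them (the project/task names and the hash dict are made up for illustration):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="file hashes")

file_hashes = {"data/train.csv": "ab12cd34", "data/val.csv": "ef56ab78"}

# connect(): every key shows up as an individual hyperparameter,
# so the UI can diff hashes entry by entry (slow for thousands of keys).
task.connect(file_hashes, name="hashes")

# connect_configuration(): the whole dict is stored as a single
# configuration object instead, which was the alternative suggested
# above for very large dictionaries.
task.connect_configuration(file_hashes, name="hashes config")
```

So the trade-off is per-key visibility in the UI versus upload speed for very large dictionaries.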