
Is that deletion when deleting a task in the GUI?
Could well be the same as https://github.com/allegroai/clearml-server/issues/112 which is also discussed at https://clearml.slack.com/archives/CTK20V944/p1648547056095859
OK that's great, thanks for the info SuccessfulKoala55!
The shards that I can see using a lot of disk space are:
- events-training_stats_scalar
- events-log
- and then various worker_stats_*
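If it's useful to anyone else, one way to see which indices are taking up the space (a sketch, assuming the server's Elasticsearch is reachable on localhost:9200, which may differ in your deployment):

```
$ curl -s 'http://localhost:9200/_cat/indices?v&s=store.size:desc'
```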
Updating the server has solved the issue
It seems to be an issue that a few other people have run into: https://github.com/allegroai/clearml-server/issues/112
Ah apologies for getting the wrong end of the stick a bit!
Not sure if it helps you or not, but when the link to an artifact didn't work for me, it was because the URL being used was internal to the server (I had an agent that had access to internal endpoints). In my case, setting the agent's fileserver URL to the public domain solved my issue.
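For reference, the relevant bit of the agent's ~/clearml.conf looks something like this (a sketch; the hostnames are placeholders for your own deployment):

```
api {
    web_server: "https://app.clearml.example.com"
    api_server: "https://api.clearml.example.com"
    # use the public domain here, not the server-internal endpoint
    files_server: "https://files.clearml.example.com"
}
```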
From my limited understanding of it, I think it's the client that does the saving and communicating to the fileserver, not the server, whereas deletion is done by the GUI/server, which I guess could have different permissions somehow?
When you generate new credentials in the GUI, it comes up with a section to copy and paste into either clearml-init or ~/clearml.conf. I want the files server displayed here to be a GCP address.
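For context, the section the GUI generates looks roughly like this (a sketch from memory; the hosts and keys are placeholders), and it's the files_server line I'd like to point at GCS:

```
api {
    web_server: https://app.clearml.example.com
    api_server: https://api.clearml.example.com
    # this is the line I want to show a gs:// address
    files_server: https://files.clearml.example.com
    credentials {
        "access_key" = "ACCESSKEY"
        "secret_key" = "SECRETKEY"
    }
}
```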
And what is the difference in behaviour between Task.init(..., output_uri=True) and Task.init(..., output_uri=None)?
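My understanding of the two, for anyone reading later (a sketch; the project/task names are made up):

```python
from clearml import Task

# With output_uri=True, model checkpoints are uploaded to the default
# files server (or to sdk.development.default_output_uri if configured).
# With output_uri=None (the default), models are only registered by their
# local path and nothing is uploaded automatically.
task = Task.init(project_name="demo", task_name="uploads", output_uri=True)
```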
I ran into something similar; for me, I'd actually cloned the repository using the address without the git@ (somehow it still worked). ClearML read it from the remote repository URL and used it. When I updated the URL of the remote repository in my git client, it then worked.
It might be an issue in the UI due to this unconventional address or network settings
I think this is related to https://github.com/allegroai/clearml-server/issues/112#issue-1149080358 , which seems to be a recurring issue across many different setups
Tasks are running locally and recording to our self-deployed server, with no output in my task log that indicates an issue. This is all of the console output:
```
2023-01-09 12:53:22 ClearML Task: created new task id=7f94e231d8a04a8c9592026dea89463a ClearML results page:
2023-01-09 12:53:24 ClearML Monitor: GPU monitoring failed getting GPU reading, switching off GPU monitoring
```
Are there any logs in the server I can check? The server is running v1.3.1 and the issue I'm seeing is with version 1....
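If it helps anyone, for the default docker-compose deployment the server logs can be tailed like this (a sketch; the container names may differ in your setup):

```
$ docker logs --tail 100 clearml-apiserver
$ docker logs --tail 100 clearml-fileserver
```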
task.get_parameters and task.get_parameters_as_dict have the keyword argument cast, which attempts to convert values back to their original type, but interestingly this doesn't seem to work for properties:

```python
from clearml import Task

task = Task.init()
task.set_user_properties(x=5)
task.connect({"a": 5})
task.get_parameters_as_dict(cast=True)
# -> {'General': {'a': 5}, 'properties': {'x': '5'}}
```

Hopefully this would be a relatively easy extension of get_user_properties!
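In the meantime, one workaround is to cast the properties by hand after fetching them (a sketch that assumes the property values are integer-like strings):

```python
params = task.get_parameters_as_dict(cast=True)
# cast=True leaves the 'properties' section as strings, so convert manually
params["properties"] = {
    k: int(v) if str(v).lstrip("-").isdigit() else v
    for k, v in params.get("properties", {}).items()
}
```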
Hi CostlyOstrich36, thanks for the response, that makes sense.
What sort of problems could happen? Would it just be corruption of the data being written, or could it be something more breaking?
For context, I'm currently backing up the server (spinning it down) every night, but now I need to run tasks overnight and don't want to miss any logs/artifacts while the server is shut down.
Ok, thanks Jake!
CostlyOstrich36 thanks for getting back to me!
yes!
That's great! Please can you let me know how to do it/how to set the default files server?
However it would be advisable to also add the following argument to your code:
That's useful thanks, I didn't know about this kwarg
Maybe it was the load on the server? Meaning that dealing with multiple requests at the same time delayed the requests?!
Possibly, but I think the server was fine as I could run the same task locally and it took a few seconds (rather than 75) to upload. The egress limit on the agent was 32 Gbps, which seems much larger than what I thought I was sending, but I don't have a good idea of what that limit actually means in practice!
And regarding the first question - edit your ~/clearml.conf
That would change which file server is used by me locally or by an agent, yes, but I want to change what is shown by the GUI, so that would need to be a setting on the server itself?
I've tracked down our messages when this occurred and I think we had a different error to you, sorry.
In case it helps, our problem was that running the following command in the repository:

$ git remote -v

returned the https address rather than the ssh address. ClearML then tried to convert this to the ssh address, which came out as <org>/<repo>/ rather than <org>/<repo>.git (which is possibly a separate bug?)
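For completeness, checking and switching the remote looks something like this (a sketch; org/repo are placeholders):

```
$ git remote -v
origin  https://github.com/<org>/<repo>.git (fetch)
origin  https://github.com/<org>/<repo>.git (push)
$ git remote set-url origin git@github.com:<org>/<repo>.git
```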
connect_configuration seems to take about the same amount of time, unfortunately!
That said, maybe the connect dict is not the best solution for a thousand-key dictionary
Seems like it isn't haha!
What is the difference with connect_configuration? The nice thing about it not being an artifact is that we can use the GUI to see which hashes have changed (which, admittedly, when there are a few thousand is tricky anyway)
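For reference, the usage difference between the two is roughly this (a sketch; the dict contents are made up):

```python
from clearml import Task

task = Task.init(project_name="demo", task_name="config-demo")

# connect() registers the dict as hyperparameters (one UI row per key,
# which gets unwieldy for thousands of keys)
task.connect({"lr": 0.01}, name="General")

# connect_configuration() stores the dict as a single configuration
# object, still viewable/diffable in the UI
hashes = {f"file_{i}": f"hash_{i}" for i in range(1000)}
task.connect_configuration(hashes, name="file_hashes")
```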
Thanks @<1523701087100473344:profile|SuccessfulKoala55>, I've taken a look, and is this the force merging you're referring to? Do you know how often ES is configured to merge in the clearml server?
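If force merge is indeed what's meant, the Elasticsearch call looks something like this (a sketch; the index pattern is a guess based on the shard names above):

```
$ curl -s -X POST 'http://localhost:9200/events-*/_forcemerge?only_expunge_deletes=true'
```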