Hi OddShrimp85 , have you run this task remotely? If you are importing your own package, how is it connected to the task?
CrookedMonkey33 , let me see if I can find an AMI 🙂
Or you're thinking only of the current view as it is?
Hi @<1618418423996354560:profile|JealousMole49> , I'm afraid there is no such capability at the moment. Basically metrics mean any metadata that was saved (scalars, logs, plots etc). You can delete some log/metric heavy experiments/tasks/datasets to free up some space. Makes sense?
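For example, deleting a heavy task programmatically would look something like this (a minimal sketch; the task ID is a placeholder):
```python
from clearml import Task

task = Task.get_task(task_id="abc123")  # placeholder task ID
# delete() removes the task together with its logged metadata;
# delete_artifacts_and_models also removes stored artifacts/models,
# which is what actually frees up storage
task.delete(delete_artifacts_and_models=True)
```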
Thank you for the detailed explanation. Can you please add a log of the EC2 instance itself? You can find it in the artifacts section of the autoscaler task. Is it the same autoscaler setup that used to work without issue, or were there some changes introduced into the configuration?
Hi BoredHedgehog47 , yes it can. You would obviously need to set it up first 🙂
Hi @<1544853721739956224:profile|QuizzicalFox36> , from the error you're getting it looks like a permissions issue with the credentials. Check if your credentials have read/write/delete permissions
Hi @<1593051292383580160:profile|SoreSparrow36> , can I assume you're running a self-hosted server? Is there any chance you were using either a very old SDK or an old backend?
The default behavior now is to create pipeline tasks as hidden and only show them as part of the pipelines UI section.
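If you still need to fetch them programmatically, something like this should work (a rough sketch -- it assumes the backend supports the search_hidden filter, and the project name is a placeholder):
```python
from clearml import Task

tasks = Task.get_tasks(
    project_name="pipeline-project",  # placeholder project name
    task_filter={
        "system_tags": ["hidden"],  # hidden tasks carry this system tag
        "search_hidden": True,      # include hidden tasks in the results
    },
)
print([t.name for t in tasks])
```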
Hi @<1575294289515122688:profile|JoyousMole49> , it looks like you are over your usage quota. Check the settings page to see your usage
You can add it to your pip configuration so it will always be taken into account
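For example, in ~/.pip/pip.conf (assuming what you're adding is an extra index URL -- adjust to whatever option you're actually setting):
```ini
[global]
extra-index-url = https://my.private.index/simple
```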
I mean in the Execution section of the task, under the Container section
What version of clearml and clearml-agent are you using, and on what OS? Can you also share the command line you're running for the agent?
I don't think there is any out of the box method for this. You can extract everything using the API from one workspace and repopulate it in another workspace also using the APIs.
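As a rough sketch using the SDK's export/import helpers (IDs are placeholders; you'd need to run the two halves with credentials for the respective workspaces, e.g. via separate scripts with different CLEARML_API_* env vars):
```python
from clearml import Task

# run this part with credentials for the source workspace
task_data = Task.get_task(task_id="source_task_id").export_task()

# then, with credentials pointing at the target workspace:
new_task = Task.import_task(task_data)
print(new_task.id)
```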
Hmmm, maybe you could save it as an env var. There isn't a 'default' server per se, since you can deploy it anywhere yourself. As for checking whether it's alive, you can either ping it with curl
or check the docker status of the server 🙂
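For example (a sketch assuming a default self-hosted deployment, where the API server listens on port 8008 and exposes a debug.ping endpoint -- adjust host/port to your setup):
```python
import requests

# minimal liveness check against the API server
resp = requests.get("http://localhost:8008/debug.ping", timeout=5)
print(resp.status_code, resp.text)
```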
Hi @<1695969549783928832:profile|ObedientTurkey46> , this capability is only covered in the Hyperdatasets feature. There you can both chunk and query specific metadata.
```
2024-02-08 11:23:52,150 - clearml.storage - ERROR - Failed creating storage object
Reason: Missing key and secret for S3 storage access (
)
```
This looks unrelated to the hotfix; it looks like something is misconfigured and you're therefore failing to write to S3
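For reference, the S3 credentials usually go in your clearml.conf, roughly like this (values are placeholders -- use credentials that can write to the bucket):
```
sdk {
    aws {
        s3 {
            key: "MY_ACCESS_KEY"
            secret: "MY_SECRET_KEY"
        }
    }
}
```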
FiercePenguin76 , maybe add it as a PR 🙂
Hi SteepDeer88 , I think this is the second case. Each artifact URL is simply saved as a string in the DB.
I think you can write a very short migration script to rectify this directly on MongoDB, OR manipulate it via the API using the tasks.edit endpoint
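Something along these lines for the API route (a rough sketch -- it assumes artifact URLs sit under task.execution.artifacts with a uri field, and the IDs/bucket names are placeholders; test on a single task first):
```python
from clearml import Task
from clearml.backend_api.session.client import APIClient

task = Task.get_task(task_id="task_id_here")  # placeholder ID
# fetch the full execution section so the edit doesn't drop other fields
execution = task.data.execution.to_dict()
for artifact in execution.get("artifacts") or []:
    if artifact.get("uri"):
        artifact["uri"] = artifact["uri"].replace("s3://old-bucket/", "s3://new-bucket/")

client = APIClient()
# force=True lets you edit a task that is not in draft state
client.tasks.edit(task=task.id, execution=execution, force=True)
```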
@<1523701977094033408:profile|FriendlyElk26> , try upgrading to the latest version, I think it should be fixed there
Hi @<1523701977094033408:profile|FriendlyElk26> , let's say you have a table, which you report. How would you suggest comparing between two tables?
Hi @<1523701977094033408:profile|FriendlyElk26> , are you using a self-hosted server? If so, what version of the api/webserver?
Because I think you need to map the pip cache folder into the docker container
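For example, in the agent's clearml.conf (a sketch -- the paths are placeholders, and the built-in docker_pip_cache setting may already cover you):
```
agent {
    # host folder the agent maps into containers as the pip cache
    docker_pip_cache: "~/.clearml/pip-cache"
    # or mount your own cache folder explicitly:
    extra_docker_arguments: ["-v", "/home/me/.cache/pip:/root/.cache/pip"]
}
```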
Hi @<1576381444509405184:profile|ManiacalLizard2> , not sure I understand. You want to change the commit of a running task from inside the task?
You can do this quite easily with some code and the API 🙂
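Something like this (a rough sketch -- it assumes the commit hash lives under script.version_num on the task object, and the task ID and hash are placeholders; try it on a clone first):
```python
from clearml import Task
from clearml.backend_api.session.client import APIClient

task = Task.get_task(task_id="task_id_here")  # placeholder ID
# fetch the full script section so the edit doesn't drop other fields
script = task.data.script.to_dict()
script["version_num"] = "new_commit_hash"     # placeholder commit hash

client = APIClient()
# force=True is needed if the task is not in draft state
client.tasks.edit(task=task.id, script=script, force=True)
```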
Can you add a bit more from the log for more context as well?
CluelessElephant89 , I'd wager you might have missed one of the steps in the installation, probably a permissions issue, I hope 🙂