Good morning, I tried the script you provided and I'm getting somewhere
I know these keys work; the URL and everything else work because I use these creds daily
@<1523703436166565888:profile|DeterminedCrab71> Thanks for responding
It was unclear to me that I also need to set port 443 everywhere in clearml.conf
Setting the S3 host URLs with port 443 in clearml.conf and also in the web UI made it work
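For reference, the relevant per-host entry in clearml.conf ends up looking roughly like this; the host name below is a placeholder rather than our real value:

sdk {
    aws {
        s3 {
            credentials: [
                {
                    # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
                    host: "our-host.com:443"
                    key: "xxx"
                    secret: "xxx"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}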
I'm now almost at the finish line. The last thing that would be great is to fix archived task deletion.
For some reason I get an error about missing S3 keys in the ClearML docker compose logs, and the folder / files are not deleted in the S3 bucket.
You can see how storage_credentials.co...
The problem is that the clearml.conf S3 config doesn't support an empty region field; even an empty string crashes it
But there are still some weird issues, I cannot see the files uploaded in the bucket
Specifying it like this gets me a different error:
Exception has occurred: ValueError
- Insufficient permissions (delete failed) for None
botocore.exceptions.ClientError: An error occurred (IllegalLocationConstraintException) when calling the DeleteObject operation: The me-south-1 location constraint is incompatible for the region specific endpoint this request was sent to.
During handling of the above exception, another exception occurred:
File "/home/ma...
Not really, but I think I will figure out the uv caching
I have another question @<1523701070390366208:profile|CostlyOstrich36>
How can I make the ClearML agent just run the image with only uv?
Don't install any packages, nothing
I found docker_init_bash_script in clearml.conf
I know there are some envs to pass in task init, but that does not fully do what I want: just simply run the image, I have all the dependencies
I can add "source /workspace/.venv/bin/activate" to docker_init_bash_script in clearml.conf
However, it then tries to access pip, but I don't need pip at all. How do I disable it? I already have my packages, and uv doesn't even require pip
I see in clearml-agent that it is created here
I'm basically trying to force the agent to use the uv-defined Python
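A sketch of the combination I have in mind, assuming the CLEARML_AGENT_SKIP_* variables do what the agent docs describe; the /workspace/.venv path is specific to my image:

agent {
    # runs inside the task container before anything else
    docker_init_bash_script: [
        "source /workspace/.venv/bin/activate",
    ]
}

# environment for the agent / container (assumption: these skip venv creation and pip installs):
#   CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
#   CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/workspace/.venv/bin/python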
py file:
import clearml

task: clearml.Task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri=" None ",
)
clearml.conf:
{
    # This will apply to all buckets in this host (unless key/value is specifically provided for a given bucket)
    host: "our-host.com"
    key: "xxx"
    secret: "xxx"
    multipart: false
    ...
OK, is the dataset path stored in Mongo?
I'm unable to find it in Elasticsearch (debug images were there)
I do notice another strange thing
Agent-services is down because it has no API key to ClearML
I see the debug images in fileserver folder
How can I do that?
I need to save the original hash, otherwise I lose all traceability for about 2k experiments
I already found the source code and modified it as needed.
How can I now get this info from Task that is created when Dataset is created?
Couldn't find anything like clearml.Dataset(id=id).get_size()
I'm doing all of this because there isn't (or I'm not aware of) any good way to understand what datasets are on the workers
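For what it's worth, a sketch of how I would try to pull the size out of the SDK, assuming Dataset.get() and the file_entries property behave the way I expect; the dataset id is a placeholder:

import clearml

# placeholder id; a dataset's id is also the id of its backing Task
dataset = clearml.Dataset.get(dataset_id="aabbccddeeff")

# assumption: each file entry carries its size in bytes
total_bytes = sum(entry.size or 0 for entry in dataset.file_entries)
print(f"{dataset.name}: {len(dataset.file_entries)} files, {total_bytes / 1024 ** 2:.1f} MiB")

# the backing Task, in case the rest of the info lives there
task = clearml.Task.get_task(task_id=dataset.id)
print(task.name, task.id)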
Hi, OK, I'm really close now to a working system
Debug images are uploading to S3, I'm seeing the files, all OK there
The problem now is viewing these images in the web UI
Going to the Debug Samples panel in the Task drops me a popup to fill in S3 credentials
I can't figure out what the right setup is for the creds to work
This is what I have now (note that we don't have a region)
OK, slight update. It seems like artifacts are now uploading to the bucket. Maybe my folder explorer used an old cache or something.
However, reported images are uploaded to the fileserver instead of S3
Here is the script I'm using to test things. Thanks
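In case it helps with the fileserver-vs-S3 part: a sketch of pointing the reported debug images at the bucket explicitly, assuming Logger.set_default_upload_destination covers debug samples; the bucket URI is a placeholder:

import numpy as np
import clearml

task = clearml.Task.init(
    project_name="project",
    task_name="task",
    output_uri="s3://our-host.com:443/my-bucket",  # artifacts and models
)

logger = task.get_logger()
# debug samples go to the fileserver unless a destination is set explicitly
logger.set_default_upload_destination("s3://our-host.com:443/my-bucket")

# report a dummy image so there is something to check in the Debug Samples panel
logger.report_image(
    "debug", "noise", iteration=0,
    image=np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8),
)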
We don't need a port
"s3" is part of the URL that is configured on our routers; without it we cannot connect