Yea. Added an issue. We can follow up from there. Really hope that clearml serving can work, it's a nice project.
And just a suggestion, which maybe I can post in a GitHub issue too: it is not very clear what the purpose of the project name and the name is, even after I read the --help. Perhaps this is something that can be made clearer when updating the docs?
Thanks AgitatedDove14 . Specifically, I wanted to use my own clearml server and Triton, so I attempted to use --engine-container-args during launch, but got an error saying there is no such flag. I looked into --help, but I guess it is not updated yet.
By the way, will downloading still happen if the dataset is already available in the cache folder? Are there any specific settings to add to Dataset.get_local_copy()?
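For context, this is roughly how I fetch it (project/dataset names are placeholders); my understanding is that get_local_copy() should reuse the cache when the dataset version is already there:
```python
from clearml import Dataset

# Placeholder project/dataset names; Dataset.get() only fetches metadata here.
ds = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")

# get_local_copy() returns a read-only cached folder; if this dataset version is
# already in the local cache, it should return that path without re-downloading.
local_path = ds.get_local_copy()
print(local_path)
```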
OK, let me try adding it to the volume mount.
@<1523701205467926528:profile|AgitatedDove14> when my code gets the clearml datasets, it stores them in the cache, e.g. $HOME/.clearml/cache....
I wanted them to be in a mounted PV instead, so other pods (on the same node) that need the same datasets can use them without pulling again.
I have yet to figure out how to do so; I would appreciate it if you could give some guidance, e.g. along the lines of the sketch below.
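Something like this is what I have in mind (a sketch; the PV mount path is hypothetical), using get_mutable_local_copy() to write to a folder of my choosing instead of the default cache:
```python
from clearml import Dataset

# Hypothetical PV mount shared by pods on the same node
TARGET = "/mnt/shared-pv/datasets/my_dataset"

ds = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")

# get_mutable_local_copy() downloads (or copies from the cache) into target_folder,
# so other pods mounting the same PV can reuse the files without pulling again.
path = ds.get_mutable_local_copy(target_folder=TARGET, overwrite=False)
print(path)
```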
@<1523701205467926528:profile|AgitatedDove14> do you mean not using helm, but filling in the values and installing with the yaml files directly? E.g. kubectl apply ...
Thanks AgitatedDove14 and TimelyMouse69 . The intention was to have some traceability between the two setups. I think the best way is to enforce a naming convention (for project and name) so we know how they are related? Any better suggestions?
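e.g. something as simple as using identical project/name pairs on both setups (names here are made up):
```python
from clearml import Task

# Made-up convention: the same project and task names on both clearml servers,
# so "team_a/classifier" + "baseline-v1" can be matched across setups.
task = Task.init(project_name="team_a/classifier", task_name="baseline-v1")
```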
https://clear.ml/docs/latest/docs/integrations/storage/
Try adding the <path to your cert> to s3.credentials.verify.
Do you have an example of how I can define the packages to be installed for every step of the pipeline?
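To be concrete, I was imagining something like this (a sketch using the decorator syntax; the package pins are just examples):
```python
from clearml import PipelineDecorator

# packages= lists what the agent installs for this step only (example pin)
@PipelineDecorator.component(return_values=["n_rows"], packages=["pandas==1.5.3"])
def count_rows(csv_path: str):
    import pandas as pd  # imported inside the step so the remote env resolves it
    return len(pd.read_csv(csv_path))

@PipelineDecorator.pipeline(name="demo-pipeline", project="examples", version="0.1")
def run_pipeline(csv_path: str = "data.csv"):
    n_rows = count_rows(csv_path)
    print(n_rows)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # debug mode; drop this to enqueue remotely
    run_pipeline()
```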
Thanks, I just realised I didn't add --docker.
For example, I build my docker image from an image on Docker Hub. In this image, I installed the torch and cupy packages. But when I run my experiment in this image, the packages are not found.
Yes, I ran the experiment inside.
Ok. Can I confirm that only the main script was stored in the task, but not the dependent packages?
I guess the more correct way is to upload them to some repo the remote task can still pull from?
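i.e. keep the code in a committed git repo so the agent can clone it; a sketch (repo layout and module names are hypothetical):
```python
# my_repo/train.py  (hypothetical repo with a local helper module)
from clearml import Task
from my_package.utils import load_data  # local module committed to the same repo

# Because the script runs from inside a git repo, Task.init records the repo URL,
# commit and entry point, so a remote agent clones the repo and finds my_package.
task = Task.init(project_name="examples", task_name="repo-based run")
data = load_data()
```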
CostlyOstrich36 I mean the dataset object in clearml, as well as the data that is tied to this object.
The intent is to bring it over to another clearml setup and keep some form of traceability.
Seems like it was broken for numpy version 1.24.1.
Tried with numpy 1.23.5 and it works.
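For anyone hitting the same thing, one way to pin it for remote runs might be Task.add_requirements (a sketch, not what I actually ran):
```python
from clearml import Task

# A sketch: force the known-good numpy for remote execution.
# Must be called before Task.init().
Task.add_requirements("numpy", "1.23.5")
task = Task.init(project_name="examples", task_name="numpy-pin test")
```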
Hi @<1523701070390366208:profile|CostlyOstrich36> , basically:
- I uploaded a dataset using clearml Datasets. The output_uri points to my s3, so the dataset is stored in s3. My s3 is set up with http only.
- When I retrieve the dataset for training using Dataset.get(), I encounter an ssl cert error, as the url used to retrieve the data is https://<s3url>/... instead of s3://<s3url>/..., which is http. This is weird, as the dataset url is without https.
- I am not too sure why and I susp...
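For completeness, the upload side looks roughly like this (host/bucket are placeholders):
```python
from clearml import Dataset

# Placeholder host/bucket; the s3:// scheme should map to my http-only server
ds = Dataset.create(
    dataset_project="my_project",
    dataset_name="my_dataset",
    output_uri="s3://my-s3-host:9000/bucket",
)
ds.add_files("data/")
ds.upload()
ds.finalize()
```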
Hi CostlyOstrich36 , I ran this task locally at first, and that attempt was successful.
When I use this task in a pipeline (the task is run remotely), it cannot find the external package. This seems logical, but I am not sure how to resolve it.
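One option I'm looking at is declaring the package explicitly before Task.init (a sketch; cupy is just my example package):
```python
from clearml import Task

# Explicitly add the externally-installed package to the task's requirements,
# so the agent installs it when the step runs remotely. Call before Task.init().
Task.add_requirements("cupy")
task = Task.init(project_name="examples", task_name="pipeline step")
```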
I was browsing the clearml agent github and saw this. Isn't this for spinning up a clearml-agent in docker and running it like a daemon?
May I know which env variable to set the cert in?
Do you want to share your clearml.conf here?
By the way, how can I start up the clearml agent using the clearml-agent image instead of the SDK? Do you have an example of the docker run command that includes the queue, gpus, etc.?
Nice. That should work. Thanks
Not exactly sure yet, but I would think a user tag for deployed makes sense, as it should be a deliberate user action. An additional system state is required too, since a deployed state should have some prerequisite system state.
I would also like to ask if clearml has different states for a task, a model, or even different task types? Right now I don't see any differences; is this a deliberate design?
Nice. It is actually dataset.id.
Ah, I think I was not very clear on my requirement. I was looking at porting at the project level, not bringing the entire clearml data over. Is that possible instead?
I guess we need to understand the purpose of the various states. So far I only see "archive, draft, publish". Did I miss any?
Yes. But I am not sure what the agent is running. I only know how to stop it if I have the agent id.
I am not very sure, tbh. Just want to see if this is useful...
I got an SSL error a few days back and solved it by adding the cert to /etc/ssl/certs and running update-ca-certificates.
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
Add this. Note that verify might not work with sdk.aws.s3.verify, but with sdk.aws.s3.credentials. Pls see the attached image.
Example:
```
aws {
  s3 {
    credentials: [
      # host is a placeholder; verify takes the cert path, per the note above
      { host: "my-s3-host:9000", key: "...", secret: "...", secure: false, verify: "/path/to/cert" }
    ]
  }
}
```
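With that in place, a quick sanity check could be (url is a placeholder):
```python
from clearml import StorageManager

# Placeholder url; with the credentials above, an s3:// url on the http-only
# server should download without certificate errors.
path = StorageManager.get_local_copy("s3://my-s3-host:9000/bucket/some_file.bin")
print(path)
```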