
@<1523701205467926528:profile|AgitatedDove14> do you mean not using helm, but filling in the values and installing with the yaml files directly? E.g. kubectl apply ...
CostlyOstrich36 I mean the dataset object in ClearML, as well as the data tied to this object.
The intent is to bring it over to another ClearML setup and keep some form of traceability.
@<1523701205467926528:profile|AgitatedDove14> I am looking at the queue system that the ClearML queue offers, which allows users to queue a job to deploy an app / inference service. This can be as simple as a pod or a more complete helm chart.
Ah, I think I was not very clear on my requirement. I was looking at porting at the project level, not bringing the entire ClearML data over. Is that possible instead?
Yes. But I am not sure what the agent is running. I only know how to stop it if I have the agent id.
Hi Bart, yes. Running with an inference container.
To clarify, there might be cases where we get a helm chart / k8s manifests to deploy an inference service. It is a black box to us.
Users may need to deploy this service wherever needed, to test it against other software components. This needs GPU resources; a queue system would let them queue up and eventually get it deployed, instead of hard-allocating resources for this purpose.
@<1523701205467926528:profile|AgitatedDove14> I am still trying to figure out how to do so, because when I add a task to a queue, the ClearML agent basically creates a pod with the container. How can I make a task that does a helm install or a kubectl create -f deployment.yaml?
Can clearml-serving do a helm install or upgrade? We have cases where the ML models do not come from the ML experiments in ClearML, but we would like to tap on the ClearML queue to enable resource queuing.
Thanks AgitatedDove14 and TimelyMouse69. The intention was to have some traceability between the two setups. I think the best way is to enforce a naming convention (for project and name) so we know how they are related. Any better suggestions?
Thanks @<1523701205467926528:profile|AgitatedDove14>. What I could think of is to write a task that runs a Python subprocess to do "helm install". In that Python script, we could point to / download the helm chart from somewhere (e.g. NFS, S3).
Does this sound right to you?
One thing I was wondering is whether we could pass the helm charts / files when we use the ClearML SDK, so we could minimise the step of pushing them to NFS/S3.
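Something like this is what I had in mind, as a rough sketch. It assumes the chart is packaged and sitting on shared storage; the S3 path and release name are made up:
```
# Rough sketch: a ClearML task that downloads a helm chart and installs it.
# The S3 path and release name are illustrative, not from a real setup.
import subprocess

from clearml import StorageManager, Task

task = Task.init(project_name="deployments", task_name="helm-install-demo")

# Download the packaged chart from shared storage (S3 / NFS / etc.)
chart_path = StorageManager.get_local_copy(
    "s3://my-bucket/charts/inference-svc-0.1.0.tgz"
)

# Run helm as a subprocess; check=True raises if the install fails
subprocess.run(
    ["helm", "upgrade", "--install", "my-inference-svc", chart_path],
    check=True,
)
```
For the part about passing the chart via the SDK, maybe task.upload_artifact() on the submitting side could replace the NFS/S3 push, and the running task pulls it back with task.artifacts[...].get_local_copy()? Just an idea.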
And just a suggestion, which maybe I can post as a GitHub issue too.
It is not very clear what the purpose of the project name and name are, even after reading the --help. Perhaps this is something that can be made clearer when updating the docs?
I guess we need to understand the purpose of the various states. So far I only see "archive, draft, publish". Did I miss any?
Hi ExasperatedCrab78, I managed to get it working. It was due to the IP address set in examples.env.
A more advanced case would be to decide how long this job should run and terminate it after that. This is to improve GPU utilisation.
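Roughly what I am picturing, as a sketch. It assumes we know the task id; the id and the 2-hour budget are just for illustration:
```
# Sketch of a time-budget watchdog for a queued/running deployment task.
# The task id and the 2-hour budget below are illustrative.
import time

from clearml import Task

MAX_RUNTIME_SEC = 2 * 60 * 60  # illustrative GPU time budget

task = Task.get_task(task_id="<deployment-task-id>")
start = time.time()
while task.status in ("queued", "in_progress"):
    if time.time() - start > MAX_RUNTIME_SEC:
        task.mark_stopped()  # ask the agent to stop the job
        break
    time.sleep(60)
    task.reload()  # refresh the status from the server
```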
Do you have an example of how I can define the packages to be installed for every step of the pipeline?
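For reference, this is roughly the shape I was hoping for. A sketch assuming the PipelineDecorator API; the names and package pins are illustrative:
```
# Sketch: per-step package lists with the pipeline decorator API.
# The project/pipeline names and package pins are illustrative.
from clearml import PipelineDecorator


@PipelineDecorator.component(return_values=["data"], packages=["boto3"])
def load_step():
    import boto3  # imported inside the step so the agent installs it remotely
    return [1, 2, 3]


@PipelineDecorator.component(return_values=["total"], packages=["numpy==1.23.5"])
def sum_step(data):
    import numpy as np
    return float(np.sum(data))


@PipelineDecorator.pipeline(name="demo-pipeline", project="demo-project", version="0.1")
def run_pipeline():
    data = load_step()
    print(sum_step(data))


if __name__ == "__main__":
    PipelineDecorator.run_locally()
    run_pipeline()
```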
ClearML 1.1.1. Yes, I have boto3 installed too.
@<1523701070390366208:profile|CostlyOstrich36> Yes. I'm running on k8s
Hi CostlyOstrich36, I ran this task locally at first. That attempt was successful.
When I use this task in a pipeline (the task is run remotely), it cannot find the external package. This seems logical, but I am not sure how to resolve it.
Ok. Can I check whether only the main script was stored in the task, but not the dependent packages?
I guess the more correct way is to upload them to some repo that the remote task can still pull from?
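To illustrate what I mean, a sketch assuming the code lives in a git repo the agent can clone; the URL, paths, and queue name are made up:
```
# Sketch: create a task that points at a repo the agent can clone, so the
# dependent modules come along with the main script. All names illustrative.
from clearml import Task

task = Task.create(
    project_name="demo-project",
    task_name="remote-step",
    repo="https://gitlab.example.com/group/my-repo.git",
    branch="main",
    script="src/train.py",
    packages=["boto3"],  # extra pip packages the step needs
)
Task.enqueue(task, queue_name="default")
```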
I figured out that it may be possible to do this:
```
from clearml import Task, OutputModel

experiment_task = Task.current_task()
OutputModel(experiment_task).update_weights('model.pt')
```
to attach it to the ClearML experiment task.
Yea, added an issue. We can follow up from there. Really hoping clearml-serving can work; it is a nice project.
Do you want to share your clearml.conf here?
It returns False. Just to share a bit more, I have the requirements.txt in GitLab together with my code, inside folders. Do I need to provide a GitLab path?
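For context, this is the kind of thing I was trying: a sketch assuming the requirements file sits in a subfolder of the repo checkout (the path is made up):
```
# Sketch: point the task at a specific requirements file inside the repo.
# Must be called before Task.init; the relative path is illustrative.
from clearml import Task

Task.force_requirements_env_freeze(
    force=True,
    requirements_file="subfolder/requirements.txt",
)
task = Task.init(project_name="demo-project", task_name="deps-test")
```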
SuccessfulKoala55 I tried commenting out the fileserver; the ClearML dockers started, but it doesn't seem to start up well. When I access ClearML via the web browser, the site cannot be reached.
Just to confirm, I commented these out in docker-compose.yaml:
```
apiserver:
  command:
  - apiserver
  container_name: clearml-apiserver
  image: allegroai/clearml:latest
  restart: unless-stopped
  volumes:
  - /opt/clearml/logs:/var/log/clearml
...
```
Seems like it was broken for numpy version 1.24.1.
Tried with numpy 1.23.5 and it works.
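If anyone hits the same thing, one option is to pin the working version on the task before Task.init is called. A sketch, assuming the Task.add_requirements API:
```
# Sketch: force the numpy version that worked onto the task's requirements.
# Must run before Task.init; the project/task names are illustrative.
from clearml import Task

Task.add_requirements("numpy", "1.23.5")
task = Task.init(project_name="demo-project", task_name="numpy-pin-test")
```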