Hi @<1618418423996354560:profile|JealousMole49> , I think you would need to pull all the related data via the API and then register it again through the API.
Is there a firewall in between or something stopping the connection?
Hi OddShrimp85 🙂
I'm afraid that the only way to load the contents of setup A into setup B is to perform a data merge.
This process basically requires merging the databases (MongoDB, Elasticsearch, files, etc.). I think it's something that can be done in the paid version as a service, but not in the open-source one.
Hi FrothyShrimp23 , you can use Task.mark_completed() with force=True
https://clear.ml/docs/latest/docs/references/sdk/task#mark_completed
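Something like this should do it (a rough sketch, the task ID is a placeholder):
```python
from clearml import Task

# Fetch the task by its ID (placeholder) and force-complete it
task = Task.get_task(task_id="<your_task_id>")
task.mark_completed(force=True)
```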
What's the docker image that you're using?
Hey ItchySeahorse94 , I think this might be what you're looking for 🙂
https://github.com/allegroai/clearml-serving
I tested how many resources ClearML consumes. The entire ClearML SDK process consumes about 50 MB of RAM on my side and requires a minimal amount of CPU.
Is it possible that your training steps are that inefficient?
What if you specify the repo user/pass in clearml.conf?
I think it strips the user/pass so they aren't shown in the logs
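For example, something like this in the agent section of clearml.conf (a sketch, assuming a personal access token is used as the password):
```
agent {
    git_user: "my-git-username"
    git_pass: "my-git-token"
}
```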
Also, a newer version of clearml-serving as well
Hi DiminutiveBaldeagle77 ,
Yes - https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_kubernetes_helm/ If you already have a K8s cluster it is beneficial, since you get scheduling capabilities that are not normally present in K8s
How are you writing your pipelines?
Hi DullPeacock33 , I think what you're looking for is this:
https://clear.ml/docs/latest/docs/references/sdk/task#execute_remotely
This will initialize all the automagical stuff but won't require running the script locally.
What do you think?
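Roughly something like this (the queue name is just an example):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="remote run")
# Captures the environment, stops the local run and enqueues the task on the given queue
task.execute_remotely(queue_name="default", exit_process=True)
```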
When you run your code after adding Task.init()
to it, you will get a link in the console. Following that link will take you to the console output of the experiment. From there you can go into the 'Execution' tab and see it all there 🙂
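For reference, the minimal setup looks roughly like this (project/task names are placeholders):
```python
from clearml import Task

# Task.init prints a direct link to the experiment in the web UI
task = Task.init(project_name="examples", task_name="my experiment")
# ... your training code ...
```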
with the combination of None :port/bucket for --storage ?
VexedCat68 I think this will be right up your alley 🙂
https://github.com/allegroai/clearml/blob/master/examples/reporting/hyper_parameters.py#L43
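The relevant part is roughly this (parameter values are just examples):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="hyperparameters")
params = {"batch_size": 32, "learning_rate": 0.001}
# After connect(), the values can be edited/overridden from the UI
params = task.connect(params)
```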
I think it basically runs in offline mode and populates all the relevant fields (Task attributes) in a JSON (or some other config) file. I think you could read this file and compare it to what you expect, which gives you a way to run something offline and then verify its contents.
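A rough sketch of what I mean (assuming a recent clearml version):
```python
from clearml import Task

Task.set_offline(offline_mode=True)  # nothing is sent to the server
task = Task.init(project_name="examples", task_name="offline run")
# ... run whatever you want to verify ...
task.close()
# The console prints the local offline session location - its JSON files hold the Task attributes
```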
Also I think it should start with None
Do you mean to kill the clearml-agent process after the task finishes running? What is the use case? I'm curious
Hi @<1664079296102141952:profile|DangerousStarfish38> , you can control it in the agent.default_docker.image
section of the clearml.conf
where the agent is running. You can also control it via the CLI with the --docker
flag, and finally you can also control it via the web UI in the 'Execution' tab -> container -> image section
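For example (the image name is just an example):
```
agent {
    default_docker {
        image: "nvidia/cuda:11.8.0-runtime-ubuntu22.04"
    }
}
```
Or on the CLI, something like: clearml-agent daemon --queue default --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04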
Hi @<1590514572492541952:profile|ColossalPelican54> , you can use the Logger module to manually report metrics - None
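Roughly something like this (titles/values are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="manual metrics")
logger = task.get_logger()
# Report a single scalar point: title is the plot, series is the line within it
logger.report_scalar(title="loss", series="train", value=0.42, iteration=1)
```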
Hi @<1686184974295764992:profile|ClumsyKoala96> , you can set CLEARML_API_DEFAULT_REQ_METHOD to POST
and that should work - None
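For example, setting it from Python (a sketch, assuming it is picked up from the environment before the clearml session is created):
```python
import os

# Must be set before clearml is imported / the session is initialized (assumption)
os.environ["CLEARML_API_DEFAULT_REQ_METHOD"] = "POST"

from clearml import Task

task = Task.init(project_name="examples", task_name="post-only requests")
```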
Do you have the associated email to the account or the workspace ID?
I see, maybe open a GitHub issue for this to follow up
Hi @<1533159639040921600:profile|JoyousReindeer30> , the pipeline controller is currently pending. I am guessing it is enqueued into the services queue. You would need to run an agent on the services queue for the pipeline to start executing 🙂
Hi SharpSeal87 ,
Please check the following docs for Kubernetes 🙂
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_kubernetes_helm/
I would suggest moving each database separately and very carefully, and setting it up in parallel with the working server. It is very easy to make a mistake and end up with an empty database
Also, were you looking to use your agents on K8s or only the ClearML server?
I don't think there should be an issue to run the agent inside a docker container
@<1739455989154844672:profile|SmarmyHamster62> , I suggest updating your versions. The server is a bit old
@<1546303277010784256:profile|LivelyBadger26> , it is Nathan Belmore's thread just above yours in the community channel 🙂