Sure, if you can post it here (or send it in private if you prefer), that would be great
Hi @<1577468611524562944:profile|MagnificentBear85> , the webUI uses the API under the hood. So you can open Developer tools (F12) and see what the webUI sends and then replicate that with code.
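For example, here is a minimal sketch using the ClearML APIClient (the projects.get_all call is just an illustration of one endpoint the UI relies on):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# The UI's project list page is backed by the projects.get_all endpoint
for project in client.projects.get_all():
    print(project.id, project.name)
```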
WDYT?
SarcasticSparrow10, it seems you are right. At which point in the instructions are you getting errors, from which step to which?
Hi TrickySheep9 , can you be a bit more specific?
Do you have resource monitoring on that machine? Any chance that something ran out of disk space, memory, or CPU?
I think the usage is the same as if it were a regular task. Did you encounter any issues?
Hi ReassuredArcticwolf33 , what are you trying to do and how is it being done via code?
TenseOstrich47, you can specify a docker image with task.set_base_docker(docker_image="<DOCKER_IMAGE>"). You will of course need to log in to ECR on that machine so it will be able to download the docker image.
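As a rough sketch (the ECR registry URL and queue name below are placeholders, not taken from your setup):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="ecr image run")

# Placeholder ECR image; the agent machine must already be logged in to ECR,
# e.g. via: aws ecr get-login-password | docker login ...
task.set_base_docker(
    docker_image="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest"
)
task.execute_remotely(queue_name="default")
```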
Can you please expand on this new TAO and how it differs from the way Triton serves models at the moment?
Can you please give an example?
Hi @<1724960468822396928:profile|CumbersomeSealion22> , can you provide a log of such a run?
Hi @<1547752791546531840:profile|BeefyFrog17> , can you add the full log?
Hi @<1523706826019835904:profile|ThoughtfulGorilla90> , it's not possible since workspaces are connected to the email itself. I would suggest writing some automation to extract the relevant projects/experiments from one workspace and register them into the new workspace. The API would be the best way to go. You would need to extract all information about the experiment itself and then also extract all the logs/scalars/plots and then simply register everything in the new workspace.
The webUI uses the API to show everything, so I would suggest opening developer tools (F12) and seeing what the UI sends as you navigate through the different sections of the experiments, to use as a baseline
Happy to help 🙂
Note that you will get almost all information about the task using tasks.get_by_id; then you would need a few more calls to extract the console/scalars/plots/debug samples
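As a minimal sketch, assuming the Python APIClient (the task ID is a placeholder, and the event calls are just examples of the follow-up requests mentioned above):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# Almost all task information comes from a single call
task = client.tasks.get_by_id(task="<TASK_ID>")  # placeholder ID
print(task.id, task.name)

# Follow-up calls for console logs, scalars, etc.
log = client.events.get_task_log(task="<TASK_ID>")
scalars = client.events.scalar_metrics_iter_histogram(task="<TASK_ID>")
```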
Hi @<1590514572492541952:profile|ColossalPelican54> , I'm not sure what you mean. output_uri=True will upload the model to the file server - making it more easily accessible. Refining the model would require unrelated code. Can you please expand?
GreasyLeopard35 , please try with the latest clearml-agent
I think this can give you more information:
https://stackoverflow.com/questions/51279711/what-does-1000-mean-in-chgrp-and-chown
1000 is typically the UID/GID of the first Linux user created on that machine, so this assigns ownership to that user.
@<1654294834359308288:profile|DistressedCentipede23> , can you please elaborate on the exact workflow you want to build?
When you want to connect your parameters and other objects, please take a look here:
https://clear.ml/docs/latest/docs/references/sdk/task#connect
You can find a usage example in
https://github.com/allegroai/clearml/blob/master/examples/reporting/hyper_parameters.py
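A minimal sketch of connecting a parameter dict (the values here are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="hyper-parameters")

# Placeholder parameters; task.connect makes them editable in the UI,
# and edited values are injected back when the task runs via an agent
params = {"learning_rate": 0.001, "batch_size": 32}
params = task.connect(params)
print(params["learning_rate"])
```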
Can you give a bit more info on how you want the pipeline built and where you want to insert/extract the task ID? Also, how is the model related? Is it the start of the pipeline?
Maybe SuccessfulKoala55 has some input here, but from my understanding this docker image is designed to be run from the k8s glue. To run it standalone you'd have to play with it a bit, I think. Maybe try adding -it and /bin/bash at the end
FierceHamster54, please try re-launching the autoscaler, the issue seems to be resolved now
Hi IrritableGiraffe81 ,
Please refer to this:
https://clear.ml/docs/latest/docs/references/sdk/task#update_output_model
First, ClearML uploads the file to the preconfigured output destination (see the Task's output.destination property, or call the setup_upload method)
I think you need to provide output_uri=True in your Task.init()
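A minimal sketch, assuming the default file server as the upload target (the project/task names are placeholders):
```python
from clearml import Task

# output_uri=True uploads output models to the configured file server;
# a storage URI such as "s3://bucket/path" could be passed instead
task = Task.init(
    project_name="examples",
    task_name="train",
    output_uri=True,
)
```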
Also, please expand on the 500 errors you're seeing. There should be some log output for them
Hi @<1523702932069945344:profile|CheerfulGorilla72> , I think you need to map out the relevant folders for the docker. You can add docker arguments to the task using Task.set_base_docker
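As a hedged sketch of mapping a host folder into the container (the image name and paths are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker folders")

# Placeholder image and volume mapping; -v maps /host/data on the machine
# into /data inside the container started by the agent
task.set_base_docker(
    docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04",
    docker_arguments="-v /host/data:/data",
)
```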