Also, is the Pro plan autoscaling option the only way to run ClearML in the cloud without having a dedicated VM that's running all the time?
Yes, the GCP autoscaler is only available in the Pro and Scale/Enterprise plans
What version of ClearML are you using? Can you provide a code snippet that reproduces this?
Hi @<1566596968673710080:profile|QuaintRobin7> , do you have a self contained snippet that will reproduce this?
I mean in the execution section of the task, under the container section
I don't think there should be an issue to run the agent inside a docker container
Also, what happens if you pass it in agent.default_docker.arguments ?
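For reference, that setting lives under the agent section of clearml.conf; a sketch with placeholder values (the image and arguments below are examples only, not required values):

```
agent {
    default_docker {
        image: "nvidia/cuda:11.8.0-base-ubuntu22.04"
        arguments: ["--ipc=host", "-v", "/data:/data"]
    }
}
```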
Hi @<1523701122311655424:profile|VexedElephant56> , do you get the same response when you try to run a script with Task.init() without agent on that machine?
What about tasks.get_all and you specify the ID of the task you want as well:
https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksget_all
Are you using a self hosted server or the community server?
Before injecting anything into the instances you need to spin them up somehow. This is achieved by the application that is running and the credentials provided. So the credentials need to be provided to the AWS application somehow.
Hi @<1523701083040387072:profile|UnevenDolphin73> , looping in @<1523701435869433856:profile|SmugDolphin23> & @<1523701087100473344:profile|SuccessfulKoala55> for visibility 🙂
Hi @<1524560082761682944:profile|MammothParrot39> , I think you need to run the pipeline at least once (at least the first step should start) for it to "catch" the configs. I suggest you run once with pipe.start_locally(run_pipeline_steps_locally=True)
That's strange indeed. What if you right click one of the pipeline executions and click on run?
Hi EcstaticBaldeagle77 ,
I'm not sure I follow. Are you using the self hosted server - and you'd like to move data from one self hosted server to another?
UnevenDolphin73 , I think I might have skipped a beat. Are you running the autoscaler through the code example in the repo?
Also, I would suggest trying pipelines from decorators, I think it would be much smoother for you
RipeAnt6 Hi!
Yes, you simply need to configure the two following fields in your ~/clearml.conf
api.files_server: <PATH_TO_NAS>
sdk.development.default_output_uri: <PATH_TO_NAS>
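In clearml.conf's HOCON form that would look something like this (the NAS path is a placeholder):

```
api {
    files_server: "file:///mnt/nas/clearml"
}
sdk {
    development {
        default_output_uri: "file:///mnt/nas/clearml"
    }
}
```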
Pending means it is enqueued. Check to which queue it belongs by looking at the info tab after clicking on the task :)
ClumsyElephant70 , I understand it's already possible to combine Python and Triton, however it is quite difficult to do and requires a lot of work.
We're working on making ClearML-Serving easier to use, and this is one of the next items on our to-do list.
We hoped to release it earlier but got caught up with some other pressing issues we had to take care of.
So to sum up the long answer: yes, it is in our plans and will be supported eventually 🙂
Hi @<1560798754280312832:profile|AntsyPenguin90> , I think you would need to wrap the C++ code in python for it to work, but conceptually shouldn't be any special issues
Hi @<1523704207914307584:profile|ObedientToad56> , the virtual env is constructed using the detected packages when run locally. You can certainly override that. For example use Task.add_requirements
- None
There are also a few additional configurations in the agent section of clearml.conf
I would suggest going over
Hi @<1753589101044436992:profile|ThankfulSeaturtle1> , not sure I understand what you mean. Can you please elaborate?
It's difficult to verify correctness without a publicly available test suite
What do you mean?
GrittyKangaroo27 , does this happen when you run a regular experiment through the agent with the same file?
So just rescaling the graph to see 0-1 is what you're looking for?
GreasyPenguin14 , Hi 🙂
I'm guessing that it tries to communicate during task.init()
Try running the function when you initialize the Task object
But you said that pipeline demo is stuck. Which task is the agent running?