You can configure it in ~/clearml.conf
at api.files_server
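Something like this (the host below is just a placeholder for your own files server):
```
api {
    # storage endpoint used for uploaded artifacts, models and debug samples
    files_server: "https://files.my-clearml-host.example:8081"
}
```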
Hi SubstantialElk6 ,
From a quick glance I don't see any abilities not covered. Is there some specific capability you're looking for?
How about trying to use register_artifact ?
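A minimal sketch of the idea, assuming a pandas DataFrame (the project/task names are placeholders, and the ClearML calls are commented out since they need a configured client):

```python
import pandas as pd

# A DataFrame whose live state you want ClearML to keep uploading as it changes
df = pd.DataFrame({"epoch": [1, 2], "loss": [0.9, 0.7]})

# With a ClearML task (requires a configured client; names are placeholders):
# from clearml import Task
# task = Task.init(project_name="demo", task_name="artifacts-demo")
# task.register_artifact("train_stats", df)  # snapshots df whenever it changes
```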
Hi @<1523701295830011904:profile|CluelessFlamingo93> , I would suggest leaving your details here:
None
Hi @<1523701295830011904:profile|CluelessFlamingo93> , when running remotely the agent assumes it will be a different machine. I think the best way to solve this is to add utils to your repository and import it from there during code execution.
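For example, assuming a layout like this (file names are hypothetical):
```
my_repo/
├── train.py   # entry point the agent executes
└── utils.py   # shared helpers, cloned together with the repo
```
train.py can then simply do from utils import ... and it will work on any machine the agent clones the repo onto.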
What do you think?
Hi @<1670964701451784192:profile|SteepSquid49> , that sounds like the correct setup 🙂
What were you thinking of improving or do you have some pain points in your current setup?
Hi @<1730396272990359552:profile|CluelessMouse37> , I would suggest reviewing the relevant docs regarding pipelines & HPO and then running said examples before integrating them 🙂
https://clear.ml/docs/latest/docs/references/sdk/hpo_optimizati...
Hi @<1706116294329241600:profile|MinuteMouse44> , is there any worker listening to the queue?
Cool, thanks for the info! I'll try to play with it as well 🙂
Hi @<1726047624538099712:profile|WorriedSwan6> , ideally the pipeline controller would be running on the services agent which is part of the server deployment and does not require GPU resources at all
SarcasticSparrow10 , it seems you are right. At which point in the instructions are you getting errors? From which step to which?
Sure, if you can post it here or send in private if you prefer it would be great
SmallDeer34 Hi 🙂
I don't think there is a way out of the box to see GPU hours per project, but it can be a pretty cool feature! Maybe open a github feature request for this.
Regarding how to calculate this, I think an easier solution for you would be to sum up the runtime of all experiments in a certain project rather than looking at GPU utilization graphs
Can you also please share logs of the autoscaler?
Hi @<1523701295830011904:profile|CluelessFlamingo93> , I think you can also control the agent sampling rate (to sample queue every 10 or 20 seconds instead of 5 for example)
Hi @<1695969549783928832:profile|ObedientTurkey46> , is this happening when running on top of the agent or locally?
Hi WhoppingMole85 , can you please provide a small code snippet to play with? How are you saving the tables?
Hi @<1523701875835146240:profile|SkinnyPanda43> , with what email are you trying to login?
SubstantialElk6 , I think this is what you're looking for:
https://clear.ml/docs/latest/docs/references/sdk/dataset#get_local_copy
Dataset.get_local_copy(..., part=X)
Hi @<1582904426778071040:profile|SteepBat69> , that's an interesting question. In theory I think it should be possible
@<1554638160548335616:profile|AverageSealion33> , what if you just run a very simple piece of code that includes Task.init() , like one of the examples in the repository? Does this issue reproduce?
DepressedChimpanzee34 , Hi 🙂
Let's break this one down:
- In the 'queues & workers' window, if you switch to 'queues' you can actually see all the workers assigned to a specific queue.
- In the workers window, you can see which workers are active and which are not. Is this enough, or do you think something else is needed?
- You can see the resources used by each worker in the workers window. Is that what you mean?
- You can already do that! Simply drag and drop experiments in the queue window.
I'm...
I see, so if you have 60 in the queue and you select 30, you'd like to move them up the queue, but the 30 you selected would still keep their relative order, correct?
You mean that you have 30 jobs each in a separate queue and you'd like to move all of them to top priority in each queue?
DepressedChimpanzee34 , how are you trying to get the remote config values? Also, which configurations are we talking about specifically?
@Alex Finkelshtein, if the parameters you're using are like this:
```
parameters = {
    'float': 2.2,
    'string': 'my string',
}
```
Then you can update the parameters as mentioned before:
```
parameters = {
    'float': 2.2,
    'string': 'my string',
}
parameters = task.connect(parameters)
parameters['new_param'] = 'this is new'
parameters['float'] = '9.9'
```
Please note that parameters['float'] = '9.9' will update that parameter specifically. I don't think you can update the parameters en masse...
Can you give an example of your test_config ?
DepressedChimpanzee34 , the only way I currently see is to manually update each parameter.
For example:
```
parameters = {
    'float': 2.2,
    'string': 'my string',
}
parameters = task.connect(parameters)
parameters['new_param'] = 'this is new'
parameters['float'] = '9.9'
```
Does this help?