
` client.queues.get_default()
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/conda/lib/python3.9/site-packages/clearml/backend_api/session/client/client.py", line 378, in new_func
return Response(self.session.send(request_cls(*args, **kwargs)))
File "/opt/conda/lib/python3.9/site-packages/clearml/backend_api/session/client/client.py", line 122, in send
raise APIError(result)
clearml.backend_api.session.client.client.APIError: APIError: code 4...
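For context, the client in the snippet above is the ClearML APIClient; a minimal sketch of how it is typically constructed (assuming credentials are already set up in the local clearml.conf):
`
from clearml.backend_api.session.client import APIClient

# Uses the server address and credentials from the local clearml.conf
client = APIClient()
queue = client.queues.get_default()  # this is the call that raised the APIError above
`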
Going for something like this:
` >>> queue = QueueMetrics(queue='queueid')
queue.avg_waiting_times `
Nope, AWS aren't approving the increased vCPU request. I've explained the use case several times and they still haven't approved it
Umm no luck
import math
from datetime import datetime
from dateutil.relativedelta import relativedelta

q = client.queues.get_all(name='default')[0]
from_date = math.floor(datetime.timestamp(datetime.now() - relativedelta(months=3)))
to_date = math.floor(datetime.timestamp(datetime.now()))
res = client.queues.get_queue_metrics(from_date=from_date, to_date=to_date, interval=1, queue_ids=[q.id])
Hi AgitatedDove14 ,
I noticed that ClearML parses clearml.automation.UniformParameterRange into a configuration space to be used with BOHB. When I've used BOHB previously I could use UniformFloatHyperparameter from the ConfigSpace package, which lets me define a parameter in log space, i.e. the range is defined by something like numpy.logspace rather than numpy.linspace
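Roughly what I mean, as a sketch (the hyperparameter name and bounds here are just placeholders):
`
from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import UniformFloatHyperparameter

cs = ConfigurationSpace()
# log=True samples the range on a log scale (akin to numpy.logspace),
# rather than the linear spacing that UniformParameterRange maps to
cs.add_hyperparameter(UniformFloatHyperparameter('learning_rate', lower=1e-5, upper=1e-1, log=True))
`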
Okay thanks for the update 🙂 the account manager got involved and the limit has been approved 🚀
I'd like to get the Run Time via the task object.... I think I need to calculate it manually,
i.e.
task = clearml.Task.get_task(id)
time = task.data.last_update - task.data.started
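A minimal sketch of that manual calculation (assuming the task has finished and both fields come back as datetimes):
`
import clearml

task = clearml.Task.get_task(task_id='<task-id>')
# started and last_update are assumed to be datetimes, so the difference is a timedelta
run_time = task.data.last_update - task.data.started
print(f"Run time: {run_time.total_seconds():.0f}s")
`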
Okay, I'm going to look into this further. We had around 70 volumes that were not deleted, but that could have been due to something else.
I also noticed that my queue stats haven't been updated since 7/1/2022 @ 12:41am
`
import os
import glob
from clearml import Dataset
DATASET_NAME = "Bug"
DATASET_PROJECT = "ProjectFolder"
TARGET_FOLDER = "clearml_bug"
S3_BUCKET = os.getenv('S3_BUCKET')
if not os.path.exists(TARGET_FOLDER):
    os.makedirs(TARGET_FOLDER)
with open(f'{TARGET_FOLDER}/data.txt', 'w') as f:
    f.writelines('Hello, ClearML')
target_files = glob.glob(TARGET_FOLDER + "/**/*", recursive=True)
# upload dataset
dataset = Dataset.create(dataset_name=DATASET_NAME, dataset_project=DATASET_PR...
` python upload_data_to_clearml_copy.py
Generating SHA2 hash for 1 files
100%|████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 733.91it/s]
Hash generation completed
0%| | 0/1 [00:00<?, ?it/s]
Compressing local files, chunk 1 [remaining 1 files]
100%|████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 538.77it/s]
File compression completed: t...
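(For reference, the upload script above is cut off; a sketch of how it presumably continues, where the S3 output target is an assumption based on the S3_BUCKET variable:)
`
dataset = Dataset.create(dataset_name=DATASET_NAME, dataset_project=DATASET_PROJECT)
dataset.add_files(path=TARGET_FOLDER)
dataset.upload(output_url=S3_BUCKET)  # assumed upload destination
dataset.finalize()
`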
nope you'll just need to install clearml
Looks like it's picking up the projects, but when viewing them in the UI they disappear
Can you try to go into 'Settings' -> 'Configuration' and verify that you have 'Show Hidden Projects' enabled?
Okay great thanks SuccessfulKoala55
Sure, I'll check this out later in the week and get back to you
Hi yes all sorted ! 🙂
Yep figured this out yesterday. I had been tagging G type instances with an alarm as a fail safe if the AWS autoscaler failed. The alarm only stopped the instance and didn't terminate it (which deletes the drive). Thanks anyway CostlyOstrich36 and TimelyPenguin76 🙂
From SuccessfulKoala55's suggestion
This was the response from AWS:
"Thank you for for sharing the requested details with us. As we discussed, I'd like to share that our internal service team is currently unable to support any G type vCPU increase request for limit increase.
The issue is we are currently facing capacity scarcity to accommodate P and G instances. Our engineers are working towards fixing this issue. However, until then, we are unable to expand the capacity and process limit increase."