Then possibly there is another reason. We need to search the ES logs.
Ok. Currently the EBS volume is 15 GB; is there a recommended size?
curl -XGET "localhost:9200/_cluster/allocation/explain?pretty"
curl: (7) Failed to connect to localhost port 9200 after 0 ms: Couldn't connect to server
Looks like elastic is failing to access a shard. Do you have visibility into machine utilization? How much RAM is elastic consuming?
Also, is this the entire error repeating or is there more context?
On what host did you run the curl command?
I assume that ec2-13-217-109-164.compute-1.amazonaws.com is the ec2 instance where the API is running?
Are you using the files server or S3 for storage? Can you verify on the storage itself that the artifacts are actually uploaded and are downloadable?
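For example, if it's S3, a quick check could look like this (a minimal sketch assuming boto3; the bucket and key names below are hypothetical placeholders):
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "my-clearml-artifacts", "icp/model_training/model.pkl"  # hypothetical names

try:
    # head_object fails if the object is missing or not readable with the current credentials
    head = s3.head_object(Bucket=bucket, Key=key)
    print(f"Found {key}: {head['ContentLength']} bytes")
except ClientError as err:
    print(f"Not accessible: {err}")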
That's a big context!
In general, I'm using standard functions; the script is running in a SageMaker pipeline.
The model, however, is a composite, and consists of multiple primitive ones.
task = Task.init(
    project_name="icp",
    task_name=f"model_training_{client_name}",
    task_type=Task.TaskTypes.training,
    auto_connect_frameworks={
        'matplotlib': True, 'tensorflow': False, 'tensorboard': False,
        'pytorch': False, 'xgboost': False, 'scikit': False, 'fastai': False,
        'lightgbm': False, 'hydra': True, 'detect_repository': True, 'tfdefines': False,
        'joblib': False, 'megengine': False, 'catboost': False, 'gradio': False
    },
    output_uri=False
)
task.set_script(repository=repo_url, branch=branch_name, working_dir="./", commit=commit_id)
task.set_parameter("commit_id", commit_id)
task.connect_configuration()
output_model = OutputModel(task=task, name="trained_model")
output_model.update_weights(register_uri=s3_model_uri)
....
task = Task.current_task()
if task is None:
    print("Warning: No ClearML task found. Metrics will not be logged to ClearML.")
    logger = None
else:
    logger = task.get_logger()
logger.report_matplotlib_figure()
logger.report_scalar()
It's the entire error repeating.
And this happens at the end of the script.
I'm using the recommended instance (t3.large).
And are you still getting exactly this error?
<500/100: events.add_batch/v1.0 (General data error: err=1 document(s) failed to index., extra_info=[events-log-d1bd92a3b039400cbafc60a7a5b1e52b][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[events-log-d1bd92a3b039400cbafc60a7a5b1e52b][0]] containing [index {[events-log-d1bd92a3b039400cbafc60a7a5b1e52b][f3abecd0f46f4bd289e0ac39662fd850], source[{"timestamp":1747654820464,"type":"log","task":"fd3d00d99d88427bbc576cba53db062d","level":"info","worker":"b1193fbdd662","msg":"Starting the training.\nClearML Monitor: GPU monitoring failed getting GPU reading, switching off GPU monitoring","model_event":false,"@timestamp":"2025-05-19T11:40:21.919Z","metric":"","variant":""}]}] and a refresh])>
I need to SSH into the instance, right?
I'll check it out.
Can you provide a standalone code snippet that reproduces this behaviour?
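E.g. something minimal along these lines (a sketch only; the project name and URI below are placeholders, not your actual values):
from clearml import Task, OutputModel

task = Task.init(project_name="debug", task_name="repro_update_weights", output_uri=False)
output_model = OutputModel(task=task, name="trained_model")
output_model.update_weights(register_uri="s3://some-bucket/path/to/model.pkl")  # placeholder URI
task.close()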
To close this thread: the file server port wasn't configured.
I added
- IpProtocol: tcp
  FromPort: 8081
  ToPort: 8081
  CidrIp: 0.0.0.0/0
to the CloudFormation template, and it was resolved.
Thanks a bunch, guys
@<1722061389024989184:profile|ResponsiveKoala38> @<1523701070390366208:profile|CostlyOstrich36>
green open events-log-d1bd92a3b039400cbafc60a7a5b1e52b Yh4BPGmgRZKU7STdCghmtw 1 0 96 0 175.1kb 175.1kb 175.1kb
I'm beginning to think that it's something besides ClearML. I'll execute the training script remotely (on SageMaker) instead of in SageMaker local mode.
@<1722061389024989184:profile|ResponsiveKoala38> @<1523701070390366208:profile|CostlyOstrich36>
It's ClearML; I commented out the clearml lines, and it ran successfully!
so, the same ClearML monitor error, but another issue now.
btw, the task logs the configuration, artifacts, etc.
I get this error at the end.
It's:
ClearML Monitor: Could not detect iteration reporting, falling back to iterations as seconds-from-start
fzd6tw0x46-algo-1-lswt4 | 2025-05-20 10:02:08,177 - urllib3.connectionpool - WARNING - Retrying (Retry(total=2, connect=2, read=5, redirect=5, status=None)) after connection broken by 'ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x76f683ccb670>, 'Connection to "" timed out. (connect timeout=300.0)')': /
fzd6tw0x46-algo-1-lswt4 | 2025-05-20 10:02:08,178 - urllib3.connectionpool - WARNING - Retrying (Retry(total=2, connect=2, read=5, redirect=5, status=None)) after connection broken by 'ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x76f683cc9810>, 'Connection to "" timed out. (connect timeout=300.0)')': /
fzd6tw0x46-algo-1-lswt4 | 2025-05-20 10:02:08,178 - urllib3.connectionpool - WARNING - Retrying (Retry(total=2, connect=2, read=5, redirect=5, status=None)) after connection broken by 'ConnectTimeoutError(<urlli
no, it's something else.
I commented out the above two lines and I was still facing the issue.
I tested that theory before; I commented out these two lines
output_model = OutputModel(task=task, name="trained_model")
output_model.update_weights(register_uri=s3_model_uri)
The issue, however, persisted.
Probably port 9200 is not mapped from the ES container in the docker compose.
The easiest would be to perform "sudo docker exec -it clearml-elastic /bin/bash" and then run the curl command from inside the ES docker
So you narrowed it down to these lines?
output_model = OutputModel(task=task, name="trained_model")
output_model.update_weights(register_uri=s3_model_uri)
This is what causes the timeout errors? Did you define
I have been rerunning it since yesterday. The error persists.
I can try one more time though.
In ES container please run "curl -XGET localhost:9200/_cat/indices"
Also, it would be great if you could add a recommendation for EBS size in this guide ( None ). The Elasticsearch issue happened with 8 GB and was resolved with 15 GB.