@<1698868530394435584:profile|QuizzicalFlamingo74> did you find a solution?)
@<1706116294329241600:profile|MinuteMouse44> unfortunately no. I created my own upload method in Python/Django and passed the S3 directories to the ClearML dataset so it still tracks the datasets.
It's not an ideal solution, but it's working for now.
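For anyone hitting the same issue: one way to keep ClearML tracking data that already lives in S3, without pushing the files through ClearML's zip/compression path, is `Dataset.add_external_files()`, which records links to the objects instead of copying them. A minimal sketch, assuming the data is already in S3 (the project, dataset, and bucket names are placeholders):

```python
from clearml import Dataset

# Placeholders: adjust project/dataset names and the S3 path to your setup.
dataset = Dataset.create(
    dataset_name="my-dataset",
    dataset_project="my-project",
)

# Register links to the existing S3 objects; nothing is downloaded,
# zipped, or re-uploaded on the client side.
dataset.add_external_files(source_url="s3://my-bucket/training-data/")

# upload() here only pushes the dataset state (the links), then
# finalize() closes this dataset version.
dataset.upload()
dataset.finalize()
```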
I think, based on the source file dataset.py, compression=False is not possible:
def upload(
    self,
    show_progress=True,
    verbose=False,
    output_url=None,
    compression=None,
    chunk_size=None,
    max_workers=None,
    retries=3,
):
    # type: (bool, bool, Optional[str], Optional[str], int, Optional[int], int) -> ()
    """
    Start file uploading, the function returns when all files are uploaded.

    :param show_progress: If True, show upload progress bar
    :param verbose: If True, print verbose progress report
    :param output_url: Target storage for the compressed dataset (default: file server)
        Examples: `s3://bucket/data`, `gs://bucket/data`, `azure://bucket/data`, `/mnt/share/data`
    :param compression: Compression algorithm for the Zipped dataset file (default: ZIP_DEFLATED)
    :param chunk_size: Artifact chunk size (MB) for the compressed dataset,
        if not provided (None) use the default chunk size (512mb).
        If -1 is provided, use a single zip artifact for the entire dataset change-set (old behaviour)
    :param max_workers: Numbers of threads to be spawned when zipping and uploading the files.
        If None (default) it will be set to:
        - 1: if the upload destination is a cloud provider ('s3', 'gs', 'azure')
        - number of logical cores: otherwise
    :param int retries: Number of retries before failing to upload each zip. If 0, the upload is not retried.
    :raise: If the upload failed (i.e. at least one zip failed to upload), raise a `ValueError`
    """
Hi @<1698868530394435584:profile|QuizzicalFlamingo74>, try `compression=False`.
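For completeness, here is what that suggestion would look like in code. Whether ClearML actually honors a falsy `compression` value is exactly what's in question above: the docstring says `None` falls back to ZIP_DEFLATED, so `False` may be treated the same way. Treat this as an untested sketch (dataset/project names are placeholders):

```python
from clearml import Dataset

dataset = Dataset.create(
    dataset_name="my-dataset",     # placeholder
    dataset_project="my-project",  # placeholder
)
dataset.add_files(path="./data")

# Suggested workaround from the thread. Caveat: if ClearML substitutes
# ZIP_DEFLATED for any falsy value (as the docstring's default handling
# suggests), this may still compress the archive.
dataset.upload(compression=False)
dataset.finalize()
```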