Hi, another question:
dataset_upload_task = Task.get_task(task_id=args['dataset_task_id'])
iris_pickle = dataset_upload_task.artifacts['dataset'].get_local_copy()
How would I replicate the above for a Dataset? i.e. how do I get the iris_pickle file? I did some hacking along these lines:
ds.get_mutable_local_copy(target_folder='data')
Subsequently, I also have to load the file by name. I wonder whether there is a more elegant way.
Hi DeliciousBluewhale87 ,
You can just get a local copy of the dataset with ds.get_local_copy(); this will download the dataset from the dataset task (using the cache) and return a path to the downloaded files.
Now, in this path you'll have all the files that are in the dataset. You can go over them with ds.list_files() (or ds.list_files()[0] if you have only one file) and get the one you want.
maybe something like:
ds_path = ds.get_local_copy()
iris_pickle_file_name = ds.list_files()[0]
iris_pickle_path = os.path.join(ds_path, iris_pickle_file_name)
Can this do the trick?
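If the end goal is the unpickled object itself, the path from the snippet above can be fed straight into pickle. A small sketch of the full round trip, assuming a single-file dataset; the helper names, the project/dataset names, and the pickle contents are placeholders, not part of the ClearML API:

```python
import os
import pickle


def first_file_path(ds_path: str, files: list) -> str:
    # Pure helper: join the downloaded dataset folder with the first
    # file name (assumes the dataset contains a single file).
    return os.path.join(ds_path, files[0])


def load_first_pickle(dataset_project: str, dataset_name: str):
    # Requires a reachable ClearML server; the import lives here so the
    # helper above can be used without clearml installed.
    from clearml import Dataset

    ds = Dataset.get(dataset_project=dataset_project, dataset_name=dataset_name)
    ds_path = ds.get_local_copy()  # downloads via the cache, returns a local folder
    with open(first_file_path(ds_path, ds.list_files()), "rb") as f:
        return pickle.load(f)
```

Usage would then be a one-liner, e.g. `iris = load_first_pickle("dataset-project", "dataset-task-name")`.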
Hi DeliciousBluewhale87 ,
You can get the latest dataset by calling Dataset.get:
from clearml import Dataset
ds = Dataset.get(dataset_project="dataset-project", dataset_name="dataset-task-name")
This will return the latest version of the dataset from that project.