Apparently found out a solution:

```python
dataset_zip = dataset._task.artifacts['data'].get()
```

will return the path to the zip file containing all the files (it will be downloaded to the local machine). After that:

```python
import zipfile

zip_file = zipfile.ZipFile(dataset_zip, 'r')
files = zip_file.namelist()  # retrieve the names of the files
```

unzip using:

```python
import os

os.system(f'unzip {dataset_zip}')  # in this case to your script directory
```

and using the files list one can then open them selectively.
Simplified a little bit and removed private parameters, but that's pretty much the code. We did not try with toy examples, since that was already done with the example pipelines when we implemented it, and the model training itself is quite basic there already (only a few hyperparameters set).
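As a side note, the `os.system(f'unzip ...')` call above can be replaced entirely by the stdlib `zipfile` module, which also lets you extract only the files you need. A minimal, self-contained sketch (the zip and its member names here are made up for illustration):

```python
import os
import zipfile

# Build a small zip to stand in for the downloaded dataset artifact.
dataset_zip = 'data_example.zip'
with zipfile.ZipFile(dataset_zip, 'w') as zf:
    zf.writestr('train.csv', 'a,b\n1,2\n')
    zf.writestr('test.csv', 'a,b\n3,4\n')

# List the members and extract only one of them, instead of shelling
# out to `unzip` and inflating everything.
with zipfile.ZipFile(dataset_zip, 'r') as zf:
    files = zf.namelist()              # ['train.csv', 'test.csv']
    zf.extract('train.csv', path='.')  # selective extraction

print(files)
print(os.path.exists('train.csv'))
```

This avoids depending on the `unzip` binary being installed on the agent machine.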
```python
from importlib.machinery import EXTENSION_SUFFIXES

import catboost
from clearml import Task, Logger, Dataset
import lightgbm as lgb
import numpy as np
import pandas as pd
import dask.dataframe as dd
import matplotlib.pyplot as plt

MODELS = {
    'catboost': {
        'model_class': catboost.CatBoostClassifier,
        'file_extension': 'cbm'
    },
    'lgbm': {
        'model_class': lgb.LGBMClassifier,
        'file_extension': 'txt'
    }
}

class ModelTrainer():
    def __init__(sel...
```
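The `MODELS` dict above is a registry pattern: each key maps a model name to its class and the file extension used when saving. A minimal runnable sketch of how such a registry is typically consumed (with stand-in classes so it runs without catboost/lightgbm installed; `build_model` is a hypothetical helper, not from the original code):

```python
# Stand-ins for catboost.CatBoostClassifier / lgb.LGBMClassifier.
class DummyCatBoost:
    pass

class DummyLGBM:
    pass

MODELS = {
    'catboost': {'model_class': DummyCatBoost, 'file_extension': 'cbm'},
    'lgbm': {'model_class': DummyLGBM, 'file_extension': 'txt'},
}

def build_model(name):
    """Look up the registry entry and instantiate the matching class."""
    entry = MODELS[name]
    return entry['model_class'](), entry['file_extension']

model, ext = build_model('lgbm')
print(type(model).__name__, ext)  # DummyLGBM txt
```

The benefit is that adding a new model type only touches the registry, not the training code.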
Oooohhh... you mean the key of the nested dict? That would make a lot of sense.
I did manage to get it working, but only by hardcoding the repository path with sys.path.append(), using the absolute path on my machine.
The error appears after the execution of the backtest_prod component.
UnsightlyHorse88, do you know?
I saw the part about chunks, but it is not clear how one can retrieve specific files from the dataset.
yes, variations of the data, using only a subset of the features
That's the script that produces the error. You can also see the struggle with importing the load_model function. (Any tips on best practices for structuring the pipeline are also gladly accepted.)
Could you supply any reference for this "dataset containing other datasets" functionality? I might have skipped it when reading the documentation, but I do not recall seeing it.
Steps (pipeline components):
1. Load the model
2. Inference with the model

It's equivalent to:

```python
model = Step1(*args)
preds = Step2(model, *args)
```
After step 1, I have the model loaded as a torch object, as expected. When this object is passed to step 2, inside step 2 it is read as an object of class 'pathlib2.PosixPath'.
I assume that is because there is some kind of problem in the pickling/loading/dumping of the inputs from one step to another in the pipeline. Is it some kind of known issue or ...
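One workaround for objects arriving in the next step as a path: make the hand-off by file explicitly, so each step controls its own serialization. A minimal sketch with stdlib pickle (the step names and the dict standing in for the model are hypothetical; with a torch model you would use torch.save/torch.load instead):

```python
import pickle
import tempfile
from pathlib import Path

def step1_save(model, out_dir):
    """Serialize the model to disk and return only the path."""
    path = Path(out_dir) / 'model.pkl'
    with open(path, 'wb') as f:
        pickle.dump(model, f)
    return path

def step2_load(model_path):
    """Accept a path (which is what the step receives anyway) and
    deserialize the object explicitly."""
    with open(model_path, 'rb') as f:
        return pickle.load(f)

with tempfile.TemporaryDirectory() as d:
    original = {'weights': [0.1, 0.2]}   # any picklable model object
    restored = step2_load(step1_save(original, d))
print(restored)
```

Since step 2 expects a path by design, it no longer matters whether the pipeline passes the object itself or a materialized file.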
Martin, if you want, feel free to add your answer on Stack Overflow so that I can mark it as the solution.
That would make sense, although ClearML, at least in the UI, shows the deeper level of the nested dict as an int, as one would expect.
Is there a way to do that to trigger separate remote executions?
```python
import importlib
import argparse
from datetime import datetime

import pandas as pd
from clearml.automation.controller import PipelineDecorator
from clearml import TaskTypes, Task


@PipelineDecorator.component(
    return_values=['model', 'features_to_build']
)
def get_model_and_features(task_id, model_type):
    from clearml import Task
    import sys
    sys.path.insert(0, '/home/zanini/repo/RecSys')
    from src.dataset.backtest import load_model

    task = Task.get_task(task_id=task_i...
```
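On the hardcoded `sys.path.insert(0, '/home/zanini/repo/RecSys')` above: one machine-independent alternative is to derive the repo root from the script's own location rather than an absolute literal. A sketch, assuming the layout `<repo>/src/dataset/backtest.py` (in the real script you would start from `Path(__file__)` instead of the literal path used here for illustration):

```python
import sys
from pathlib import Path

# Hypothetical script location inside the repo; in the real component
# this would be Path(__file__).
script_path = Path('/home/zanini/repo/RecSys/src/dataset/backtest.py')

# parents[0] = .../src/dataset, parents[1] = .../src, parents[2] = repo root
repo_root = script_path.parents[2]
print(repo_root)  # /home/zanini/repo/RecSys

sys.path.insert(0, str(repo_root))
```

This keeps the import of `src.dataset.backtest` working on any clone of the repository, regardless of where it is checked out.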
Additionally, I have the following error now:

```
2022-08-10 19:53:25,366 - clearml.Task - INFO - Waiting to finish uploads
2022-08-10 19:53:36,726 - clearml.Task - INFO - Finished uploading
Traceback (most recent call last):
  File "/home/zanini/repo/RecSys/src/dataset/backtest.py", line 186, in <module>
    backtest = run_backtest(
  File "/home/zanini/repo/RecSys/.venv/lib/python3.9/site-packages/clearml/automation/controller.py", line 3329, in internal_decorator
    a_pipeline.stop()
  File...
```
It is an instance of a custom class.
It works if I use as a helper function, but not as a component (using the decorator)
I was checking here, and apparently if I use a parameter as suggested, together with a Task.init(task_name=f'{task name in this loop}') for each of the loops, it should work, right? That would create different tasks on the server.
My code pretty much creates a dataset, uploads it, trains a model (that's where the current task starts), evaluates it, and uploads all the artifacts and metrics. The artifacts and configurations are uploaded alright, but the metrics and plots are not. As with Lavi, my code hangs on task.close(), where it seems to be waiting for the metrics, etc., but never finishes. No retry message is shown either.
After a print I added for debugging right before task.close(), the only message I get in the consol...
Yes, it seems indeed it was waiting for the uploads, which weren't happening (I did give it quite a while to finish in my tests). I thought it was a problem with the metrics, but apparently it was the artifacts before them. The artifacts were shown in the web UI dashboard, but were not on S3.
```
all done
ClearML Monitor: Could not detect iteration reporting, falling back to iterations as seconds-from-start
^CTraceback (most recent call last):
  File "/home/zanini/repo/RecSys/src/cli/retraining_script.py", line 710, in <module>
    mr.retrain()
  File "/home/zanini/repo/RecSys/src/cli/retraining_script.py", line 701, in retrain
    self.task.close()
  File "/home/zanini/repo/RecSys/.venv/lib/python3.9/site-packages/clearml/task.py", line 1783, in close
    self.__shutdown()
  File "...
```
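For the hang on task.close(), one pattern worth trying is to flush explicitly before closing: ClearML's Task exposes a flush() method with a wait_for_uploads flag that blocks until pending uploads finish, which makes the failure point visible. A sketch with a hypothetical helper, exercised against a stand-in object so it runs without a ClearML server:

```python
def finish_task(task):
    """Block until pending uploads complete, then close the task.

    `task` is expected to expose ClearML's Task.flush()/close() API;
    flush(wait_for_uploads=True) waits for artifact/metric uploads
    before close() tears the task down.
    """
    task.flush(wait_for_uploads=True)
    task.close()

# Stand-in recording the calls, so the sketch is testable offline.
class FakeTask:
    def __init__(self):
        self.calls = []
    def flush(self, wait_for_uploads=False):
        self.calls.append(('flush', wait_for_uploads))
    def close(self):
        self.calls.append(('close', None))

t = FakeTask()
finish_task(t)
print(t.calls)  # [('flush', True), ('close', None)]
```

If flush() itself hangs, that points at the S3 upload (credentials, endpoint, bucket permissions) rather than at close().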
It should work as long as they are in the same file; you can, however, launch and wait for any Task (see "pipelines from tasks").
Do I call it as a function normally, as with the other one, or do I need to import it? (My initial problem was actually that it was not finding the other function as a pipeline component, so I thought it was not able to import it as a secondary sub-component.)