CostlyOstrich36 When I do
logger = task.get_logger()
logger.report_scalar(title='evaluate', series='score', value=5, iteration=task.get_last_iteration())
train(model_dir=trained_model_dst, pipeline_config_path=pipeline_config_path, save_checkpoints_steps=args.checkpoints)
It only captures the first iteration...
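For context, a minimal sketch of reporting the scalar from inside the training loop, so a point is logged every iteration rather than once before train(); num_epochs and evaluate() are placeholders, not names from this thread:
from clearml import Task

task = Task.init(project_name='main', task_name='task')
logger = task.get_logger()

num_epochs = 10  # placeholder


def evaluate(epoch):
    # hypothetical evaluation step; replace with the real scoring logic
    return float(epoch)


for epoch in range(num_epochs):
    score = evaluate(epoch)
    # one report_scalar call per iteration, so every point shows up in the plot
    logger.report_scalar(title='evaluate', series='score', value=score, iteration=epoch)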
No, it's stuck here:
Collecting botocore<1.23.0,>=1.22.9
Using cached botocore-1.22.12-py3-none-any.whl (8.1 MB)
It gets stuck on:
Collecting pip<20.2
Using cached pip-20.1.1-py2.py3-none-any.whl (1.5 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 21.3.1
Uninstalling pip-21.3.1:
Successfully uninstalled pip-21.3.1
Successfully installed pip-20.1.1
Collecting Cython
Using cached Cython-0.29.24-cp38-cp38-win_amd64.whl (1.7 MB)
Installing collected packages: Cython
Successfully installed Cython-0.29.24
Collecting boto3==1.19.9
Using cach...
CostlyOstrich36
The pipeline demo is still stuck on running.
The first step is still pending, and belongs to the services queue.
MelancholyElk85 So I need to change the paths each time?
Hi SuccessfulKoala55,
First I create the task and change the output_uri to some folder I created:
task = Task.init(project_name='main', task_name='task', output_uri=r'C:\Users\Chen\Desktop\folder')
Then I used:
task.upload_artifact
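For illustration, a hedged sketch of that flow with a complete upload_artifact call; the artifact name and object are placeholders:
from clearml import Task

# note the closing quote on the raw-string path
task = Task.init(project_name='main', task_name='task', output_uri=r'C:\Users\Chen\Desktop\folder')

# placeholder artifact; any name / serializable object can be passed
task.upload_artifact(name='my_artifact', artifact_object={'example': 'value'})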
Haa,
How can I change it?
The pipeline is on services
and the first task is on default.
The sdk.aws.s3.credentials.0.host and sdk.aws.s3.credentials.0.key, yes
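For reference, a hedged example of how those two keys might appear in clearml.conf; the host, key and secret values are placeholders:
sdk {
  aws {
    s3 {
      credentials: [
        {
          host: "my-storage-host:9000"   # sdk.aws.s3.credentials.0.host
          key: "ACCESS_KEY"              # sdk.aws.s3.credentials.0.key
          secret: "SECRET_KEY"
        }
      ]
    }
  }
}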
SweetBadger76
Is there something I can do to help you check? (task ID, project name, etc.)
Hi SweetBadger76
It still does not work. I'm trying to access a task called dataset_0 under the datasets project.
I get ValueError: No projects found when searching for datasets/.datasets/dataset_0
What is .datasets?
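For context, a hedged sketch of fetching that dataset by project and name, assuming dataset_0 was registered with the clearml Dataset API:
from clearml import Dataset

# look the dataset up by the project/name pair used when it was created
ds = Dataset.get(dataset_project='datasets', dataset_name='dataset_0')
local_path = ds.get_local_copy()  # cached, read-only copy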
Or do I have to dive into the code of the train function and write the code there?
SuccessfulKoala55
Hi,
Using task.upload_artifact
Yes, that's exactly what I do. But I'm trying to figure out if I can write the line of code
logger.report_scalar(title='evaluate', series='score', value=5, iteration=task.get_last_iteration())
anywhere in the code?
Does that line of code open up another process in parallel to the training?
But now I have a different problem, in dataset.get_mutable_local_copy.
I get:
File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/clearml/datasets/dataset.py", line 1276, in _download_part
raise ValueError("Could not download dataset id={} entry={}".format(self._id, data_artifact_name))
ValueError: Could not download dataset
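For reference, a hedged sketch of the kind of call that raised the error above; the project, dataset name and target folder are placeholders:
from clearml import Dataset

ds = Dataset.get(dataset_project='datasets', dataset_name='dataset_0')
# copies the dataset contents into a writable folder; raises ValueError if a part cannot be downloaded
copy_path = ds.get_mutable_local_copy(target_folder='/tmp/dataset_0_copy', overwrite=True)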