Thank you, this fixed the issue!
http://IP_address:port/pipeline/test_task.task_id/artifacts/test_artifact/test_artifact.pkl
like so :)
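For reference, the same artifact can also be fetched programmatically instead of via the URL; a minimal sketch, assuming a step task that registered an artifact named 'test_artifact' as in the URL above (the task id here is a hypothetical placeholder):
```python
from clearml import Task

# Fetch the step task and download its pickled artifact to a local path.
step_task = Task.get_task(task_id='<test_task_id>')  # hypothetical id
local_pkl = step_task.artifacts['test_artifact'].get_local_copy()
print(local_pkl)  # local path to the downloaded test_artifact.pkl
```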
Thank you! In fact, I'm already using "start". I should have been clearer: can I make the Tasks that I'm adding to the pipeline also run locally, so that the entire pipeline runs locally?
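A minimal sketch of what this could look like, assuming clearml >= 1.1 where PipelineController exposes start_locally(); the run_pipeline_steps_locally flag is what should run the step Tasks in the local process instead of enqueuing them:
```python
from clearml.automation import PipelineController

# Hedged sketch: names and version string are illustrative only.
pipe = PipelineController(name='pipeline_main',
                          project='pipeline_test',
                          version='1.0')
# ... pipe.add_step(...) calls here ...

# Run the controller AND its step Tasks in the local process.
pipe.start_locally(run_pipeline_steps_locally=True)
```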
The section name needs to be added in both the base task and the pipeline task for it to work. Since the parameters also show up in the "General" section in the web interface when they are connected by name only (without a section name), I didn't think this could matter. Thank you for your help!
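A sketch of the fix, condensed into one snippet for brevity (in practice the two halves live in separate scripts; all names here are hypothetical): the section name passed to task.connect() in the base task must be repeated in the parameter_override keys of the pipeline step:
```python
from clearml import Task
from clearml.automation import PipelineController

# Base task: connect the parameters under an explicit section name.
task = Task.init(project_name='pipeline_test', task_name='base_task')
params = {'learning_rate': 0.01}
task.connect(params, name='Hyperparams')  # section name matters

# Pipeline task: the override key must repeat the same section name;
# a bare parameter name would be looked up in the 'General' section.
pipe = PipelineController(default_execution_queue='default_queue',
                          add_pipeline_tags=False)
pipe.add_step(name='step_1',
              base_task_project='pipeline_test',
              base_task_name='base_task',
              parameter_override={'Hyperparams/learning_rate': 0.1})
```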
Here's the example: Even though I'm passing different parameters to the two clones, they will end up configured with the same (the second) parameter set.
```python
from clearml import Task
from clearml.automation import PipelineController

project_name = 'pipeline_test'
task = Task.init(project_name=project_name,
                 task_name='pipeline_main',
                 task_type=Task.TaskTypes.controller,
                 reuse_last_task_id=False)
pipe = PipelineController(default_execution_queue='default_queue',
                          add_pipeline_tags=False)
#%%
...
```
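The elided part presumably adds the two cloned steps; a hypothetical sketch of what that could look like (step names, base task name, and parameter names invented for illustration):
```python
# Hypothetical continuation: two steps cloned from the same base task,
# each given its own parameter_override.
pipe.add_step(name='clone_1',
              base_task_project=project_name,
              base_task_name='base_task',
              parameter_override={'General/param': '1'})
pipe.add_step(name='clone_2',
              base_task_project=project_name,
              base_task_name='base_task',
              parameter_override={'General/param': '2'})
pipe.start()
# As described above, both clones end up configured with the
# second parameter set.
```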
Thank you! Yes, that might be the best option. I'll have to split it up already when I create the datasets then, right?
The use case is, for example, my other question from today: I want to test/debug the parameter_override functionality (and pipelines in general), and for that it would be fastest if the Tasks that are part of the pipeline also ran locally.
This also helped me 🙂 Really, I'd like it both ways, such that the Task links to the Dataset it created, as well as the Dataset to the Task it was created by.
Right now I'm doing
```python
dataset = Dataset.create(...)
task.connect({'dataset_id': dataset.id}, name='Datasets')
```
for the second direction. Is there a better way to do this? (I'm using it to pass Datasets between Tasks, one Task operating on a Dataset that was created by another Task.) Thank you!
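For completeness, a hedged end-to-end sketch of this pattern (project and dataset names are made up): the producer connects the dataset id to its task, and the consumer reads the connected id back and fetches a local copy:
```python
from clearml import Task, Dataset

# Producer task: create the dataset and record its id on the task.
dataset = Dataset.create(dataset_project='pipeline_test',
                         dataset_name='my_dataset')
# ... dataset.add_files(...), dataset.upload(), dataset.finalize() ...
Task.current_task().connect({'dataset_id': dataset.id}, name='Datasets')

# Consumer task: read the connected id (overridable from the UI or a
# pipeline parameter_override) and fetch a local copy of the dataset.
params = {'dataset_id': ''}
Task.current_task().connect(params, name='Datasets')
data_dir = Dataset.get(dataset_id=params['dataset_id']).get_local_copy()
```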
Or, when I try to load a dataset from an old task, this is the error that I get:
```
File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/clearml/datasets/dataset.py", line 835, in get
    raise ValueError('Could not load Dataset id={} state'.format(task.id))
ValueError: Could not load Dataset id=7d05e1cad34441799f79931337612ae1 state
```
Hello! I'm doing it like this:
```python
scalars = task.get_reported_scalars()
```
It returns a dictionary with all the scalars for all iterations that you can access like so:
```python
scalars['epoch_accuracy']['validation: epoch_accuracy']['y']
```
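Put together as a self-contained sketch (the task id is a placeholder, and the 'epoch_accuracy' title/series names are just the ones from this example):
```python
from clearml import Task

task = Task.get_task(task_id='<your_task_id>')
scalars = task.get_reported_scalars()
# Nested dict: scalars[graph_title][series_name] -> {'x': [...], 'y': [...]}
series = scalars['epoch_accuracy']['validation: epoch_accuracy']
iterations, values = series['x'], series['y']
```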
Oh yes! I should have checked the last messages before posting. Thank you for pointing me to it! I will try the fix.
For example, the path that is visible in the web interface under Artifacts/File Path.
It's working now for me with 1.1.4rc0 as well, thank you!
Nice!
I can't really think of a reason not to do it automatically, at least for my use case. What name would you give the dataset(s) in the Configuration? Also, the IDs as an entry in the Configuration won't be clickable in the web interface, right?
How does clearml detect a preview or thumbnail associated with a file? E.g., if we added a ['preview'] group to the .hdf5 file (containing a PNG/TIFF/... image), would it be able to find it?