Answered
Hello everyone! Does anyone use pipelines? I have an issue with the logger. Scenario:

Hello everyone!
Does anyone use pipelines? I have an issue with the logger. Scenario:
I have a pipeline defined with a set of pipeline.add_function_step calls. At some stage I want to report a table to the pipeline task, which I do with PipelineController.get_logger().report_table. When I execute it, I cannot see any results. However, if I add a sleep of 5 seconds, that seems to be enough time to upload the data, and I see the report.
Maybe someone knows a better way to wait until the upload is completed?

Posted 2 years ago

Answers 11


CostlyOstrich36, maybe you have an idea why this code might not work for me?

Posted 2 years ago

SmugDolphin23, maybe you have an idea?

Posted 2 years ago

I'm running 1.7.0 (the latest Docker image available).
Your example did work for me, but I'm gonna try the flush() method now.

Posted 2 years ago

Hey GrotesqueDog77,
A few things. First, you can call _logger.flush(), which should solve the issue you're seeing (we're working on adding auto-flushing when tasks end 🙂).
Second, I ran this code and it works for me without a sleep; does it also work for you?
`
from clearml import PipelineController


def process_data(inputs):
    import pandas as pd
    from clearml import PipelineController

    # Build a small demo DataFrame and report it from within the step
    data = {'Name': ['Tom', 'nick', 'krish', 'jack'],
            'Age': [20, 21, 19, 18]}
    _logger = PipelineController.get_logger()
    df = pd.DataFrame(data)
    _logger.report_table('Awesome', 'Details', table_plot=df)


pipeline = PipelineController(name='erez', project='erez', version='0.1')
pipeline.add_function_step(name='process_data', function=process_data,
                           cache_executed_step=True)

pipeline.start_locally(run_pipeline_steps_locally=True)
`
What SDK version are you using? I'm using v1.7.1. I also didn't pass the data as input, which might affect things, but I'd be happy if you could give it a try.
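For reference, a minimal sketch of where that _logger.flush() call could go inside a step (same report_table pattern as above; the explicit flush is the only addition):
`
def process_data(inputs):
    import pandas as pd
    from clearml import PipelineController

    _logger = PipelineController.get_logger()
    df = pd.DataFrame(inputs)
    _logger.report_table('Awesome', 'Details', table_plot=df)
    # Flush pending reports before the step function returns, so the
    # table upload is not lost if the step process exits immediately
    _logger.flush()
`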

Posted 2 years ago

GrotesqueDog77 checking πŸ™‚

Posted 2 years ago

AnxiousSeal95 here

Posted 2 years ago

Hi GrotesqueDog77,

Can you please add a small code snippet showing this behavior? Is there a reason you're not reporting this from within the task itself, or to the controller?

Posted 2 years ago

Yes, it did work! Thank you!

Posted 2 years ago

Hi CostlyOstrich36

Here is the code example which does not work for me:
`
from clearml import PipelineController


def process_data(inputs):
    import pandas as pd
    from clearml import PipelineController

    _logger = PipelineController.get_logger()
    df = pd.DataFrame(inputs)
    _logger.report_table('Awesome', 'Details', table_plot=df)


pipeline = PipelineController(name='best_pipeline', project='test')
pipeline.add_function_step(name='process_data', function=process_data,
                           # 'some_data' is defined elsewhere in my script
                           function_kwargs=dict(inputs=some_data),
                           cache_executed_step=True)
pipeline.add_function_step(name='next', function=next,
                           function_kwargs=dict(something="${process_data.output}"))
pipeline.start_locally()
`
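On a related note, for the "${process_data.output}" reference to resolve, the step presumably needs to declare what it returns. A minimal sketch of that wiring, assuming add_function_step's function_return parameter:
`
from clearml import PipelineController


def process_data(inputs):
    import pandas as pd
    # Return the DataFrame so downstream steps can consume it
    return pd.DataFrame(inputs)


pipeline = PipelineController(name='best_pipeline', project='test')
pipeline.add_function_step(name='process_data', function=process_data,
                           # 'some_data' as in the snippet above
                           function_kwargs=dict(inputs=some_data),
                           # Declare the return value under the name 'output'
                           # so that "${process_data.output}" resolves
                           function_return=['output'],
                           cache_executed_step=True)
`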

Posted 2 years ago

When I add a sleep to process_data, it works, as long as the sleep gives enough time to upload the data:

`
def process_data(inputs):
    import time
    import pandas as pd
    from clearml import PipelineController

    _logger = PipelineController.get_logger()
    df = pd.DataFrame(inputs)
    _logger.report_table('Awesome', 'Details', table_plot=df)
    # Workaround: give the background uploader time to finish
    time.sleep(10)
`
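An alternative sketch, assuming an SDK version where Logger.flush accepts a wait argument, would block until the upload completes instead of guessing a sleep duration:
`
def process_data(inputs):
    import pandas as pd
    from clearml import PipelineController

    _logger = PipelineController.get_logger()
    df = pd.DataFrame(inputs)
    _logger.report_table('Awesome', 'Details', table_plot=df)
    # Block until pending reports are uploaded (the wait argument
    # may not exist on older SDK versions)
    _logger.flush(wait=True)
`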

Posted 2 years ago

Let me know; if this still doesn't work, I'll try to reproduce your issue 🙂

Posted 2 years ago