New RC (1.1.2rc0) Version Available!

New RC (1.1.2rc0) version available! 🎉
Change set:
ClearML Data - Upload dataset now supports chunk_size, for multi-part upload/download (useful with large datasets)
ClearML Data - Get Dataset supports partial download (e.g., for debugging, or for more efficient multi-node support)
ClearML SDK - Pipelines - Nested pipeline decorators, i.e., pipeline steps calling other pipeline steps
ClearML SDK - Pipelines - Add configuration_objects to the pipeline step override options

Feedback is welcome!

  
  
Posted 3 years ago

Answers


A few more details on the new RC (1.1.2rc0) change set:

Upload dataset now supports chunk_size, for multi-part upload/download (useful with large datasets).
Backwards compatible, i.e., parent datasets do not have to support multi-part storage.
Note: multi-part datasets should be accessed with the latest RC.
CLI: clearml-data upload --chunk-size
SDK: Dataset().upload(..., chunk_size=None)
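For example, a chunked upload from the SDK could look like this (a minimal sketch; the dataset name, project, and ./data folder are made up, and chunk_size is the chunk size in MB as I read the docstring):

    from clearml import Dataset

    # Create a new dataset version and add a local folder (hypothetical names/path)
    ds = Dataset.create(dataset_name="my_dataset", dataset_project="examples")
    ds.add_files(path="./data")
    # Split the compressed dataset into ~512 MB chunks for multi-part upload
    ds.upload(chunk_size=512)
    ds.finalize()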
Get Dataset supports partial download (e.g., for debugging, or for more efficient multi-node support).
Note: the total number of parts includes the parts contributed by parent versions.
CLI: clearml-data get --num-parts X --part Y
SDK: Dataset().get_local_copy(..., part=None, num_parts=None)
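And the matching partial download, e.g., one shard per node (same made-up names):

    from clearml import Dataset

    # Fetch only shard 0 of 4, e.g., on worker 0 of a 4-node job
    ds = Dataset.get(dataset_name="my_dataset", dataset_project="examples")
    local_path = ds.get_local_copy(part=0, num_parts=4)
    print(local_path)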

Nested pipeline decorators, i.e., pipeline steps calling other pipeline steps.
Class methods to be used inside pipelines to access the pipeline Task (logging/artifacts):
Pipeline.get_logger()
Pipeline.upload_artifact()
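A rough sketch of how the nesting plays out with the decorator interface (assuming the class methods above are exposed on PipelineDecorator; names and values here are illustrative):

    from clearml.automation.controller import PipelineDecorator

    @PipelineDecorator.component(return_values=["doubled"])
    def inner_step(x):
        return x * 2

    @PipelineDecorator.component(return_values=["result"])
    def outer_step(x):
        # A pipeline step calling another pipeline step (nested)
        doubled = inner_step(x)
        return doubled + 1

    @PipelineDecorator.pipeline(name="nested example", project="examples", version="0.1")
    def run_pipeline(x=1):
        result = outer_step(x)
        # Access the pipeline Task from inside the pipeline logic
        PipelineDecorator.get_logger().report_scalar(
            title="result", series="value", value=result, iteration=0)
        PipelineDecorator.upload_artifact(name="result", artifact_object=result)

    if __name__ == "__main__":
        PipelineDecorator.run_locally()  # run controller and steps locally, for debugging
        run_pipeline(x=3)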

Add configuration_objects to the pipeline step override options:
pipeline.add_step(..., configuration_overrides={'General': dict(key='value'), 'extra': 'raw text here, like YAML'})
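In context, that override on a controller-style pipeline could look like this (the base task project/name are placeholders):

    from clearml.automation import PipelineController

    pipe = PipelineController(name="config override example", project="examples", version="0.1")
    pipe.add_step(
        name="train",
        base_task_project="examples",   # placeholder base task to clone
        base_task_name="train base",
        configuration_overrides={
            "General": dict(key="value"),         # dict stored as a configuration object
            "extra": "raw text here, like YAML",  # raw string stored as-is
        },
    )
    pipe.start()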

Automatically log step metrics/artifacts/models on the pipeline itself:
pipeline.add_step(..., monitor_metrics=..., monitor_artifacts=..., monitor_models=...)
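Continuing the same sketch, the monitoring options on add_step (metric/artifact/model names are placeholders; monitor_metrics takes (title, series) pairs as I understand it):

    pipe.add_step(
        name="train",
        base_task_project="examples",
        base_task_name="train base",
        # Mirror selected step outputs onto the pipeline Task itself
        monitor_metrics=[("loss", "total")],  # (title, series) pairs to copy
        monitor_artifacts=["report"],         # artifact names to copy
        monitor_models=["best_model"],        # output model names to copy
    )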

  
  
Posted 3 years ago