I Was Wondering, If I Want To Use

I was wondering about using Task.create() instead of Task.init() to create a new experiment object; I am aware that automatic logging will not be done.

Could someone explain which explicit task method calls are needed to replicate a call to Task.init(), but starting from Task.create() instead?

I currently have a pretty simple experiment initialization section:

```python
# Connecting with the ClearML process
from clearml import Task

# First add the repo package requirements that aren't on CONDA / PyPI
Task.add_requirements('git+ ')
Task.add_requirements('git+ ')

# Now connect the script to the ClearML Server as an experiment
# (cfg is the experiment configuration object defined earlier in the script)
task = Task.init(
    project_name='Caltech Birds',
    task_name='[Library: ' + cfg.MODEL.MODEL_LIBRARY + ', Network: '
              + cfg.MODEL.MODEL_NAME + '] Ignite Train PyTorch CNN on CUB200',
    task_type=Task.TaskTypes.training,
    output_uri=' '
)
```

  
  
Posted 3 years ago

Answers 2


Hi VivaciousPenguin66, what's the use case for replicating what Task.init() does using Task.create()? 🙂

  
  
Posted 3 years ago

Good question, SuccessfulKoala55

My thoughts are orbiting around environment orchestration and having a bit more control over how an environment is created. I understand that the easiest form of configuration is to implement it on the clearml-agent side and run a daemon with the required configuration, whether that uses venvs or Docker containers. Of course, this limits the deployment type to the queue that the daemon is listening to.
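For context, that queue-bound setup is what starting an agent daemon looks like; a minimal sketch in Docker mode (the queue name and image here are placeholders):

```bash
# Start a ClearML agent serving a single queue in Docker mode;
# every experiment pulled from this queue runs in the default image below.
clearml-agent daemon --queue gpu_queue --docker nvidia/cuda:11.0-base
```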

I was considering whether exposing the experiment creation process in a more granular fashion via Task.create() would help, since you can pass arguments such as a specific Docker container and execution script along with the experiment. This would give you more control over how environments are created, but on an experiment-by-experiment basis rather than a queue basis.
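For illustration, a minimal sketch of that per-experiment idea, assuming a clearml SDK version in which Task.create() accepts repo / script / packages / docker arguments (the repository URL, entry point, image, and queue name below are placeholders, not values from the actual project):

```python
from clearml import Task

# Register the experiment on the server explicitly instead of via Task.init().
# Task.create() does not run anything locally, so the execution details
# (code location, requirements, container) are declared up front.
task = Task.create(
    project_name='Caltech Birds',
    task_name='Ignite Train PyTorch CNN on CUB200',
    task_type='training',
    repo='https://example.com/my/repo.git',  # placeholder repository
    script='train.py',                       # placeholder entry point
    packages=['torch', 'pytorch-ignite'],    # explicit package requirements
    docker='nvidia/cuda:11.0-base',          # per-experiment container image
    add_task_init_call=True,  # inject a Task.init() call so logging still runs
)

# Hand the task to whichever queue should execute it.
Task.enqueue(task, queue_name='default')
```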

  
  
Posted 3 years ago