Answered
How Do I Get Args Like Epochs To Show Up In The UI Configuration Panel Under Hyperparameters? I Want To Be Able To Change Number Of Epochs And Learning Rate From Within The UI.

How do I get args like epochs to show up in the UI configuration panel under hyperparameters? I want to be able to change number of epochs and learning rate from within the UI.

  
  
Posted 2 years ago

Answers 15


Hi VexedCat68

You can use argparse and all the parameters will be logged automagically to the hyperparameters section, as in the https://github.com/allegroai/clearml/blob/master/examples/frameworks/keras/keras_tensorboard.py#L56 example, or just connect any dict, as in the https://github.com/allegroai/clearml/blob/master/examples/frameworks/ignite/cifar_ignite.py#L23 example.
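For reference, here's a minimal sketch of both approaches (the project, task, and parameter names below are placeholders, not taken from those examples):

from clearml import Task
import argparse

# Placeholder project/task names
task = Task.init(project_name="examples", task_name="hyperparam-demo")

# Option 1: argparse arguments are picked up automatically once the task exists
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--lr', type=float, default=0.001)
args = parser.parse_args()

# Option 2: explicitly connect a plain dict
config = {'batch_size': 32, 'dropout': 0.25}
task.connect(config)

Both end up in the experiment's hyperparameters section in the UI.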

  
  
Posted 2 years ago

from sklearn.datasets import load_iris
import tensorflow as tf
import numpy as np
from clearml import Task, Logger
import argparse

def main():
    # Parse command-line hyperparameters
    parser = argparse.ArgumentParser()
    parser.add_argument('--epochs', metavar='N', default=64, type=int)
    args = parser.parse_args()
    parsed_args = vars(args)

    # Create the ClearML task for this run
    task = Task.init(project_name="My Workshop Examples", task_name="scikit-learn joblib example")

    # Load the iris dataset
    iris = load_iris()
    data = iris.data
    target = iris.target
    labels = np.unique(target)
    epochs = parsed_args['epochs']

    # Explicitly connect the parsed arguments so they show up under hyperparameters
    task.connect(args)

    # Small dense classifier over the 4 iris features
    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(64, input_dim=4, activation='relu'),
        tf.keras.layers.Dense(len(labels), activation='softmax')
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    model.fit(data, target, epochs=epochs)
    print(model.evaluate(data, target))

if __name__ == "__main__":
    main()

Here's the script I'm testing with.
Here's the command I run it with:
clearml-task --project ClearML-Learn --name EpochsConnectReturns --requirements requirements.txt --script demo.py

  
  
Posted 2 years ago

Quick follow-up question. Once I parse args, should they be directly available before I even enqueue the task for the first time, or will I only be able to access the hyperparameters after running it once?

  
  
Posted 2 years ago

Also, my execution just completed and so far I can only see the hyperparameters as a report, not in a configurable form. I've just started with ClearML and am having these issues.

  
  
Posted 2 years ago

In any case, when sending something to be executed remotely, you can change the values as you see fit. On the other hand, when running locally, it will always use the values explicitly passed in the local run command.
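If you want to trigger that remote path from code, a hedged sketch (queue, project, and task names below are assumptions) is Task.execute_remotely, which stops the local run and enqueues the task so the UI values take effect:

from clearml import Task
import argparse

task = Task.init(project_name="examples", task_name="remote-demo")  # placeholder names

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=10)
args = parser.parse_args()

# Running locally, this enqueues the task (queue name is a placeholder) and
# exits the local process; an agent then re-runs it with the values currently
# set in the UI.
task.execute_remotely(queue_name='default')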

  
  
Posted 2 years ago

You can only edit the cloned copy, not the experiment created from the local run

  
  
Posted 2 years ago

My draft is View Only but the cloned toy task one is in normal Draft mode.

  
  
Posted 2 years ago

Basically, when I have to re-run the experiment with different hyperparameters, I should clone the previous experiment and change the hyperparameters before putting it in the queue?

  
  
Posted 2 years ago

It seems that is the case. Thank you for all your help guys.

  
  
Posted 2 years ago

Try to clone the task (right-click the task and choose "Clone") and you will get a new task in draft mode that you can configure ( https://clear.ml/docs/latest/docs/getting_started/mlops/mlops_first_steps#clone-an-experiment )
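The same clone-and-enqueue flow can also be scripted; a rough sketch with the ClearML SDK (the task ID, new name, parameter key, and queue name are all placeholders):

from clearml import Task

# Task to use as a template; placeholder ID
template = Task.get_task(task_id='<source_task_id>')

# Create an editable draft copy
cloned = Task.clone(source_task=template, name='EpochsConnectReturns - clone')

# Optionally change hyperparameters on the draft before enqueueing
# (the 'Args/...' section prefix is how argparse values usually appear,
# but the exact key depends on your task)
cloned.set_parameters({'Args/epochs': 128})

# Send the draft to an agent queue
Task.enqueue(cloned, queue_name='default')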

  
  
Posted 2 years ago

I'm using clearml-agent right now. I just upload the task inside a project. I've used argparse as well, however as of yet I have not been able to find writable hyperparameters in the UI. Is there any tutorial video you can recommend that deals with this or something? I was following this one on YouTube https://www.youtube.com/watch?v=Y5tPfUm9Ghg&t=1100s but I can't seem to recreate his steps as he sifts through his code.

  
  
Posted 2 years ago

Basically, when I have to re-run the experiment with different hyperparameters, I should clone the previous experiment and change the hyperparameters before putting it in the queue?

You can also reset it instead of cloning
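For completeness, a hedged sketch of resetting an existing task through the SDK (the task ID is a placeholder; resetting discards the previous run's outputs and returns the task to draft state):

from clearml import Task

task = Task.get_task(task_id='<task_id>')  # placeholder ID

# force=True is an assumption, in case the task has already completed
task.reset(force=True)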

  
  
Posted 2 years ago

I'll look into it. Thank you everyone.

  
  
Posted 2 years ago

I checked and it seems that when I run an example from git, it works as it should, but when I try to run my own script, the draft is in read-only mode.

  
  
Posted 2 years ago

VexedCat68 there are two possibilities:
1. This is a new experiment, and you've just run the code locally - an experiment entry will be created in the UI, and the values you used for the args will be in the hyperparams section.
2. The experiment already exists in the UI (as a result of running it locally, or even a previous experiment which was already executed remotely) - you can clone it, and then edit the hyperparameter values in the UI. Then you send this experiment to be executed remotely - when it runs, the values you've just updated in the UI will be the values returned by the argparse parse_args() method.
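A small sketch of what the second case means in practice (project/task names are placeholders): when an agent runs the cloned draft, the values edited in the UI are injected, so parse_args() reflects them instead of the script defaults.

from clearml import Task
import argparse

# Task.init comes before parse_args so ClearML can capture (and, on a
# remote run, override) the argparse values; names are placeholders
task = Task.init(project_name="examples", task_name="ui-override-demo")

parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=64)
parser.add_argument('--lr', type=float, default=0.001)
args = parser.parse_args()

# Locally this prints the command-line / default values; when executed by
# an agent it prints whatever was edited in the UI
print(f"epochs={args.epochs}, lr={args.lr}")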

  
  
Posted 2 years ago