Answered
Hey! I have my custom model, which uses models from popular frameworks inside, such as LGBM, CatBoost, etc. It also has multiple instances of one model from one framework.

Hey!
I have a custom model that uses models from popular frameworks inside, such as LGBM, CatBoost, etc. It also has multiple instances of one model from one framework. https://clear.ml/docs/latest/docs/fundamentals/logger#automatic-reporting doesn't capture everything inside, just one of them. Can I do something to capture all training curves from the inner models without changing my model's code?
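(For context, not from the original post: a minimal, self-contained sketch of the kind of setup being described, assuming clearml, lightgbm, catboost and scikit-learn are installed; the project/task names and the InnerEnsemble class are made-up stand-ins for the custom model.)

```python
import lightgbm as lgb
from catboost import CatBoostRegressor
from clearml import Task
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Task.init is what enables ClearML's automatic framework reporting
task = Task.init(project_name="examples", task_name="inner-models-auto-logging")  # illustrative names

X, y = make_regression(n_samples=500, n_features=10, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

class InnerEnsemble:
    """Toy stand-in for a custom class that trains several framework models internally."""
    def fit(self, X_train, y_train, X_val, y_val):
        self.lgbm = lgb.LGBMRegressor(n_estimators=50)
        self.lgbm.fit(X_train, y_train, eval_set=[(X_val, y_val)])
        self.cb = CatBoostRegressor(iterations=50, verbose=False)
        self.cb.fit(X_train, y_train, eval_set=(X_val, y_val))

# The question is whether the automatic reporting hooked by Task.init captures
# the training curves of *all* the inner models, or only one of them.
InnerEnsemble().fit(X_train, y_train, X_val, y_val)
```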

  
  
Posted 2 years ago

Answers 12


EnviousPanda91, which framework isn't being logged? Can you provide a small code snippet?

  
  
Posted 2 years ago

```
import numpy as np
import pandas as pd

from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

from lightautoml.tasks import Task
from lightautoml.automl.presets.tabular_presets import TabularAutoML

import clearml

# attach to the already-created (remote) ClearML task and get its logger
cml_task = clearml.Task.get_task(clearml.config.get_remote_task_id())
logger = cml_task.get_logger()

data = pd.read_csv("./examples/data/sampled_app_train.csv")
....

# TabularAutoML trains several framework models (LGBM, CatBoost, ...) internally
automl = TabularAutoML(task=Task('binary'))
cml_task.connect(automl)
oof_pred = automl.fit_predict(train_data, roles={"target": "TARGET"})
logger.report_single_value("ROCAUC test", roc_auc_score(test_data["TARGET"].values, test_pred.data[:, 0]))
logger.flush()
```
TabularAutoML is a custom class that uses some popular frameworks deep inside
https://github.com/sb-ai-lab/LightAutoML

  
  
Posted 2 years ago

EnviousPanda91 so which frameworks are being missed? Is it a request to support a new framework, or are you saying there is a bug somewhere?

  
  
Posted 2 years ago

AgitatedDove14 no, it’s not a request.

I have a custom Python class that uses a lot of models from frameworks already supported by ClearML. I want to enable auto-reporting for all of the models by calling clearml_task.connect(my_custom_class_instance), but it doesn't work the way I need it to: there is only one loss curve, because the graph is redrawn every time a new instance starts training.
Is there any way to report all the instances inside my custom class without modifying the class?

  
  
Posted 2 years ago

EnviousPanda91 'connect' will log the object properties; the automagic logging is controlled in the Task.init call. Specifically, which framework produces metrics that are not logged? Your sample code manually reports some scalars/values, do you see these as well?
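(Side note, not from the original reply: a short sketch of that distinction using ClearML's public API; the project/task names and the config dict are illustrative, and the exact set of auto_connect_frameworks keys depends on the installed clearml version.)

```python
from clearml import Task

# Automagic framework logging is decided here, at Task.init time.
# Passing a dict lets you switch individual frameworks on/off
# (the available keys depend on the clearml version).
task = Task.init(
    project_name="examples",            # illustrative
    task_name="connect-vs-automagic",   # illustrative
    auto_connect_frameworks={"catboost": True, "lightgbm": True, "matplotlib": False},
)

# connect() records the object's properties as task configuration;
# on its own it does not produce scalar/training-curve reports.
automl_config = {"timeout": 3600, "cv_folds": 5}  # illustrative stand-in for the automl object
task.connect(automl_config, name="automl")
```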

  
  
Posted 2 years ago

AgitatedDove14 for example, let's add a second CatBoost model training to https://github.com/allegroai/clearml/blob/master/examples/frameworks/catboost/catboost_example.py :
```
...
catboost_model = CatBoostRegressor(iterations=iterations, verbose=False)
catboost_model2 = CatBoostRegressor(iterations=iterations+200, verbose=False)
...
catboost_model.fit(train_pool, eval_set=test_pool, verbose=True, plot=False, save_snapshot=True)
catboost_model2.fit(train_pool, eval_set=test_pool, verbose=True, plot=False, save_snapshot=True)
...
```
As a result, we get just one "learn" and one "validation" plot, but we want to get two (one per model). How can I get plots for both models?
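(Not part of the original exchange: one possible user-side workaround, until the automatic logging distinguishes instances, is to re-report each model's evaluation history under its own title using CatBoost's get_evals_result() and ClearML's Logger.report_scalar. A minimal sketch, with made-up project/task names and synthetic data:)

```python
from catboost import CatBoostRegressor, Pool
from clearml import Task
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

task = Task.init(project_name="examples", task_name="two-catboost-models")  # illustrative names
logger = task.get_logger()

# synthetic data, just to make the sketch self-contained
X, y = make_regression(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
train_pool, test_pool = Pool(X_train, y_train), Pool(X_test, y_test)

models = {
    "catboost_model": CatBoostRegressor(iterations=100, verbose=False),
    "catboost_model2": CatBoostRegressor(iterations=300, verbose=False),
}

for name, model in models.items():
    model.fit(train_pool, eval_set=test_pool, verbose=False)
    # re-report each model's eval history under its own title,
    # so the two models' curves do not overwrite each other
    for split, metrics in model.get_evals_result().items():   # e.g. "learn", "validation"
        for metric_name, values in metrics.items():           # e.g. "RMSE"
            for iteration, value in enumerate(values):
                logger.report_scalar(
                    title=f"{name}/{split}", series=metric_name,
                    value=value, iteration=iteration,
                )
```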

  
  
Posted 2 years ago

[image]

  
  
Posted 2 years ago

Oh, so is it a bug, and you should have seen two series on each graph? (I think it is... not sure how to actually name the second instance other than by a running number)

  
  
Posted 2 years ago

AgitatedDove14 hm, I don't know what the right expected behaviour is; I expected 2 plots. If my assumption looks right, should I open an issue on GitHub?

  
  
Posted 2 years ago

Yes please 🤩

  
  
Posted 2 years ago

AgitatedDove14 done :) btw, could you show me the place in the code where the scalars are written? I want to make a hotfix.

  
  
Posted 2 years ago