LonelyKangaroo55
Moderator
7 Questions, 11 Answers
Active since 18 August 2023
Last activity 3 months ago

Reputation: 0
Badges: 1
11 × Eureka!
0 Votes 1 Answers 1K Views
Hi all, I want to ask about HPO: is it possible to work not only with standard args but with a configuration object (OmegaConf)? Thanks
one year ago
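For context, one common workaround (a sketch, not ClearML's own API) is to flatten the nested configuration object into plain dot-separated key/value arguments that any args-based HPO can mutate, then rebuild the nested structure on the other side. All names below are illustrative:

```python
# Hypothetical bridge between a nested config (OmegaConf-style) and flat
# HPO arguments. Plain dicts stand in for the OmegaConf object here.

def flatten(cfg, prefix=""):
    """Flatten a nested dict into {'a.b': value} pairs."""
    flat = {}
    for key, value in cfg.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=f"{path}."))
        else:
            flat[path] = value
    return flat

def unflatten(flat):
    """Rebuild the nested dict from dot-separated keys."""
    cfg = {}
    for path, value in flat.items():
        node = cfg
        *parents, leaf = path.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return cfg

config = {"model": {"lr": 0.001, "layers": 4}, "data": {"batch_size": 32}}
flat = flatten(config)
# flat == {'model.lr': 0.001, 'model.layers': 4, 'data.batch_size': 32}
assert unflatten(flat) == config
```

An HPO run can then override individual flat keys (e.g. `model.lr`) exactly as it would override standard args, and the training script rebuilds the full config before use.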
0 Votes 6 Answers 709 Views
7 months ago
0 Votes 4 Answers 639 Views
7 months ago
0 Votes 2 Answers 302 Views
Hi, I'm currently working with ClearML Pipelines and would like to clarify whether it's officially supported to invoke a sub-pipeline from within another pip...
3 months ago
0 Votes 3 Answers 1K Views
one year ago
0 Votes 3 Answers 1K Views
2 years ago
0 Votes 2 Answers 976 Views
Hi all, I wanted to know about saving datasets: we want to specify a gs:// path by default, but as I understand it, by default the path to file_server is used? We ...
one year ago
0 Hi, I'm currently working with ClearML Pipelines and would like to clarify whether it's officially supported to

@PipelineDecorator.pipeline(
    name="Sub Pipeline",
    project="Pipelines",
    version="1.0",
    multi_instance_support="parallel",
)
def sub_pipeline(parameter):
    print(f"Running sub-pipeline with parameter={parameter}")
    return parameter * 2

@PipelineDecorator.pipeline(
    name="Main Pipeline",
    project="Pipelines",
    version="1.0",
)
def main_pipeline():
    refs = []
    for p in [1, 2, 3]:
        ref ...

3 months ago
0 Hi all, want to ask, how can I debug the Logger class. I have problems with displaying graphs on the UI: logger.report_text works, but any scalar or plot is not displayed. I integrated TensorBoard, but there are no logs from it either. There are no warnings

@<1523701070390366208:profile|CostlyOstrich36> self hosted and
class ClearmlLogger:

    def __init__(self, task):
        self.task = task
        self.task_logger = task.get_logger()
        self.task_logger.set_default_upload_destination(' None ')
        self.writer = SummaryWriter('runs')

    def log_training(self, reconstruction_loss, learning_rate, iteration):
        self.task.get_logger().report_scalar(
            ...

one year ago
7 months ago
0 Hi, I would like to know from you, maybe someone has encountered the problem that after deploying an agent inside Docker, the launch of the script itself occurs with a delay (launch pipeline component). I run pipelines and components should work quickly, b

@<1523701070390366208:profile|CostlyOstrich36> Hi,
I have a question related to ClearML’s indexing mechanism for cached datasets. We noticed that when storing the dataset cache folder on an NFS (Network File System), running the command clearml-data get triggers a cache indexing process, which takes a significant amount of time. However, if we remove the NFS cache folder, the command runs almost instantly.
Could you explain how caching works in ClearML? Specifically:

  • Why does ClearML p...
6 months ago
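A minimal stdlib sketch of why indexing a large cache folder hurts on NFS: building an index costs at least one readdir/stat round-trip per entry, which is cheap on a local disk but expensive over the network. This is an illustration only, not ClearML's actual indexer:

```python
# Illustration: walking a cache folder and stat-ing every file, the way a
# cache index pass would. Over NFS, each getsize() is a network round-trip.
import os
import tempfile

def index_folder(root):
    """Collect (path, size) for every file under root, like a cache index."""
    entries = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            entries.append((path, os.path.getsize(path)))  # one stat() each
    return entries

with tempfile.TemporaryDirectory() as cache:
    for i in range(100):  # simulate 100 cached files
        with open(os.path.join(cache, f"chunk_{i}.bin"), "wb") as f:
            f.write(b"x" * 10)
    index = index_folder(cache)

print(len(index))  # one filesystem round-trip per entry
```

This also matches the resolution reported below: mounting only the `datasets` folder shrinks the tree the index pass has to walk.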
0 Hi, I would like to know from you, maybe someone has encountered the problem that after deploying an agent inside Docker, the launch of the script itself occurs with a delay (launch pipeline component). I run pipelines and components should work quickly, b

- Werkzeug==2.2.3
- xdoctest==1.0.2
- xgboost @ file:///rapids/xgboost-1.7.1-cp38-cp38-linux_x86_64.whl
- yarl @ file:///rapids/yarl-1.8.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- zict @ file:///rapids/zict-2.2.0-py2.py3-none-any.whl
- zipp==3.15.0
Environment setup completed successfully
Starting Task Execution:
2025-01-27 13:22:37
ClearML results page: files_server: gs://path_to_bucket/projects/56898367b0b44f06a2679cd9e05b3a70/...

7 months ago
0 Hi, I would like to know from you, maybe someone has encountered the problem that after deploying an agent inside Docker, the launch of the script itself occurs with a delay (launch pipeline component). I run pipelines and components should work quickly, b

@<1523701070390366208:profile|CostlyOstrich36> Fixed: It was a cache issue in NFS. However, we discovered an important detail: there were two folders in the cache, datasets and global . When we started the ClearML script, it began indexing the entire global folder, which was the reason the script got stuck. After mounting only the datasets folder, there was no delay anymore.
Do you know how to disable indexing? If we mount the global folder on all instances, it grows very f...

7 months ago
0 Hi all, how can I get the status of a component from another component in the ClearML pipeline (end, pending, running)? I want to run the Triton server as a "daemon" thread inside the component so that other pipeline components can access it (request)

@<1523701435869433856:profile|SmugDolphin23> Hi, want to ask a connected question. How can I find out the hostname of a component from another component? We have tasks running on different machines in AWS, and for the client SDK we need to understand where to send the inference request. I thought about a config server, to which Triton sends pipelineID: hostname, and the client then receives information from it knowing the pipelineID. But maybe there is a simpler solution? Also thi...

2 years ago
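A hedged sketch of the hostname half of the question above: a component can discover its own hostname with the standard library and publish it wherever the pipeline design dictates (a task parameter, the config server mentioned above, etc.). Nothing here is a ClearML API; `serving_address` and the port are illustrative:

```python
# Hypothetical helper: the component serving Triton reports its own address
# so other components know where to send inference requests.
import socket

def serving_address(port=8000):
    """Return the host:port other components should target for inference."""
    hostname = socket.gethostname()
    return f"{hostname}:{port}"

addr = serving_address()
print(addr)
```

The remaining design question, how the address is shared between components on different machines, is exactly what the config-server idea in the message addresses.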