FancyOtter74
Moderator
2 Questions, 13 Answers
Active since 15 September 2023
Last activity 6 months ago

Reputation: 0
Badges (1): 13 × Eureka!
0 Votes · 13 Answers · 968 Views
one year ago
0 Votes · 5 Answers · 881 Views
My data processing scripts are run in the cloud with the help of ClearML autoscaler. The cloud doesn't (and won't) have access to Git, which is in our intern...
6 months ago
0 I am using ClearML free SaaS. I have a task "MyTask" of type "data_processing" in project "MyProject" which uploads a dataset at the end of its execution. For some reason, after uploading the dataset, my task appears in the UI not under "MyProject", but under
from clearml import Task, Dataset

task = Task.init(
    project_name="MyProject",
    task_name="MyTask",
    task_type=Task.TaskTypes.data_processing,
    reuse_last_task_id=False,
    output_uri="...",  # destination URI elided in the original post
)

# Create the file that will make up the dataset's contents
with open("new_file.txt", "w") as file:
    file.write("Hello, world!")

# Back the dataset with the current task instead of creating a new one,
# so the dataset and the task share the same ID
dataset = Dataset.create(parent_datasets=None, use_current_task=True)
dataset.add_files(".", wildcard="new_file.txt", verbose=True)
dataset.upload(verbose=True)
dataset.finalize(verbose=True)
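For completeness, a minimal sketch of how a consumer could fetch the finalized dataset afterwards; Dataset.get and get_local_copy are standard ClearML calls, and the ID placeholder stands in for the one reported by the upload run above:

from clearml import Dataset

# Placeholder ID; in practice, use the dataset ID reported by the upload run
ds = Dataset.get(dataset_id="<dataset_id_from_the_run_above>")
local_path = ds.get_local_copy()  # downloads a read-only local copy
print(local_path)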
one year ago
0 I am using ClearML free SaaS. I have a task "MyTask" of type "data_processing" in project "MyProject" which uploads a dataset at the end of its execution. For some reason, after uploading the dataset, my task appears in the UI not under "MyProject", but under

I did similarly at my previous job (we had open-source ClearML deployed). The problem I described here was not present there. I liked this approach; it was convenient that dataset_id and task_id are the same.
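A minimal sketch of that convenience, assuming (as in the snippet above) that use_current_task=True makes the dataset reuse the initializing task, so the two IDs coincide; the project and task names here are placeholders:

from clearml import Task, Dataset

task = Task.init(project_name="MyProject", task_name="MyTask",  # placeholder names
                 task_type=Task.TaskTypes.data_processing)

# The dataset piggybacks on the current task instead of spawning its own
dataset = Dataset.create(parent_datasets=None, use_current_task=True)

# Since the IDs coincide, the dataset can later be looked up by the task ID
assert dataset.id == task.id
fetched = Dataset.get(dataset_id=task.id)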

one year ago
0 My data processing scripts are run in the cloud with the help of ClearML autoscaler. The cloud doesn't (and won't) have access to Git, which is in our internal network. So I'm left with using
# Attach common.py as a configuration object; locally this registers the
# file, remotely it returns a local copy of the stored configuration
common_module = task.connect_configuration("../common.py", "common.py")
if not task.running_locally():
    # On the remote worker, place the fetched copy where the import expects it
    import shutil
    shutil.copy(common_module, "common.py")

from common import test_common
test_common()
6 months ago
0 My data processing scripts are run in the cloud with the help of ClearML autoscaler. The cloud doesn't (and won't) have access to Git, which is in our internal network. So I'm left with using

It seems that connecting it as a configuration is more convenient than uploading an artifact, because artifacts are deleted when a task is cloned. The code is very simple; see the snippet in the answer above.
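For contrast, a rough sketch of the artifact-based alternative (the artifact name "common.py" and the task names are assumptions); the drawback noted above is that such an artifact does not survive cloning the task, unlike a connected configuration:

import shutil
from clearml import Task

task = Task.init(project_name="MyProject", task_name="MyTask")  # placeholder names

if task.running_locally():
    # Register the shared module as an artifact of this task
    task.upload_artifact(name="common.py", artifact_object="../common.py")
else:
    # On the remote worker, download the artifact and place it for import;
    # a cloned task would no longer carry this artifact
    local_copy = task.artifacts["common.py"].get_local_copy()
    shutil.copy(local_copy, "common.py")

# Locally, common.py is assumed importable, as in the original snippet
from common import test_common
test_common()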

6 months ago