AgitatedDove14
Moderator
49 Questions, 8126 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0

Badges (1): 25 × Eureka!
0 Hello, In The Following Context:

My bad, I wrote "refresh" and then edited it to the correct "reload" 😞

5 years ago
0 How Come

WackyRabbit7 interesting! Are those "local" pipelines all part of the same code repository? Do they need their own environment?
What would be the easiest pipeline interface to run them locally? (It would be great if we could support this workflow; it seems you are not alone in this approach, and of course you can always run them remotely, i.e. clone the pipeline and launch it on an agent.)
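As an aside, a minimal sketch of the "clone the pipeline and launch it on an agent" option mentioned above; the project, task, and queue names are placeholders:
```py
from clearml import Task

# Fetch the existing pipeline (or step) Task, clone it, and enqueue the clone
# so an agent picks it up and runs it remotely.
pipeline_task = Task.get_task(project_name="examples", task_name="my pipeline")
cloned_task = Task.clone(source_task=pipeline_task)
Task.enqueue(cloned_task, queue_name="default")
```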

4 years ago
0 I Am Using Pipeline From Decorators. In The Pipeline, There Is A Training Step That Returns A Model (I Want This Model To Also Be Uploaded As An Artifact On Clearml). But This Results In The Following Error:

Hi DilapidatedCow43
I'm assuming the returned object cannot be pickled (which is ClearML's way of serializing it)
You can upload it as a model with
```py
uploaded_model_url = Task.current_task().update_output_model(model_path="/path/to/local/model")
...
return uploaded_model_url
```
wdyt?
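For context, a minimal sketch of how this could look inside a decorator-based pipeline step; the component name and return value are assumptions:
```py
from clearml import Task, PipelineDecorator

@PipelineDecorator.component(return_values=["uploaded_model_url"])
def train_step():
    # ... train and save the model to a local file ...
    # register the file as an output model of this step's Task and
    # return the uploaded URL so the pipeline can pass it on
    uploaded_model_url = Task.current_task().update_output_model(
        model_path="/path/to/local/model"
    )
    return uploaded_model_url
```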

3 years ago
0 How Come

Is this a common case? Maybe we should change the run_pipeline_steps_locally argument to False?
(The idea of run_pipeline_steps_locally=True is that it makes it easier to debug the entire pipeline on the same machine.)
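For illustration, a minimal sketch of how this argument is typically passed when debugging; the pipeline and project names are placeholders:
```py
from clearml import PipelineController

pipe = PipelineController(name="debug-pipeline", project="examples", version="1.0.0")
# ... add pipeline steps here ...

# Run the controller locally; run_pipeline_steps_locally=True also runs each step
# as a local subprocess, so the whole pipeline can be debugged on one machine.
pipe.start_locally(run_pipeline_steps_locally=True)
```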

4 years ago
0 I Have 5 Unarchived Pipeline Runs That Were Defined With This Decorator:

Hi John. sort of. It seems that archiving pipelines does not also archive the tasks that they contain so

This is correct, the rationale is that the components (i.e. Tasks) might be used (or already used) as cached steps ...

3 years ago
0 Hi, I Have A Question About

SoggyFrog26 there is a full Pythonic interface, why don't you use that one instead? Much cleaner 🙂

4 years ago
0 Hello! Since Today I Get

Sorry, I meant the env file for conda, the one you are using to install

4 years ago
0 Please Tell Me, When Migrating A Local Server, We Have Problems That The Saved Images Are Not Displayed, It Says "Unable To Load Image" And Links To The Old Server If You Click "Copy Image Url" Or "Open Image". The Migration Was Done According To Backup'

CheerfulGorilla72

yes, IP-based access,

hmm, so this is the main downside of using an IP-based server: the links (debug images, models, artifacts) store the full URL (e.g. http://IP:8081/...). This means if you switched IP they will no longer work. Any chance to fix the new server to the old IP?
(The other option is to somehow edit the links in the DB; I guess doable, but quite risky.)

3 years ago
0 Hey Just Wanting To Know: What Is The Recommended Best Practice To Write Clearml Pipelines Between Controller And Decorators ?

So it seems decorator is simply the superior option?

Kind of yes 😊

In which case would we use add_task() option?

When you have existing Tasks, and the piping is very straightforward (i.e. the input/output in the code is basically referencing other Tasks/artifacts, and there is no real need to do any magic for serializing/deserializing data between steps).
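For illustration, a minimal sketch of piping pre-existing Tasks with a PipelineController (I'm reading "add_task()" as the controller's add_step; all names and the parameter_override key are assumptions):
```py
from clearml import PipelineController

pipe = PipelineController(name="existing-tasks-pipeline", project="examples", version="1.0.0")

# reference already existing Tasks as pipeline steps
pipe.add_step(
    name="prepare",
    base_task_project="examples",
    base_task_name="data preparation",
)
pipe.add_step(
    name="train",
    parents=["prepare"],
    base_task_project="examples",
    base_task_name="training task",
    # wire the previous step's Task ID into this step's parameters
    parameter_override={"General/dataset_task_id": "${prepare.id}"},
)
pipe.start()
```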

3 years ago
0 Hi, Trying To Spin Up A Clearml Agent And Gettting This Error:

or at least stick to the requirements.txt file rather than the actual environment

You can also force it to log the requirements.txt with:
Task.force_requirements_env_freeze(requirements_file="requirements.txt")
task = Task.init(...)

3 years ago
0 I Have Code That Does Torch.Load(Path) And Deserializes A Model. I Am Performing This In Package A.B.C, And The Model’S Module Is Available In In A.B.C.Model Unfortunately, The Model Was Serialized With A Different Module Structure - It Was Originally Pla

it is a pickle issue
'package model doesn't exist'

Sounds like it, why do you think clearml has anything there?
BTW:

import_bind.__patched_import3

this is just so that packages clearml auto-connects with are still patched even if they are imported after Task.init was called.
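For reference, one common workaround for this kind of pickle error is to alias the old module path before calling torch.load; a minimal sketch, where the module names are hypothetical and based on the question's a.b.c layout:
```py
import sys
import torch

# The checkpoint was pickled when the model class lived in a top-level module
# named "model"; alias that old path to its new location so pickle can resolve it.
import a.b.c.model as _relocated_model_module  # hypothetical new location
sys.modules["model"] = _relocated_model_module

state = torch.load("/path/to/checkpoint.pt")
```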

3 years ago
0 Hello! When I Use The

Hi DangerousDragonfly8

is it possible to somehow extract the information about the experiment/task whose status has changed?

From the docstring of add_task_trigger:
```py
def schedule_function(task_id):
    pass
```
This means you are getting the Task ID that caused the trigger; now you can get all the info that you need with Task.get_task(task_id):
```py
def schedule_function(task_id):
    the_task = Task.get_task(task_id)
    # now we have all the info on the Task tha...
```
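For illustration, a minimal sketch of how such a schedule_function might be registered with a TriggerScheduler; the exact parameter names here are assumptions from memory, so double-check them against the docs:
```py
from clearml import Task
from clearml.automation import TriggerScheduler

def schedule_function(task_id):
    # called with the ID of the Task that fired the trigger
    triggered_task = Task.get_task(task_id)
    print(triggered_task.name, triggered_task.get_status())

trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_task_trigger(
    schedule_function=schedule_function,
    trigger_project="examples",        # assumption: filter to one project
    trigger_on_status=["completed"],   # assumption: fire when a task completes
)
trigger.start()
```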

3 years ago
0 Re Dataset Object: Is It Possible To Use Sync_Folder And Upload Several Times Along The Code And Then Finalize The Dataset?

EmbarrassedSpider34

sync_folder and upload several times along the code and then

Do notice they overwrite one another...
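For illustration, a minimal sketch of the sync-then-upload pattern discussed above; the dataset/project names and paths are placeholders:
```py
from clearml import Dataset

ds = Dataset.create(dataset_name="my_dataset", dataset_project="examples")

# First pass: sync the current folder state and upload it
ds.sync_folder(local_path="./data")
ds.upload()

# ... the code produces more files into ./data ...

# Second pass: sync reflects the *current* folder state, so this
# overwrites what the earlier sync recorded (as noted above)
ds.sync_folder(local_path="./data")
ds.upload()

ds.finalize()
```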

3 years ago
0 Hello Folks. We'Re A Small Team Currently Considering Adopting Clearml For Experiment Tracking. I Was Wondering If I Start With The Hosted Service And Decide To Switch To A Self-Hosted Server Later, Is There A Way To Export All The Experiments/Data/Etc Fr

Regulatory reasons and proprietary data is what I had in mind. We have some projects that may need to be fully self hosted in the end

If this is the case then yes, do self-hosted, or talk to clearml sales to get the VPC option; SaaS is just not the right option.

I might take a look at it when I get a chance but I think I'd have to see if ClearML is a good fit for our use case before I can justify the commitment

I hope it is 🙂

3 years ago
0 I’M Wondering If Someone Has An Example Of How To Use The

Hi @<1533620191232004096:profile|NuttyLobster9>
base_task_factory is a function that gets the node definition and returns a Task to be enqueued; pseudo code looks like:

def my_node_task_factory(node: PipelineController.Node) -> Task:
  task = Task.create(...)
  return task

Make sense?
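For reference, a minimal sketch of wiring such a factory into a controller-based pipeline; the script path and names are assumptions:
```py
from clearml import Task, PipelineController

def my_node_task_factory(node: PipelineController.Node) -> Task:
    # build the Task for this node on the fly instead of referencing an existing one
    return Task.create(
        project_name="examples",
        task_name=node.name,
        script="steps/train.py",  # assumption: the step's entry script
    )

pipe = PipelineController(name="factory-pipeline", project="examples", version="1.0.0")
pipe.add_step(name="train", base_task_factory=my_node_task_factory)
pipe.start()
```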

2 years ago
0 Please Tell Me, When Migrating A Local Server, We Have Problems That The Saved Images Are Not Displayed, It Says "Unable To Load Image" And Links To The Old Server If You Click "Copy Image Url" Or "Open Image". The Migration Was Done According To Backup'

Is it possible to do something so that the change of server address is supported and the images are loaded from the new server?

The link itself (full link) is stored inside the server. Can I assume the access is IP-based, not host-based (i.e. DNS)?

3 years ago
0 Hi, I'M Trying To Set Up My Trains-Server And I'M Getting The Following:

Should have worked; the error you are getting is from docker-compose parsing the yml file.
Is this exactly the one from the trains-server repo?

5 years ago
0 Hi, I Need Your Help Setting Up An Trains Agent Running In Docker. I Have An Python Script Calling Wget As System Command Which Runs Fine On My Dev Engine. When Cloning The Experiment And Scheduling It Into The Services Queue I Get An Error That The Call

Okay, so basically set a template for the pod, specifying the docker image. Make sure you pass the correct trains-server configuration (i.e. api/web/file server addresses and credentials), and select the queue name the agent will listen to.

container image / details
https://hub.docker.com/r/allegroai/trains-agent

https://github.com/allegroai/trains-agent/tree/master/docker/agent

Full environment variable list to pass can be found here:
https://github.com/allegroai/trains-server/blob/...

4 years ago
0 Hello Again, How Can I Use The

Sure thing 🙂

4 years ago
0 Hey, I'Ve Spin Up A Worker Using Aws Autoscaler In Clearml Self Hosted Server Running On Kubernetes. However, I Can'T Find The Agent On The Workers Page. Any Idea Why It'S Not Showing Up? Full_Log:

@<1595587997728772096:profile|MuddyRobin9> are you sure it was able to spin up the EC2 instance? Which clearml autoscaler version are you running?

2 years ago
0 Can

Hi DashingHedgehong5
Is the text the labels on the histogram buckets?

https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/logger_module/logger_logger.html#clearml.logger.Logger.report_histogram

Notice the xlabels argument, is this what you are looking for?
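For illustration, a minimal sketch of passing text bucket labels via xlabels; the values and labels are placeholders:
```py
from clearml import Logger

# assumes Task.init(...) was called earlier in the script
Logger.current_logger().report_histogram(
    title="value counts",
    series="buckets",
    values=[3, 7, 2],
    iteration=0,
    xlabels=["cats", "dogs", "birds"],  # text label shown under each bucket
)
```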

4 years ago
0 Hi, I'M Trying To Install A New Server, This Is A Fresh Ubuntu 18.04 Install. When I Try To Run The Docker Composer Up Command I Get Error Messages Like This One:

CourageousLizard33 if the two series are on the same graph, just click on the series in the legend; you can enable/disable it, and the scale will adjust automatically.
Regarding grouping, this is a feature that can be turned off. The idea is that we split the tag into title/series... so if you have the same prefix the TF scalars are grouped on the same graph, otherwise each ends up on its own graph (per title). That said, you can force it to have a series per graph like in TB. Makes sense?
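To illustrate the title/series grouping idea in ClearML terms (explicit reporting rather than the TF auto-binding; names and values are placeholders):
```py
from clearml import Logger

# assumes Task.init(...) was called earlier in the script
logger = Logger.current_logger()
# Same title ("loss") with different series -> both curves share one graph,
# mirroring how a common TF tag prefix groups scalars together.
logger.report_scalar(title="loss", series="train", value=0.35, iteration=10)
logger.report_scalar(title="loss", series="val", value=0.42, iteration=10)
```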

5 years ago