AgitatedDove14
Moderator
48 Questions, 8051 Answers
  Active since 10 January 2023
  Last activity 7 months ago

Reputation: 0
Badges: 1 (25 × Eureka!)
0 Hi All, I'm Trying To Create A Task In A Jupyter Notebook, And I Always Get This Warning:

SmugDog62, so on plain vanilla Jupyter/JupyterLab everything seems to work.
What do you think is different in your setup?

4 years ago
0 Hey, I'm Running A Pipeline, And 1 Stage Passed - But The Next One Failed. I Fixed The Bug For The Second One - Is There Any Way To Retry The Pipeline From The Failure?

Is there an option to do this from a pipeline, from within the add_step method? Can you link a reference to cloning and editing a task programmatically?

Hmm, I think there is an open GitHub issue requesting a similar ability, let me check on the progress...

Nope, it works well for the pipeline when I don't choose continue_pipeline

Could you send the full log please?
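For reference, cloning and editing a task programmatically with the ClearML SDK generally looks like the sketch below (a minimal example; the task ID, parameter name and queue name are placeholders, not taken from this thread):

    from clearml import Task

    # clone the failed step's task (the task ID is a placeholder)
    cloned = Task.clone(source_task="<failed_task_id>", name="retry of failed step")
    # adjust a hyper-parameter before re-running (the parameter name is illustrative)
    cloned.set_parameters({"General/epochs": 20})
    # enqueue the edited clone so an agent picks it up (queue name is a placeholder)
    Task.enqueue(cloned, queue_name="default")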

3 years ago
0 Is There A Way I Can Create A Dataset As Part Of A Pipeline And Be Able To See That This Dataset Came From This Pipeline / Task ?

Sure:
    Dataset.create(..., use_current_task=True)
This will basically attach/make the main Task the Dataset itself (a Dataset is a type of Task, with logic built on top of it).
wdyt?
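A minimal sketch of what this could look like inside a pipeline step (project, dataset name and file path are placeholders):

    from clearml import Task, Dataset

    task = Task.init(project_name="examples", task_name="pipeline step")
    # make the current Task also act as the Dataset task
    dataset = Dataset.create(
        dataset_name="my_dataset", dataset_project="examples", use_current_task=True)
    dataset.add_files("./data")  # path is a placeholder
    dataset.upload()
    dataset.finalize()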

3 years ago
0 Hi, I Would Like To Check What Would Be The Recommended Hardware Specs For The Server Hosting ClearML Server. I Had One Configured With 32 CPU Cores, 64GB RAM And I Noticed That If We Have A Surge In Remote Task Creation, The Following Delays Occur.

If the only issue is this line:
    task.execute_remotely(..., exit_process=True)
It has to finish the static analysis of the entire repository (which usually happens in the background, but now we have to wait for it). If the repo is large this could actually take 20 sec (depending on the CPU/drive of the machine itself).
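For context, the call in question typically looks like this (a minimal sketch; project, task and queue names are placeholders):

    from clearml import Task

    task = Task.init(project_name="examples", task_name="remote run")
    # blocks until the repository analysis and task registration finish,
    # then enqueues the task and exits the local process
    task.execute_remotely(queue_name="default", exit_process=True)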

3 years ago
0 Thought I Would Share This. Something To Think About Over The New Year.

Thanks SubstantialElk6!
Happy new year 🎉 🍺 🍾 🎇

2 years ago
0 Hi Fam! I'm Trying To Get

Hi QuaintPelican38
Assuming you have opened the default SSH port 10022 on the EC2 instance (and assuming the AWS permissions are set so that you can access it), you need to use the --public-ip flag when running clearml-session. Otherwise it "thinks" it is running on a local network and registers itself with the local IP. With the flag on, it gets the public IP of the machine, and then the clearml-session running on your machine can connect to it.
Make sense?

3 years ago
0 It Would Be Nice To Group Experiments Within Projects Use Cases:

DilapidatedDucks58 Nice!

but it would be great to see predecessors of each experiment in the chain

So maybe we should add a "manual pipeline" option to create the connection post execution? Is this a one-time thing?
Maybe a service creating these flow charts?
Should we put them in the Project's readme? Or in the Pipeline section (coming soon)?

2 years ago
0 Hey Since Hydra Does Not Work With

Hmm can you try:
--args overrides="['log.clearml=True','train.epochs=200','clearml.save=True']"

one year ago
0 Is It Possible To View The Actual Code Of A Task? As In The Script That Created The Task?

WackyRabbit7 if this is a single script running without a git repo, you will actually get the entire code in the uncommitted changes section.
Do you mean getting the code from the git repo itself?
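If you also want to read that section programmatically, something along these lines should work (a sketch based on the exported task structure; the task ID is a placeholder and the exact field layout may differ between versions):

    from clearml import Task

    task = Task.get_task(task_id="<task_id>")  # placeholder ID
    exported = task.export_task()  # full task definition as a dict
    # the uncommitted changes (or the whole script when there is no git repo)
    # are expected under the script/diff section
    print(exported.get("script", {}).get("diff", ""))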

4 years ago
0 Hi All. I Was Using Clearml Server Hosted On A Box That I Reach Behind Traefik Using Alias For Web, File And Api. After Migration It Works Perfect For New Experiments. I Changed The Name Of The Alias From

But the artifacts and the dataset of my old experiments still use the old address for the download (is there a way to change that)?

MotionlessCoral18 the old artifacts are stored with direct links, hence the issue. As SweetBadger76 noted, you might be able to replace the links directly inside the backend databases.

2 years ago
0 Is There An Easy Way To Add A Link To One Of The Tasks Panels? (As An Artifact, Configuration, Info, Etc)? Edit: And Follow Up Regarding The Dataset. As Discussed Somewhere Previously, The Datasets Are Now Automatically Moved To A Hidden "Sub-Project" Pr

Yes. Because my old

has never been resolved (though closed), we use the dataset object to upload e.g. local files needed for remote execution.

Ohh no, I remember... Following this line, can I assume these files are reused, i.e. this is not "per instance"? I have to admit I have a feeling this is a very unique use case. Maybe the "old" way Datasets were shown is better suited?

No, I mean why does it show up in the task view (see attached image), forcing me to clic...

2 years ago
0 Is There A Way To Upload A Dict Object As A Yaml Artifact Instead Of A Json?

Hi ProudChicken98
How about saving it as a local YAML and uploading the file itself as an artifact?
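For example, something like this (a minimal sketch; the dict, file name and artifact name are placeholders):

    import yaml
    from clearml import Task

    task = Task.init(project_name="examples", task_name="yaml artifact")
    config = {"lr": 0.001, "epochs": 10}  # placeholder dict
    with open("config.yaml", "w") as f:
        yaml.safe_dump(config, f)
    # upload the YAML file itself, so it is stored as-is instead of being serialized to JSON
    task.upload_artifact(name="config", artifact_object="config.yaml")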

3 years ago
0 Hello People

In both cases, if I get the element from the list, I am not able to get when the task started. Where is this info stored?

If you are using client.tasks.get_all(...), it should be under the started field.
Specifically you can probably also do:
    queried_tasks = Task.query_tasks(additional_return_fields=['started'])
    print(queried_tasks[0]['id'], queried_tasks[0]['started'])
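The APIClient variant mentioned above could look roughly like this (a sketch; the only_fields argument and attribute-style access on the results are my assumptions, not taken from the thread):

    from clearml.backend_api.session.client import APIClient

    client = APIClient()
    # 'only_fields' limits the returned fields; assumed here to include 'started'
    tasks = client.tasks.get_all(only_fields=["id", "started"])
    print(tasks[0].id, tasks[0].started)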

2 years ago
0 Hello There, I Am Trying To Organize The Dl Code Into A Monorepo, The Repo Will Have A Section Of Shared Packages That Will Be Used By Other Packages That Are The Actual Training Projects. Let's Say That I Install The Shared Libs With Pip In Editable Mod

Then, as you suggested, I would just use sys.path; it is probably the easiest and actually very safe (because the subfolders are always next to the "main" source code).
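A minimal sketch of that approach, assuming the shared packages sit in a sibling folder of the training script (the folder and package names are placeholders):

    import os
    import sys

    # make the shared packages importable when running locally or on the agent
    shared_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "shared")
    sys.path.insert(0, shared_dir)

    import my_shared_package  # placeholder for one of the shared libs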

2 years ago
0 In Pipelines I've Found That Empty Lists Don't Work As I Would Expect Them To Work. For Example, This Will Work Fine:

Hi SmugSnake6
I think it was just fixed, let me check if the latest RC includes the fix

2 years ago
0 I Have A Question Regarding Reducing Execution Time Of Pulling Results From The Server With The Python Api. As Part Of Some Pipeline, After Running Hpo I Am Pulling All The Results From My Optimizer Task And Also Pulling All The Scalars Associated With Th

Hmm check if this one works:
    optimizer._get_child_tasks_ids(
        parent_task_id=optimizer._job_parent_id or optimizer._base_task_id,
        order_by=optimizer._objective_metric._get_last_metrics_encode_field(),
        additional_filters={'page_size': int(top_k), 'page': 0})
If it does, let's PR it as a dedicated function.

3 years ago
2 years ago
0 Hi, I Have One Doubt Related To Pipeline I Have One Pipeline With Eg 3 Tasks, Preprocess, Train And Test Now I Want To Clone The Pipeline And Change The Hyperparameters Of Train Task, Is It Possible? If So, How??

ArrogantButterfly10 could it be that in the "base task" of the pipeline step you do not have any hyper-parameters? (I mean the Task that the pipeline clones and is supposed to set new hyperparameters for...)
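For reference, this is roughly how a step's hyper-parameters get overridden when the pipeline clones its base task (a minimal sketch using PipelineController.add_step; project, task and parameter names are placeholders):

    from clearml import PipelineController

    pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")
    pipe.add_step(
        name="train",
        base_task_project="examples",
        base_task_name="train base task",
        # only takes effect if the base task actually defines these hyper-parameters
        parameter_override={"General/learning_rate": 0.001},
    )
    pipe.start()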

one year ago
0 [Webapp : Pipeline] Hey, Me Again. When We Try And Delete A Pipeline On The Web App Pipelines Page, It Shows That It Is Trying To Do So With A Dialogue Box That Opens With A “Deleting” Bar Swishing Across, But Then It Just Hangs, And Becomes Completely Un

Hi ReassuredOwl55

a dialogue box that opens with a "deleting" bar swishing across, but then it just hangs, and becomes completely unresponsive

I believe this issue was fixed in the latest server version; it seems you are running 1.7 but the latest is 1.9.2. May I suggest an upgrade?

one year ago
0 Hi, I've Found A Possible Bug. I'm Cloning/Running A Project Without Any Input Model. Which Is As Expected. But, After I Code Actually Start Running An Input Model Shows Up In The So When I Reset The Experiment I Need To Manually Remove The Input Model Na

PompousBeetle71, basically reset experiment will clear all the outputs, and the input model is, well, input, so it is not cleared. In the next execution it will be overridden. There is actually a way to change it from the UI and override the initial model weights.

4 years ago
0 Is It Possible To Perform Debugging Operations With Pycharm Integration Using Remote Session?

ConvolutedChicken69

, does it take the agent off the queue? does it know it's not available to take tasks?

You mean will it "release" the GPU (i.e. the agent will pull another Task)?
If so, then no it will not. An "Interactive Session" is (from the agent's perspective) a Task that will end sometime, and it will continue to monitor and run it until you manually close it. The idea is that you are actually using the GPU, hence no one else can run a job on it.
To shut it down, ...

2 years ago
0 Is It Possible To Perform Debugging Operations With Pycharm Integration Using Remote Session?

Thanks for the ping ConvolutedChicken69, I missed it 😞

From what I see in the docs it's only for Jupyter / VS Code, I didn't see anything about PyCharm

PyCharm is basically SSH, which is supported 🙂
(Maybe we should mention it in the docs?)

2 years ago