AgitatedDove14
Moderator
49 Questions, 8122 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Hi People! I Think The Clearml

BTW: what's the clearml-server version ?

2 years ago
0 Hi People! I Think The Clearml

BTW: you should probably update the server, you're missing out on a lot of cool features 🙂

2 years ago
0 When Clearml Converts A

However, when 'extra' is a positional argument then it is transformed to 'str'

Hmm... okay let me check something

3 years ago
0 Hello, I’M Trying To Update Our Clearml Server Running On Kubernetes (1.6.0-213) But I Get This Error:

should I only do mongodb?

No, you should do all 3 DBs: ELK, Mongo, Redis

2 years ago
0 Hey Guys Trying To Save A Model Via The Outputmodel.Update_Weights Function I Get The Following Error:

Hi @<1546303269423288320:profile|MinuteStork43>

Failed uploading: cannot schedule new futures after interpreter shutdown
Failed uploading: cannot schedule new futures after interpreter shutdown

This is odd. Where / when exactly are you trying to upload it?

2 years ago
0 Hey Guys Trying To Save A Model Via The Outputmodel.Update_Weights Function I Get The Following Error:
task.mark_completed()

You have that at the bottom of the script; never call it on your own task, it will kill the actual process.
So what is going on: you are marking your own process for termination, then it terminates itself, leaving the interpreter, and this is the reason for the errors you are seeing.

The idea of mark_* is to mark an external Task, forcefully.
By just completing your process with exit code (0) (i.e. no error) the Task will be marked as completed anyhow, no need to call...
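For reference, a minimal sketch of what the upload flow can look like without the mark_completed() call (project, task and file names here are placeholders, not from the original thread):

from clearml import Task, OutputModel

# placeholder names, adjust to your project
task = Task.init(project_name="examples", task_name="model upload")

output_model = OutputModel(task=task)
output_model.update_weights(weights_filename="model.pt")  # registers / uploads the weights file

# no task.mark_completed() here: when the script exits with code 0,
# the Task is marked as completed automatically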

2 years ago
0 Hey Guys Trying To Save A Model Via The Outputmodel.Update_Weights Function I Get The Following Error:
cannot schedule new futures after interpreter shutdown

This implies the process is shutting down.
Where are you uploading the model? What is the clearml version you are using? Can you check with the latest version (1.10)?

2 years ago
0 Encountered An Odd Bug. Upon Attempting To Write Images To Clearml (3D Projected, Matplotlib),

If this is the case, then we do not change the matplotlib backend
Also

I've attempted converting the mpl image to PIL and use report_image to push the image, to no avail.

What are you getting? An error / exception?
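In the meantime, a minimal sketch of pushing a matplotlib figure through PIL with report_image (assuming clearml and Pillow are installed; project and series names are placeholders):

import io
import matplotlib.pyplot as plt
from PIL import Image
from clearml import Task

task = Task.init(project_name="examples", task_name="3d projection report")  # placeholder names

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # on older matplotlib you may need: from mpl_toolkits.mplot3d import Axes3D
ax.scatter([0, 1, 2], [0, 1, 2], [0, 1, 2])

# render the figure to an in-memory PNG and open it as a PIL image
buf = io.BytesIO()
fig.savefig(buf, format="png")
buf.seek(0)
pil_img = Image.open(buf)

# report_image accepts a PIL image via the image= argument
task.get_logger().report_image(title="debug", series="3d scatter", iteration=0, image=pil_img)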

4 years ago
0 Hi There! Is There Any Way To Boost Creating Sha2 Hashes During

Switching to a process Pool might be a bit of an overkill here (I think).
wdyt?
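For context, a minimal sketch of hashing files with a thread pool (hashlib releases the GIL on large buffers, so threads usually give the speedup without the overhead of a process Pool; the folder path is a placeholder):

import hashlib
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # stream the file in 1 MB chunks so memory use stays flat
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

files = [p for p in Path("data").rglob("*") if p.is_file()]  # placeholder folder
with ThreadPoolExecutor(max_workers=8) as pool:
    hashes = dict(zip(files, pool.map(sha256_of, files)))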

3 years ago
0 What Sort Of Integration Is Possible With Clearml And Sagemaker? On The Page

@<1532532498972545024:profile|LittleReindeer37> nice!!!
Do you want to PR? it will be relatively easy to merge and test, and I think that they might even push it to the next version (or worst case quick RC)

2 years ago
0 What Sort Of Integration Is Possible With Clearml And Sagemaker? On The Page

right now I can't figure out how to get the session in order to get the notebook path

You mean the code that fires "HTTPConnectionPool"?

2 years ago
0 What Sort Of Integration Is Possible With Clearml And Sagemaker? On The Page

Hmm, and you are getting an empty list for this one:

server_info['url'] = f"http://{server_info['hostname']}:{server_info['port']}/"
2 years ago
0 Hey, Here’S A Quickie – Is It Possible To Specify Different “Types” Of Input Parameters (“Args/…“) Such That They Are Handled Nicely On The Front End? Basically, I Have A Task That Needs A Datetime As Input And It Would Be Really Nice To Have A Gui To Do

@<1523701079223570432:profile|ReassuredOwl55>

Hey, here’s a quickie – is it possible to specify different “types” of input parameters (“Args/…“) such that they are handled nicely on the front end?

You mean cast / checked in the UI?

2 years ago
0 Hi All, I Am Trying To Spin Up Some Aws Autoscaler Instances, But I Seem To Have Some Issues With The Instance Creation:

Yes the one you create manually is not really of the same "type" as the one you create online, this is why you do not see it there 😞

2 years ago
0 Hi All, I Am Trying To Spin Up Some Aws Autoscaler Instances, But I Seem To Have Some Issues With The Instance Creation:

For me it sounds like the starting of the service is completed, but I don't really see if the autoscaler is actually running. Also I don't see any output in the console of the autoscaler.

Do notice the autoscaler code itself needs to run somewhere; by default it will be running on your machine, or on a remote agent.

2 years ago
0 Hello! Since Today I Get

Give me a minute

4 years ago
0 Hi All, I Am Trying To Spin Up Some Aws Autoscaler Instances, But I Seem To Have Some Issues With The Instance Creation:

That experiment says it's completed, does it mean that the autoscaler is running or not?

Not running; it will be "running" if it is actually being executed.

2 years ago
0 Hi All, I Am Trying To Spin Up Some Aws Autoscaler Instances, But I Seem To Have Some Issues With The Instance Creation:

Sure, go to "All Projects" and filter by Task Type: application / service.

2 years ago
0 Hi, I Am Try To Use Taskscheduler As Cronjob, I Want My Task Running Every 2.40 Am Utc Everyday,

Hi @<1523701260895653888:profile|QuaintJellyfish58>
Based on the docs
None
I think this should have worked. Are you running the actual task_scheduler on your machine? On the services queue? What's the console output you see there?
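For reference, a minimal sketch of a daily 02:40 UTC schedule with TaskScheduler; the task id and queue names are placeholders, and it assumes hour/minute express the time of day for a recurring daily run:

from clearml.automation import TaskScheduler

scheduler = TaskScheduler()

# placeholder task id and queue; hour/minute assumed to be the UTC time of day
scheduler.add_task(
    schedule_task_id="aabbccdd11223344",
    queue="default",
    hour=2,
    minute=40,
)

# the scheduler itself has to run somewhere, e.g. on the services queue
scheduler.start_remotely(queue="services")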

2 years ago
0 Hello. Am New To Clearml. I Wish To Know If There Are Clearml Support For Nvidia Tao (Formerly Known As Transfer Learning Toolkit) ? Thank You

My current experience is there is only print out in the console but no training graph

Yes, Nvidia TLT needs to actually use TensorBoard for clearml to catch it and display it.
I think that in the latest version they added that. TimelyPenguin76 might know more.

3 years ago
0 I Am Also Experiencing A Weird Behaviour When Running A Script Using The Module Flag. For Example I Run:

Command line arguments to the arg parser should be passed via the "Args" section in the Configuration tab.
What is the working directory of the experiment?
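For reference, a minimal sketch of how argparse arguments end up in that "Args" section (project, task name and arguments are placeholders):

import argparse
from clearml import Task

# Task.init() patches argparse, so parsed arguments are logged under
# Configuration > Args and can be overridden when the task runs remotely
task = Task.init(project_name="examples", task_name="module flag run")  # placeholder names

parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=10)
parser.add_argument("--lr", type=float, default=0.001)
args = parser.parse_args()

print(args.epochs, args.lr)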

4 years ago
0 Hi, I Am Try To Use Taskscheduler As Cronjob, I Want My Task Running Every 2.40 Am Utc Everyday,

I hope it can run in the same day too.

Fix should be in the next RC 🙂

2 years ago
0 I Am Running Trains=0.16.4 Python==3.7.5 , And Notice That The "Log" Page Sometimes Didn'T Capture The Console Log From My Program. Is This A Known Issue, Anyone Have Experienced Similar Behavior?

Will the new fix avoid this issue, and does it still require the incremental flag?

It will avoid the issue, meaning even when incremental is not specified, it will work.
That said, the issue is that any other logger will be cleared as well, so, just good practice ...

From the logging documentation ...

Hmmm so I guess Kedro should not use dictConfig ?! I'm not sure on the exact use case, but just clearing all loggers seems like a harsh approach
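For context, a minimal sketch of a dictConfig that does not wipe loggers other libraries already configured; this is standard-library behaviour, and the config itself is only an illustration:

import logging.config

LOGGING = {
    "version": 1,
    # keep loggers that were configured before this call instead of disabling them
    "disable_existing_loggers": False,
    # the "incremental" flag mentioned above is a different switch: when True,
    # dictConfig only adjusts levels/propagation of existing loggers
    "formatters": {"simple": {"format": "%(levelname)s %(name)s: %(message)s"}},
    "handlers": {"console": {"class": "logging.StreamHandler", "formatter": "simple"}},
    "root": {"level": "INFO", "handlers": ["console"]},
}

logging.config.dictConfig(LOGGING)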

4 years ago
0 Hello Guys, I Have A Strange Situation With A Pipeline Controller I'M Testing Atm. If I Run The Controller Directly In My Pycharm On Notebook It Connects Correctly To The K8S Cluster With Trains Installed. After This, If I Go Directly In The Ui, I Reset T

Hi JuicyFox94
I think you are correct, this bug will explain the entire thing.
Basically what happens is that remote_execute stops the local run before the configuration is set on the Task. Then, running remotely, the code pulls the configuration, sees that it is empty, and does nothing.
Let me see if I can reproduce it...
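In the meantime, a minimal sketch of the ordering that avoids this race when launching a plain Task (not the pipeline controller itself; names are placeholders): connect the configuration first, then hand off to the remote run.

from clearml import Task

task = Task.init(project_name="examples", task_name="remote run")  # placeholder names

params = {"batch_size": 32, "epochs": 5}
task.connect(params)  # configuration is stored on the Task before the hand-off

# stops the local process and enqueues the task; the remote copy now sees the configuration
task.execute_remotely(queue_name="default")

# everything below only runs on the agent that pulled the task
print(params)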

4 years ago
0 Hi! I Am Trying To Build And Run A Pipeline. I Pass My Dataset As Parameter Of Pipeline:

I pass my dataset as parameter of pipeline:

@<1523704757024198656:profile|MysteriousWalrus11> I think you were expecting the dataset_df dataframe to be automatically serialized and passed, is that correct?
If you are using add_step, all arguments are simple types (i.e. str, int etc.)
If you want to pass complex types, your code should upload them as an artifact, and then you can pass the artifact URL (or name) to the next step.

Another option is to use pipeline from dec...
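For reference, a minimal sketch of passing a dataframe between add_step steps by uploading it as an artifact and handing the next step only the producing task's id (all names here are placeholders, not the original pipeline):

import pandas as pd
from clearml import Task

def step_one():
    task = Task.current_task()
    dataset_df = pd.DataFrame({"a": [1, 2, 3]})
    # upload the complex object as an artifact instead of passing it as a step argument
    task.upload_artifact(name="dataset_df", artifact_object=dataset_df)

def step_two(producer_task_id: str):
    # the step only receives a simple string (the producing task id)
    producer = Task.get_task(task_id=producer_task_id)
    dataset_df = producer.artifacts["dataset_df"].get()  # materialize the dataframe back
    print(dataset_df.head())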

2 years ago