AgitatedDove14 (Moderator)
48 Questions, 8049 Answers
Active since 10 January 2023; last activity 5 months ago
Reputation: 0 · Badges: 25 × Eureka!

0 Hello Everyone

RobustSnake79 I have not tested, but I suspect that currently all the reports will stay in TB and will not be passed automagically into ClearML.
It seems like something you would actually want to do with TB (i.e. drill into the graphs etc.), no?

3 years ago
0 Hello Everyone

RobustSnake79 this one seems like a scalar-type graph plus a summary table, correct?
BTW: I'm not sure how to include the "Recommendation" part 🙂

3 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'm Running The Scripts On A Remote Train-Agent With

SmugOx94 could you please open a GitHub issue with this request? Otherwise we might forget 🙂
We might also get some feedback from other users.

3 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'm Running The Scripts On A Remote Train-Agent With

Hi LovelyHamster1,
you mean totally ignore the "installed packages" section and only use the requirements.txt?
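(For reference, later clearml-agent versions expose a switch for exactly this in clearml.conf; the key below is from memory and worth verifying against your agent version:)

```
agent {
    package_manager {
        # ignore the Task's "installed packages" and use the repo's requirements.txt
        force_repo_requirements_txt: true
    }
}
```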

3 years ago
0 Any Way To Make A Job Fail If The Required Python Version (3.7 Vs 3.8 For Example) Is Not Available In The Agent?

Hmm, we could add an optional test for the python version, and fail the Task if the python version is not found. wdyt?
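In the meantime, a minimal sketch of such a guard inside the task script itself (the required version here is illustrative):

```python
import sys

REQUIRED = (3, 8)
if sys.version_info[:2] != REQUIRED:
    # raising fails fast, so the agent marks the Task as failed
    raise RuntimeError(
        'Python %d.%d required, got %d.%d' % (REQUIRED + sys.version_info[:2])
    )
```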

3 years ago
0 Hi. Inside A Notebook When I Create A New ClearML Task And Then Run sklearn GridSearchCV, ClearML Uploads A Lot Of Models. Is There A Way To Force ClearML Not To Upload These Models? Related Question Is What Are These Models Anyway? Their Name Only Contai

model upload and registration, I should pass something like 'xgboost': False or 'xgboost': False, 'scikit': False?

Exactly! Which framework are you using?

about 2, I refer to the names of the models.

Hmm, that is a good point to test. Usually this is based on the Task name (I think), so if the Task name contains the HPO params, the model name should contain them as well. Do you see the HPO params in the Task name? Should we open a Gi...
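For context, the dict being discussed is the `auto_connect_frameworks` argument of `Task.init`; a minimal sketch of disabling the xgboost and scikit-learn bindings (project/task names here are made up):

```python
from clearml import Task

# disable automatic model upload/registration for specific frameworks
task = Task.init(
    project_name='examples',
    task_name='gridsearch without model uploads',
    auto_connect_frameworks={'xgboost': False, 'scikit': False},
)
```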

one year ago
0 Hi. Inside A Notebook When I Create A New ClearML Task And Then Run sklearn GridSearchCV, ClearML Uploads A Lot Of Models. Is There A Way To Force ClearML Not To Upload These Models? Related Question Is What Are These Models Anyway? Their Name Only Contai

The problem is that I currently don't have a way to get them "from outside".

Maybe as a hack (until we add the model object):

```python
from clearml.binding.frameworks import WeightsFileHandler

class MyModelCB:
    current_args = dict()

    @classmethod
    def callback(cls, load_save, model_info):
        # only rename on save operations
        if load_save != "save":
            return model_info
        # make a name from the current args
        model_info.name = "my new name " + str(cls.current_args)
        return model_info

WeightsFileHandler.add_pre_callback(MyModelCB.callback)
MyModelCB.current_args = {"args": "value"}
```

wdyt?

one year ago
0 Hi Team, Me Again! I'm Curious If Someone Can Explain To Me Better How Task And Optimisers Integrate With Each Other. In The Example Hyperparameter Optimisation, There Is Both A Task Initialised With

The easiest would be as an artifact (I think).
Let's assume you put it into a csv file (with pandas or manually).
To upload (from the pipeline Task itself):

```python
task.upload_artifact(name='summary', artifact_object='~/my/summary.csv')
```

Then if you want to grab it from anywhere else:

```python
task = Task.get_task(task_id='HPO controller Task id here')
my_csv = task.artifacts['summary'].get_local_copy()
```

If you want to store it as a dict it might be even easier:
task.upload_artifact(name='summary', artifa...
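The last line is cut off in the original; presumably it continued along these lines (a sketch only — `upload_artifact` does accept plain dicts, but the exact original snippet is unknown):

```python
summary = {'best_score': 0.93, 'best_params': {'lr': 0.01}}  # illustrative values
task.upload_artifact(name='summary', artifact_object=summary)
```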

3 years ago
0 Hi Fam! Sorry For The Potential Dumb Question, But I Couldn't Find Anything On The Interwebs About It. I'm Hosting A ClearML Server On AWS, Using S3 As A Backend For Artifact Storage. I Find That Whenever I Delete Archived Artifacts In The Web App, I Get

The default cleanup service should work with S3 with a correctly configured clearml service agent if I understand the workings correctly.

Yes I think you are correct

I am referring to the UI.

In that case, no 😞. This is actually a backend server change (from the UI side it should be relatively simple). Is this somehow a showstopper?

2 years ago
0 Hi Team, Me Again! I'm Curious If Someone Can Explain To Me Better How Task And Optimisers Integrate With Each Other. In The Example Hyperparameter Optimisation, There Is Both A Task Initialised With

Bad news: there isn't a nice interface to get the table from the Optimizer object (I will make sure we add it, no reason not to).
But you can very easily get all the information you need and more:

```python
all_the_tasks = an_optimizer.get_top_experiments(top_k=100)
```

Then for every task in the list you can get all the information:

```python
for task in all_the_tasks:
    task_params_as_dict = task.get_parameters()
    task_scalars = task.get_last_scalar_metrics()
```

Basically the Task object enables you to que...
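Putting those two calls together, a sketch of assembling the summary table yourself (pandas is an assumption here; the flattening follows the {title: {series: {'last': ...}}} layout that get_last_scalar_metrics returns):

```python
import pandas as pd

rows = []
for task in all_the_tasks:
    row = {'task_id': task.id}
    row.update(task.get_parameters())  # hyperparameters as a flat dict
    # flatten the last reported value of every scalar series
    for title, series_dict in task.get_last_scalar_metrics().items():
        for series, values in series_dict.items():
            row['%s/%s' % (title, series)] = values.get('last')
    rows.append(row)

summary_table = pd.DataFrame(rows)
```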

3 years ago
0 Hi Team, Me Again! I'm Curious If Someone Can Explain To Me Better How Task And Optimisers Integrate With Each Other. In The Example Hyperparameter Optimisation, There Is Both A Task Initialised With

You can run this code from anywhere. The 'base_task_id' is actually the pipeline controller Task ID.
BTW: the next version will have a nicer interface to query it, but this code will work on the current version.

3 years ago
0 Hi Team, Me Again! I'm Curious If Someone Can Explain To Me Better How Task And Optimisers Integrate With Each Other. In The Example Hyperparameter Optimisation, There Is Both A Task Initialised With

...is the team open to PRs from external people?

Yes, please do! PRs are welcome! I thought we fixed the GitHub readme to reflect that; anyhow, I'll make sure we do 🙂

3 years ago
0 Hello! I’m Wondering If There Is An Option To Run A Termination Hook Script

basically ... would allow blocking the machine from being scaled-in when ...

Oh, this is what I was missing 🙂 That makes sense to me!
So what you are saying is: the AWS autoscaler agent, when launching a Task, will set a "protection flag" from inside the container, and when the Task ends it will unset the "protection flag".
Is that correct?
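If it helps, a sketch of what toggling that flag could look like with boto3 (the function name and the values are illustrative, not something from this thread):

```python
import boto3

def set_scale_in_protection(instance_id, asg_name, protect):
    # toggle the ASG scale-in "protection flag" discussed above
    client = boto3.client('autoscaling')
    client.set_instance_protection(
        InstanceIds=[instance_id],
        AutoScalingGroupName=asg_name,
        ProtectedFromScaleIn=protect,
    )

# protect while the Task runs, release when it ends:
# set_scale_in_protection('i-0123456789abcdef0', 'clearml-workers', True)
```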

2 years ago
0 Hello! I’m Wondering If There Is An Option To Run A Termination Hook Script

Okay, that kind of makes sense. Now my follow-up question is: how are you using the ASG? I mean, the clearml autoscaler does not use it, so I just wonder what the big picture is, before we solve this little annoyance 🙂

2 years ago
0 Is There A Reason Why All clearml.Task Methods Regarding Requirements (E.g. Pip Requirements) Are Class Methods? Are Requirements Not Stored In A Task?

Long story short, the Task requirements are applied asynchronously, so if one sets them after creating the object it might (at least in theory) already be too late.
Make sense?
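That is also why the classmethods are meant to run before the Task object exists; a minimal sketch (the package and names are illustrative):

```python
from clearml import Task

# must be called before Task.init(), hence the classmethod
Task.add_requirements('scikit-learn', '1.3.2')
task = Task.init(project_name='examples', task_name='requirements demo')
```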

3 years ago
0 Hello! Is There A Way To Avoid Or Accelerate

Xeon E3-1240: 4 - 5 hours!

wow... yes, definitely worth upgrading 🙂

one year ago
0 I’m Getting These Errors When Using Agent In Docker Mode

it works if I run the same command manually.

What do you mean?
Can you do:

```
docker run -it <my container here> bash
```

and then immediately get an interactive bash?

3 years ago
0 Hello, Is There A Way To Disable Dataset Caching So That When

FreshParrot56 we could add this capability, but the main caveat is that if your version depends on multiple parent versions, you still need to download and extract all the parent versions, which means that clearing them might hurt later performance. Does that make sense? What is the use-case / scenario for you?

one year ago
0 Hi, I'm Facing Some Issues When Trying To Run A Pipeline: How Can I Import A Local Library Using Pipelines From Functions? Always Getting "ModuleNotFoundError: No Module Named"

Hmm, this means the step should have included the git repo itself, which means the code should have been able to import the .py file.
Can you see the link to the git repository on the Pipeline step Task?

2 years ago
0 Hi, I'm Facing Some Issues When Trying To Run A Pipeline: How Can I Import A Local Library Using Pipelines From Functions? Always Getting "ModuleNotFoundError: No Module Named"

Hi DepressedFox45
Basically move the import into the function; it will automatically detect the package.

```python
@PipelineDecorator.component(...)
def step_one(...):
    import sklearn
    import pandas as pd
    # stuff
```

Make sense?
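For a bit more context, a self-contained sketch of the same pattern (the project, step, and return values are made up):

```python
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['n_rows'])
def step_one():
    # imports inside the component are auto-detected as requirements
    import pandas as pd
    df = pd.DataFrame({'a': [1, 2, 3]})
    return len(df)

@PipelineDecorator.pipeline(name='demo pipeline', project='examples', version='0.1')
def pipeline_logic():
    print('rows:', step_one())

if __name__ == '__main__':
    PipelineDecorator.run_locally()  # debug locally; drop this line to run on agents
    pipeline_logic()
```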

2 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'm Running The Scripts On A Remote Train-Agent With

Back to the feature request: if this is taken care of (both adding a missed package and the S3 upload), do you still believe there is room for this kind of feature?

3 years ago