AgitatedDove14
Moderator
49 Questions, 8122 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges (1): 25 × Eureka!
0 Is It Possible To Create A Serving Endpoint With Pytorch Jit File In Web Interface Only?

DefiantHippopotamus88 you can create a custom endpoint and do that, but it will be running in the same instance, is this what you are after? Notice that Triton actually supports it already, you can check the pytorch example

3 years ago
0 Hi All

CooperativeFox72 could you expand on "not working"?
If you have a yaml file, I would do:
import yaml

# task is your existing clearml Task object, name is the configuration name
local_path = './my_config.yaml'
path = task.connect_configuration(local_path, name=name)
if task.running_locally():
    with open(local_path, "r") as config_file:
        my_params_dict = yaml.load(config_file, Loader=yaml.FullLoader)
    my_params_dict['change_me'] = 'new value'
    my_params_text = yaml.dump(my_params_dict)

Then store back the change; my_params is assumed to be the content of the param file (tex...

4 years ago
0 Hi! Is There A Way To Run A Task Without Reporting To The Server? For Example If I Want To Debug A Script By Running It Locally Without It Appearing On The Server

I want to be able to access the data, just avoid reporting the experiment results

Yes, you are correct 😞
If you just want to skip the logging, you can always add an if around the Task.init call, for example:
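A minimal sketch of that idea (the skip_reporting flag below is just an assumed local variable, not a ClearML option):

from clearml import Task

# assumed flag that you control yourself (env var, CLI argument, etc.)
skip_reporting = True

if not skip_reporting:
    task = Task.init(project_name="examples", task_name="debug run")
else:
    task = None  # nothing is created or reported on the server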

4 years ago
0 Hello Everyone, Is There Any Way To Remove A Serving Instance?

Hi @<1657918706052763648:profile|SillyRobin38>
You mean remove the entire serving session? Is it still running somewhere?
(for example, if you take the docker-compose down it will be marked as aborted automatically after 2 hours)

one year ago
0 Hi There,

Ok no, it only helps as far as I don't log the figure.

You mean if you create the matplotlib figure with no automagic connect you still see the mem leak?

2 years ago
0 Reducing Docker Container Spin-Up Time With Clearml Agent

Woot woot!
awesome, this RC is stable so feel free to use it, the official release is probably due out next week :)

3 years ago
0 Hi, Is There A Way To Instantiate A

Hi OutrageousSheep60

Is there a way to instantiate a clearml-task while providing it a Dockerfile that it needs to build prior to executing the task?

Currently not really, as at the end the agent does need to pull a container.
But you can achieve basically the same by adding the "dockerfile" script as --docker_bash_setup_script. Notice of course that this is an actual bash script, not a Docker script, so no need for the "RUN" prefix.
wdyt?
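For illustration, a rough SDK-side sketch of the same idea (assuming your clearml version exposes the docker_bash_setup_script argument on Task.create; the image, repo and script names are placeholders):

from clearml import Task

# the "Dockerfile" steps rewritten as a plain bash script (no RUN prefix)
bash_setup = "apt-get update && apt-get install -y libsm6"

task = Task.create(
    project_name="examples",
    task_name="task with container setup script",
    repo="https://github.com/example/my-repo.git",
    script="train.py",
    docker="python:3.10",
    docker_bash_setup_script=bash_setup,
)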

2 years ago
0 Can I Change The Clearml-Serving Inference Port? 8080 Is Already Used For My Self-Hosted Server.. I Guess I Can Just Change It In The Docker-Compose, But I Find A Little Weird That You Are Using This Port If The Self-Hosted Server Web Is Hosted In It..

ElegantCoyote26 what you are after is:
docker run -v ~/clearml.conf:/root/clearml.conf -p 9501:8080
Notice the internal port (i.e. inside the docker) is still 8080, but the external one is changed to 9501

3 years ago
0 Hi There, How Can I Set The Model Metadata Using Code? The Model Object Has The

Hi IrritableGiraffe81
You can access the model object with task.models['output']
To set the model metadata I would recommend making sure you have the latest clearml package, I think this is a relatively new addition
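A rough sketch of that flow (the set_metadata call and its exact signature are an assumption here, check the Model reference of your installed clearml version):

from clearml import Task

task = Task.get_task(task_id="<your task id>")

# task.models['output'] lists the task's output models; take the latest one
model = task.models['output'][-1]

# assumed API: set key/value metadata on the model object
model.set_metadata("f1_score", "0.87", "float")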

3 years ago
0 Helm Charts Are Gone?

Hi @<1792364603552829440:profile|TestyBeetle31>
Yeah, sorry, we finally changed the repository name:
None

Where is the broken link coming from? We will fix it (we are working on it, and some of the services do not auto-forward).

7 months ago
0 Hi Community! This Summer I Worked On An

Hi AttractiveWoodpecker16
I think is the correct channel for that question.
(any chance you can move your thread there?)
Specifically, just email billing@clear.ml and they will cancel (no need to worry about the beginning of the month, just explain and they will not charge over Nov)

EDIT: I know they are working on making it a one-click in the UI; the main limit is what happens with the data that was stored and was above the free tier threshold. Anyhow, I think the next version will sort that as well.

2 years ago
0 When I Tried To Create A Clearml Serving Inference Endpoint For Yolov8, I Received The Following Error:

This line 🙂
None
Notice that Triton (and therefore clearml-serving) needs the pytorch model to be converted into TorchScript, so that the Triton backend can load it
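For example, a minimal TorchScript export (the model and input shape below are placeholders, substitute your trained module):

import torch
import torch.nn as nn

# placeholder model; use your trained YOLOv8 / torch.nn.Module here
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.eval()

example_input = torch.randn(1, 3, 640, 640)  # assumed input shape
scripted = torch.jit.trace(model, example_input)
scripted.save("model.pt")  # this TorchScript file is what the Triton pytorch backend loads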

2 years ago
0 Hi

I'd prefer to use config_dict, I think it's cleaner

I'm definitely with you

Good news:

new best_model is saved, add a tag best,

Already supported (you just can't see the tag, but it is there :))

My question is, what do you think would be the easiest interface to tell (post/pre) store, tag/mark this model as best so far (btw, obviously if we know it's not good, why do we bother to store it in the first place...)

5 years ago
0 Hello, Is There A Way To Update A Task Diff Programatically? Eg, I'M Creating A Task Using

Thanks ShakyJellyfish91! Please let me know what you come up with, I would love for us to fix this issue.

3 years ago
0 Hi All, Is There Anyway To Get The Id Of The Pipeline Using Pipeline Name? I Need The Id Of The Pipeline So That I Can Schedule The Pipeline To Run Via

So inside the pipeline logic you can do Task.current_task().id
Or inside a component Task.current_task().parent
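In code that would look roughly like this:

from clearml import Task

# inside the pipeline controller logic: the controller's own task id is the pipeline id
pipeline_id = Task.current_task().id

# inside a pipeline component: the parent of the component's task is the controller
pipeline_id = Task.current_task().parent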

2 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

While if I just download the right packages from the requirements.txt then I don't need to think about that

I see your point, the only question is how come these packages are not automatically detected?

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

SmugOx94 could you please open a GitHub issue with this request, otherwise we might forget 🙂
We might also get some feedback from other users

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

Does it work if I launch the clearml-agent on a docker and pip doesn't know the packages to install?

Not sure I follow... the "detect_with_pip_freeze" flag (when set) will tell clearml (at runtime) to create the "installed packages" directly from pip freeze (instead of analyzing the code)

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

LovelyHamster1
Also you can use pip freeze instead of the static code analysis; on your development machines set:
detect_with_pip_freeze: true
https://github.com/allegroai/clearml/blob/e9f8fc949db7f82b6a6f1c1ca64f94347196f4c0/docs/clearml.conf#L169
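i.e. in your local clearml.conf, something like this (assuming the flag sits under the sdk.development section, as in the linked default config):

sdk {
    development {
        # build "Installed Packages" from pip freeze instead of static code analysis
        detect_with_pip_freeze: true
    }
}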

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

I would like to force the usage of those requirements when running any script

How would you force it? Will you just ignore the "Installed Packages" section?

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

Back to the feature request, if this is taken care of (both adding a missed package, and the S3 upload), do you still believe there is room for this kind of feature?

4 years ago