AgitatedDove14
Moderator
49 Questions, 8122 Answers
  Active since 10 January 2023
  Last activity one year ago

0 Hi All

CooperativeFox72 could you expand on "not working"?
If you have a YAML file, I would do:
# local_path = './my_config.yaml'
path = task.connect_configuration(local_path, name=name)
if task.running_locally():
    with open(local_path, "r") as config_file:
        my_params_dict = yaml.load(config_file, Loader=yaml.FullLoader)
    my_params_dict['change_me'] = 'new value'
    my_params_text = yaml.dump(my_params_dict)

Store back the change; my_params is assumed to be the content of the param file (tex...
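A minimal sketch of that store-back step, assuming my_params_text holds the updated YAML text from the snippet above:

# write the modified configuration back to the same local file
with open(local_path, "w") as config_file:
    config_file.write(my_params_text)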

4 years ago
0 Hi! Is There A Way To Run A Task Without Reporting To The Server? For Example If I Want To Debug A Script By Running It Locally Without It Appearing On The Server

I want to be able to access the data, just avoid reporting the experiment results

Yes, you are correct 😞
If you just want to skip the logging, you can always wrap the Task.init call in an if.
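For example, a minimal sketch using a hypothetical CLEARML_LOG environment variable as the on/off switch (the variable name is just an illustration, not a built-in ClearML setting):

import os
from clearml import Task

# only create (and report to) a Task when explicitly enabled
if os.environ.get("CLEARML_LOG", "0") == "1":
    task = Task.init(project_name="examples", task_name="debug run")
else:
    task = None  # nothing is sent to the server

# ... rest of the script; guard any explicit task usage
if task is not None:
    task.get_logger().report_text("running with logging enabled")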

4 years ago
0 Hello Everyone, Is There Any Way To Remove A Serving Instance?

Hi @<1657918706052763648:profile|SillyRobin38>
You mean remove the entire serving session? Is it still running somewhere?
(for example, if you take the docker-compose stack down it will be marked aborted automatically after 2 hours)

one year ago
0 Hi There,

Ok, no, it only helps as long as I don't log the figure.

You mean if you create the matplotlib figure without the automagic connect, you still see the memory leak?

2 years ago
0 Hi, Is There A Way To Instantiate A

Hi OutrageousSheep60

Is there a way to instantiate a clearml-task while providing it a Dockerfile that it needs to build prior to executing the task?

Currently not really, as at the end the agent does need to pull a container.
But you can achieve basically the same by adding the "dockerfile" script as --docker_bash_setup_script. Notice of course that this is an actual bash script, not a Dockerfile, so no need for the "RUN" prefix.
wdyt?
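A rough sketch of that approach, assuming a hypothetical setup.sh containing the Dockerfile's RUN lines and a queue named default:

# setup.sh - the Dockerfile RUN lines, without the "RUN" prefix
apt-get update && apt-get install -y libsndfile1
pip install -r extra_requirements.txt

# launch via clearml-task, executing setup.sh inside the base container before the task starts
clearml-task --project examples --name docker-setup-example \
  --script train.py --docker python:3.9-bullseye \
  --docker_bash_setup_script setup.sh --queue default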

2 years ago
0 Can I Change The Clearml-Serving Inference Port? 8080 Is Already Used For My Self-Hosted Server.. I Guess I Can Just Change It In The Docker-Compose, But I Find A Little Weird That You Are Using This Port If The Self-Hosted Server Web Is Hosted In It..

ElegantCoyote26 what you are after is:
docker run -v ~/clearml.conf:/root/clearml.conf -p 9501:8080
Notice the internal port (i.e. inside the docker) stays 8080, but the external one is changed to 9501.
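If you prefer changing it in the docker-compose file instead, a minimal sketch of the port mapping (the clearml-serving-inference service name is an assumption and may differ in your compose file):

services:
  clearml-serving-inference:
    ports:
      - "9501:8080"   # host port 9501 -> container port 8080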

3 years ago
0 Helm Charts Are Gone?

Hi @<1792364603552829440:profile|TestyBeetle31>
Yeah so sorry we finally changed the repository name:
None

Where is the broken link coming from? We will fix it (we are working on it, and some of the services do not auto-forward).

7 months ago
0 Hi Community! This Summer I Worked On An

Hi AttractiveWoodpecker16
I think there is a better-suited channel for that question.
(any chance you can move your thread there?)
Specifically, just email billing@clear.ml and they will cancel (no need to worry about the beginning of the month, just explain and they will not charge beyond Nov).

EDIT: I know they are working on making it a one-click action in the UI; the main limit is what happens with the data that was stored above the free-tier threshold. Anyhow, I think the next version will sort that out as well.

2 years ago
0 Hi

I'd prefer to use config_dict, I think it's cleaner

I'm definitely with you

Good news:

when a new best_model is saved, add a tag best

Already supported (you just can't see the tag, but it is there :))

My question is, what do you think would be the easiest interface to tell (post/pre) store: tag/mark this model as best-so-far? (BTW, obviously if we know it's not good, why do we bother to store it in the first place...)

5 years ago
0 Hello, Is There A Way To Update A Task Diff Programatically? Eg, I'M Creating A Task Using

Thanks ShakyJellyfish91! Please let me know what you come up with, I would love for us to fix this issue.

3 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

While if I just download the right packages from the requirements.txt then I don't need to think about that

I see your point, the only question is how come these packages are not automatically detected?

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

SmugOx94 could you please open a GitHub issue with this request, otherwise we might forget 🙂
We might also get some feedback from other users

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

Does it work if I launch the clearml-agent in a docker container and pip doesn't know the packages to install?

Not sure I follow... the "detect_with_pip_freeze" flag (when set) will tell clearml (at runtime) to create the "installed packages" directly from pip freeze (instead of analyzing the code)

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

LovelyHamster1
Also, you can use pip freeze instead of the static code analysis; on your development machines set:
detect_with_pip_freeze: true
https://github.com/allegroai/clearml/blob/e9f8fc949db7f82b6a6f1c1ca64f94347196f4c0/docs/clearml.conf#L169
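For reference, a minimal sketch of where that flag lives in clearml.conf (matching the linked default config):

sdk {
  development {
    # use the local environment's `pip freeze` instead of static code analysis
    detect_with_pip_freeze: true
  }
}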

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

I would like to force the usage of those requirements when running any script

How would you force it? Will you just ignore the "Installed Packages" section?

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

Back to the feature request: if this is taken care of (both adding a missed package, and the S3 upload), do you still believe there is room for this kind of feature?

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

SmugOx94

after having installed numpy==1.16 in the first case or numpy==1.19 in the second case. Is it correct?

Correct

the reason is simply that I'd like to set up an MLOps system where

I see the rationale here (obviously one would have to maintain their requirements.txt).
The current way trains-agent works is that if there is a list of "installed packages" it will use it, and if it is empty it will default to the requirements.txt.
We cou...
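Separately, a minimal sketch of pinning a specific requirement from code (assuming the Task.add_requirements helper, called before Task.init):

from clearml import Task

# make sure the agent installs this exact version, regardless of what the code analysis finds
Task.add_requirements("numpy", "1.19")

task = Task.init(project_name="examples", task_name="pinned requirements")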

4 years ago
0 Hi All, I Was Wondering If It Is Possible To Set The Aws Autoscaler (And Other Aws Services Such As S3) To Assume The Permissions Of A Specific Iam Role. I Didn'T Find Any Reference To This In The Documentation

LovelyHamster1 Now I see... Interesting credentials ability. Specifically, all the S3 access in trains is derived from the ~/clearml.conf credentials section:
https://github.com/allegroai/clearml/blob/ebc0733357ac9ead044d0ed32d41447763f5797e/docs/clearml.conf#L73
( or the AWS S3 environment variables )

I'm not sure how this AWS feature works; I suspect it is changing the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables on the EC2 instance. If this is the case, it should work out of...
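For reference, a minimal sketch of that credentials section in ~/clearml.conf (values are placeholders):

sdk {
  aws {
    s3 {
      # default credentials used for S3 access (per-bucket entries can be added as well)
      key: "<AWS_ACCESS_KEY_ID>"
      secret: "<AWS_SECRET_ACCESS_KEY>"
      region: ""
    }
  }
}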

4 years ago
0 Hey, I'D Like To Store My Trained Models, Results Of Transformers Training, Into Local Disk. I Tried To Set Up

@<1570220844972511232:profile|ObnoxiousBluewhale25> it creates a new Model here
None
If you want it to log to something other than the default file server, create the clearml Task before starting the training:

task = Task.init(..., output_uri="file:///home/karol/data/")
# now run the training

It will use the existing Task and upload to the destination folder

2 years ago
0 If I Create A Task Using Task.Create And Then In A Separate Piece Of Code I Want To Report To It (By Using

So it is the automagic that is not working.
Can you print the following before calling both Task.debug_simulate_remote_task and Task.init? (Notice you still have to call Task.init)
print(os.environ)
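A minimal sketch of the call order being suggested (the task id is a placeholder for the task you want to report into):

import os
from clearml import Task

print(os.environ)  # check which CLEARML_* / TRAINS_* variables are already set

# simulate running as the existing (remote) task, then init as usual
Task.debug_simulate_remote_task(task_id="<existing_task_id>")
task = Task.init(project_name="examples", task_name="report into existing task")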

3 years ago
0 Looking At Clearml-Serving - Two Questions - 1, What’S The Status Of The Project 2. How Does One Say How A Model Is Loaded And Served Etc? For Example, If I Have A Spacy Ner Model, I Need To Specify Some Custom Code Right?

'config.pbtxt' could not be inferred. please provide specific config.pbtxt definition.

This basically means there is no configuration on how to serve the model, i.e. the size/type of the input and output layers.
You can either store the configuration on the creating Task, as is done here:
https://github.com/allegroai/clearml-serving/blob/b5f5d72046f878bd09505606ca1147d93a5df069/examples/keras/keras_mnist.py#L51
Or you can provide it as a standalone file when registering the mo...
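A rough sketch of the first option, assuming the Triton config is attached to the model-creating Task as a configuration object named config.pbtxt (tensor names and dims below are placeholders, and the exact mechanism may differ between clearml-serving versions):

from clearml import Task

config_pbtxt = """
input [{ name: "dense_input", data_type: TYPE_FP32, dims: [-1, 784] }]
output [{ name: "activation_2", data_type: TYPE_FP32, dims: [-1, 10] }]
"""

task = Task.init(project_name="examples", task_name="keras mnist")
# store the serving configuration on the task that creates the model
task.set_configuration_object(name="config.pbtxt", config_text=config_pbtxt)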

4 years ago