Answered
If Possible, I Would Like To Altogether Avoid The Fileserver And Write Everything To S3 (Without Needing Every User To Change Their Config)

If possible, I would like to altogether avoid the fileserver and write everything to S3 (without needing every user to change their config)

  
  
Posted 2 years ago

Answers 30


I had a misconception that the conf comes from the machine triggering the pipeline

Sorry, this one :)

  
  
Posted 2 years ago

AgitatedDove14 So it looks like it started to do something, but now it's missing parts of the configuration.

Missing key and secret for S3 storage access (I'm using the boto credential chain, which is off by default…)

Why isn't the config being passed to the inner step properly?
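For reference, a minimal diagnostic sketch for checking whether boto's default credential chain resolves anything on the machine that actually runs the step, assuming boto3 is installed there (not taken from this thread, just a way to verify the chain):

import boto3

# Walk boto's default credential chain (env vars, ~/.aws/credentials,
# instance profile, ...) and report whether anything was found.
creds = boto3.Session().get_credentials()
if creds is None:
    print("No AWS credentials resolved on this machine")
else:
    print("AWS credentials resolved via:", creds.method)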

  
  
Posted 2 years ago

Try with sdk.development.default_output_uri as well
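For context, a minimal sketch of the per-task counterpart of that conf key, assuming the step code can call Task.init directly (the project, task name, and bucket path below are placeholders):

from clearml import Task

# output_uri plays the same role as sdk.development.default_output_uri in
# clearml.conf: uploads go to this destination instead of the fileserver.
task = Task.init(
    project_name="examples",              # placeholder project
    task_name="s3-output-check",          # placeholder task name
    output_uri="s3://my-bucket/clearml",  # placeholder bucket/prefix
)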

  
  
Posted 2 years ago

And I am logging some explicitly

  
  
Posted 2 years ago

And this is for a normal task

  
  
Posted 2 years ago

I had a misconception that the conf comes from the machine triggering the pipeline

  
  
Posted 2 years ago

What is still being sent to the fileserver?

  
  
Posted 2 years ago

that does happen when you create a normal local task, that's why I was confused

The parts that are not passed in both cases are the configurations from the conf file. Only the environment is passed (e.g. git, python packages, etc.). For example, if you have storage credentials in your conf file, they are not passed to a remote agent; instead, the credentials from the remote agent are used when it runs the task.
Make sense?
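To make that concrete, a small sketch (names are illustrative) of a check that could run inside a remotely executed step: it reports which AWS credentials are visible on the agent's machine, since those, not the ones on the launching machine, are what the step ends up using:

import os

def report_agent_side_credentials():
    # Runs on whatever machine the agent assigned to this step, so it
    # reflects that machine's environment/clearml.conf, not the launcher's.
    for var in ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_DEFAULT_REGION"):
        print(f"{var} set: {var in os.environ}")

if __name__ == "__main__":
    report_agent_side_credentials()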

  
  
Posted 2 years ago

Artifacts; nothing is reaching S3

  
  
Posted 2 years ago

Yes

  
  
Posted 2 years ago

Yes, it worked 🙂
I loaded my entire clearml.conf into the "extra conf" part of the autoscaler, and that worked

  
  
Posted 2 years ago

I have, but I believe I found the issue

  
  
Posted 2 years ago

Which part?

  
  
Posted 2 years ago

PricklyRaven28 At the beginning of the log, the ClearML agent should print the configuration. Do you have api.files_server set to the S3 bucket?

  
  
Posted 2 years ago

Regarding what AgitatedDove14 suggested, I'll try tomorrow and update

  
  
Posted 2 years ago

If possible, I would like to altogether avoid the fileserver and write everything to S3 (without needing every user to change their config)

There is no current way to "globally" change the default files server (I think this is part of the enterprise version, alongside the vault etc.).
What you can do is use an OS environment variable to override the conf file:
CLEARML_FILES_HOST=" "
PricklyRaven28 wdyt?
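A minimal sketch of that override from Python, assuming an s3:// destination is what you want (the bucket URL is a placeholder); the variable is set before the clearml import so the SDK configuration picks it up:

import os

# Override the files server via the environment instead of clearml.conf.
# The s3:// URL is a placeholder; point it at your own bucket.
os.environ["CLEARML_FILES_HOST"] = "s3://my-bucket/clearml"

from clearml import Task

task = Task.init(project_name="examples", task_name="files-host-override")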

  
  
Posted 2 years ago

Yes... I think that this might be a bit too much automagic, even for ClearML 😄

  
  
Posted 2 years ago

And the agent is outputting sdk.development.default_output_uri =
although it's different in both the original config and the agent's extra config

  
  
Posted 2 years ago

That's what I started with; it doesn't work in pipelines

  
  
Posted 2 years ago

CostlyOstrich36 This is for a step in the pipeline

  
  
Posted 2 years ago

Tried your suggestion, it still went to the fileserver…

  
  
Posted 2 years ago

But it makes sense, because the agent in that case is local

  
  
Posted 2 years ago

In the UI, check under the Execution tab in the experiment view, then scroll to the bottom - you will have a field called "OUTPUT". What is in there? Select an experiment that is giving you trouble.

  
  
Posted 2 years ago

that does happen when you create a normal local task, that's why I was confused

  
  
Posted 2 years ago

Is the entire pipeline running on the autoscaler?

  
  
Posted 2 years ago

Yes, thanks for clarifying 😁

  
  
Posted 2 years ago

Using api.files_server, not default_output?

  
  
Posted 2 years ago

When I did this with a normal task it worked wonderfully; with a pipeline it didn't

  
  
Posted 2 years ago