CooperativeFox72 could you expand on "not working"?
If you have a yaml file, I would do:
```python
import yaml

# local_path = './my_config.yaml'
path = task.connect_configuration(local_path, name=name)
if task.running_locally():
    with open(local_path, "r") as config_file:
        my_params_dict = yaml.load(config_file, Loader=yaml.FullLoader)
    my_params_dict['change_me'] = 'new value'
    my_params_text = yaml.dump(my_params_dict)
```
Then store back the change; my_params is assumed to be the content of the param file (tex...
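For example, a minimal sketch of storing it back on the Task (assuming the same `name` used in `connect_configuration` above):
```python
# write the updated yaml text back as the Task's configuration object
task.set_configuration_object(name, config_text=my_params_text)
```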
I want to be able to access the data, just avoid reporting the experiment results
Yes, you are correct 😞
If you just want to skip the logging you can always add an if to the Task.init call?!
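For example, a minimal sketch (the `report_results` flag is hypothetical, use whatever condition you have):
```python
from clearml import Task

report_results = False  # e.g. read from your own CLI args / env var

# only initialize ClearML (and with it all the auto-logging) when wanted
if report_results:
    task = Task.init(project_name="examples", task_name="my experiment")
```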
Hi @<1657918706052763648:profile|SillyRobin38>
You mean remove the entire serving session? Is it still running somewhere?
(for example, if you take the docker-compose down, it will be marked as aborted automatically after 2 hours)
Ok no, it only helps as far as I don't log the figure.
You mean if you create the matplotlib figure with no automagic connect, you still see the mem leak?
Oh that makes sense! I'll pass it to the documentation guys 🙏
Hi OutrageousSheep60
Is there a way to instantiate a `clearml-task` while providing it a `Dockerfile` that it needs to build prior to executing the task?
Currently not really, as at the end the agent does need to pull a container,
But you can achieve basically the same by adding the "dockerfile" script as --docker_bash_setup_script
Notice of course that this is an actual bash script, not a Dockerfile, so no need for the "RUN" prefix.
wdyt?
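For example, a rough sketch (file name and packages here are just placeholders):
```bash
# setup.sh: the Dockerfile's RUN commands as plain bash (no "RUN" prefix)
apt-get update && apt-get install -y git
pip install -r extra_requirements.txt
```
Then pass it on the command line, e.g. `clearml-task ... --docker_bash_setup_script setup.sh`.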
ElegantCoyote26 what you are after is:
```
docker run -v ~/clearml.conf:/root/clearml.conf -p 9501:8085
```
Notice the internal port (i.e. inside the docker) is 8085, but the external one is changed to 9501
As you said, you just need to clone. Right click → clone?
Still not supported 😞
Yes, I was referring to logging the "clearml-data" Dataset ID on the Task itself, not an external database.
Make sense?
Hi @<1792364603552829440:profile|TestyBeetle31>
Yeah so sorry we finally changed the repository name:
None
Where is the broken link coming from? We will fix it (we are working on it, and some of the services do not auto-forward)
Hi AttractiveWoodpecker16
I think that is the correct channel for that question.
(any chance you can move your thread there?)
Specifically, just email billing@clear.ml and they will cancel (no need to worry about the beginning of the month, just explain and they will not charge over Nov)
EDIT: I know they are working on making it one click in the UI; the main limit is what happens with the data that was stored above the free tier threshold. Anyhow, I think the next version will sort that as well.
I can't think of any hack that will satisfy your IT other than an actual vault...
wdyt?
I'd prefer to use config_dict, I think it's cleaner
I'm definitely with you
Good news:
new `best_model` is saved, add a tag `best`
Already supported (you just can't see the tag, but it is there :))
My question is, what do you think would be the easiest interface to tell (post/pre) store to tag/mark this model as best so far? (btw, obviously if we know it's not good, why do we bother to store it in the first place...)
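For instance, a possible sketch (assuming the `Model.tags` setter and that the model was already stored; API details may differ):
```python
from clearml import Task

task = Task.current_task()
best_model = task.models["output"][-1]  # the most recently stored output model
best_model.tags = (best_model.tags or []) + ["best"]  # mark it as best so far
```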
Thanks ShakyJellyfish91 ! please let me know what you come up with, I would love for us to fix this issue.
While if I just download the right packages from the requirements.txt then I don't need to think about that
I see your point, the only question is how come these packages are not automatically detected?
SmugOx94 could you please open a GitHub issue with this request, otherwise we might forget 🙂
We might also get some feedback from other users
Does it work if I launch the clearml-agent on a docker and pip doesn't know the packages to install?
Not sure I follow... the "detect_with_pip_freeze" flag (when set) will tell clearml (at runtime) to create the "installed packages" directly from pip freeze (instead of analyzing the code)
LovelyHamster1
Also you can use pip freeze instead of the static code analysis; on your development machines set:
`detect_with_pip_freeze: true`
https://github.com/allegroai/clearml/blob/e9f8fc949db7f82b6a6f1c1ca64f94347196f4c0/docs/clearml.conf#L169
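i.e. in your clearml.conf, something like this (assuming the flag sits under the `sdk.development` section, as in the linked file):
```
sdk {
    development {
        # create "installed packages" from pip freeze instead of code analysis
        detect_with_pip_freeze: true
    }
}
```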
I would like to force the usage of those requirements when running any script
How would you force it? Will you just ignore the "Installed Packages" section?
Back to the feature request, if this is taken care of (both adding a missed package, and the S3 upload), do you still believe there is a room for this kind of feature?
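For reference, a sketch of forcing it today (assuming a clearml version that has `Task.force_requirements_env_freeze`; it must be called before `Task.init`):
```python
from clearml import Task

# use the given requirements file as the Task's "Installed Packages"
Task.force_requirements_env_freeze(force=True, requirements_file="requirements.txt")
task = Task.init(project_name="examples", task_name="forced requirements")
```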
OddShrimp85 you can see the full configuration at the top of the Task log. What do you have there? Also what is the clearml python version?
SmugOx94
after having installed `numpy==1.16` in the first case or `numpy==1.19` in the second case. Is it correct?
Correct
the reason is simply that I'd like to set up an MLOps system where
I see the rationale here (obviously one would have to maintain their requirements.txt)
The current way trains-agent works is that if there is a list of "installed packages" it will use it, and if it is empty it will default to the requirements.txt
We cou...
LovelyHamster1 Now I see... Interesting credentials ability. Specifically, all the S3 access on trains is derived from the ~/clearml.conf credentials section:
https://github.com/allegroai/clearml/blob/ebc0733357ac9ead044d0ed32d41447763f5797e/docs/clearml.conf#L73
(or the AWS S3 environment variables)
I'm not sure how this AWS feature works; I suspect it is changing the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables on the ec2 instance. If this is the case, it should work out of...
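i.e. something along these lines in clearml.conf (all values are placeholders):
```
sdk {
    aws {
        s3 {
            key: "<AWS_ACCESS_KEY_ID>"
            secret: "<AWS_SECRET_ACCESS_KEY>"
            region: "us-east-1"
        }
    }
}
```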
@<1570220844972511232:profile|ObnoxiousBluewhale25> it creates a new Model here
None
If you want it to log to something other than the default file server, create the clearml Task before starting the training:
```python
task = Task.init(..., output_uri="file:///home/karol/data/")
# now training
```
It will use the existing Task and upload to the destination folder
So it is the automagic that is not working.
Can you print the following before calling both `Task.debug_simulate_remote_task` and `Task.init`? (Notice you still have to call `Task.init`.)
```python
print(os.environ)
```
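i.e. a minimal sketch of the ordering (the task id is a placeholder):
```python
import os
from clearml import Task

print(os.environ)  # before simulating the remote task
Task.debug_simulate_remote_task(task_id="<remote_task_id>")
print(os.environ)  # and again just before Task.init
task = Task.init(project_name="examples", task_name="debug")
```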
None
notice there is a scroll_id there; you might need to call the API multiple times until you scroll over all the events
could that be it?
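Something like this rough sketch (assuming the `events.get_task_events` endpoint via `APIClient`; argument names may differ, check the API reference):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
events, scroll_id = [], None
while True:
    # hypothetical pagination loop: keep passing back the returned scroll_id
    res = client.events.get_task_events(task="<task_id>", scroll_id=scroll_id)
    if not res.events:
        break
    events.extend(res.events)
    scroll_id = res.scroll_id
```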
'config.pbtxt' could not be inferred. please provide specific config.pbtxt definition.
This basically means there is no configuration on how to serve the model, i.e. size/type of the lower (input) layer and the output layer.
You can either store the configuration on the creating Task, like is done here:
https://github.com/allegroai/clearml-serving/blob/b5f5d72046f878bd09505606ca1147d93a5df069/examples/keras/keras_mnist.py#L51
Or you can provide it as a standalone file when registering the mo...
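For reference, a hypothetical minimal `config.pbtxt` sketch (platform, tensor names and dims all depend on the actual model):
```
platform: "tensorflow_savedmodel"
input [
  {
    name: "dense_input"
    data_type: TYPE_FP32
    dims: [ 784 ]
  }
]
output [
  {
    name: "activation_2"
    data_type: TYPE_FP32
    dims: [ 10 ]
  }
]
```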