PompousBeetle71 If this is argparse and the type is defined, the trains-agent will pass the equivalent in the same type, with str that amounts to ''. Make sense?
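To make that concrete, here is a minimal sketch (the argument names are made up) of the auto-connected argparse behaviour:
from argparse import ArgumentParser
from clearml import Task

parser = ArgumentParser()
parser.add_argument('--suffix', type=str, default='')   # type declared as str
parser.add_argument('--epochs', type=int, default=10)   # type declared as int

task = Task.init(project_name='examples', task_name='argparse types')
args = parser.parse_args()

# When the agent re-executes the Task, args.suffix is still a str (possibly ''),
# and args.epochs is still an int, because the declared types drive the casting.
print(type(args.suffix), type(args.epochs))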
GentleSwallow91 how come it does not already find the correct pytorch version inside the docker? What's the clearml-agent version you are using?
No worries 🙂 glad to hear it worked out 🙂
Hi SmarmySeaurchin8
StorageManager docs are broken in the example notebook here:
Thanks 🙂 I'll make sure we fix it
The image I want to display is already stored locally
Sure you can:
Logger.current_logger().report_image('title', 'series', iteration=0, local_path='/my_file/is_here.jpg')
CleanPigeon16 Can you also send the "Configuration Object" "Pipeline" section?
Then check in the clearml.conf under files_server
And use what you have there (for example http://localhost:8081 )
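For reference, the relevant part of clearml.conf looks roughly like this (the hosts below are just the default local-server values):
api {
    web_server: http://localhost:8080
    api_server: http://localhost:8008
    files_server: http://localhost:8081
}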
Hi StickyBlackbird93
Yes, this agent version is rather old ( clearml_agent v1.0.0 )
it had a bug where the pytorch wheel for aarch64 broke the agent (by default the agent in docker mode will use the latest stable version, but not in venv mode)
Basically upgrade to the latest clearml-agent version, it should solve the issue:
pip3 install -U clearml-agent==1.2.3
BTW for future debugging, this is the interesting part of the log (Notice it is looking for the correct pytorch based on the auto de...
Neat! Please update us on your progress, maybe we should add an upgrade section once you have the details worked out
I call Task.init after I import tensorflow (and thus tensorboard?)
That should have worked...
Can you manually add a TB report before calling the opennmt function?
(I want to verify the Task.init is indeed catching the TB calls, my theory is that somewhere inside the opennmt we lose the TB)
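Something along these lines before calling the opennmt function would do (a made-up sanity-check scalar, assuming TF2-style summaries):
import tensorflow as tf

writer = tf.summary.create_file_writer('./tb_sanity_check')
with writer.as_default():
    # a dummy scalar just to verify Task.init is catching the TB calls
    tf.summary.scalar('sanity/check', 0.5, step=0)
writer.flush()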
Where again does clearml place the venv?
Usually ~/.clearml/venvs-builds/<python version>/
Multiple agents will use venvs-builds.1 and so on
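If you want to control that location, the agent section of clearml.conf has a venvs_dir setting (the path below is just the default):
agent {
    venvs_dir: ~/.clearml/venvs-builds
}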
Hi ConfusedPig65
Any keras model will be automatically uploaded if you pass an upload URI to Task.init:
task = Task.init('examples', 'keras upload test', output_uri=" ")
(You can also pass s3://bucket/folder as the output_uri, or change the default output_uri in the clearml.conf file)
After this line any keras model will be automatically uploaded (you will see it under the Artifacts Tab)
Accessing models from executed tasks:
trains_task = Task.get_task('task_uid_here')
last_check...
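The general idea is something like this (the task id is a placeholder): fetch the executed Task, take its last output model and pull a local copy of the weights:
from clearml import Task

prev_task = Task.get_task(task_id='task_uid_here')
last_model = prev_task.models['output'][-1]   # most recent output model
local_weights = last_model.get_local_copy()   # downloads the weights file locally
print(last_model.name, local_weights)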
Hi StickyWhale51
I think this issue is due to some internal race condition, anyhow I think we have an RC out solving it, can you try with:
pip install clearml==1.2.0rc2
i have it deployed successfully with istio.
Nice!
the only thing we had to do to get it to work was to modify the nginx.conf in the webserver pod to allow http 1.1
I was under the impression we fixed that, let me check
BTW: you will be losing the comments 🙂
Hi JitteryCoyote63 , is there a callback for that?
Hi @<1523701304709353472:profile|OddShrimp85>
the venv setup is totally based on my requirements.txt instead of adding on to what the image has before. Why?
Are you using the agent in docker mode? If that is the case, it creates a venv inside the docker, inheriting from the preinstalled docker system packages.
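If you prefer the venv to also see whatever is already installed in the image, there is a package_manager flag for that in clearml.conf (sketch below, under the standard agent section):
agent {
    package_manager {
        # let the venv created by the agent use the python packages
        # preinstalled inside the docker image
        system_site_packages: true
    }
}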
And it is not working ? what's the Working Dir you have under the Execution Tab ?
Okay, so it is downloaded to your machine and unzipped, is that part correct?
Yes, actually the first step would be a toggle button for regexp in the search, the second will be an even more advanced search.
May I suggest you post it on the UI suggestion issue https://github.com/allegroai/trains/issues/81 ?
Thanks for pinging OutrageousGiraffe8
I think I was able to reproduce.
The model is saved to clearml as an output model when b is not a dictionary.
How did you make the example work with the automagic ?
Woot woot!
awesome, this RC is stable so feel free to use it, the official release is probably due to be out next week :)
Hi CourageousDove78
Not the cleanest, but you can basically pass everything here:
https://allegro.ai/clearml/docs/rst/references/clearml_api_ref/index.html#post--tasks.get_all
The reasoning is that it is passed almost as-is to the server for the actual query.
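From python the same thing is exposed through Task.get_tasks, where the task_filter dict is forwarded to tasks.get_all (the filter values here are just an example):
from clearml import Task

tasks = Task.get_tasks(
    project_name='examples',
    task_filter={
        'status': ['completed'],
        'order_by': ['-last_update'],
    },
)
for t in tasks:
    print(t.id, t.name)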
Thanks OutrageousGiraffe8
Any chance you can expand the example code to be a fully reproducible toy code? (I would really like to make sure we fix it)
GiganticTurtle0 so this was already supposed to be out (v1.1) but a minor py2 backwards-compatibility issue delayed it. Anyhow you can now just call pipeline.start(..)
https://github.com/allegroai/clearml/blob/889d2373988a0d6630703cc1c865e09e58f8f981/examples/pipeline/pipeline_from_tasks.py#L47
(to run it locally call start_locally(...) )
pip install git+
(the new version will be out in a few days, meanwhile you can test the new pipeline interface directly from git)
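The gist of the linked example, roughly (the names loosely follow the example, the rest is a sketch):
from clearml.automation import PipelineController

pipe = PipelineController(name='pipeline demo', project='examples', version='0.0.1')
pipe.add_step(name='stage_data',
              base_task_project='examples',
              base_task_name='pipeline step 1 dataset artifact')
pipe.add_step(name='stage_train',
              parents=['stage_data'],
              base_task_project='examples',
              base_task_name='pipeline step 2 train model')

pipe.start()            # enqueue the pipeline logic (services queue by default)
# pipe.start_locally()  # or run the pipeline logic on this machine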
if I encounter the need for that, I will adapt and open a PRΒ
Great!
GiganticTurtle0
I think that what you are looking for is:
param_dict = {'key': 1234}
task.connect(param_dict, name='general')
Notice that when this code runs manually (i.e. not by the agent), the dict is stored in the "general" parameter section of the Task.
But when the code is executed by the Agent, the opposite happens and the parameters from the "general" section of the Task are put back into param_dict; here the casting is done based on the type of the original values.
Generall...
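To make the two directions concrete, a small runnable sketch (the values are made up):
from clearml import Task

task = Task.init(project_name='examples', task_name='connect dict example')
param_dict = {'key': 1234, 'lr': 0.1}
task.connect(param_dict, name='general')

# Manual run: the dict above is written to the "general" section of the Task.
# Agent run: the "general" section (possibly edited in the UI) is cast back to
# the original types and written into param_dict before the next line executes.
print(param_dict)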
I managed to do it by using logger.report_scalar, thanks!
Sure, but for future reference where (in ignite callbacks) did you add the report_scalar call ?
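(for context, something like this self-contained toy engine is roughly what I mean; the metric names are made up):
from clearml import Logger, Task
from ignite.engine import Engine, Events

task = Task.init(project_name='examples', task_name='ignite report_scalar')

def train_step(engine, batch):
    # dummy update function returning a fake "loss" just for illustration
    return 1.0 / engine.state.iteration

trainer = Engine(train_step)

@trainer.on(Events.EPOCH_COMPLETED)
def report_metrics(engine):
    Logger.current_logger().report_scalar(
        title='train', series='loss',
        value=engine.state.output, iteration=engine.state.epoch)

trainer.run([0, 1, 2], max_epochs=3)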
Hi OutrageousGiraffe8
Does anybody know why this is happening, and is there any workaround, e.g. how to manually report a model?
What exactly is the error you are getting? And which clearml version are you using?
Regrading manual Model reporting:
https://clear.ml/docs/latest/docs/fundamentals/artifacts#manual-model-logging
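For completeness, the manual route looks roughly like this (the weights file name is just an example and is assumed to already exist):
from clearml import Task, OutputModel

task = Task.init(project_name='examples', task_name='manual model logging')
output_model = OutputModel(task=task, framework='PyTorch')
# register + upload an existing weights file as the task's output model
output_model.update_weights(weights_filename='my_model.pt')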
Hmm, not a bad idea 🙂
Could you please open a Git Issue, so it will not get forgotten ?
(btw: I'm not sure how trivial it is to implement, nonetheless obviously possible 🙂)