GentleSwallow91 how come it does not already find the correct pytorch version inside the docker ? what's the clearml-agent version you are using ?
No worries 🙂 glad to hear it worked out 🙂
Hi SmarmySeaurchin8
The StorageManager docs are broken in the example notebook here:
Thanks 🙂 I'll make sure we fix it
The image I want to display is already stored locally
Sure you can:
Logger.current_logger().report_image('title', 'series', iteration=0, local_path='/my_file/is_here.jpg')
CleanPigeon16 Can you send also the "Configuration Object" "Pipeline" section ?
Then check in the clearml.conf under files_server
And use what you have there (for example http://localhost:8081 )
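For reference, a minimal sketch of the relevant part of ~/clearml.conf (the URL below is just a placeholder, use whatever your server exposes):

api {
    # the fileserver endpoint used for uploads (placeholder value)
    files_server: http://localhost:8081
}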
Hi StickyBlackbird93
Yes, this agent version is rather old ( clearml_agent v1.0.0 )
it had a bug where the pytorch wheel for aarch64 broke the agent (by default the agent in docker mode will use the latest stable version, but not in venv mode)
Basically upgrade to the latest clearml-agent version, it should solve the issue:
pip3 install -U clearml-agent==1.2.3
BTW for future debugging, this is the interesting part of the log (Notice it is looking for the correct pytorch based on the auto de...
neat! please update on your progress, maybe we should add an upgrade section once you have the details worked out
I call
Task.init
 after I import tensorflow (and thus tensorboard?)
That should have worked...
Can you manually add a TB report before calling opennmt function ?
(I want to verify the Task.init is indeed catching the TB calls, my theory is that somewhere inside opennmt we lose the TB)
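Something along these lines is what I mean by a manual TB report (a sketch assuming TF2-style summaries; project/task names are placeholders):

import tensorflow as tf
from clearml import Task

task = Task.init(project_name="examples", task_name="tb sanity check")

# write a single scalar *before* calling the opennmt function
writer = tf.summary.create_file_writer("./tb_debug")
with writer.as_default():
    tf.summary.scalar("debug/sanity_check", 1.0, step=0)
writer.flush()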
Where again does clearml place the venv?
Usually ~/.clearml/venvs-builds/<python version>/
Multiple agents will be venvs-builds.1 and so on
Hi StickyWhale51
I think this issue is due to some internal race condition, anyhow I think we have an RC out solving it, can you try with:
pip install clearml==1.2.0rc2
i have it deployed successfully with istio.
Nice!
the only thing we had to do to get it to work was to modify the nginx.conf in the webserver pod to allow http 1.1
I was under the impression we fixed that, let me check
Hi JitteryCoyote63 , is there a callback for that?
Hi @<1523701304709353472:profile|OddShrimp85>
the venv setup is totally based on my requirements.txt instead of adding on to what the image has before. Why?
Are you using the agent in docker mode ? If that is the case, it creates a venv inside the docker, inheriting from the preinstalled docker system packages.
And it is not working ? what's the Working Dir you have under the Execution Tab ?
okay so it is downloaded to your machine and unzipped, is that part correct?
Yes, actually the first step would be a toggle button for regexp in the search, the second will be even more advanced search.
May I suggest you post it on the UI suggestion issue https://github.com/allegroai/trains/issues/81 ?
Thanks for pinging OutrageousGiraffe8
I think I was able to reproduce.
model is saved to clearml as an output model when
b
is not a dictionary.
How did you make the example work with the automagic ?
Woot woot!
awesome, this RC is stable so feel free to use it, the official release is probably due out next week :)
Hi CourageousDove78
Not the cleanest, but you can basically pass everything here:
https://allegro.ai/clearml/docs/rst/references/clearml_api_ref/index.html#post--tasks.get_all
Reasoning is that it is passed almost as is to the server for the actual query.
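From code it would look roughly like this (just a sketch; the project name and the filter keys below are examples of fields tasks.get_all accepts):

from clearml import Task

# task_filter is passed almost as-is to the tasks.get_all endpoint
tasks = Task.get_tasks(
    project_name="examples",
    task_filter={
        "status": ["completed"],
        "order_by": ["-last_update"],
    },
)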
if I encounter the need for that, I will adapt and open a PR
Great!
GiganticTurtle0
I think that what you are looking for is:
param_dict = {'key': 1234}
task.connect(param_dict, name='general')
Notice that when this code runs manually (i.e. not by the agent), the dict is stored in the "general" parameter section of the Task.
But when the code is executed by the Agent, the opposite happens: the parameters from the "general" section of the Task are put back into the param_dict, and the casting is done based on the type of the original values.
Generall...
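Putting it together, roughly (project/task names are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="connect demo")
param_dict = {'key': 1234}
task.connect(param_dict, name='general')
# manual run: the dict is stored under the "general" section of the Task
# agent run: the "general" section values are put back into param_dict,
#            cast based on the original types (int here)
print(param_dict['key'])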
I managed to do it by using logger.report_scalar, thanks!
Sure, but for future reference where (in ignite callbacks) did you add the report_scalar call ?
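(for anyone else reading, a typical place would be something like this, just a sketch assuming a PyTorch Ignite trainer; the project/task names and training step are placeholders):

from clearml import Task, Logger
from ignite.engine import Engine, Events

task = Task.init(project_name="examples", task_name="ignite demo")

def train_step(engine, batch):
    return 0.0  # placeholder training step returning a fake loss

trainer = Engine(train_step)

@trainer.on(Events.EPOCH_COMPLETED)
def report_training_loss(engine):
    Logger.current_logger().report_scalar(
        title="loss", series="train",
        value=engine.state.output, iteration=engine.state.epoch)

trainer.run([0], max_epochs=1)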
Hmm, not a bad idea 🙂
Could you please open a Git Issue, so it will not get forgotten ?
(btw: I'm not sure how trivial it is to implement, nonetheless it is obviously possible 😉)
If you are using the latest RC:
pip install clearml==0.17.5rc5
You can pass True and it will use the "files_server" as configured in your clearml.conf
I used the http link as a filler to point to the files_server.
Make sense ?
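Assuming the True here refers to the output_uri argument of Task.init (my reading of the thread), that would be roughly:

from clearml import Task

# output_uri=True -> upload models/artifacts to the files_server from clearml.conf
task = Task.init(
    project_name="examples",   # placeholder
    task_name="upload demo",   # placeholder
    output_uri=True,
)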
Hi @<1651395720067944448:profile|GiddyHedgehong81>
However, for yolov8 (object detection with around 20k jpgs and .txt files) I need the data.yaml file:
Just add the entire folder with your files to a dataset, then get it in your code
Add files (you can do that from CLI for example): None
clearml-data add --files my_folder_with_files
Then from code: [Non...
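The code part would be along these lines (the dataset project/name are placeholders for whatever you registered):

from clearml import Dataset

# fetch the dataset created with `clearml-data add` and download a local copy
dataset = Dataset.get(dataset_project="examples", dataset_name="yolov8_data")
local_folder = dataset.get_local_copy()
# local_folder now contains the jpgs, the .txt label files and data.yaml
print(local_folder)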
but then the error occurs, after the training and the validation were successfully completed
It seems it is failing on the last eval ? Could it be the test set is missing? Is it the same dataset ? Can you verify the file is there? (notice I see a mix of / and \ in the file name, which is odd; Windows uses \ and linux/mac use /, you should never have a mix)
Seems like credentials error
Do you have everything setup correctly in your ~/clearml.conf ?
I think so (you can also comment out the Task.init() just to verify this is not a clearml issue)
Ohh StraightCoral86 did you check clearml-task ? This is exactly what it does
(this is the CLI, from code you basically call Task.create & Task.enqueue)
Will this solve it ?
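From code the equivalent is roughly (repo/script/queue values below are placeholders):

from clearml import Task

task = Task.create(
    project_name="examples",
    task_name="remote run",
    repo="https://github.com/me/my_repo.git",
    script="train.py",
)
Task.enqueue(task, queue_name="default")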