Can you export them somehow?
I think you'd have to re-run them to get them logged
Can you try running it via the agent without docker?
Hi @<1739455989154844672:profile|SmarmyHamster62> , can you add some logs from the triton container while you try calling your endpoint?
Also, try upgrading the serving to a newer version as well
What do you mean by public to private mongo? @<1734020208089108480:profile|WickedHare16>
Hi @<1719524663014461440:profile|CornyOwl46> , that sounds like a good plan. Take into account that all of the metrics/console logs are stored in Elasticsearch, so you'd have to replicate that as well
Hi @<1730033904972206080:profile|FantasticSeaurchin8> , can you add a code snippet that reproduces this + a log of the run?
Hi @<1566596960691949568:profile|UpsetWalrus59> , can you add a standalone code snippet that reproduces this behavior?
Then you can define the git credentials that can clone these repositories
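For reference, a minimal sketch of the relevant clearml.conf section (the values are placeholders for your own user/token):
```
agent {
    # credentials the agent uses to clone private repositories
    git_user: "my-git-user"
    git_pass: "my-git-personal-access-token"
}
```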
Hi @<1618418423996354560:profile|JealousMole49> , I think you would need to pull all the data-related information via the API and then register it again through the API.
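A rough sketch of what that could look like with the SDK (the dataset id is a placeholder, and I'm assuming the data fits locally):
```python
from clearml import Dataset

# pull the existing dataset and download its contents locally
src = Dataset.get(dataset_id="<source-dataset-id>")
local_copy = src.get_local_copy()

# register the same files again as a new dataset
dst = Dataset.create(dataset_name=src.name, dataset_project=src.project)
dst.add_files(path=local_copy)
dst.upload()
dst.finalize()
```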
Hi @<1695969549783928832:profile|ObedientTurkey46> , do you have a code snippet that reproduces this behaviour?
Hi @<1749965229388730368:profile|UnevenDeer21> , I think this is what you're looking for
And if you clone the same experiment and run it on the same machine, will it download all the packages again?
Hi @<1673501387578675200:profile|AdventurousLizard97> , can you please provide the full log of such a run?
Hi @<1523704667563888640:profile|CooperativeOtter46> , are the agents inside the pods running in docker mode?
Can you add a full log of an experiment?
The setup shell script works in docker mode
Hi @<1523701260895653888:profile|QuaintJellyfish58> , in the code example you can simply set recurring=False and I think that should do it.
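Something along these lines, assuming you're using the TaskScheduler (the task id and queue name are placeholders):
```python
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
scheduler.add_task(
    schedule_task_id="<task-id>",  # the task to launch
    queue="default",               # queue to enqueue it on
    hour=6, minute=0,              # when to run it
    recurring=False,               # run once instead of repeating
)
scheduler.start()
```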
Hi @<1614069770586427392:profile|FlutteringFrog26> , if I'm not mistaken, ClearML doesn't support running from different repos. You can only clone one code repository per task. Is there a specific reason these repos are separate?
If you mean to fetch the notebook via code you can see this example here:
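A minimal sketch, assuming the notebook was stored on the task as an artifact (the artifact name here is an assumption; check the task's Artifacts tab for the actual name):
```python
from clearml import Task

task = Task.get_task(task_id="<task-id>")
# "notebook" is a hypothetical artifact name - adjust to what your task actually stores
notebook_path = task.artifacts["notebook"].get_local_copy()
print(notebook_path)
```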
What do you mean exactly by running it as a notebook? Do you mean you want an interactive session to work on a Jupyter notebook?
Hi @<1523701504827985920:profile|SubstantialElk6> , I think as long as the ports are open and the pods can communicate with each other, it should work
I think the tasks.get_all API call should have you covered to extract all the information you would need.
The request body should look something like this:
```
{
  "id": [],
  "scroll_id": "b77a32d585604b098f685b00f30ba2c2",
  "refresh_scroll": true,
  "size": 15,
  "order_by": ["-last_update"],
  "type": ["__$not", "annotation_manual", "__$not", "annotation", "__$not", "dataset_i...
```
Hi @<1610083503607648256:profile|DiminutiveToad80> , I'd suggest using the Datasets feature, but you can of course also upload it as artifacts.
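For the artifact route, a minimal sketch (the project/task names and file path are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="upload artifact")
# upload a local file; it will appear under the task's Artifacts tab
task.upload_artifact(name="my_data", artifact_object="/path/to/file.csv")
```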
Where are you trying to upload it? Can you provide the full log? Also, a code snippet would help.
Ok, that's good to know. So with the autoscaler, can you also define what types of machines you need, for example GPU/No GPU, storage size, memory, etc?
Yes! And you can even run with preemptible instances
I think you would need to contact the sales department for this