Adding a custom engine example is on the to-do list, but if you manage to add a PR with an example that would be great 🙂
Can you export them somehow?
This doesn't explain why the env variables didn't work, though
Maybe you defined the env variables outside the container, or there was an issue in how they were set? The env variables do work when properly configured.
My guess would be something related to your environments.
Hi @<1673501397007470592:profile|RelievedDuck3> , there is some discussion of it in this video None
Please follow the instructions.
Hi @<1652845271123496960:profile|AdorableClams1> , you set up fixed users in your docker compose, I would check there
Can you elaborate on how you did that?
DilapidatedDucks58 , can you please verify that the machine running the cleanup is properly configured with working S3 credentials (i.e. it can create and delete files)?
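A quick way to verify that, as a sketch using boto3 (the bucket name is a placeholder; credentials are resolved from your usual AWS config/env):

```python
def check_s3_credentials(bucket: str) -> None:
    import boto3  # requires `pip install boto3`

    s3 = boto3.client("s3")
    # Round-trip a tiny object to prove both create and delete permissions
    s3.put_object(Bucket=bucket, Key="clearml-permission-check", Body=b"ok")
    s3.delete_object(Bucket=bucket, Key="clearml-permission-check")
```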
Anything in Elastic? Can you add logs of the startup of the apiserver?
I think you'd have to re-run them to get them logged
SoreDragonfly16 Hi, what is your use case when saving/loading those files? You can mute both the save and load messages together, but not each one separately.
Also, do you see all these files as input models in UI?
If you mean to fetch the notebook via code you can see this example here:
None
What do you mean exactly by run it as notebook? Do you mean you want an interactive session to work on a jupyter notebook?
Hi @<1537605927430000640:profile|NarrowSquirrel61> , how did you report the audio/image files?
I mean in the execution section of the task - under container section
I think you need to use an absolute path, not a relative one
Hi @<1526734383564722176:profile|BoredBat47> , it should be very easy and I've done it multiple times. For the quickest fix you can use api.files_server
in clearml.conf
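For reference, a minimal sketch of the relevant section in clearml.conf (host and ports here are placeholders — use your own server's addresses):

```
api {
    web_server: http://my-clearml-server:8080
    api_server: http://my-clearml-server:8008
    files_server: http://my-clearml-server:8081
}
```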
like some details about attributes, dataset size, formats.
Can you elaborate on how exactly you'd be saving this data?
Here, when we define output_uri in Task.init, in which format will the model be saved?
It depends on the framework I guess 🙂
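To illustrate: output_uri only controls where the model file is uploaded; the file format is whatever the framework writes (e.g. `.pt` for PyTorch, `.h5` for Keras) — ClearML uploads the file as-is. A minimal sketch, with placeholder project/task names and assuming a reachable ClearML server:

```python
def start_tracked_task():
    from clearml import Task  # requires `pip install clearml`

    # output_uri controls the destination of automatically-saved models:
    #   True             -> upload to the ClearML file server
    #   "s3://bucket/.." -> upload to S3 (credentials taken from clearml.conf)
    task = Task.init(
        project_name="examples",
        task_name="output-uri-demo",
        output_uri=True,
    )
    return task
```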
Then these should be by default killed by the ClearML server after a few hours. How long was it stuck?
Can you add a code snippet that reproduces this for you please?
Hi @<1587615463670550528:profile|DepravedDolphin12> , can you please provide a link to the doc you read?
Hi @<1546665634195050496:profile|SolidGoose91> , when configuring a new autoscaler you can click on '+ Add item' under compute resources and this will allow you to have another resource that is listening to another queue.
You need to set up all the resources to listen to the appropriate queues to enable this allocation of jobs according to resources.
Also in general - I wouldn't suggest having multiple autoscalers/resources listen to the same queue. 1 resource per queue. A good way to mana...
Hi @<1523704157695905792:profile|VivaciousBadger56> , can you elaborate on this error please?
2023-02-14 13:06:44,336 - clearml.Task - WARNING - Failed auto-detecting task repository: [WinError 123] Die Syntax für den Dateinamen, Verzeichnisnamen oder die Datenträgerbezeichnung ist falsch: '[...]\\<input>' (translation: "The filename, directory name, or volume label syntax is incorrect")
Hi @<1635813046947418112:profile|FriendlyHedgehong10> , can you please elaborate on the exact steps you took? When you view the model in the UI - can you see the tags you added during the upload?
Interesting! Do they happen to have the same machine name in UI?
Hi @<1523701504827985920:profile|SubstantialElk6> , I think as long as the ports are open and the pods can communicate between themselves, it should work
Hi @<1562610703553007616:profile|CloudyCat50> , can you provide some code examples?
@<1523701553372860416:profile|DrabOwl94> , I would suggest restarting the elastic container. If that doesn't help, check the ES folder permissions - maybe something changed
Hi @<1719524669695987712:profile|ClearHippopotamus36> , what if you manually add these two packages to the installed packages section in the execution tab of the experiment?
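Alternatively, the same thing can be done from code. A sketch (package names below are placeholders); note that `Task.add_requirements` must be called before `Task.init` for the packages to be recorded:

```python
def pin_requirements():
    from clearml import Task  # requires `pip install clearml`

    # Must be called BEFORE Task.init() so the packages end up in the
    # task's "installed packages" section in the execution tab.
    Task.add_requirements("pandas", "1.5.3")  # pin an exact version
    Task.add_requirements("scikit-learn")     # latest available version
```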
Hi @<1590514572492541952:profile|ColossalPelican54> , I'm not sure what you mean. output_uri=True
will upload the model to the file server - making it more easily accessible. Refining the model would require unrelated code. Can you please expand?