JitteryCoyote63, do you mean insert temporary access keys, or insert access keys temporarily?
Assuming they're set up the same way as the user/secret keys, I'd guess they would work until they expire 🙂
Hi @<1558986867771183104:profile|ShakyKangaroo32> , can you please open a GitHub issue to follow up on this? I think a fix should be issued shortly afterwards
Hi @<1558986867771183104:profile|ShakyKangaroo32> , what version of ClearML server are you using?
You need to separate the Task object itself from the code that is running. If you're manually 'reviving' a task but then nothing happens and no code runs, the task will eventually get aborted. I'm not sure I entirely understand what you're doing, but I have a feeling it's something 'hacky'.
By default, tasks time out after about 2 hours with no activity. I guess you could keep the task alive as a process on your machine by printing something once every 30 minutes or every hour.
No, it wouldn't, since something would actually be going on and the Python script hasn't finished.
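As a rough sketch, assuming the default non-responsive timeout, something like this would keep a task alive from your own process (the job_finished check below is a made-up placeholder for your own completion logic):

```python
import time

from clearml import Task

task = Task.init(project_name="examples", task_name="keep-alive demo")


def job_finished() -> bool:
    # hypothetical placeholder - replace with your own completion check
    return False


while not job_finished():
    # any console output counts as activity and resets the idle timeout
    print("still working...")
    time.sleep(30 * 60)  # heartbeat every 30 minutes
```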
Hi @<1534496186793201664:profile|PompousBluewhale96> , can you please elaborate on what is going on and what you expected to happen?
A big part of the way Datasets work is turning the data into a parameter rather than part of the code. That way you can easily reproduce experiments 🙂
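For example, a minimal sketch of pulling a dataset by name (the project/dataset names here are made up):

```python
from clearml import Dataset

# get a specific dataset version by project/name (example names)
dataset = Dataset.get(dataset_name="my-dataset", dataset_project="data")

# download a cached local copy; point your code at the returned path
local_path = dataset.get_local_copy()
print(local_path)
```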
Hi @<1541954607595393024:profile|BattyCrocodile47> , I'm happy to hear you're excited!
You can certainly replace 1 & 2 with ClearML and still use BentoML for serving. ClearML is very modular, so you can select which parts you want to integrate into your existing solution. Hopefully in the end you'll want to fully adopt ClearML in all 4 aspects 🙂
Hope it answers the question!
Can you try deleting the cache folder? It should be somewhere around ~/.clearml
In the UI you can add a 'parent' column and filter by parent.
Can you try upgrading to the latest agent version? pip install -U clearml-agent
Were there any changes to your Elastic or your server in the past few days?
What version of ClearML-Serving are you using? Are you running on a self-hosted server?
Hi @<1544853721739956224:profile|QuizzicalFox36> , I think the correct method in that case is to run from docker. The agent handles the docker; the only thing is that you need to run the agent in docker mode using --docker.
You can then pass it via docker_args in add_function_step, or even better, add the entire setup to the shell script that runs after docker init using docker_bash_setup_script, also in add_function_step.
https://clear.ml/docs/latest/do...
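Roughly, a sketch of what that can look like (the image, args, and setup script below are made-up examples):

```python
from clearml import PipelineController


def train_func():
    # placeholder step body - replace with your actual step logic
    return 42


pipe = PipelineController(name="pipeline demo", project="examples", version="1.0")

pipe.add_function_step(
    name="train",
    function=train_func,
    docker="python:3.9-bullseye",                 # docker image for the step
    docker_args="--shm-size=8g",                  # extra args for docker run
    docker_bash_setup_script="apt-get update && apt-get install -y ffmpeg",
)

pipe.start()
```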
Hi @<1570220844972511232:profile|ObnoxiousBluewhale25> , I think the API server can delete things only from the files server currently. However, the SDK certainly has the capability to delete remote files.
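For example, a sketch of removing a task's remote artifacts via the SDK (the task ID is a placeholder):

```python
from clearml import Task

task = Task.get_task(task_id="<task-id>")  # placeholder ID
# delete the task along with its artifacts and models from remote storage
task.delete(delete_artifacts_and_models=True)
```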
Hi @<1675675705284759552:profile|NonsensicalAnt77> , looks like a misconfiguration on your part.
Please see the documentation regarding S3 configuration.
Note that the way you configured this is for AWS S3. If you scroll down a bit there is an example for S3-like solutions such as MinIO; that is the configuration style you need, with the port.
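Something along these lines in clearml.conf (host and credentials are placeholders):

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # S3-like endpoint (e.g. MinIO) - note the explicit port
                    host: "my-minio-host:9000"
                    key: "my-access-key"
                    secret: "my-secret-key"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}
```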
WackyRabbit7, I don't believe there is currently a 'children' section for a task. You could keep track of the children yourself so you can access them later.
One option is add_pipeline_tags(True) - this should mark all the child tasks with a tag of the parent task.
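If you're using PipelineController, for example, it's a constructor argument (the project/name below are made up):

```python
from clearml import PipelineController

# add_pipeline_tags=True marks every step (child task) with the pipeline tag
pipe = PipelineController(
    name="pipeline demo",
    project="examples",
    version="1.0",
    add_pipeline_tags=True,
)
```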
SkinnyPanda43 , I think so yes 🙂
Where did clearml-init create the clearml.conf?
I'm not sure that is possible. What is your specific use case?
Hi DeterminedCrocodile36 ,
To use a custom engine you need to change the process tree.
https://github.com/allegroai/clearml-serving/tree/main/examples/pipeline
Section 3 is what you're interested in
And here is an example of the code you need to change. I think it's fairly straightforward.
https://github.com/allegroai/clearml-serving/blob/main/examples/pipeline/preprocess.py
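In short, you implement a Preprocess class, and with a custom engine you run the inference yourself inside process() - a simplified sketch of the example above:

```python
from typing import Any, Callable, Optional


class Preprocess(object):
    """Entry point loaded by clearml-serving for the endpoint."""

    def __init__(self):
        # called once when the endpoint is loaded; load your model here
        self.model = None

    def preprocess(self, body: dict, state: dict,
                   collect_custom_statistics_fn: Optional[Callable] = None) -> Any:
        # turn the raw request body into model input ("x" is an example key)
        return body.get("x")

    def process(self, data: Any, state: dict,
                collect_custom_statistics_fn: Optional[Callable] = None) -> Any:
        # custom engine: run the inference yourself here
        return data  # placeholder - e.g. self.model(data)

    def postprocess(self, data: Any, state: dict,
                    collect_custom_statistics_fn: Optional[Callable] = None) -> dict:
        # shape the model output into the response payload
        return {"y": data}
```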
Can you try with a blank worker_id/worker_name in your clearml.conf (basically how it was before)?
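i.e. roughly this in clearml.conf:

```
agent {
    worker_id: ""
    worker_name: ""
}
```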
You can force-kill the agent using kill -9 <process_id>, but clearml-agent daemon stop should work.
Also, can you verify that one of the daemons is the clearml-services daemon? This one should be running from inside a docker container on your server machine (I'm guessing you're self-hosting, correct?).
Hi @<1544128920209592320:profile|BewilderedLeopard78> , I don't think there is such an option currently. Maybe open a GitHub feature request to track this 🙂