More of Pushing ClearML to Its Data Engineering Limits
I took a stab at writing an automated trigger to handle this. The goal: any time a pipeline succeeds or fails, let AWS know so that the input records can be placed onto a retry queue (or not).
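For context, the AWS piece I plan to add would look roughly like the sketch below. This is only the shape I have in mind, not tested code; the Step Functions task token is a placeholder that would really come from whatever execution enqueued the input records.

import json

import boto3

def notify_aws(task_id: str, succeeded: bool, task_token: str):
    # Placeholder: the real task token would be recovered from the Step
    # Functions execution that submitted the input records.
    sfn = boto3.client("stepfunctions")
    if succeeded:
        sfn.send_task_success(taskToken=task_token,
                              output=json.dumps({"task_id": task_id}))
    else:
        # A failure signal lets the state machine route the records onto the retry queue.
        sfn.send_task_failure(taskToken=task_token,
                              error="PipelineFailed",
                              cause=f"ClearML task {task_id} failed")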
I'm trying to get a trigger to work in general, and then I'll add the more complex AWS logic. But I seem to be missing a step somewhere:
I wrote a file called set_triggers.py:

from pprint import pprint, pformat

from clearml.automation.trigger import TriggerScheduler

TRIGGER_SCHEDULER = TriggerScheduler()

def log_status(task_id: str):
    # Called by the scheduler with the ID of the task that changed status.
    print("REACTING TO EVENT!")
    pprint(task_id)
    # Also append the message to a file at /opt/clearml/trigger.log so the
    # reaction is visible on disk.
    with open("/opt/clearml/trigger.log", "a") as f:
        f.write("REACTING TO EVENT!\n")
        f.write(pformat(task_id) + "\n")

TRIGGER_SCHEDULER.add_task_trigger(
    name="emit_sfn_success_signal",
    # trigger_name="emit_sfn_success_signal",
    trigger_on_status=["created", "in_progress", "stopped", "closed", "failed",
                       "completed", "queued", "published", "publishing", "unknown"],
    schedule_function=log_status,
    schedule_queue="default",
)
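One thing I'm not sure about from the docstrings is whether the scheduler has to be explicitly started and kept running for the trigger to be polled. If I'm reading them right, that would mean adding something like this at the bottom of set_triggers.py (unverified on my side):

# Start the scheduler's polling loop in this process (this call blocks), or
# launch it on an agent queue instead; both methods appear in the docstrings.
TRIGGER_SCHEDULER.start()
# TRIGGER_SCHEDULER.start_remotely(queue="services")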
And another called basic_task.py:
from clearml import Task
TASK = Task.init(project_name="Trigger Project", task_name="Trigger Test")
print("I ran!")
When I run python set_triggers.py; python basic_task.py, both scripts seem to execute, but I see no evidence that the trigger ever fired. Is there any documentation I could read about this process? I was going off of the docstrings in the TriggerScheduler class.
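For what it's worth, here is how I check for evidence after the run; it just reads the log file that log_status is supposed to append to:

from pathlib import Path

# If the trigger fired, log_status should have appended to this file.
log_file = Path("/opt/clearml/trigger.log")
print(log_file.read_text() if log_file.exists() else "trigger log not created yet")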