I'm saying that because in the task under "INSTALLED PACKAGES" this is what appears
so in my code, I'll use this environment variable to read from disk
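i.e. something along these lines (MY_DATA_DIR is a placeholder; the actual variable is the one from the snippet above):

    import os

    # read the base path from the environment instead of hard-coding it
    # (MY_DATA_DIR is a placeholder name for the variable discussed above)
    data_dir = os.environ.get('MY_DATA_DIR', '/opt/data')

    with open(os.path.join(data_dir, 'my_file.bin'), 'rb') as f:
        payload = f.read()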
Any news on this? This is kind of creepy; it's something so basic that I can't trust my prediction pipeline, because sometimes it fails randomly for no reason
AgitatedDove14
So nope, this doesn't solve my case; I'll explain the full use case from the beginning.
I have a pipeline controller task, which launches 30 tasks. Semantically there are 10 applications, and I run 3 tasks for each (those 3 are sequential, so in the UI it looks like 10 lines of 3 tasks).
In one of those 3 tasks that run for every app, I save a dataframe under the name "my_dataframe".
What I want to achieve is, once all tasks are over, to collect all those "my_dataframe" artifacts from the children tasks and upload a merged artifact to the pipeline's task object.
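Roughly, I'm after something like this sketch (clearml is the renamed trains SDK; the 'parent' filter is my assumption about how to query the children, not something we already have working):

    # Sketch: collect the "my_dataframe" artifact from every child task
    import pandas as pd
    from clearml import Task

    pipeline_task = Task.current_task()

    # all tasks spawned by this pipeline controller
    # ('parent' filter is an assumption about the query API)
    children = Task.get_tasks(task_filter={'parent': pipeline_task.id})

    frames = [
        t.artifacts['my_dataframe'].get()
        for t in children
        if 'my_dataframe' in t.artifacts
    ]

    # merge and attach the result to the pipeline's own task object
    merged = pd.concat(frames, ignore_index=True)
    pipeline_task.upload_artifact(name='my_dataframe_merged', artifact_object=merged)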
yeah but I see it gets enqueued to the default queue
and I don't know what that queue is connected to
If I execute this task using python .....py
will it execute on the machine I executed it on?
so putting the docs aside, what permissions should I give to the IAM role associated with trains' autoscaler?
👍
Searched for "custom plotly" and "log plotly" in the search, didn't think about "report plotly"
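For anyone else searching later, a minimal report_plotly sketch (project/plot names are illustrative; clearml is the renamed trains SDK):

    # Sketch: report a custom plotly figure to the task
    import plotly.graph_objects as go
    from clearml import Task

    task = Task.init(project_name='examples', task_name='plotly demo')

    fig = go.Figure(data=go.Scatter(x=[1, 2, 3], y=[4, 1, 7]))

    # the figure shows up under the task's PLOTS tab
    task.get_logger().report_plotly(
        title='my figure', series='series A', iteration=0, figure=fig
    )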
that is because my own machine has CUDA 10.2 (not the docker image, the machine the agent is on)
Yes, I'll prepare something and send
AgitatedDove14 just so you'd know, this is a severe problem that occurs from time to time and we can't explain why it happens... Just to remind: we are using a pipeline controller task which, at the end of the last execution, gathers artifacts from all the children tasks and uploads a new artifact to the pipeline's task object. Then what happens is that Task.current_task() returns None for the pipeline's task...
I suspect that it has something to do with remote vs. local execution of pipelines, because we switch between the two, so sometimes the pipeline task itself executes on the client, and sometimes on the host (where the agent also runs)
Maybe the case is that after start / start_locally the reference to the pipeline task disappears somehow? O_O
I'll check if this works tomorrow
I'm using ip address show
sudo curl https://raw.githubusercontent.com/allegroai/trains-server/master/docker-compose.yml -o /opt/trains/docker-compose.yml
If you want we can do a live Zoom or something so you can see what happens
AgitatedDove14 sorry for the late reply,
It's right after executing all the steps. So we have the following block, which determines whether we run locally or remotely:
if not arguments.enqueue:
    pipe.start_locally(run_pipeline_steps_locally=True)
else:
    pipe.start(queue=arguments.enqueue)
And right after it we have a method that calls Task.current_task(), which returns None.
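For reference, a self-contained sketch of the whole flow (import paths, the PipelineController constructor, and the argument parsing are illustrative and assume a recent clearml; only the start()/start_locally() calls are verbatim from our code):

    import argparse
    from clearml import PipelineController, Task

    parser = argparse.ArgumentParser()
    parser.add_argument('--enqueue', default=None, help='queue name; run locally if omitted')
    arguments = parser.parse_args()

    pipe = PipelineController(name='my-pipeline', project='my-project', version='1.0')
    # ... pipe.add_step(...) calls omitted ...

    if not arguments.enqueue:
        # run the controller and all steps on this machine
        pipe.start_locally(run_pipeline_steps_locally=True)
    else:
        # enqueue the controller itself for an agent to pick up
        pipe.start(queue=arguments.enqueue)

    # right after the pipeline returns we expect the controller's own task here,
    # but this sometimes comes back as None
    pipeline_task = Task.current_task()
    print(pipeline_task)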