@<1523701087100473344:profile|SuccessfulKoala55> I disagree. Queues can have multiple workers, and that implies multiple instances of a task can run concurrently.
This is necessary for board farms, or any non-tiny scale of work.
@<1523701070390366208:profile|CostlyOstrich36> Unfortunately I cannot supply anything more, as no further information is provided. Please see the attached screenshot - that is all the information I have.
Thank you. It says in the docs I can use an environment variable to control the ID, but I don't understand how -
Will it work retroactively, or do I need to create the agent with said name?
And if so, how do I create an agent with a predetermined name?
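Something like this is what I'm imagining (just a sketch - I'm assuming CLEARML_WORKER_ID is the environment variable the docs mean, and the worker and queue names are placeholders):
```python
import os
import subprocess

# Sketch: set the worker ID before launching the agent so it registers under a
# predetermined name. CLEARML_WORKER_ID is my assumption of the env var from the
# docs; "board-farm-01" and the queue name "boards" are placeholders.
env = dict(os.environ, CLEARML_WORKER_ID="board-farm-01")
subprocess.run(["clearml-agent", "daemon", "--queue", "boards"], env=env)
```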
I do have 2 initial steps, then one more step. However, I abort after the 2 initial steps are done, and I get the "2 experiments aborted" message.
My third step fails because a resource it accesses is reported as busy, which is weird.
Also, when starting the pipeline from a script it starts 2 concurrent runs (experiments 240 and 241), when it should only start 1.
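For context, the shape of the pipeline is roughly this (a sketch using ClearML's PipelineController; the project and task names are placeholders, not my actual ones):
```python
from clearml import PipelineController

# Sketch of the pipeline shape described above: two independent initial steps,
# then a third step that depends on both. Project/task names are placeholders.
pipe = PipelineController(name="board-pipeline", project="examples", version="0.1")
pipe.add_step(name="step_a", base_task_project="examples", base_task_name="prepare_a")
pipe.add_step(name="step_b", base_task_project="examples", base_task_name="prepare_b")
pipe.add_step(
    name="step_c",
    parents=["step_a", "step_b"],
    base_task_project="examples",
    base_task_name="run_on_board",
)
pipe.start()  # enqueues the controller; start_locally() would run it in this process
```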
I think that may be the case - I get that when I don't run the script through the pipeline but through the IDE's run option.
@<1523701070390366208:profile|CostlyOstrich36> Hi!
Thank you very much for the informative answer.
I have a follow-up question on q.1: Is there a pythonic way to retrieve that info mid-run?
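Something along these lines is what I'm after (a sketch only - I'm assuming Task.current_task() is the right entry point and that the info from q.1 is exposed on the task object):
```python
from clearml import Task

# Sketch: query the running task's own state from inside the run.
# Assumes the info from q.1 is available on the task object itself.
task = Task.current_task()   # handle to the task this code is executing in
task.reload()                # refresh fields from the server
print(task.id, task.get_status())
```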
Yes. Sometimes a task (on the HIL) finishes but fails on the HIL and does not produce output. Is it possible to not fail the task and still mark it as uncacheable?
@<1523701070390366208:profile|CostlyOstrich36> Will it work? Assume I have 3 workers.
Worker 1 takes a task
Worker 2 takes a task
Worker 1 checks last_worker
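i.e. something like this sketch, where worker 1 inspects the other task through the SDK (the task ID is a placeholder, and last_worker being the relevant field is my assumption):
```python
from clearml import Task

# Sketch of the check in the last step above: worker 1 looks up which worker
# is (or was) executing the other task. "other-task-id" is a placeholder.
other = Task.get_task(task_id="other-task-id")
other.reload()
print(other.data.last_worker)  # assumed field on the task record
```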
If possible, I would like the second option and to invalidate caching for that completed task. But I am considering just failing the task.
@<1523701087100473344:profile|SuccessfulKoala55> No, I mean a chip - a piece of hardware that cannot run an agent on its own, so an attached computer - in this case, the server - will have an agent accessing it via SSH. In my case, I want to have a "board farm": multiple boards for running inference on them, and I'd like to have them all connected to the same server.