the one mentioned on the page.
Then I accessed it using the IP directly instead of localhost.
Or is there any specific link you can recommend for trying to create my own server?
Another question: in the parents sequence in pipe.add_step, we have to pass in the name of the step, right?
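Just to make sure we're on the same page, this is what I mean (project and task names are placeholders):
```python
from clearml import PipelineController

pipe = PipelineController(name='my_pipeline', project='examples', version='0.1')

pipe.add_step(
    name='split_dataset',
    base_task_project='examples',
    base_task_name='split dataset',
)
pipe.add_step(
    name='train_model',
    parents=['split_dataset'],  # the *names* of the steps this one depends on
    base_task_project='examples',
    base_task_name='train model',
)
```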
Wait, so the pipeline step only runs if the pre_execute_callback returns True? It'll be skipped if the callback doesn't return True?
Tagging AgitatedDove14 SuccessfulKoala55 in case anyone is available right now to help out.
If there aren't N datasets, the function step doesn't squash the datasets and instead just returns -1.
Thus, if I get -1, I want the pipeline execution to end, or the subsequent task to be skipped.
I have checked the args, and the value is indeed -1. Unless there is some other way to run pipeline steps conditionally?
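The step itself looks roughly like this (the project name and threshold are placeholders):
```python
from clearml import Dataset

def merge_datasets(dataset_project='my_project', n_required=5):
    # list the completed datasets in the project
    datasets = Dataset.list_datasets(dataset_project=dataset_project)
    if len(datasets) < n_required:
        # fewer than N datasets: don't squash, just signal with -1
        return -1
    merged = Dataset.squash(
        dataset_name='merged_dataset',
        dataset_ids=[d['id'] for d in datasets],
    )
    return merged.id
```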
Here's the screenshot, TimelyPenguin76.
These are the pipeline steps. I'm basically unable to pass these.
Here's some more of the error:
ValueError: Node train_model, parameter '${split_dataset.split_dataset_id}', input type 'split_dataset_id' is invalid
2021-12-30 16:22:00,130 - clearml.Repository Detection - WARNING - Failed auto-generating package requirements: exception SystemExit() not a BaseException subclass
After the step which gets the merged dataset, should I use pipe.stop() if it returned -1?
I'm not sure myself. I have a pipeline step now that'll return either a ClearML dataset ID or -1. I want to stop the pipeline execution if I get -1 in the output of that step, but I'm not sure how to achieve that.
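What I'm imagining is something like this, assuming the merge step's output can be piped into the next step via parameter_override (all the names here are placeholders):
```python
def skip_if_no_dataset(pipeline, node, param_overrides):
    # called right before the step is launched;
    # returning False skips this step (and anything depending on it)
    dataset_id = param_overrides.get('General/dataset_id')
    return dataset_id not in (-1, '-1')

pipe.add_step(
    name='train_model',
    parents=['merge_datasets'],
    base_task_project='examples',
    base_task_name='train model',
    parameter_override={
        'General/dataset_id': '${merge_datasets.parameters.General/dataset_id}',
    },
    pre_execute_callback=skip_if_no_dataset,
)
```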
Is the only way to get a specific node to use get_running_nodes or get_processed_nodes, and then check every node in the list to see if its name matches the one we're looking for?
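i.e. something along these lines:
```python
def find_node(pipeline, name):
    # scan finished and in-flight nodes for a matching name
    for node in pipeline.get_processed_nodes() + pipeline.get_running_nodes():
        if node.name == name:
            return node
    return None
```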
I don't think I changed anything.
since I've either used add_function_step or add_step
before pipe.add_step(train_model)?
Okay, so I read the docs and the above questions are cleared up now, thank you. I just have one other question: how would I access the artifact of a previous step within the pre_execute_callback? Can you share an example?
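From the docs, I'm guessing it'd be something like this, pulling the parent's Task via the task ID stored on the node (I'm assuming node.executed holds it; the artifact name is a placeholder):
```python
from clearml import Task

def pre_cb(pipeline, node, param_overrides):
    # node.parents holds the names of this step's parent steps;
    # look the parent up among the nodes that already finished
    parent = next(
        (n for n in pipeline.get_processed_nodes() if n.name in node.parents),
        None,
    )
    if parent is None or not parent.executed:
        return False  # parent didn't run, so skip this step
    parent_task = Task.get_task(task_id=parent.executed)
    merged_id = parent_task.artifacts['merged_dataset_id'].get()
    return merged_id != -1
```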
This is the task scheduler, btw, which will run a function every 6 hours.
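For reference, this is roughly how I set it up; I'm assuming hour=6 with recurring=True means "every 6 hours":
```python
from clearml.automation import TaskScheduler

def check_and_merge():
    # whatever should happen on each tick
    ...

scheduler = TaskScheduler()
scheduler.add_task(
    name='merge_every_6_hours',
    schedule_function=check_and_merge,
    hour=6,           # assumed to mean "every 6 hours" when recurring
    recurring=True,
)
scheduler.start()  # blocks and fires the function on schedule
```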
I'm assuming the Triton serving engine is running on the serving queue in my case. Is the serving example also running on the serving queue, or is it running on the services queue? And lastly, I don't have a clearml-agent listening to the services queue; does ClearML do this on its own?
As I wrap my head around that, can you tell me, in terms of the example given in the repo, what the serving example is in the context of the explanation above, and likewise what the Triton serving engine is?
For anyone reading this: apparently there aren't any credentials for my own custom server for now. I just ran it without credentials and it seems to work.
AgitatedDove14 Just wanted to confirm: what kind of file is the string artifact stored in? A txt file or a pkl file?
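For context, this is the round trip I mean (the artifact name is arbitrary):
```python
from clearml import Task

task = Task.init(project_name='examples', task_name='string artifact')
task.upload_artifact(name='dataset_id', artifact_object='some-dataset-id')

# later, from another task or step
stored = Task.get_task(task_id=task.id).artifacts['dataset_id'].get()
print(stored)  # 'some-dataset-id'
```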
This is the original repo which I've slightly modified.
because those spawned processes are from a file, register_dataset.py; however, I'm personally not using any file like that, and I think it's a file from the library.
I just want to be able to pass the output of one step as input to another step.
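Basically the parameter_override pattern from the ClearML pipeline example, where one step references a previous step's artifact:
```python
pipe.add_step(
    name='stage_process',
    parents=['stage_data'],
    base_task_project='examples',
    base_task_name='pipeline step 2 process dataset',
    parameter_override={
        # reference the previous step's artifact by name
        'General/dataset_url': '${stage_data.artifacts.dataset.url}',
    },
)
```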