So packages have to be installed, and not just mentioned in requirements / imported?
Thanks for the fast responses as usual AgitatedDove14 🙂
Hey SuccessfulKoala55 - like I mentioned, I have a spaCy NER model that I need to serve for inference.
In params:
parameter_override={'General/dataset_url': ...}
What's the 'General' for?
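To be concrete, this is roughly the shape of what I'm doing (the project/task names and URL below are just placeholders); my understanding is that 'General' is simply the default hyperparameter section that connected parameters land in:
```
from clearml import PipelineController

pipe = PipelineController(name="ner-pipeline", project="ner", version="0.0.1")

# "General/" is the hyperparameter section name; parameters connected with
# task.connect(...) without an explicit section end up under "General".
pipe.add_step(
    name="prepare_data",
    base_task_project="ner",        # placeholder project
    base_task_name="prepare_data",  # placeholder base task
    parameter_override={
        "General/dataset_url": "s3://my-bucket/data.csv",  # placeholder URL
    },
)
```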
yeah, I meant this, within clearml.conf:
logging {
}
sdk {
}
AgitatedDove14 - on a similar note, using this is it possible to add to requirements of task with task_overrides?
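Something like this is what I had in mind; I'm guessing at the "script.requirements.pip" field path for task_overrides, so treat that as an assumption (packages and versions are placeholders too):
```
from clearml import PipelineController

pipe = PipelineController(name="ner-pipeline", project="ner", version="0.0.1")

# Guess: override the cloned step task's stored pip requirements using the
# dot notation that task_overrides accepts. "script.requirements.pip" is the
# field path I'd expect from the task structure, but I haven't verified it.
pipe.add_step(
    name="train",
    base_task_project="ner",   # placeholder
    base_task_name="train",    # placeholder
    task_overrides={
        "script.requirements.pip": "spacy==3.0.6\nboto3==1.17.70",  # placeholders
    },
)
```
Otherwise I guess the step's own script could call Task.add_requirements() before Task.init().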
Does a pipeline step behave differently?
AgitatedDove14 - are there cases when it tries to skip steps?
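(Asking because I noticed the per-step caching flag in the add_step reference; if that's what is kicking in, I assume it is controlled roughly like this. Everything below is a placeholder sketch, and I'm not sure the SDK version I'm on even has the flag:)
```
from clearml import PipelineController

pipe = PipelineController(name="ner-pipeline", project="ner", version="0.0.1")

# With cache_executed_step=True, a step can be skipped and its previous
# outputs reused when an identical earlier run (same code and parameters)
# already exists.
pipe.add_step(
    name="featurize",
    base_task_project="ner",     # placeholder
    base_task_name="featurize",  # placeholder
    cache_executed_step=True,
)
```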
In this case I have data and then set of pickles created from the data
Something like:
with Task() as t: #train
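i.e. roughly equivalent to what I do today with an explicit init/close (just a sketch, project and task names are placeholders):
```
from clearml import Task

# Current explicit form; the "with Task() as t" above is the context-manager
# style I'm asking about.
task = Task.init(project_name="ner", task_name="train")  # placeholder names
try:
    # ... train on the data and write out the pickles ...
    pass
finally:
    task.close()
```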
AgitatedDove14 - it does have boto, but the clearml-serving installation and code refer to an older commit hash, and hence the task was not using them - https://github.com/allegroai/clearml-serving/blob/main/clearml_serving/serving_service.py#L217
Thank you! Does this go as a root logging {} element in the main conf? Outside the sdk section, right?
Would this be a good use case to have?
# Python 3.6.13 | packaged by conda-forge | (default, Feb 19 2021, 05:36:01) [GCC 9.3.0]
argparse == 1.4.0
boto3 == 1.17.70
minerva == 0.1.0
torch == 1.7.1
torchvision == 0.8.2
Ok, just my ignorance then? 🙂
AgitatedDove14 - just saw about start_remotely - https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#start_remotely
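If I'm reading the reference right, usage would be roughly this (the pipeline name/project are placeholders, and "services" is just the usual queue name):
```
from clearml import PipelineController

pipe = PipelineController(name="ner-pipeline", project="ner", version="0.0.1")
# ... pipe.add_step(...) calls would go here ...

# Enqueue the controller itself so the services agent picks it up and drives
# the pipeline to completion remotely.
pipe.start_remotely(queue="services", exit_process=True)
```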
This means the services agent will take care of running it to completion, right?
I generally like the kedro project and pipeline setup that I have seen so far, but I haven't started using it in anger yet. I've been looking at clearml as well, so I wanted to check how well the two work together.
But you have to do config.pbtxt stuff right?
Having a pipeline controller and running it actually seems to work, as long as I have them in separate notebooks.
If you don’t mind, can you point me at the code where this happens?