from clearml import Dataset

dataset = Dataset.create(data_name, project_name)
print('Dataset Created, Adding Files...')
dataset.add_files(data_dir)
print('Files added successfully, Uploading Files...')
dataset.upload(output_url=upload_dir, show_progress=True)
I download the dataset and model, and load them before training again.
Thus I wanted to pass the model id from the prior step to the next one.
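For passing a value like a model id from one pipeline step into the next, ClearML pipelines support `${<step_name>.parameters.<param>}` interpolation inside `parameter_override`. Below is a minimal sketch of building that mapping; the step name `train_step` and the key `General/model_id` are hypothetical placeholders, and the actual `PipelineController` calls (which need a ClearML server) are left as comments.

```python
# Sketch: forwarding a parameter produced by one pipeline step into the
# next one via ClearML's "${step.parameters....}" interpolation syntax.
# Step name ("train_step") and parameter key ("General/model_id") are
# hypothetical; adjust them to your pipeline.

def model_id_override(producer_step: str, param: str = "General/model_id") -> dict:
    """Build the parameter_override mapping that forwards a parameter
    from a finished step into a later step's hyperparameters."""
    return {param: "${%s.parameters.%s}" % (producer_step, param)}

# With the real library, the dict would be used roughly like this
# (requires a ClearML server, so it stays commented out):
#
# from clearml import PipelineController
# pipe = PipelineController(name="demo", project="demo", version="1.0")
# pipe.add_step(name="train_step", base_task_project="demo", base_task_name="train")
# pipe.add_step(
#     name="finetune_step",
#     base_task_project="demo",
#     base_task_name="finetune",
#     parents=["train_step"],
#     parameter_override=model_id_override("train_step"),
# )

print(model_id_override("train_step"))
```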
CostlyOstrich36 This didn't work; the value is -1, yet the pipeline didn't stop.
I'm assuming the Triton serving engine is running on the serving queue in my case. Is the serving example also running on the serving queue, or is it running on the services queue? And lastly, I don't have a ClearML agent listening to the services queue; does ClearML handle this on its own?
I was looking to see if I can just get away with using get_local_copy instead of the mutable one but I guess that is unavoidable.
Basically if I pass an arg with a default value of False, which is a bool, it'll run fine originally, since it just accepted the default value.
You could be right. I only had a couple of packages with this issue, so I just removed the version requirement for now. Another possibility is that I'm on Ubuntu and some of the packages might've been built for Windows, which would explain why those versions don't exist.
It works this way. Thank you.
this is the console output
Well yeah, you can say that. In add_function_step, I pass in a function which returns something, and since I've written the name of the returned parameter in add_function_step, I can use it. But I can't seem to figure out a way to do something similar with a task in add_step.
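For task-based steps added with `add_step`, ClearML exposes the executed step's task id as `${<step_name>.id}` and its artifacts as `${<step_name>.artifacts.<name>.url}`, which a later step can receive through `parameter_override` and use to load the output itself. A minimal sketch of building those references; the step name `train_task_step` and artifact name `model` are hypothetical:

```python
# Sketch: referencing the output of a task-based pipeline step (add_step)
# from a later step. "${<step>.id}" interpolates to the executed step's
# task id; "${<step>.artifacts.<name>.url}" to an artifact's URL.
# Both names used below are placeholders.

def task_step_refs(step_name: str, artifact: str) -> dict:
    """Build parameter_override entries pointing at a previous
    task-based step's task id and one of its artifacts."""
    return {
        "General/producer_task_id": "${%s.id}" % step_name,
        "General/model_url": "${%s.artifacts.%s.url}" % (step_name, artifact),
    }

print(task_step_refs("train_task_step", "model"))
```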
{"meta":{"id":"c3edee177ae348e5a92b65604b1c7f58","trx":"c3edee177ae348e5a92b65604b1c7f58","endpoint":{"name":"","requested_version":1.0,"actual_version":null},"result_code":400,"result_subcode":0,"result_msg":"Invalid request path /","error_stack":null,"error_data":{}},"data":{}}
CostlyOstrich36
AgitatedDove14 Your second option is somewhat like how shortcuts work, right? Storing pointers to the actual data?
before pipe.add_step(train_model)?
Big thank you though.
In the case of an API call, given that I have the id of the task I want to stop, I would make a POST request to [CLEARML_SERVER_URL]:8080/tasks.stop with the request body set up like the one mentioned in the API?
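A small sketch of what that `tasks.stop` call could look like. Two caveats worth verifying against your deployment: the ClearML API server normally listens on port 8008 (8080 serves the web UI), and the request needs authentication headers. The helper below only builds the URL and JSON body; actually sending it (commented out) requires a live server and credentials.

```python
import json

# Sketch of preparing a tasks.stop call to the ClearML API server.
# Note: 8008 is the usual API-server port (8080 is the web UI) -- an
# assumption to check against your own setup.

def build_stop_request(server_url: str, task_id: str):
    """Return the URL and JSON body for a tasks.stop call."""
    url = f"{server_url}:8008/tasks.stop"
    body = json.dumps({"task": task_id})
    return url, body

# Sending it would look roughly like this (needs auth headers):
# import requests
# requests.post(url, data=body, headers={"Content-Type": "application/json", ...})

url, body = build_stop_request("http://localhost", "abc123")
print(url)
```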
Also, the steps say that I should run the serving process on the default queue, but I've run it on a queue I created called "serving" and have an agent listening to it.
I'm kind of new to developing end-to-end applications, so I'm also learning how the predefined pipelines work. I'll take a look at the ClearML custom pipelines.
This works, thanks. Do you have any link where I can also see the parameters of the Dataset class, or was it just in the source on GitHub?
In another answer, I was shown that I can access it like this. How can I go about accessing the value of merged_dataset_id, which was returned by merge_n_datasets and stored as an artifact?
I have never done something like this before, and I'm unsure about the whole process, from successfully serving the model to sending it requests for inference. Is there any tutorial or example for it?
Still unsure about the difference between finalize and publish, since upload should already push the data to the server.