That's weird, it doesn't work on my main Ubuntu installation but does work on an Ubuntu VM I created on Windows.
It's a simple DAG pipeline.
I have a step at which I want to run a task that finds the model I need.
That makes sense. But doesn't that also hold true for dataset.get_mutable_local_copy()?
So it won't work without clearml-agent? Sorry for the barrage of questions. I'm just very confused right now.
It's basically data for binary image classification, simple.
I recall being able to pass a script to the agent using the command line along with a requirements file.
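If I'm not misremembering, it was something along these lines (project, script, and queue names here are just placeholders):

clearml-task --project MyProject --name my_experiment --script train.py --requirements requirements.txt --queue default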
I get the following error.
CostlyOstrich36 This didn't work; the value is -1, but the pipeline didn't stop.
Sorry for the late response. Agreed, that can work, although I would prefer a way to access the data by the last M batches added rather than by a fixed range, since the two aren't interchangeable. Also, one simple approach is to create an empty Dataset at the start and then make it the parent of every dataset you add.
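Something like this is what I had in mind (names and paths are placeholders, and I'm assuming an empty dataset can be finalized; otherwise add a placeholder file to it first):

from clearml import Dataset

# One empty "root" dataset created once at the start
root = Dataset.create(dataset_name='root-dataset', dataset_project='MyProject')
root.finalize()

# Later, each new batch of data becomes a child version of that root
batch = Dataset.create(
    dataset_name='batch-001',           # placeholder name
    dataset_project='MyProject',
    parent_datasets=[root.id],          # make the empty dataset the parent
)
batch.add_files('/path/to/new/batch')   # placeholder path
batch.upload()
batch.finalize()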
So the API is something new for me; I've already seen the SDK. Am I misremembering being able to send a Python script and requirements to run on an agent directly from the CLI? Was there no such way?
After the step which gets the merged dataset, should I use pipe.stop() if it returned -1?
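Roughly what I mean is this (step, artifact, and project names are placeholders; I'm assuming the step's return value ends up as an artifact on the step's task, and that node.job.task gives the executed Task):

from clearml import PipelineController

def get_merged_dataset():
    # placeholder step: would look up / merge datasets and return an ID, or -1 on failure
    return -1

def stop_on_failure(pipeline, node):
    step_task = node.job.task                       # Task executed by this step (sketch, may vary by version)
    result = step_task.artifacts['dataset_id'].get()
    if result == -1:
        pipeline.stop(mark_failed=True)             # abort the whole pipeline

pipe = PipelineController(name='my-pipeline', project='MyProject', version='1.0')
pipe.add_function_step(
    name='get_merged_dataset',
    function=get_merged_dataset,
    function_return=['dataset_id'],
    post_execute_callback=stop_on_failure,
)
pipe.start_locally(run_pipeline_steps_locally=True)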
As I wrap my head around that, in terms of the example given in the repo, can you tell me what the serving example corresponds to in the explanation above, and what the Triton serving engine is in that context?
Also, could you explain the difference between trigger.start() and trigger.start_remotely()?
Basically saving a model on the client machine and publishing it, then trying to download it from the server.
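Roughly what I'm trying to do (file names and IDs are placeholders):

from clearml import Task, OutputModel, InputModel

# On the client machine: register the weights file and publish the model
task = Task.init(project_name='MyProject', task_name='train')
output_model = OutputModel(task=task, framework='PyTorch')
output_model.update_weights(weights_filename='model.pt')  # placeholder weights file
output_model.publish()

# Elsewhere: download the published model from the server by its ID
model = InputModel(model_id='<model-id>')   # placeholder ID
local_path = model.get_local_copy()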
from clearml import Dataset

dataset = Dataset.create(dataset_name=data_name, dataset_project=project_name)
print('Dataset Created, Adding Files...')
dataset.add_files(data_dir)
print('Files added successfully, Uploading Files...')
dataset.upload(output_url=upload_dir, show_progress=True)
Should I not run the scheduler remotely if I'm monitoring a local folder?
On both the main Ubuntu machine and the VM, I simply installed it in a conda environment using pip.
Were you able to reproduce it, CostlyOstrich36?
Apparently it keeps calling this register_dataset.py script.
Okay, so they run once I started a ClearML agent listening to that queue.
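For reference, this is how I started it (the queue name is a placeholder):

clearml-agent daemon --queue default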
However, since a new task had started in the project, it would again start a new task.
I've basically just added dataset ID and model ID parameters to the args.
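Roughly like this (project and task names are placeholders):

from clearml import Task

task = Task.init(project_name='MyProject', task_name='train')

# parameters exposed in the UI and overridable when the task is cloned/enqueued
args = {
    'dataset_id': '',
    'model_id': '',
}
task.connect(args)

print(args['dataset_id'], args['model_id'])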
For anyone reading this: I think I've got an understanding now. I can add folders to a dataset, so I'll create a single dataset and just keep adding folders to it, then keep records of it in a database.
Basically, I want to be able to serve a model and also send requests to it for inference.
I feel like they need to add this to the documentation 😕
This is the console output.
AgitatedDove14 Once a model is saved and published, it should be downloadable, right? Because I keep trying to upload a model from different projects and different tasks, but it keeps overwriting the previous one, and the download option is grayed out in the UI.
I was getting a different error when I posted this question. Now I'm just getting this connection error.