It works, however it shows the task as enqueued and pending. Note I am using .start() and not .start_remotely() for now.
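For reference, this is roughly how I'm launching it (a minimal sketch from memory; the project and queue names are placeholders):

from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="my-project", version="1.0.0")
# ... add_step / add_function_step calls go here ...

# .start() enqueues the controller task (to the services queue by default), which I
# assume is why it shows up as enqueued/pending until an agent picks it up
pipe.start(queue="services")

# to run the controller locally and watch it execute right away, I believe this works:
# pipe.start_locally(run_pipeline_steps_locally=True)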
I then did what MartinB suggested and got the id of the task from the pipeline DAG, and then it worked.
AnxiousSeal95 I'm trying to access the specific value. I checked the type of task.artifacts and it's a ReadOnlyDict. Given that the return value I'm looking for is called merged_dataset_id, how would I go about doing that?
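Something like this is what I'm attempting, in case that helps (the task ID placeholder is whatever I pull from the DAG):

from clearml import Task

merge_step_task_id = "<task id taken from the pipeline DAG>"  # placeholder
step_task = Task.get_task(task_id=merge_step_task_id)

# task.artifacts is the ReadOnlyDict; my assumption is that indexing it by the
# artifact name and calling get() returns the stored value itself
merged_dataset_id = step_task.artifacts["merged_dataset_id"].get()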
Thank you, this is a big help. I'll give this a go now.
I initially wasn't able to get the value this way.
AnxiousSeal95 I just have a question: can you share an example of accessing an artifact of a previous step in the pre_execute callback?
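To be clear about what I mean, this is roughly what I'm imagining; I'm guessing at the callback signature and at how the previous step's task ID gets into the overrides:

from clearml import Task

def pre_execute(pipeline, node, param_override):
    # assumed: the previous step's task id is wired in via parameter_override,
    # e.g. {"General/merge_task_id": "${merge_step.id}"}, and arrives here resolved
    merge_task_id = param_override.get("General/merge_task_id")
    if merge_task_id:
        merge_task = Task.get_task(task_id=merge_task_id)
        merged_dataset_id = merge_task.artifacts["merged_dataset_id"].get()
        print("previous step produced dataset:", merged_dataset_id)
    # returning True lets the step run as usual
    return True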
From what I recall, resume was set to false both originally and in the cloned task.
I'm curious whether this is buggy behavior or expected?
There's a whole task bar on the left in the server. I only get this page when I use the IP 0.0.0.0.
You can see there's no task bar on the left. Basically, I can't get any credentials for the server or check queues or anything.
I think I get what you're saying, yeah. I don't know how I would give each server a different cookie name. I can see this problem being resolved by clearing cookies or by manually appending /login to the end of the URL.
Also, I just want to say thanks for all the help. This tool is brilliant in how it supports an end-to-end pipeline in this completely new space for MLOps. You guys have been incredibly helpful, and what you've made is incredible.
They want to start integrating MLOps into the ML projects here at our company for reproducibility and continual training. ClearML popped up as a potential option so they want me to design a complete pipeline for one of our projects currently being worked on. They're ...
I'll create a GitHub issue. Overall, I hope you understand.
I did this, but it gets me an InputModel. I went through the InputModel class, but I'm still unsure how to get the actual TensorFlow model.
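To show what I mean, this is what I'm trying; I'm assuming get_local_copy() hands back the saved model files on disk:

import tensorflow as tf
from clearml import InputModel

input_model = InputModel(model_id="<model id>")  # placeholder id

# download the stored model file/folder and get its local path
local_model_path = input_model.get_local_copy()

# then load it as a regular TensorFlow/Keras model from that path
model = tf.keras.models.load_model(local_model_path)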
Not sure myself. I have a pipeline step now that'll return either a ClearML dataset ID or -1. I want to stop the pipeline execution if I get -1 in the output of that step, but I'm not sure how to achieve that.
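The rough idea I was considering looks like this; it relies on my assumption that a pre_execute_callback returning False skips the step (and anything depending on it), and the step/parameter names are made up:

from clearml import PipelineController, Task

pipe = PipelineController(name="continuous-training", project="my-project", version="1.0.0")

def skip_if_no_dataset(pipeline, node, param_override):
    merge_task_id = param_override.get("General/merge_task_id")
    result = Task.get_task(task_id=merge_task_id).artifacts["merged_dataset_id"].get()
    # my assumption: returning False skips this step and the rest of the branch
    return result != -1

pipe.add_step(
    name="train",
    parents=["merge_datasets"],
    base_task_project="my-project",
    base_task_name="train task",
    parameter_override={"General/merge_task_id": "${merge_datasets.id}"},
    pre_execute_callback=skip_if_no_dataset,
)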
I download the dataset and model and load them before training again.
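Roughly like this, simplified from memory; the IDs are placeholders:

from clearml import Dataset, InputModel

# local copy of the dataset registered in ClearML
dataset_path = Dataset.get(dataset_id="<dataset id>").get_local_copy()

# local copy of the previously trained weights
weights_path = InputModel(model_id="<model id>").get_local_copy()

# the training code is then pointed at dataset_path / weights_path and run as usual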
The situation is that I needed a continuous-training pipeline to train a detector, the detector being Ultralytics YOLOv5.
To me, it made sense to have a training task. The whole training code seemed complex to me, so I only modified it slightly so that it gets the dataset and model from ClearML. Nothing more.
I then created a task using clearml-task and pointed it towards the repo I had created. The task runs fine.
I am unsure of the details of the training code...
Is there any way to make it automatically install any packages it finds it requires? Or do I have to explicitly pass them in packages?
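For context, the explicit route I had in mind looks like this (assuming Task.add_requirements is meant to be called before Task.init; the package names are just examples):

from clearml import Task

# declare extra requirements so the agent installs them in the task's environment
Task.add_requirements("opencv-python")
Task.add_requirements("pyyaml", "6.0")

task = Task.init(project_name="my-project", task_name="train detector")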
SuccessfulKoala55 Sorry to ping you like this, but I have to ask: what are the minimum requirements for a ClearML installation, excluding the requirements for the databases or the file server?
The server is on a different machine. I'm experimenting on the same machine though.
Thank you, I'll start reading up on this once I've finished setting up the basic pipeline
It keeps retrying and failing when I use Dataset.get.
I'm getting this error:
clearml_agent: ERROR: Failed cloning repository.
- Make sure you pushed the requested commit:
- Check if remote worker has valid credentials
You mean I should set it to this?