{"meta":{"id":"c3edee177ae348e5a92b65604b1c7f58","trx":"c3edee177ae348e5a92b65604b1c7f58","endpoint":{"name":"","requested_version":1.0,"actual_version":null},"result_code":400,"result_subcode":0,"result_msg":"Invalid request path /","error_stack":null,"error_data":{}},"data":{}}
For anyone reading this: apparently there aren't any credentials for my own custom server for now. I just ran it without credentials and it seems to work.
I think it downloads from the curl command.
I set the host variable to the IP assigned to my laptop by the network.
Then I accessed it using the IP directly instead of localhost.
If it helps, I can try and record my steps in a video.
This is the console output.
Because those spawned processes come from a file called register_dataset.py; however, I'm personally not using any file like that, and I think it's a file from the library.
In the case of an API call, given that I have the ID of the task I want to stop, I would make a POST request to [CLEARML_SERVER_URL]:8080/tasks.stop with the request body set up like the one mentioned in the API docs?
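Something like this is what I mean (a rough sketch using the requests library; the server URL, port, credentials, and task ID are placeholders, and I'm assuming the documented auth.login-then-bearer-token flow):

import requests

API_SERVER = "http://[CLEARML_SERVER_URL]:8080"  # placeholder, as above
ACCESS_KEY = "<access_key>"  # placeholder credentials
SECRET_KEY = "<secret_key>"
TASK_ID = "<task_id>"        # ID of the task I want to stop

# First get a session token (auth.login accepts HTTP basic auth)
login = requests.post(API_SERVER + "/auth.login", auth=(ACCESS_KEY, SECRET_KEY))
login.raise_for_status()
token = login.json()["data"]["token"]

# Then call tasks.stop with the task ID in the request body
resp = requests.post(
    API_SERVER + "/tasks.stop",
    headers={"Authorization": "Bearer " + token},
    json={"task": TASK_ID},
)
resp.raise_for_status()
print(resp.json())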
Were you able to reproduce it, CostlyOstrich36?
So I had an issue where it didn't add the tags for some reason. There was no error; there were just no tags on the model.
It works this way. Thank you.
That, but also in the proper directory on the file system.
So I got my answer for the first one: I found where the data is stored on the server.
I basically had to set the tag manually in the UI.
Let me give it a try.
CostlyOstrich36
To be more clear, an example use case for me would be: I'm trying to make a pipeline so that every time a new dataset/batch is published using clearml-data, it will:
- Get the data
- Train on it
- Save the model and publish it
I want to start this process with a trigger when a dataset is published to the server. Is there any example I can look at for accomplishing something like this? The sketch below is the kind of thing I'm imagining.
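(A rough sketch using clearml.automation's TriggerScheduler; the project name, queue, task ID, and polling interval are all placeholders, and I'm not sure this is the intended approach:)

from clearml.automation import TriggerScheduler

# Poll the server every few minutes for new events (interval is a guess)
trigger = TriggerScheduler(pooling_frequency_minutes=3)

# When a dataset in the given project is published, clone the given
# training task and enqueue it on the given queue (all placeholders)
trigger.add_dataset_trigger(
    name='retrain-on-new-dataset',
    schedule_task_id='<training_task_id>',
    schedule_queue='default',
    trigger_project='my_datasets_project',
    trigger_on_publish=True,
)

# Blocks and keeps polling the server
trigger.start()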
Thank you, I'll take a look
Okay, so they run once I started a ClearML agent listening to that queue.
They're also enqueued.
But what's happening is that even though I published the dataset only once, every time it polls, the trigger fires and enqueues another task.
I don't think I changed anything.
from sklearn.datasets import load_iris
import tensorflow as tf
import numpy as np
from clearml import Task, Logger
import argparse

def main():
    # Parse the number of training epochs from the command line
    parser = argparse.ArgumentParser()
    parser.add_argument('--epochs', metavar='N', default=64, type=int)
    args = parser.parse_args()
    parsed_args = vars(args)

    # Register this run with the ClearML server
    task = Task.init(project_name="My Workshop Examples", task_name="scikit-learn joblib example")

    # Load the iris dataset
    iris = load_iris()
    data = iris.data
    target = iris.target
    ...
You mean I should set it to this?
I'm using clearml installed via pip in a conda env. Do I find this file inside the environment directory?
Can you guys let me know what the finalize and publish methods do?
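For context, this is roughly how I'm calling them (a minimal sketch; the dataset/project names are placeholders, and the comments are just my current understanding):

from clearml import Dataset

# Create a new dataset version and add files to it (names are placeholders)
ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")
ds.add_files(path="data/")
ds.upload()

# My understanding: finalize() closes this version so no more files can
# be added, and publish() then marks the finalized version as published
ds.finalize()
ds.publish()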
I'll look into those three. Do those files use the step 1, step 2, and step 3 files, though?
Wait, is it possible to do what I'm doing but with just one big Dataset object or something?
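Something like this is what I have in mind, where each new batch becomes a child version of the same dataset instead of a separate one (a sketch; names are placeholders, and I'm assuming parent_datasets chains versions the way I think it does):

from clearml import Dataset

# Grab the latest version of the dataset (names are placeholders)
latest = Dataset.get(dataset_name="my_dataset", dataset_project="my_project")

# Create the next version as a child of the latest one and add the new batch
new_version = Dataset.create(
    dataset_name="my_dataset",
    dataset_project="my_project",
    parent_datasets=[latest.id],
)
new_version.add_files(path="new_batch/")
new_version.upload()
new_version.finalize()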