
I have a lot of anonymous tasks running which I would like to close immediately.
So far I can only select the ones that are visible, and to select more I have to click "View more", which gets extremely slow.
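Here's roughly what I'd like to script instead, assuming Task.get_tasks can fetch them; the project name and status filter values are my guesses:

from clearml import Task

# Fetch the tasks I want to close (project name and filter values are assumptions)
tasks = Task.get_tasks(
    project_name='my-project',
    task_filter={'status': ['created', 'in_progress']},
)

for t in tasks:
    # Mark each task as stopped on the server
    t.mark_stopped()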
Is this the correct way to upload an artifact? checkpoint.split('.')[0] is the name I want assigned to it, and the second argument is the path to the file.
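For context, this is the call, with checkpoint being a filename (the filename here is just an example):

from clearml import Task

task = Task.current_task()
checkpoint = 'model.ckpt'  # example filename

# First argument: artifact name ('model'); second: path to the file on disk
task.upload_artifact(name=checkpoint.split('.')[0], artifact_object=checkpoint)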
Well, yeah, you can say that. In add_function_step I pass in a function which returns something, and since I've listed the name of the returned parameter in add_function_step, I can use it downstream. But I can't figure out how to do something similar with a task in add_step.
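To illustrate, a rough sketch of the two cases (the step names, base task, and artifact-reference syntax are my assumptions):

from clearml import PipelineController

pipe = PipelineController(name='my-pipeline', project='examples', version='1.0')

def produce():
    return 42

# Function step: naming the return value in function_return lets later steps use it
pipe.add_function_step(
    name='step_one',
    function=produce,
    function_return=['result'],
)

# Task step: the only hook I see is parameter_override, e.g. pointing a
# parameter at step_one's stored artifact
pipe.add_step(
    name='step_two',
    base_task_project='examples',
    base_task_name='my-task',
    parents=['step_one'],
    parameter_override={'General/input_url': '${step_one.artifacts.result.url}'},
)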
Is there any way to make it automatically install whatever packages it finds it requires? Or do I have to pass them explicitly in packages?
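What I'm doing now, passing them explicitly (the step function and package pins are placeholders):

from clearml import PipelineController

def train_model():
    pass  # placeholder step body

pipe = PipelineController(name='my-pipeline', project='examples', version='1.0')

# Explicitly listing requirements via packages=
pipe.add_function_step(
    name='train',
    function=train_model,
    packages=['torch==2.0.1', 'pandas'],
)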
For now, installing venv fixes the problem.
OK, since it's my first time working with pipelines, I wanted to ask: does the pipeline controller run endlessly, or does it run from start to end, with me telling it when to start based on a trigger?
The thing is, I don't think the function itself requires venv to run normally, but in this case it says it can't find venv.
Alright, so is there no way to kill it using the worker ID or worker name?
Thanks for the help. I'll try to continue working on the VM for now.
On both the main Ubuntu machine and the VM, I simply installed it in a conda environment using pip.
import os
from clearml import Dataset

def watch_folder(folder, batch_size):
    # Count files across every class subfolder
    count = 0
    classes = os.listdir(folder)
    files = []
    dirs = []
    for cls in classes:
        class_dir = os.path.join(folder, cls)
        fls = os.listdir(class_dir)
        count += len(fls)
        files.append(fls)
        dirs.append(class_dir)
    # Once enough files have accumulated, snapshot the folder as a new dataset
    if count >= batch_size:
        dataset = Dataset.create(dataset_project='data-repo')
        dataset.add_files(folder)
        dataset.upload()
        dataset.finalize()
There are other parameters for add_task as well; I'm just curious how I pass the folder and batch size in the schedule_fn=watch_folder part.
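One way that should work is plain Python argument binding with functools.partial, so the scheduler only ever calls a zero-argument function. The folder and batch size values are examples, I've kept the schedule_fn keyword as in my snippet, and I've left out add_task's other scheduling parameters:

from functools import partial
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()

# watch_folder as defined above; bind its arguments up front
bound_fn = partial(watch_folder, folder='/data/images', batch_size=500)

# The scheduler then only needs a no-argument callable
scheduler.add_task(schedule_fn=bound_fn)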
Here they are. I've created and published the dataset. Then when I try to get a local copy, the code works, but I'm not sure how to proceed to actually use that data.
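The code in question, more or less (the project and dataset names are placeholders):

from clearml import Dataset

# Fetch the published dataset and download a cached local copy
dataset = Dataset.get(dataset_project='data-repo', dataset_name='cassava')
local_path = dataset.get_local_copy()

# At this point it's just a folder on disk, so e.g. a dataloader can read from it
print(local_path)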
I hope what I said was clear. Basically, in practice they both seem mutable; the only difference is that in one the download directory is optional, while in the other it's always downloaded to the cache folder.
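For reference, the two calls I'm comparing (the target_folder value is just an example):

from clearml import Dataset

dataset = Dataset.get(dataset_project='data-repo', dataset_name='cassava')

# Always downloads into the local cache folder
cached_path = dataset.get_local_copy()

# Here the destination directory is mine to choose
mutable_path = dataset.get_mutable_local_copy(target_folder='/data/working-copy')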
This shows my situation: the code on the left and the tasks called 'Cassava Training' on the right. They keep getting enqueued even though I only fired the trigger once, by which I mean I only published a dataset once.
So it won't work without clearml-agent? Sorry for the barrage of questions. I'm just very confused right now.
So in my head, every time I publish a dataset, it should get triggered and run that task.
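My trigger setup looks roughly like this (the argument names are from memory, so treat them as assumptions; the task ID, queue, and project are placeholders):

from clearml.automation import TriggerScheduler

trigger = TriggerScheduler()

# Enqueue the training task whenever a dataset in this project is published
trigger.add_dataset_trigger(
    schedule_task_id='<Cassava-Training-task-id>',
    schedule_queue='default',
    trigger_project='data-repo',
    trigger_on_publish=True,
)

trigger.start()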
Thank you for the help with that.
Thank you, I'll take a look
Were you able to reproduce it, CostlyOstrich36?
It works this way. Thank you.
Yeah exact same usage.
So I had an issue where it didn't add the tags for some reason. There was no error; there were just no tags on the model.
I'm using clearml installed via pip in a conda env. Do I find this file inside the environment directory?
Adding tags this way to a Dataset object works fine; the issue only occurred when doing it to a model.
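For completeness, the model-side pattern I was using, as a sketch (the project, task, model names, and tag values are examples):

from clearml import Task, OutputModel

task = Task.init(project_name='examples', task_name='tagging-test')

# Tags passed at construction; these are the ones that showed up empty for me
model = OutputModel(task=task, name='my-model', tags=['baseline', 'resnet'])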
Let me give it a try.