that's great, thanks!
I don't, Ultralytics just outputs a model to project/weights/best.pt. They don't expose a way to change that value. I'm happy to rename the file manually from the code, but it was likely already uploaded by ClearML automatically
Hi @<1744891825086271488:profile|RoundElephant20>, thanks for the help. If I upload with StorageManager, will the model be registered in the ClearML Artifacts section?
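For reference, a minimal sketch of one way to get the file registered rather than just uploaded (the task name and destination URI are illustrative, not from this thread): attaching the weights as an OutputModel ties the file to the task's Models section, which a plain StorageManager upload does not do on its own.
```python
from clearml import Task, OutputModel

# A sketch with illustrative names: register the Ultralytics weights file
# on the current task so it appears under Models, instead of only
# uploading the raw file with StorageManager.
task = Task.init(project_name="YOLO", task_name="register-model")
output_model = OutputModel(task=task, framework="PyTorch")

# update_weights() uploads the file and registers it on the task.
output_model.update_weights(
    weights_filename="project/weights/best.pt",  # path produced by the YOLO run
    upload_uri="s3://my-bucket/models",          # placeholder destination
)
```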
Oh wow, for some reason I thought I read somewhere in the documentation that the sync was taking care of upload and finalize. Or maybe that was for the CLI? Anyway, that's what I was missing, thank you!
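For reference, a minimal sketch of the distinction being described (dataset names are placeholders): sync only stages the file list, while upload and finalize remain explicit calls.
```python
from clearml import Dataset

# Placeholder names; sync_folder() only reconciles the dataset's file list
# with the local folder -- nothing is uploaded or finalized by it.
ds = Dataset.create(dataset_name="my-dataset", dataset_project="YOLO")
ds.sync_folder(local_path="data/")

ds.upload()    # actually push the files to storage
ds.finalize()  # lock this dataset version
```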
okay cool, I'm currently trying to migrate our stack to run from the git repository and to use ClearML Datasets. I am still having an issue with relative imports in Python: we were previously modifying PYTHONPATH in the container, but now I need to modify it manually on the host. I saw there is some documentation about that here, but I'm not sure I understand it correctly since it do...
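As a generic stopgap (plain Python, not a ClearML-specific mechanism), the repository root can be put on sys.path at the top of the entry script instead of exporting PYTHONPATH on the host:
```python
import sys
from pathlib import Path

# Generic workaround sketch: prepend the repository root to sys.path so
# package-relative imports resolve wherever the script is launched from.
REPO_ROOT = Path(__file__).resolve().parents[1]  # adjust the depth to your layout
sys.path.insert(0, str(REPO_ROOT))
```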
Great, thanks a lot for the help!
That's correct, I'm on the community server for now. What about the SDK and CLI? If they have their own credentials, can they also use clearml-data and clearml.Dataset.get() to access my dataset?
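A minimal sketch of the SDK side (the key/secret values are placeholders): with its own credentials set, the SDK can fetch the dataset the same way the CLI does.
```python
from clearml import Task, Dataset

# Placeholder credentials; these normally come from clearml.conf or
# environment variables rather than being hard-coded.
Task.set_credentials(
    api_host="https://api.clear.ml",
    web_host="https://app.clear.ml",
    files_host="https://files.clear.ml",
    key="MY_ACCESS_KEY",
    secret="MY_SECRET_KEY",
)

# With valid credentials the SDK can fetch a local copy of the dataset.
local_path = Dataset.get(dataset_name="my-dataset").get_local_copy()
```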
It feels a bit off at the moment to have all the pipelines / tasks / datasets that we will use under "Anthony Courchesne's Workspace" (even though I saw I can rename it)
okay, I'll look into it, thanks!
@<1523701070390366208:profile|CostlyOstrich36> Is there a way to migrate datasets and experiments to another workspace?
ah okay, that makes sense, I'll look more into the difference between Pro / Enterprise. Thanks for the info!
Thanks for the reply! I am using the enterprise version; do you have a link to some docs for the autoscaler? On the Orchestration tab I can see AWS and GCP but not Azure. (Also, I was previously able to see ClearML GPUs, but it looks like they're not available anymore?)
also, I see that clearml-serving supports PyTorch; is there any chance of support for TensorRT?
For more info, I am using jsonargparse to expose my params to ClearML, but it looks like it's also picking up the params directly from YOLO
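One hedged way to narrow what gets captured (a sketch, not the confirmed fix from this thread): disable the automatic framework binding and connect only the jsonargparse values explicitly.
```python
from clearml import Task

# Sketch with illustrative values: turn off automatic framework capture so
# only explicitly connected parameters show up on the task.
task = Task.init(
    project_name="YOLO",
    task_name="explicit-params",
    auto_connect_frameworks=False,
)

# Connect just the parameters we actually want logged.
params = {"ds_name": "my-dataset", "epochs": 50}
task.connect(params)
```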
I mean I'm not hosting it myself, it's on app.clear.ml
Hi @<1523701070390366208:profile|CostlyOstrich36>, here's sample code:
```python
from ultralytics import YOLO
from clearml import Task, Dataset
from jsonargparse import CLI

def train_yolo(ds_name: str = None):
    dataset_path = Dataset.get(dataset_name=ds_name).get_local_copy()
    task = Task.current_task()
    if task is None:
        task = Task.init(project_name="YOLO", task_name=ds_name)
    model = YOLO("yolov8n")
    model.train(data=dataset_path)

if __name__ == "__main__":
    CLI(train_yolo)
```
okay, I'll try that. Although I am using parameters from the argparser to set the task name and project. Can I init with dummy values and update those after?
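A minimal sketch of that idea (the project and task names are placeholders): init early with temporary values, then rename once the parsed arguments are available.
```python
from clearml import Task

# Initialize first so auto-logging starts; the names here are temporary.
task = Task.init(project_name="tmp", task_name="tmp")

# ...parse the arguments (e.g. with jsonargparse)...
ds_name = "my-dataset"  # placeholder for the parsed value

# Update the task once the real values are known.
task.set_name(ds_name)
task.move_to_project(new_project_name="YOLO")
```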
@<1537605940121964544:profile|EnthusiasticShrimp49> A follow-up question about metrics: my PyTorch (Lightning) experiments are logging to TensorBoard, and ClearML is automatically picking this up and uploading scalars and debug_images. If I use the set_default_upload_destination that you mentioned, would that still properly use my URI even though I am not calling Logger.current_logger().report_image directly?
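For context, a sketch of the call being discussed (the bucket URI is a placeholder); whether it also covers the auto-captured TensorBoard output is exactly the question above.
```python
from clearml import Task, Logger

task = Task.init(project_name="YOLO", task_name="lightning-run")

# Placeholder URI: set the default destination for uploaded debug samples.
# The open question above is whether this also applies to images ClearML
# picks up automatically from TensorBoard, without explicit report_image().
Logger.current_logger().set_default_upload_destination("s3://my-bucket/debug-samples")
```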
Also, I reset, then deleted, ~80% of the experiments that I had 2 days ago...
okay, and after that can I use something like task.set_name(args.ds_name)?
No! The way I delete those is like so:
Experiment view -> Reset (one or more) experiments -> experiment is now in draft
Archive experiment
Open archive -> Delete
I get no feedback at all from the operation, but I can see the experiments are no longer available on ClearML
I made sure to delete them from the Archived tab
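For what it's worth, a hedged sketch of the same cleanup done from code rather than the UI (the project name and filter are assumptions; Task.delete() permanently removes a task):
```python
from clearml import Task

# Assumed project name and filter; adjust to your setup. This fetches the
# archived tasks in a project and deletes them permanently.
tasks = Task.get_tasks(
    project_name="YOLO",
    task_filter={"system_tags": ["archived"]},
)
for t in tasks:
    t.delete()  # permanently removes the task
```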
Thanks, that is exactly the kind of info I was looking for! If debug images count toward the metrics quota, that would explain how we reached the limit so quickly.
okay! Right now in my workflow, I have upload, finalize, and publish all happening one after the other without extra logic. From my tests it also looks like I can use a finalized but unpublished dataset without any problem. Should I be handling this differently?
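For reference, a sketch of the sequence described above (names are placeholders):
```python
from clearml import Dataset

ds = Dataset.create(dataset_name="my-dataset", dataset_project="YOLO")
ds.add_files("data/")

# The three calls discussed above, back to back with no extra logic:
ds.upload()    # push the files to storage
ds.finalize()  # lock the version; it is usable from this point on
ds.publish()   # mark it published (optional when just consuming the dataset)
```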