EnthusiasticShrimp49
Moderator
0 Questions, 96 Answers
  Active since 18 February 2023
  Last activity 2 years ago

Reputation: 0
Hello Everyone, I Am Having Issues With The Gcp Autoscaler. This Is In The Output Logs:

Hey @<1529271085315395584:profile|AmusedCat74>, I may be wrong, but I think you can’t attach a GPU to an e2 instance; it should be at least an n1, no?

one year ago
Hi Guys

The issue may be related to the fact that right now we have some edge cases when working with lightning >= 2.0; we should have better support in the upcoming release.

one year ago
Hi All, I Have A Newbie Question About Clear-Ml Data. I Have Four Data Sources That Get Combined To Train A Model. I Have Put Each Of These Datasets Into Clear Ml So That I Can Track Their Versions, And Then Create The Fifth 'Combined' Dataset Using The I

Hello @<1604647689662763008:profile|PerfectSwan93>, I tend to agree with you, option one is the best given your use-case. If you keep the same name and project it will result in a version bump on the combined dataset, but it will not point to the previous combined dataset as a parent.

one year ago
Hi Team,

This is doing fine-tuning. Training a multi-billion parameter model from scratch would be economically unfeasible for most existing enterprises.

one year ago
Hi

Ah, I see now. There are a couple of ways to achieve this.

  • You can enforce that the pipeline steps execute within a predefined docker image that has all these submodules - this is not very flexible, but doesn't require your clearml-agents to have access to your Git repository
  • You can enforce that the pipeline steps execute within a predefined git repository, where you have all the code for these submodules - this is more flexible than option 1, but will require clearml-agents to have acce...
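
For illustration, here's a minimal sketch of both options using PipelineDecorator.component; the image name, repo URL, and submodule import are placeholders:

```python
from clearml import PipelineDecorator

# Option 1: pin the step to a docker image that already contains the submodules
@PipelineDecorator.component(docker="my-registry/train-image:latest")  # hypothetical image
def step_in_image(data):
    from my_submodule import transform  # already baked into the image
    return transform(data)

# Option 2: pin the step to a git repository holding the submodule code
@PipelineDecorator.component(
    repo="https://github.com/acme/pipelines.git",  # hypothetical repo
    repo_branch="main",
)
def step_in_repo(data):
    from my_submodule import transform  # resolved from the cloned repo
    return transform(data)
```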
one year ago
Hi

Hey @<1678212417663799296:profile|JitteryOwl13>, just to make sure I understand: you want to make your imports inside the pipeline step function, and you're asking whether this will work correctly?

If so, then the answer is yes, it will work fine if you move the imports inside the pipeline step function.
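
For example, a quick sketch of what that looks like (names are hypothetical):

```python
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["n_rows"])
def load_step(csv_path):
    # The import runs on whichever agent executes this step,
    # so it resolves in that environment rather than the controller's
    import pandas as pd
    return len(pd.read_csv(csv_path))
```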

one year ago
input_model = c_model.query_models(project_name="A/B", model_name="B", tags=["pipeline", "modelval:tocheck"]) # path_to_last_weights = input_model[0].download_model_weights() path_to_

I can't quite reproduce your issue. From the traceback it seems it has something to do with torch.load. I tried both your code snippet and creating a PyTorch model and then loading it; neither led to this error.

Could you provide a code snippet that is closer to the code causing the issue? Also, can you please tell us what clearml version you are using, and what the Model URL is in the UI? You can use the same filters in the UI as the ones you used for Model.query_models to find th...
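
For reference, this is a sketch of the query I tried to reproduce with (tag casing is approximated from the title, and get_local_copy() is one way to fetch the weights):

```python
from clearml import Model

input_model = Model.query_models(
    project_name="A/B",
    model_name="B",
    tags=["pipeline", "modelval:tocheck"],  # casing approximated
)
# Download the weights of the first match to a local path
path_to_last_weights = input_model[0].get_local_copy()
```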

one year ago
Hello, I Having Been Exhausting The Metrics Quota Way To Fast For The Current Use That I Am Making Of Clearml. Is The Quota Cumulative ? I.E. Do We Get 1G Per Month ? I Am Concerned Because If We Upgrade And Need To Pay

Hey @<1644147961996775424:profile|HurtStarfish47>, you can use S3 for debug images specifically, see here: https://clear.ml/docs/latest/docs/references/sdk/logger/#set_default_upload_destination but the metrics (everything you report, like scalars, single values, histograms, and other plots) are stored in the backend. The fact that you are almost running out of storage could be because of either t...
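
A minimal sketch of redirecting debug images to your own S3 bucket (the bucket path and task names are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="s3-debug-images")
# Debug samples and other media now go to your bucket, not the ClearML files server
task.get_logger().set_default_upload_destination("s3://my-bucket/clearml-media/")
```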

one year ago
Is There An External Way To Access Pipelinecontroller._Relaunch_Node(Node)?

Hey @<1639799308809146368:profile|TritePigeon86>, given that you want to retry on connection error, wouldn't it be easier to use retry_on_failure from PipelineController / PipelineDecorator.pipeline?
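
Something like this sketch, assuming a recent clearml version where PipelineController accepts retry_on_failure (names are placeholders):

```python
from clearml import PipelineController

pipe = PipelineController(
    name="my-pipeline",
    project="examples",
    retry_on_failure=3,  # retry each failed step up to 3 times;
                         # a callable can be passed instead to retry only on specific errors
)
```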

one year ago
Hi Again! Are There Way Of Uploading Your Model Architecture To Clearml, And Not The Weights. Would Like To Easily Compare Different Experiments With Slightly Different Architectures And See The Difference In How Data Flows Through The Model.

Hey @<1671689458606411776:profile|StormySeaturtle98> we do support something called "Model Design" previews, basically an architecture description of the model, a la Caffe protobufs. For example, we store this info automatically with Keras.

one year ago
Hello. I Want To Update An Artifact In A Task (A Pandas Data Frame). I Do This With

You can try to add the force_download=True flag to .get() to ignore the locally cached content. Let me know if it helps.
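
For example (the task ID and artifact name are placeholders):

```python
from clearml import Task

task = Task.get_task(task_id="<your-task-id>")
# force_download=True bypasses the locally cached copy of the artifact
df = task.artifacts["my_dataframe"].get(force_download=True)
```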

2 years ago
Hello. I Want To Update An Artifact In A Task (A Pandas Data Frame). I Do This With

Also, make sure you use Task.init instead of task.init

2 years ago
Hi Everyone, I'm Trying To Create A Pipeline From Tasks Without Uploading The Data Into Clearml Server Because

That's not that much. You can use the AWS autoscaler and provision a spot g4dn GPU instance with a bit more disk. This should cost you less than 50 cents an hour

one year ago
Hi Everyone, I'm Trying To Create A Pipeline From Tasks Without Uploading The Data Into Clearml Server Because

Hey Yasir, to use TensorFlow prefetch, your data needs to be (1) chunked and (2) stored on some server/bucket/network-attached FS. If those two conditions aren't both satisfied, TF prefetch won't help you.
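
For illustration, a sketch of a prefetch-friendly input pipeline, assuming sharded TFRecords on a bucket (paths are placeholders; reading from S3 may require tensorflow-io):

```python
import tensorflow as tf

files = tf.data.Dataset.list_files("s3://my-bucket/shards/*.tfrecord")
ds = (
    tf.data.TFRecordDataset(files)   # (2) data lives on a remote store
    .batch(32)                       # (1) data is consumed in chunks
    .prefetch(tf.data.AUTOTUNE)      # overlap fetching the next batch with training
)
```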

How large is the dataset we're talking about?

one year ago
How To Version Models While Training In Production

Hey @<1639074542859063296:profile|StunningSwallow12> what exactly do you mean by "training in production"? Maybe you can also elaborate on what kind of models you mean.

ClearML in general assigns a unique Model ID to each model, but if you need some other way of versioning, we have support for custom tags, and you can apply those programmatically on the model
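
For example, a sketch of tagging a model programmatically (project/task names, the tag, and the weights file are placeholders):

```python
from clearml import OutputModel, Task

task = Task.init(project_name="examples", task_name="train")
# On top of the unique Model ID, attach your own version tag
model = OutputModel(task=task, tags=["version-1.2.0"])
model.update_weights("model.pt")  # assumes model.pt exists locally
```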

one year ago
Hi, When Running A Task, Is There A Way To Run A Script Already In The Container? For Example There Is A Script /Home/Root/Entrypoint.Sh I Would Like To Avoid Specifying A Repo Or A Local Script On My Machine.

Hey @<1681836314334334976:profile|GrotesqueSeaturtle83>, yes, it is possible to do so, but you must configure the docker --entrypoint argument (as part of the docker_arguments) and the docker image for said task. In general this isn't a recommended approach. Rather, prefer a setup where your task code invokes the functionalities defined in other scripts that are pre-baked into the image.

See docker args here:
https://clear.ml/docs/latest/docs/references/sdk/task/...
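
A sketch of that (not recommended) setup, with a hypothetical image and the script path from the question:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="entrypoint-demo")
# Override the container entrypoint via docker arguments
task.set_base_docker(
    docker_image="my/image:latest",
    docker_arguments="--entrypoint /home/root/entrypoint.sh",
)
task.execute_remotely(queue_name="default")
```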

one year ago
Hello Everyone! I Have A Pipeline That Stores A Metric X (A Single Number) At The End And I Want To Display A Graph Of This Metric In Project Dashboard, So That I Can See How It Changes With Each Pipeline Run And Get A Slack Notification If The Value Of T

Hey @<1661904968040321024:profile|SpotlessOwl43> that's a great question!

> how the metric should be saved, via report_single_value?

That's correct.

> what should I enter into the title and series fields in Project Dashboard?

The title should be "Summary" and the series is the name of the single value you reported.
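
Putting it together, a minimal sketch (the metric name and project/task names are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="pipeline-run")
# Appears in the dashboard under title "Summary", series "metric_x"
task.get_logger().report_single_value(name="metric_x", value=0.87)
```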

one year ago
Hello Everyone! I Have A Pipeline That Stores A Metric X (A Single Number) At The End And I Want To Display A Graph Of This Metric In Project Dashboard, So That I Can See How It Changes With Each Pipeline Run And Get A Slack Notification If The Value Of T

Yes, metrics can be saved in both steps and pipelines. As for project dashboards, I think as of now we don't support them in the UI for pipelines. But what you can do instead is run a special "reporting" Task that will query all the pipeline runs from a specific project, and with it you can then manually plot all the important information yourself.

To get the pipeline runs, please see documentation here: https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelineco...
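
A rough sketch of such a reporting task, assuming the runs live in one project and the metric was reported as a single value (project and metric names are placeholders):

```python
from clearml import Task

# Fetch all pipeline runs from the project
runs = Task.get_tasks(project_name="pipelines/my-pipeline")
for run in runs:
    scalars = run.get_last_scalar_metrics()
    # Single values are reported under the "Summary" title
    value = scalars.get("Summary", {}).get("metric_x", {}).get("last")
    print(run.id, value)
```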

one year ago
Hey Guys

Hey, yes, the reason for this issue seems to be our currently limited support for lightning 2.0. We will improve the support in the following releases. Right now, one way to circumvent this issue that I can recommend is to use torch.save if possible, because we fully support automatic model capture on torch.save calls.
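
For example, under an initialized Task a plain torch.save call is enough for the model to be captured (the model here is a stand-in):

```python
import torch
from clearml import Task

task = Task.init(project_name="examples", task_name="lightning-workaround")
model = torch.nn.Linear(4, 1)  # stand-in for your trained LightningModule
# ClearML hooks torch.save, so this registers an output model on the task
torch.save(model.state_dict(), "model.pt")
```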

one year ago
How To Version Models While Training In Production

This sounds like you don't have clearml installed in the ubuntu container. Either that, or the clearml.conf in the container is not pointing to the server; as a result, all information is missing.

I'd rather suggest you change the approach: run a clearml-agent setup with docker, and when you want to run YOLOv5 training, execute it remotely on the queue that the agent is listening to.

one year ago
Hello, I Saw, That Clearml Data Was Integrated Into Yolov5

To link a dataset to a task you need to pass the alias= parameter to Dataset.get. See here: https://clear.ml/docs/latest/docs/clearml_data/clearml_data_sdk#accessing-datasets
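
For example (the dataset name and alias are placeholders):

```python
from clearml import Dataset

# alias= registers the dataset under that name in the task's configuration,
# linking the dataset version to the task
ds = Dataset.get(dataset_name="yolov5-train", alias="train_data")
local_path = ds.get_local_copy()
```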

one year ago
If You're Paying For The Premium Features Would Those Be Available To A Self Hosted Server Or Only On The Web Client?

For on-premise deployment with premium features we have the enterprise plan 😉

one year ago
Hi, We Would Like To Incorporate Some Approval Process In Clearml. One Of The Needs Is To Attach Some Pdfs And Word Docs To A Published Experiment, Preferbly Through The Web Ui. The Attachments Could Be In The Form Of The Actual Files, Or Links To The Fil

This sounds like a use case for the enterprise version of ClearML. In it you can set read/write permissions. Publishing is considered a "write", so you can limit who can do it. Another thing that might be useful in your scenario is to try using "Reports", and connect the "approved" experiments' info to a report and then publish it. Here's a short video introducing reports.

By the way, please note that if the experiment/report/whatever is publis...

2 years ago
Quick Question - Does Clearml'S Task Support Subprocesses Launched Within A Script? I Have This Scenario

Yes, you can do that. But it may make it harder to identify the task later on

2 years ago
Is There Anyone Who Is Using Clearml In A Jupyter Notebook. It Looks Like When Using Execute_Remotely Together With A Jupyter Noteebok, Clearml Tries To Launch A Jupyter Notebook Inside The Docker Container. It Fails Then With

Hey @<1582542029752111104:profile|GorgeousWoodpecker69> can you please tell whether you're running this jupyter notebook as part of a repo or as a standalone file, and what command you ran to launch your clearml-agent?

one year ago
Is It Possible To Serve Model With Frontend Html Page To Allow Input To Be Entered. Something Like Image Upload To Predict Number On It For Minst Dataset

To my knowledge, no. You'd have to create your own front-end and use the model served with clearml-serving via an API

one year ago
Hello. I Want To Update An Artifact In A Task (A Pandas Data Frame). I Do This With

Hey @<1547390444877385728:profile|ThickSnake12> , how exactly do you access the artifact next time? Can you provide a code sample?

2 years ago