EnthusiasticShrimp49
Moderator
0 Questions, 96 Answers
  Active since 18 February 2023
  Last activity 2 years ago

Reputation

0
0 Hi Everyone, I’m Trying To Create A Pipeline From Tasks Without Uploading The Data Into Clearml Server Because

Hey Yasir, to use TensorFlow prefetch, your data needs to be (1) chunked and (2) stored on some server/bucket/network-attached FS. Unless both conditions are satisfied, TF prefetch won't help you.

How large is the dataset we're talking about?
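
Something like this is what I have in mind (an untested sketch; the bucket path and batch size are just examples):

    import tensorflow as tf

    # List the TFRecord chunks stored on the bucket / network FS
    files = tf.data.Dataset.list_files("s3://my-bucket/dataset/chunk-*.tfrecord")
    ds = (
        tf.data.TFRecordDataset(files, num_parallel_reads=tf.data.AUTOTUNE)
        .batch(32)
        .prefetch(tf.data.AUTOTUNE)  # overlap loading of the next batch with training
    )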

one year ago
0 Hi! I Am Trying To Build And Run A Pipeline. I Pass My Dataset As Parameter Of Pipeline:

Hey @<1523704757024198656:profile|MysteriousWalrus11>, given your use case, did you consider passing the path to the dataset? Like an address to an S3 bucket.
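
For example, something along these lines (a rough sketch; project, step and bucket names are placeholders):

    from clearml import PipelineController

    def train(dataset_path):
        # the step only receives the location, not the data itself
        print(f"Reading data from {dataset_path}")

    pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")
    pipe.add_parameter(name="dataset_path", default="s3://my-bucket/my-dataset/")
    pipe.add_function_step(
        name="train",
        function=train,
        function_kwargs={"dataset_path": "${pipeline.dataset_path}"},
    )
    pipe.start_locally(run_pipeline_steps_locally=True)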

2 years ago
0 Hi Again! Is There A Way Of Uploading Your Model Architecture To Clearml, And Not The Weights? Would Like To Easily Compare Different Experiments With Slightly Different Architectures And See The Difference In How Data Flows Through The Model.

Hey @<1671689458606411776:profile|StormySeaturtle98> we do support something called "Model Design" previews, basically an architecture description of the model, a la Caffe protobufs. For example, we store this info automatically with Keras.
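
If you want to attach it yourself rather than rely on the automatic Keras capture, a rough sketch would look like this (project/task names and the design text are just examples):

    from clearml import Task, OutputModel

    task = Task.init(project_name="examples", task_name="model-design-demo")
    model = OutputModel(task=task)
    # store a textual description of the architecture as the model "design"
    model.update_design(config_text="conv 3x3 -> relu -> maxpool -> dense 10")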

one year ago
0 Quick Question - Does Clearml's Task Support Subprocesses Launched Within A Script? I Have This Scenario

Yes, you can do that. But it may make it harder to identify the task later on

one year ago
0 Hi, I Configured An On-Prem File Server For Clearml Which Is Mounted On My Pc.

What happens if you comment out or remove the pipe.set_default_execution_queue('default') line and use run_locally instead of start_locally?

Because in the current setup you are basically asking to run the pipeline controller task locally, while the rest of the steps need to run on an agent machine. If you make the changes I suggested above, you will be able to run everything on your local machine.
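
Roughly, assuming pipe is the PipelineController from your snippet, running everything locally would look like this (untested sketch):

    # run the controller and all of its steps in the current machine/process
    pipe.start_locally(run_pipeline_steps_locally=True)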

2 years ago
0 Hi, I Configured An On-Prem File Server For Clearml Which Is Mounted On My Pc.

The second-to-last line in your code snippet above: pipe.start_locally.

2 years ago
0 Hi Team,

Hey @<1533257278776414208:profile|SuperiorCockroach75> , we do both, and much more 🙂

one year ago
0 If You're Paying For The Premium Features Would Those Be Available To A Self Hosted Server Or Only On The Web Client?

For on-premise deployment with premium features we have the enterprise plan 😉

one year ago
0 Hi Team,

This is fine-tuning. Training a multi-billion-parameter model from scratch would be economically unfeasible for most existing enterprises.

one year ago
0 How To Version Models While Training In Production

This sounds like you don't have clearml installed in the Ubuntu container. Either that, or the clearml.conf in the container is not pointing to the server; as a result, all information is missing.

I'd rather suggest you change the approach: run a clearml-agent setup with docker, and when you want to run YOLOv5 training, execute it remotely on the queue that the agent is listening to.
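
Roughly, the flow I'm suggesting looks like this (a sketch; the queue name and docker image are examples):

    # On the agent machine (shell):
    #   clearml-agent daemon --queue default --docker ultralytics/yolov5:latest
    # In your training script:
    from clearml import Task

    task = Task.init(project_name="examples", task_name="yolov5-training")
    task.execute_remotely(queue_name="default")  # stop the local run and enqueue it for the agent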

one year ago
0 Hi Team,

Hello @<1533257278776414208:profile|SuperiorCockroach75> , thanks for asking. It’s actually unsupervised, because modern LLMs are all trained to predict next/missing words, which is an unsupervised method

one year ago
0 How Can I Send A Composed Chunk Of Code For Remote Execution

Ah, I think I understand. To execute a pipeline remotely you need to use pipe.start(), not task.execute_remotely. Do note that you can run tasks remotely without exiting the current process/closing the notebook (see the exit_process argument), but you won't be able to return any values from this task....
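
To illustrate the difference (a rough sketch; queue names are examples):

    from clearml import Task

    # A single task sent for remote execution without exiting the current process/notebook:
    task = Task.init(project_name="examples", task_name="single-task")
    task.execute_remotely(queue_name="default", clone=True, exit_process=False)

    # A pipeline controller sent for remote execution:
    # pipe.start(queue="services")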

one year ago
0 Input_Model = C_Model.Query_Models(Project_Name="A/B", Model_Name="B", Tags=["Pipeline", "Modelval:Tocheck"]) # Path_To_Last_Weights = Input_Model[0].Download_Model_Weights() Path_To_

I can't quite reproduce your issue. From the traceback it seems it has something to do with torch.load. I tried both your code snippet and creating a PyTorch model and then loading it; neither led to this error.

Could you provide a code snippet that is closer to the code that is causing the issue? Also, can you please tell me which clearml version you are using, and what the Model URL in the UI is? You can use the same filters in the UI as the ones you used for Model.query_models to find th...
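
For reference, the kind of query I used to try reproducing it looks roughly like this (project/model/tag names taken from your snippet):

    import torch
    from clearml import Model

    models = Model.query_models(project_name="A/B", model_name="B", tags=["pipeline"])
    if models:
        weights_path = models[0].get_local_copy()  # download the weights file locally
        state = torch.load(weights_path)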

one year ago
0 How Can I Send A Composed Chunk Of Code For Remote Execution

Hey @<1523701083040387072:profile|UnevenDolphin73>, sorry for the late reply. I'm now investigating the issue you mentioned, where running a remote task with create_function_task fails. I can't quite reproduce it; can you please provide a complete, runnable code snippet that fails the way you just described?

one year ago
0 Hi! I Have A Dataset Like This: V1.0.0

You can create a new dataset and specify the parent datasets as all the previous ones. Is that something that would work for you?
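
Roughly like this (a sketch; the dataset IDs and paths are placeholders):

    from clearml import Dataset

    ds = Dataset.create(
        dataset_name="my-dataset",
        dataset_project="examples",
        parent_datasets=["<previous_version_id_1>", "<previous_version_id_2>"],
    )
    ds.add_files("/path/to/new/files")
    ds.upload()
    ds.finalize()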

one year ago
0 Hi, I Configured An On-Prem File Server For Clearml Which Is Mounted On My Pc.

Ok, then launch an agent using clearml-agent daemon --queue default; that way your steps will be sent to the agent for execution. Note that in this case, you shouldn't change your code snippet in any way.

2 years ago
0 Hi There, Does Anyone Have Suggestions For Best Practice For Deploying A Pipeline So That It Can Run Remotely On A Clearml Server Using A Docker Image? I Am Finding The Clearml Docs And Videos Insufficient To Get The Pipeline To Actually Run To Completion

Hey @<1654294828365647872:profile|GorgeousShrimp11>, can you abort all pending experiments waiting to be fetched from this queue and try again? Off the top of my head, it could be that the clearml-agent can't pull the custom docker image. In general you should treat docker images not as step definitions but only as the environment, hence setting the entrypoint is not necessary.
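
In other words, the image is just specified per step, roughly like this (a sketch; assumes pipe is your PipelineController, train is a step function, and the image/queue names are examples):

    pipe.add_function_step(
        name="train",
        function=train,
        docker="my-registry/my-image:latest",  # environment only, not the step definition
        execution_queue="default",
    )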

one year ago
0 How Can I Send A Composed Chunk Of Code For Remote Execution

Hey @<1523701083040387072:profile|UnevenDolphin73>, what you're building here sounds like a useful tool. Let me make sure I understand what you're trying to achieve; please correct me if I'm wrong:

  • You want to create a set of Step classes with which you can define pipelines that will be executed either locally or remotely.
  • The pipeline execution is triggered from a notebook.
  • The steps are predefined transformations; the user normally won't have to create their own steps.
    Did I get all...
one year ago
0 My Project Pipeline Is Running Well And Good But Instead Of Completed It Is Coming As Aborted After Complete Execution

Is this a Jupyter notebook or something? Can you download it properly as either a .ipynb or .py file?

one year ago
0 How Can I Send A Composed Chunk Of Code For Remote Execution

I’m afraid serializing an entire class won’t be possible, but create_function_task will send the entire environment for remote execution, so you can still access your code.
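
A minimal sketch of what I mean (function, task and queue names are examples):

    from clearml import Task

    def process(x):
        return x * 2

    task = Task.init(project_name="examples", task_name="composer")
    # packages the current environment plus the function into a new draft task
    fn_task = task.create_function_task(func=process, func_name="process", task_name="process-remote", x=21)
    Task.enqueue(fn_task, queue_name="default")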

one year ago
0 My Project Pipeline Is Running Well And Good But Instead Of Completed It Is Coming As Aborted After Complete Execution

That is not specific enough. Can you show the code? And ideally also the console log of the pipeline

one year ago
0 Is There An External Way To Access PipelineController._relaunch_node(node)?

Hey @<1639799308809146368:profile|TritePigeon86>, given that you want to retry on connection error, wouldn't it be easier to use retry_on_failure from PipelineController / PipelineDecorator.pipeline?
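
A rough sketch of what that could look like (the callback logic and names are examples):

    from clearml import PipelineController

    def retry_if_needed(pipeline, node, retries):
        # return True to relaunch the failed node, False to give up
        return retries < 3

    pipe = PipelineController(
        name="my-pipeline",
        project="examples",
        version="1.0.0",
        retry_on_failure=retry_if_needed,
    )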

one year ago
0 My Project Pipeline Is Running Well And Good But Instead Of Completed It Is Coming As Aborted After Complete Execution

Hey @<1554275802437128192:profile|CumbersomeBee33>, aborted usually means that someone manually stopped the pipeline or one of its experiments. Can you provide us with the code you used to run it?

one year ago
0 Hello, I Am Trying To Modify My Clearml-Agent Running On An Aws Autoscaler (From Clearml Applications). I Want To Be Able To Clone My Repo (Working), And Install My Poetry Dependencies From

Do you know whether the agent VM/image has Python 3.9 installed? Also, you emphasised that this happens when setting the package manager to poetry; does that mean the issue doesn't happen when leaving the package manager settings at their default values?
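
For reference, the relevant section of clearml.conf on the agent machine would look something like this (values are examples):

    agent {
        package_manager {
            # one of: pip, conda, poetry
            type: poetry
        }
    }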

one year ago
0 Hello. I Want To Update An Artifact In A Task (A Pandas Data Frame). I Do This With

Also, make sure you use Task.init instead of task.init

2 years ago
0 Hello. I Want To Update An Artifact In A Task (A Pandas Data Frame). I Do This With

Hey @<1547390444877385728:profile|ThickSnake12> , how exactly do you access the artifact next time? Can you provide a code sample?

2 years ago
0 Hello. I Want To Update An Artifact In A Task (A Pandas Data Frame). I Do This With

You can try to add the force_download=True flag to .get() to ignore the locally cached content. Let me know if it helps.
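
For example (a sketch; the task ID and artifact name are placeholders):

    from clearml import Task

    task = Task.get_task(task_id="<task_id>")
    df = task.artifacts["my_dataframe"].get(force_download=True)  # bypass the local cache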

2 years ago