AgitatedDove14
Moderator
49 Questions, 8122 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Votes 0 Answers 2K Views
Gals, Guys & :robot_face: If you want to get some inspiration on building DL Continuous Integration pipelines, I suggest this post (obviously built on top of...
5 years ago
0 Votes 2 Answers 2K Views
Hi ! trains 0.16.2 is finally out with the new pipelines interface! Check out the new example https://github.com/allegroai/trains/blob/master/examples/pipeli...
4 years ago
0 Votes 0 Answers 2K Views
New releases: pip install trains==0.13.3 https://github.com/allegroai/trains/releases/tag/0.13.3 pip install trains-agent==0.13.2 https://github.com/allegroai/...
5 years ago
0 Votes 0 Answers 2K Views
New RC for trains-agent is out pip install trains-agent==0.13.2rc1
5 years ago
0 Votes 1 Answer 2K Views
Gals, Guys & :robot_face: , if you want to checkout the Hyper-Parameters automation (Using Bayesian Optimization Hyper-Band) We have an example on the demo s...
5 years ago
0 Votes 0 Answers 2K Views
Is it a one time thing? or recurring?
5 years ago
0 Votes 0 Answers 2K Views
Hi Guys! I have great news, we finally fully implemented support for continuing previously trained models 🎉 Here is a quick example (this is torch, but any ...
5 years ago
0 Votes 10 Answers 1K Views
Happy Friday everyone ! We have a new repo release we would love to get your feedback on 🚀 🎉 Finally easy FRACTIONAL GPU on any NVIDIA GPU 🎊 Run our nvidi...
one year ago
0 Votes 3 Answers 2K Views
Hi , v0.15 is out, 🎉 🚀 Your feedback had a major influence on the features we added 🙂 thank you! A selected list of features: Column resizing / ordering /...
5 years ago
0 Votes 1 Answer 2K Views
Quick note: v1.3.1 caused PipelineDecorator Tasks to disable the automagic frameworks connection by default; this bug is solved in the latest RC pip install ...
3 years ago
0 Votes 3 Answers 1K Views
@<1523703325881536512:profile|ConvolutedSealion94> these are xgboost internal metrics that are automatically picked by clearml
2 years ago
0 Votes 0 Answers 2K Views
4 years ago
0 Votes 4 Answers 708 Views
Happy new year everyone! 🥂 🎆 Last minute 🎁 v2.0 is now out, with a new UI design! now finally supporting light & dark mode 🤩 Lot's more to come this year...
8 months ago
0 Votes 6 Answers 1K Views
Hi :robot_face: , humans We have the new documentation site up and running 🎉 🎊 This is still a work in progress, so we keep the previous version alive...
4 years ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 0 Answers 2K Views
apparently everyone can ...
5 years ago
0 Votes 3 Answers 1K Views
we recently released a new version of clearml-session with Persistent Workspace support! 🚀 🎉 Finally you can develop on remote machines with workspace fold...
one year ago
0 Votes 0 Answers 2K Views
Is your server using https?!
5 years ago
0 Votes 0 Answers 2K Views
YEY!!!! Download as CSV 🤯
3 years ago
0 Prev, I Worked With Clearml (1 Year Back) And Back Then, We Config Seldon Core For The Deployment And Clearml For The Training.. Now There Is Clearml-Serving, Does It And Can It Fulfill A Similar Objective ?

Hi DeliciousBluewhale87
This is the latest clearml-serving (stable release at GTC at the end of the month)
https://github.com/allegroai/clearml-serving/tree/dev

Generally speaking, clearml-serving is a control plane, preprocessing, ML inference, with Nvidia Triton for DL inference (fully transparent).
It allows you to spin an entire fully dynamic & scalable serving on top of k8s cluster. Once you spin the base containers, you can configure them live with a CLI, this includes adding new en...

3 years ago
0 Hello, We Are Currently Working On A Hyperparameter Tuning Job For Object Detection Following This Tutorial

I mean clone the Task in the UI (right click Clone), then go to the execution Tab, to the "installed packages" section, then click on Edit -> go to the torchvision http link, and replace it with torchvision == 0.7.0 and save.
Then enqueue the Task (to the default queue) and see if the Agent can run it.
DeterminedToad86 Make sense?
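For reference, a rough programmatic equivalent of that UI flow (the task ID and queue name below are placeholders, and Task.set_packages assumes a reasonably recent clearml SDK):

from clearml import Task

# Sketch: clone an existing task, override its recorded requirements to pin
# torchvision, then enqueue the clone for an agent to pick up.
template = Task.get_task(task_id="<original_task_id>")    # placeholder ID
cloned = Task.clone(source_task=template, name="torchvision 0.7.0 fix")
cloned.set_packages(["torchvision==0.7.0"])                # assumes SDK with set_packages
Task.enqueue(cloned, queue_name="default")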

4 years ago
0 I Am Completely Stuck With The Serving. I Did The Custom Example. I See The Endpoint In

Hi ConvolutedSealion94
Yes this seems like the correct curl
How did you spin the clearml-serving containers? Is it with docker-compose or with the helm chart? (I remember there are some pitfalls with the helm chart, and I would actually start with the local docker-compose to debug it.)

2 years ago
0 Hi, I Faced With A Silly Error, When I Run The Python Script With Task = Trains.Init(Project_Name='My Project', Task_Name='My Task'). The Task Goes To The Trains Server, But In The Trains Server, In Installed Packages Part One Of The Line

I think it fails because it tries to install trains twice. Could you remove the trains package and test? I'm also curious how you have both installed?!

5 years ago
0 Hi, I Am Using Logger.Report_Plotly() To Get My Roc_Curves In The Plot Window. But When Using The Comparing Feature Of Clearml, I Would Like The Plots With The Same Figure Title To Overlap. Is There A Way To Do This ?

Hi BrightGoat74
So merging general purpose plotly plots is very hard (i.e. putting both on the same graph)
But if you report using logger.report_scatter(...) the UI will merge the ROC curves into the same graph, wdyt?
https://clear.ml/docs/latest/docs/guides/reporting/scatter_hist_confusion_mat_reporting#2d-scatter-plots
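A minimal sketch of that approach (the method is report_scatter2d in current SDKs; the series name, axis labels and placeholder curve below are illustrative, not taken from the original thread):

import numpy as np
from clearml import Task

# Report each model's ROC curve under the same title so the comparison view
# can overlay them on one graph.
task = Task.init(project_name="examples", task_name="roc reporting sketch")
logger = task.get_logger()

fpr = np.linspace(0.0, 1.0, 50)
tpr = np.sqrt(fpr)  # placeholder curve standing in for a real model's ROC

logger.report_scatter2d(
    title="ROC",
    series="my_model",
    iteration=0,
    scatter=np.stack([fpr, tpr], axis=1),
    xaxis="False Positive Rate",
    yaxis="True Positive Rate",
    mode="lines",
)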

3 years ago
0 Sometimes I Notice That At The End Of An Experiment Clearml Keeps Hanging (Something With Repository Detection?) And The Script Does Not End. Do More People See This? Especially In Our Continuous Integration Pipeline This Give Problems Because Tests Are G

Hi GreasyPenguin14
This is what I did, but I could not reproduce the hang, how is this different from your code?
from multiprocessing import Process
import numpy as np
from matplotlib import pyplot as plt
from clearml import Task, StorageManager

class MyProcess(Process):
    def run(self):
        # in another process
        global logger
        # Create a plot
        N = 50
        x = np.random.rand(N)
        y = np.random.rand(N)
        colors = np.random.rand(N)
        area = ...

3 years ago
0 Hey, I'M Looking Into The Aws Autoscaler. I Couldn'T Find The Task In My Ui, So I Ran The

I should manually copy it to the remote services agents?

The code itself needs to run somewhere; currently this has to be your machine. Either you manually run the AWS autoscaler or an agent runs it for you. Make sense?

4 years ago
0 Hi, I Would Like To Check What Would Be The Recommended Hardware Specs For The Server Host Clearml Server. I Had One Configured With 32 Cpu Cores, 64Gb Ram And I Noticed That If We Have A Surge In Remote Task Creation, The Following Delays Occurs.

If the only issue is this line:
task.execute_remotely(..., exit_process=True)
It has to finish the static analysis of the entire repository (which usually happens in the background, but now we have to wait for it). If the repo is large this could actually take 20 sec (depending on the CPU/drive of the machine itself).
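For context, a minimal sketch of that call (project, task and queue names are illustrative); with exit_process=True the local process exits once the task is enqueued, but only after the repository analysis completes:

from clearml import Task

# Create the task locally, then hand it off to an agent on the chosen queue.
task = Task.init(project_name="examples", task_name="remote execution sketch")
task.execute_remotely(queue_name="default", exit_process=True)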

4 years ago
0 Hi, I Would Like To Check What Would Be The Recommended Hardware Specs For The Server Host Clearml Server. I Had One Configured With 32 Cpu Cores, 64Gb Ram And I Noticed That If We Have A Surge In Remote Task Creation, The Following Delays Occurs.

We are using k8s glue to spawn the job. ...

I think this is actual network latency, nothing to do with the jobs, could it be the server is very far away?
What happens when you manually start a Task from your machine ?
Is the latency fixed? Is it just when starting a new Task?

4 years ago
0 Hi All! I I Tried To Run The

Hi MagnificentSeaurchin79
This means tensorflow was not directly imported in the repository (which is odd; it might point to the auto package analysis failing to find the package, if this is the case please let me know)
Regardless, if you need to make sure a package is listed in the requirements, either import it or use:
Task.add_requirements('tensorflow') or Task.add_requirements('tensorflow', '2.3.1')
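A minimal sketch of the second form (project/task names are illustrative; the key point is that Task.add_requirements is called before Task.init so the package lands in the recorded requirements):

from clearml import Task

# Force-add tensorflow (optionally pinned) to the task's installed packages.
Task.add_requirements('tensorflow', '2.3.1')
task = Task.init(project_name='examples', task_name='requirements sketch')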

4 years ago
0 Hi, I'Ve Recently Upgraded To 0.15.1 From 0.14.2, And For Some Reason A Code That Previously Worked In Which I'M Getting The Tags Of A Model Using

PompousBeetle71 I think what you saw as tags in the previous version was actually system tags; now we also have user tags (i.e. .tags). If you still want to access the system tags, can you try:
InputModel('aabbcc')._get_base_model().data.system_tags
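A quick sketch of that suggestion ('aabbcc' stands in for a real model ID, and _get_base_model() is an internal accessor, so it may change between versions):

from clearml import InputModel

model = InputModel('aabbcc')                              # placeholder model ID
print(model.tags)                                         # user tags
print(model._get_base_model().data.system_tags)           # system tags via internal API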

5 years ago
0 Quick Question: Is It Possible To See Who Aborted A Task?

https://clear.ml/docs/latest/docs/references/sdk/task#mark_stopped
Maybe we should add an argument so you could do:
mark_stopped(force=False, message='it was me who stopped it')
And we will automatically add the user name as well?
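As it stands today, a minimal sketch of aborting a task programmatically (the task ID is a placeholder, and reading status_message off task.data is an assumption about the backend task model rather than a documented field):

from clearml import Task

task = Task.get_task(task_id='<task_id>')   # placeholder ID
task.mark_stopped(force=True)               # abort the task
print(task.data.status_message)             # may hint at why/how it was stopped (assumption)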

3 years ago
0 Hi, I’M Using

GrittyKangaroo27 any chance you can open a GitHub issue so this is not forgotten ?
(btw: I think 1.1.6 is going to be released later today; then we will have a few RCs with improvements on the pipeline, and I will make sure we add that as well)

3 years ago
0 Hi All, I Have Deployed A Clearml Server With Docker To One Of Our Local Machine. I Had Set Up The Filesserver Folder As Mount Point To The Cloud. How Easy Is It To Migrate Our Existing Experiments Later On To A Clearml Server That We Deploy In The Cloud

Hi @<1576381444509405184:profile|ManiacalLizard2>
If you make sure all server access is via a host name (i.e. instead of IP:port, use host_address:port), you should be able to replace it with a cloud host on the same port

2 years ago
0 Hi All, I Am Starting To Use Clearml-Agent. Run It With

Let me check if we can hack something...

4 years ago
0 Is It Not Possible To Add Artifacts To A Completed Task?

I think you can force it to be started, let me check (I'm pretty sure you can on an aborted Task).

3 years ago
0 Hi Clearml Team, Does Upload_Folder

Hi @<1727497172041076736:profile|TightSheep99>
I think you are correct! It will use the internal per-file upload retry but does not let you control it.
Could you please open a github issue so that we do not forget to add it?

one year ago
0 Could You Please Explain A Bit More How Trains Adapt The Torch Version Depending On The Installed Cuda Version? Here Is My Setup:

JitteryCoyote63 I think this only holds for the conda distribution.
(Actually quite interesting, I wonder what happens if you already installed cudatoolkit...)

4 years ago
0 Hi Everybody, I’M Getting Errors With Automatic Model Logging On Pytorch (Running On A Dockered Agent).

CrookedWalrus33 I found the issue, this is only failing with Python 3.6.
Let me check something

3 years ago
0 Hi, I'Ve Got A Quick Question About

Where is the clearml-server running? GCP as well?

3 years ago
0 Hey All

So if I am not using remote machine can I disable this?

Yes I think you can, add to your clearml.conf:
sdk.development.store_jupyter_notebook_artifact = false
BTW: why would you turn it off?

2 years ago