AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 5 months ago

Reputation: 0

Badges: 25 × Eureka!
0 Hi, We Have A Use Case That We Would Like To Upload A Local Folder Into The Cloud

Hi OutrageousSheep60

AS-IS

  • without compressing or breaking it up into chunks.

So for that I would suggest manually archiving it and uploading it as an external link.
Or are you saying you want to control the compression used by the Dataset class?
https://github.com/allegroai/clearml/blob/72d9b22e0d27f317a364acfeacbcf5c70f852e8c/clearml/datasets/dataset.py#L603
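
For reference, a minimal sketch of the external-link approach with the Dataset class (the dataset names and bucket path below are placeholders, and it assumes the folder is already on your cloud storage):

from clearml import Dataset

# Register the already-uploaded folder as external links, so the files are
# tracked as-is, without being compressed or chunked by the Dataset class.
dataset = Dataset.create(dataset_name="my_dataset", dataset_project="examples")
dataset.add_external_files(source_url="s3://my-bucket/my-folder/")
dataset.upload()    # only the dataset metadata/state is uploaded for external files
dataset.finalize()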

one year ago
0 Could You Please Explain A Bit More How Trains Adapt The Torch Version Depending On The Installed Cuda Version? Here Is My Setup:

You can set torch to be installed last:
post_packages: ["horovod", "torch"]
This will make sure the "trains-agent" installs the torch version (the one you specified in the "installed packages") last.
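
For context, this setting lives in the agent's configuration file; a sketch of the relevant section (assuming the standard trains.conf / clearml.conf layout):

agent {
    package_manager {
        # packages listed here are installed after everything else
        post_packages: ["horovod", "torch"]
    }
}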

3 years ago
0 Could You Please Explain A Bit More How Trains Adapt The Torch Version Depending On The Installed Cuda Version? Here Is My Setup:

(Obviously, if you have dependencies, they will be installed before, and then the correct torch will be installed over the previous version.)

3 years ago
0 Hi All

The main reason for adding the timeout is that the warning was annoying to users 🙂
The secondary reason was that clearml starts reporting based on seconds from start, then when iterations start it reverts back to iterations. But if the iterations are "epochs" the numbers are lower, so you end up with a graph that does not match the expected "iterations" x-axis. Does that make sense?

3 years ago
0 Hi All

Hi CooperativeFox72
Sure 🙂
task.set_resource_monitor_iteration_timeout(seconds_from_start=1800)
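
For example, a minimal sketch in context (project and task names are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="slow start")
# give the resource monitor up to 30 minutes to see a reported iteration
# before it falls back to seconds-from-start reporting
task.set_resource_monitor_iteration_timeout(seconds_from_start=1800)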

3 years ago
0 Hi All

This will set more time before the timeout right?

Correct.

task.freeze_monitor()
download()
task.defrost_monitor()

Currently there isn't, but that's a good idea.
What would be the argument for using it vs. increasing the timeout?
BTW: setting the resource timeout to 99999 basically means that it will wait until the first reported iteration, not that it will just sleep for 99999 sec 🙂

3 years ago
0 Hi, I'm Trying To Set Up My Trains-Server And I'm Getting The Following:

ElegantCoyote26 could you upgrade the docker-compose ?

3 years ago
0 Hi, I'm Trying To Set Up My Trains-Server And I'm Getting The Following:

sudo curl -L "https://github.com/docker/compose/releases/download/<version>/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

3 years ago
0 Could You Please Explain A Bit More How Trains Adapt The Torch Version Depending On The Installed Cuda Version? Here Is My Setup:

What probably happens is that torch is first installed via "trains-agent", then the other packages are installed; they require a different version, so pip automatically replaces it.

3 years ago
0 Hey, I Was Wondering How Can I Do Hparams Tuning With Trains? Couldn't Find Anything On The Documentation

ShaggyHare67

Now the trains-agent is running my code but it is unable to import trains ...

What you are saying is that you spin the 'trains-agent' inside a docker, but in venv mode?

On the server I have both python (2.7) and python3,

Hmm, make sure that you run the agent with python3 (i.e. launch trains-agent using python3); this way it will use python3 for the experiments.

3 years ago
0 Hey, I Have A Problem With The Following Task:

JitteryCoyote63 in the UI, what's the value of "config"? Is it empty, or is it a string?
Also, could you check if removing the type=str from the add_argument call changes the behavior?
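
To illustrate the suggested check, a minimal sketch (the "--config" argument and its default are hypothetical):

import argparse

parser = argparse.ArgumentParser()
# original form:
# parser.add_argument("--config", type=str, default="")
# suggested check, dropping the explicit type:
parser.add_argument("--config", default="")
args = parser.parse_args()
print(args.config)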

4 years ago
0 Another Issue Is The Agent Uses Python 2 For Some Reason Even Though Locally I’m Using Python 3 And The Agent Is Supposed To Use A Python 3 Venv.

Is the fact that clearml-agent needs to be installed from the system python mentioned anywhere in the docs? If not, I suggest it gets added.

You are right, I will check and fix if not 🙂

Thank you so much for helping.

My pleasure

3 years ago
0 Hello Everyone, I’m Newcomer For Clearml. I Have Question Related To

Just curious about the timeout, was it configured by clearML or the GCS? Can we customize the timeout?

I'm assuming this is GCS; in the end the actual upload is done by the GCS python package.
Maybe there is an env variable ... let me google it.

3 years ago
0 Hello, I'm Trying To Save A Keras Model As A Task Artifact, And Then Upload It From Another Task. Does Anyone Know The Syntax For That? What I've Seen Is Not Quite Working.

Okay ConfusedPig65, I found the problem. For some reason the latest tf.keras save_model / load_model is not tracked.
I'll make sure we push a fix later today.
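
Until the fix lands, a possible workaround sketch for passing the model between tasks via artifacts (project, task, and file names are placeholders):

from tensorflow import keras
from clearml import Task

# --- in the producing task: save the model and upload it as an artifact ---
task = Task.init(project_name="examples", task_name="train")
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])  # stand-in model
model.save("my_model.h5")
task.upload_artifact("keras_model", artifact_object="my_model.h5")

# --- in the consuming task (a separate script): fetch the artifact back ---
source_task = Task.get_task(project_name="examples", task_name="train")
local_path = source_task.artifacts["keras_model"].get_local_copy()
model = keras.models.load_model(local_path)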

3 years ago
0 What Are Project Default Output ? That The Default Output_Uri Set On The Server Side ? Can I Use Azure Blob Storage ?

Hi @<1576381444509405184:profile|ManiacalLizard2>
Yeah that should work, assuming credentials are set in your clearml.conf
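
For example, a minimal sketch (the account and container names are placeholders, and it assumes the matching Azure credentials are configured under sdk.azure.storage.containers in clearml.conf):

from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="azure output",
    # everything the task uploads (models, artifacts) will go to this container
    output_uri="azure://<account-name>.blob.core.windows.net/<container-name>",
)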

one year ago
0 Hi, Guys! I’m Trying To Connect Clearml To My Task And Getting Strange Error: After

DepressedChimpanzee34
What's the hydra version ?
I tested with 1.1.0dev3 and it worked for me

3 years ago
0 Hey, I Have A Problem With The Following Task:

JitteryCoyote63 I think I failed to explain myself.

  1. I think the problem with the controller is that you are interacting (i.e. changing hyperparameters) with a Task created using the new SDK version, from an older SDK version. Specifically, we added section names to the hyperparameters, and only the new version of the SDK is aware of them.
    Does that make sense?
  2. Regarding the actual problem: it seems like this is somehow related to the first one, the task at runtime is using an older SDK version, and I t...
4 years ago
0 Hello! How Can I Use "Report_Scatter2D" In Order To Report Timestamp In The X-Axis?

Feel free to open an issue on GitHub making sure this is not forgotten

3 years ago
0 Hi, I Am Saving Plt Chart To Clearml Using

Hi MortifiedDove27
I think you can resize the plot area in the UI (try to drag the horizontal separator)

3 years ago
0 Hi, Which Database Services Are Used To Store The Logged Data Such As Scalar, Text, Matrix, Etc? How Can I Query These For A Downstream Process Programmatically Instead Of Just Within The Web Ui? If Scalar Data Is Stored In Mongodb, Can I Use Pymongo To R

Ohh, if this is the case and this is a constant stream of inference results, then yes, you should push it to some stream-oriented DB.
Simple SQL tables would work, but for actual scale I would push into a Kafka stream, then pull it (serially) somewhere else and push it into a DB.
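
As a rough sketch of that pattern (assuming the kafka-python package; the broker address and topic name are placeholders):

import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# push each inference result onto the stream; a separate consumer can read
# it serially and write the records into whatever DB you choose
producer.send("inference-results", {"input": [1, 2, 3], "prediction": 0.87})
producer.flush()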

3 years ago
0 I Updated Trains-Server Today, And Now It's Very Unstable, Web Interface Randomly Stops Working. Anyone Had The Same Problem? I've Never Had Any Problems With Updating The Server Before

The web-server seems okay; could you send the logs from the api-server?
Also, if you can, the console logs from your browser when you get the blank screen. Thanks.

4 years ago
0 Hi Everyone, I Was Working With Model Serving And Monitoring, And Wanted To Know About Monitoring Aspects/Usage In Serving. I Actually Wanted To Know About Exactly What All Queries Related To The Serving Can Be Done, Like What All Are Important Metric Mon

like what all are important metric monitoring queries w.r.t. the serving tasks that can be visualized and shown in grafana?

Basically, latency and requests per minute are automatically reported. Additional reports are based on your RestAPI in/out.
Imagine the following RestAPI request JSON payload

{"x": 123, "y": 456}

and a return JSON of

{"z": 789}

The metrics you can add to the monitoring are the keys on both these JSONs, i.e. "x", "y", "z"
These metrics can be both log...

one year ago