AgitatedDove14
Moderator
49 Questions, 8122 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Hey Folks, When I Run

Could it be the credentials are actually incorrect? Because it seems like you can access the server (I assume you were able to browse to it and generate credentials, right?).

4 years ago
0 Hi When We Try And Sign Up A User With Github. The Invitation Link Never Works. Given They Have Already Signed Up With Their Github

Hi @<1533257411639382016:profile|RobustRat47>
sorry for the delay,

Hi when we try and sign up a user with github.

wait, where are you getting this link?

one year ago
0 Hi, I'm On A Machine That Normally Connects To Storage Using

BTW: if you could implement _AzureBlobServiceStorageDriver with the new Azure package, it would be great:
Basically update this class:
https://github.com/allegroai/clearml/blob/6c96e6017403d4b3f991f7401e68c9aa71d55aa5/clearml/storage/helper.py#L1620
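For what it's worth, a rough sketch of what wrapping the newer azure-storage-blob (v12) client could look like; the AzureBlobDriver class below is purely illustrative and does not match clearml's internal driver interface:

# illustrative only: a thin wrapper around the newer azure-storage-blob (v12) package
from azure.storage.blob import BlobServiceClient

class AzureBlobDriver:
    def __init__(self, account_url, credential):
        self._client = BlobServiceClient(account_url=account_url, credential=credential)

    def upload(self, container, blob_name, local_path):
        blob = self._client.get_blob_client(container=container, blob=blob_name)
        with open(local_path, "rb") as f:
            blob.upload_blob(f, overwrite=True)

    def download(self, container, blob_name, local_path):
        blob = self._client.get_blob_client(container=container, blob=blob_name)
        with open(local_path, "wb") as f:
            f.write(blob.download_blob().readall())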

3 years ago
0 Quick Question On

Correct 🙂
You can spin it up in two modes, either venv or docker. Notice that even in docker mode it will still clone the code into the container and install the packages inside it, but it also inherits the preinstalled system packages from the docker image, so the installation process is a lot faster, and you still have the ability to change packages without having to build an entire new docker image.
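For reference, a rough sketch of launching the agent in each mode (the queue name and docker image are just examples):
clearml-agent daemon --queue default
clearml-agent daemon --queue default --docker nvidia/cuda:11.7.1-runtime-ubuntu22.04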

4 years ago
0 I Have A Pipeline With Tasks A->B->C. I Want To Be Able To Trigger It Manually, And Skip A Regardless Of Its Cache Status. I Want To Pass B A Value That Represents A's Output If Needed. What's A Good Way To Achieve This (Can Be Ui-Based, Or Pipeline-Gymnas

Decorators are good 🙂
Something along the lines of
@PipelineDecorator.pipeline(...)
def pipeline(skip_a=False):
    if not skip_a:
        a = step_a()
    else:
        # somehow get a previous A?
        # let's call it cached A
        a = "replace with real"

    step_b(a)
    ...
Is this the gist?
If it is, this looks like "how can I control whether A is cached or not", is that correct?

3 years ago
0 I Am Trying To Do A Remote Execution Of A Test Task, But It Fails During Env Setup Due To Trying To Install An Obscure Version Of Pytorch. Been Trying To Solve This For Three Days! The Script:

Can you try to manually install it and see what you are getting?
python3.10 -m pip install /home/boris/.clearml/pip-download-cache/cu117/torch-1.12.1+cu116-cp310-cp310-linux_x86_64.whl

2 years ago
0 Hi Team, Me Again! I'm Curious If Someone Can Explain To Me Better How Task And Optimisers Integrate With Each Other. In The Example Hyperparameter Optimisation, There Is Both A Task Initialised With

Bad news, there isn't a nice interface to get the table from the Optimizer object (I will make sure we add it, no reason not to).
But you can very easily get all the information you need and more:
all_the_tasks = an_optimizer.get_top_experiments(top_k=100)
Then for every task in the list you can get all the information:
for task in all_the_tasks:
    task_params_as_dict = task.get_parameters()
    task_scalars = task.get_last_scalar_metrics()
Basically the Task object enables you to que...

4 years ago
0 Hi, We Use Clearml To Track All Our Experiments. For Each Experiment The Accuracy Is Logged For Both The Training And The Test Set:

Hi GreasyPenguin14
Quick question: any reason not to use a 2D scatter? Or a histogram (or any other non-time-series plot)?
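For illustration, a minimal sketch of reporting paired train/test accuracy as a 2D scatter with the ClearML Logger (the project/task names and the accuracy values are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="accuracy scatter")
logger = task.get_logger()

# hypothetical per-epoch (train accuracy, test accuracy) pairs
pairs = [[0.71, 0.68], [0.82, 0.77], [0.90, 0.84]]

logger.report_scatter2d(
    title="accuracy",
    series="train vs test",
    iteration=0,
    scatter=pairs,
    xaxis="train accuracy",
    yaxis="test accuracy",
    mode="markers",
)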

4 years ago
0 I See That In The Default Setup, This Command Is Part Of The Docker Bash Setup Script:

I do expect it to pip install though, which doesn't need root access I think

Correct, it is installed in a venv (exactly for that).
It will not fail if the apt-get fails (only warnings)
Let me know if it worked

4 years ago
0 Not Able To Resume A Hyper-Parameter Optimization.

Hi GreasyLeopard35

I try to resume a stopped or aborted parameter optimization experiment,

How are you continuing the HPO? Are you running everything locally? Is this with an agent? Are you seeing the '[0, 0]' value in the configuration when launching the HPO or when continuing it?

3 years ago
0 Sorry, I Have Another Problem Again: Does Clearml Have Its Own Package Resolution System And Doesn't Use Pip? I Use A Lib Named Pyfunctional (

Collecting inplace-abn==1.0.12
Downloading inplace-abn-1.0.12.tar.gz (137 kB)
ERROR: Command errored out with exit status 1:
command: /home/ubuntu/.clearml/venvs-builds/3.8/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-xf3qf6et/inplace-abn_15b6998cb4af4199a7692be5d3a3538f/setup.py'"'"'; file='"'"'/tmp/pip-install-xf3qf6et/inplace-abn_15b6998cb4af4199a7692be5d3a3538f/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f...

4 years ago
0 Hey There, I Would Like To Increase The

I guess I would need to put this in the extra_vm_bash_script param of the auto-scaler, but it will reboot in a loop, right? Isn't there an easier way to achieve that?

You can edit the extra_vm_bash_script, which means the bash script will be executed the next time the instance is booted.
In the meantime, you can ssh into the running instance and change the ulimit manually, wdyt?

4 years ago
0 Hello, Does Anybody Here Have Much Experience In Creating Sub-Tasks Or Sub-Pipelines? I'm Not Sure The Concept Is Particularly Well Established But The Docs Mention:

using caching where specified but the pipeline page doesn't show anything at all.

What do you mean by "the pipeline page doesn't show anything at all"? Are you running the pipeline? How?
Notice PipelineDecorator.component needs to be top-level, not nested inside the pipeline logic, like in the original example:

@PipelineDecorator.component(
    cache=True,
    name=f'append_string_{x}',
)
def append_string(s):
    # illustrative body; x must be defined at decoration time
    return s + x
2 years ago
0 Hi All, I'M Trying To Deploy Trains On Rancher (Nice Kubernetes Cluster Orchestration Project) Where I'M Quite New To Rancher And Kubernetes. I Have Been Able To Install Trains Using Helm

Hi WickedGoat98,
I think you are correct 😞
I would guess it is something with the ingress configuration (i.e. ConfigMap)

4 years ago
0 Hi Everyone, Is It Possible To Not Create A Copy Of A Dataset When Adding To Clearml? My Data Is Already In A Directory On The Clearml-Server Machine And I Do Not Want To Copy It, Just Add It To Clearml As Dataset.

VexedCat68 makes sense. We could also (if implementing this feature) add a special tag to the dataset, so you know it contains "external" links, wdyt?
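For reference, a minimal sketch of how this looks in newer SDK versions that support external links via Dataset.add_external_files (the names and the source URL below are placeholders):

from clearml import Dataset

# create a dataset version that only references the files, without copying them
ds = Dataset.create(dataset_name="my_dataset", dataset_project="datasets")
ds.add_external_files(source_url="s3://my-bucket/already-stored-data/")
ds.upload()    # uploads only the metadata / links
ds.finalize()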

3 years ago
0 Hi, Love What You Guys Did With The New Datasets! I Need Some Help Though. I Assume There Will Be A No-Code Way To Do This, Maybe Not Now But In The Future. But Anyway, I Have Three Different Datasets, And I Want To Create A Merged Version Of All Three Of

creating a dataset with parents worked very well and produced great visuals on the UI!

woot woot!

I tried the squash solution, however this somehow caused a download of all the datasets into my

so this actually works kind of like git squash; bottom line, it will repackage the data from all the different versions into one new version. This means downloading the data from all squashed versions, then repackaging it into a single new version. Make sense?
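For illustration, a minimal sketch of both approaches (the dataset names and IDs are placeholders):

from clearml import Dataset

# Option 1: create a merged child version that references the three datasets as parents
merged = Dataset.create(
    dataset_name="merged",
    dataset_project="datasets",
    parent_datasets=["<dataset_id_a>", "<dataset_id_b>", "<dataset_id_c>"],
)
merged.upload()
merged.finalize()

# Option 2: squash - downloads and repackages all versions into a single new version
squashed = Dataset.squash(
    dataset_name="merged-squashed",
    dataset_ids=["<dataset_id_a>", "<dataset_id_b>", "<dataset_id_c>"],
)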

3 years ago
0 How To Use

MelancholyElk85 can you provide some logs from the pip install failures?

3 years ago
0 Please Tell Me What RAM Metric Is Tracked By Clearml? What I See In Htop And On The Board Don't Match Even Though It's The Same Server (20 GB Vs 70 GB)

Hi @<1523702932069945344:profile|CheerfulGorilla72>

Please tell me what RAM metric is tracked by ClearML?

Free RAM is the entire machine's free RAM.
Yeah, htop shows odd numbers as it doesn't "count" allocated buffers.
Specifically you can see the relevant code in the clearml source.
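If it helps to see the distinction, a small psutil snippet (psutil is, to my understanding, what ClearML's resource monitor builds on); the printed values will of course differ per machine:

import psutil

mem = psutil.virtual_memory()
gib = 1024 ** 3
# "free" excludes buffers/cache, while "available" is what can actually be allocated
print(f"total:     {mem.total / gib:.1f} GiB")
print(f"free:      {mem.free / gib:.1f} GiB")
print(f"available: {mem.available / gib:.1f} GiB")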

2 years ago
0 Hi, I Am Trying To Execute My Code On An Nvidia/Cuda Docker, But It Keeps Running; It Is Neither Failed Nor Aborted. The Last Log Message Is

MysteriousBee56 that is so weird ... last one, I promise 🙂
docker run -t --rm nvidia/cuda:10.1-base-ubuntu18.04 bash -c "echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/docker-clean && apt-get update && apt-get install -y git python3-pip && python3 -m pip install trains-agent && echo \$(which python3) && echo \$(which trains-agent)"

5 years ago
0 Hi, Is There A Simple Way To

You can control it with the auto_* arguments in the Task.init call:
https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
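For example, a minimal sketch (the project/task names are placeholders; pick whichever auto_* flags you actually want to disable):

from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="manual logging only",
    auto_connect_frameworks=False,   # disable automatic framework logging
    auto_connect_arg_parser=False,   # do not capture argparse arguments
    auto_resource_monitoring=False,  # do not report CPU/GPU/RAM usage
)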

2 years ago
0 Is There A Reason

Makes sense
we need to figure what would be the easiest way to have an "opt-in" for the demo server, that will still make it a breeze to quickly test code integration ...
Any suggestions are welcomed 🙂

4 years ago