ReassuredTiger98
Moderator
97 Questions, 644 Answers
Active since 10 January 2023
Last activity 7 months ago

Reputation: 0
Badges (1): 611 × Eureka!
0 Can Someone Point Me To Whether/How The Services-Agent That Starts With The Clearml-Server Mounts The

In my case I use the conda freeze option and do not even have CUDA installed on the agents.

4 years ago
0 Hello! Since Today I Get

So I just updated the env that clearml-agent created (and where pytorch cpu is installed) with my local environment.yml, and now the correct version is installed. So most probably the `/tmp/conda_envaz1ne897.yml` is the problem here.
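For reference, the command for that is roughly the standard conda one (env name being whatever clearml-agent generated):
$ conda env update --name <env> --file environment.yml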

4 years ago
0 Hello! Since Today I Get

Yea, will do so in 30min

4 years ago
0 I Finally Got The cleanup_service.py To Run. However, Now I Get Errors When Trying To Load Scalars. This Is What I Found In The Logs

[2021-05-07 10:52:00,282] [9] [WARNING] [elasticsearch] POST [status:N/A request:60.058s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 445, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/usr/local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 440, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/lib64/python3.6/http/client.py", lin...

4 years ago
0 Hi Everyone, I Am Just Wondering Whether The Bugs Regarding The Deletion Of Tasks Are Fixed In The Current Version? E.G. This Happening When You Want To Delete A Lot Of Tasks.

@SuccessfulKoala55 Only when I delete on self-hosted.
@LazyFish41 WebApp: 1.10.0-357 • Server: 1.10.0-357 • API: 2.24

This has been happening with every version of clearml-server ever. Most probably there should be a queue in front of ES, so it does not process too many requests at the same time?
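Something like this sketch is what I mean (illustrative Python only, not actual clearml-server code; the client object and the limit are made up):

import threading

MAX_CONCURRENT_ES_REQUESTS = 8  # made-up limit

_es_slots = threading.BoundedSemaphore(MAX_CONCURRENT_ES_REQUESTS)

def throttled_es_search(es_client, **query):
    # Excess callers block here and wait in line instead of all
    # hitting Elasticsearch at the same time.
    with _es_slots:
        return es_client.search(**query)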

2 years ago
0 Did Someone Here Already Try The

🙂 Tell me when you find a good way!

4 years ago
0 I Am Trying Pytorch Nightly Again With Python 3.10. Works Fine Locally, But Fails On Clearml-Agent In Docker Mode.

Thanks for researching this issue. If you have time, you can create the issue since you are way more knowledgeable, but I can also open it if you do not have time 🙂

2 years ago
0 Hi Everyone, Quick Question: When Clearml-Agent Sets Up The Virtual Environment With Pip, Is Finding The Correct Cuda Version For Pytorch Something That Pip Or That Clearml Does?

Hi CostlyOstrich36, thank you for answering so quickly. I think that's not how it works, because if that were true, one would always have to match the local machine to the servers. Afaik clearml finds the correct PyTorch version, but I was not sure how (custom logic vs. pip doing it).

3 years ago
0 Hi Everyone, Is It Possible To Show The Upload Progress Of Artifacts? E.G. I Use

So my network seems to be fine. Downloading artifacts from the server to the agents is around 100 MB/s, while uploading from the agent to the server is slow.

4 years ago
0 Hi Everyone, Quick Question: When Clearml-Agent Sets Up The Virtual Environment With Pip, Is Finding The Correct Cuda Version For Pytorch Something That Pip Or That Clearml Does?

I am wondering because, when used in docker mode, the docker container may have a CUDA version that is different from the host version. However, ClearML seems to use the host version instead of the docker container's version, which is sometimes a problem.

3 years ago
0 Hi Everyone, Quick Question: When Clearml-Agent Sets Up The Virtual Environment With Pip, Is Finding The Correct Cuda Version For Pytorch Something That Pip Or That Clearml Does?

I used the wrong docker container. The docker container I used had version 11.4. Interestingly, the override from clearml.conf and the CUDA_VERSION env variable did not work there.

With the correct docker container everything works fine. Shame on me.

3 years ago
0 Hi, Although

Ok. I just wanted to make sure I have configured my agent properly, and that I indeed have to set it on all agents.

4 years ago
0 Is There A Reason Why All Clearml.Task Methods Regarding Requirements (E.G. Pip Requirements) Are Class Methods? Are Requirements Not Stored In A Task?

Mhhm, then maybe it is not clear 😂 to me how clearml.Task is meant to be used. I thought of it as a container for all the information regarding a single experiment, reflected on the server side and thereby in the WebUI. Now I init() a Task and it shows up in the WebUI. I thought that after initialization I could still update the task to my liking, i.e. that it serves as documentation of my experiment.
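Something like this is the mental model I had (just a sketch; Task.init, set_parameters, upload_artifact and report_scalar are standard clearml calls, the values are made up):

from clearml import Task

# Creating the task makes it show up in the WebUI...
task = Task.init(project_name="demo", task_name="my-experiment")

# ...and I assumed it stays updatable afterwards:
task.set_parameters({"lr": 0.001, "batch_size": 32})
task.upload_artifact("config", artifact_object={"seed": 42})
task.get_logger().report_scalar("loss", "train", value=0.5, iteration=0)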

4 years ago
0 Hi Everyone, Quick Question: When Clearml-Agent Sets Up The Virtual Environment With Pip, Is Finding The Correct Cuda Version For Pytorch Something That Pip Or That Clearml Does?

I have to correct myself, I do not even have CUDA installed. Only the driver and everything CUDA-related is provided by the docker container. This works with a container that has CUDA 11.4, but now I have one with 11.6 (latest nvidia pytorch docker).

However, even after changing the clearml.conf and overriding with CUDA_VERSION, the clearml-agent prints agent.cuda_version = 114 in the docker container! (Other changes to the clearml.conf on the agent are reflected in the docker, so only...

3 years ago
0 Hello! Since Today I Get

The problem is that clearml installs cudatoolkit=11.0 but cudatoolkit=11.1 is needed. By setting agent.cuda_version=11.1 in clearml.conf, it uses the correct version and installs fine. With version 11.0, conda resolves the conflicts by installing the CPU version of pytorch.
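I.e. in clearml.conf (same key=value style the agent prints in its configuration dump; the exact HOCON nesting may differ):

agent.cuda_version = 11.1  # force the CUDA toolkit version the agent resolves against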

4 years ago
0 I Have A Self-Hosted Clearml-Server And A Clearml-Agent Started With

clearml==0.17.4
task dca2e3ded7fc4c28b342f912395ab9bc pulled from a238067927d04283842bc14cbdebdd86 by worker redacted-desktop:0
Running task 'dca2e3ded7fc4c28b342f912395ab9bc'
Storing stdout and stderr log to '/tmp/.clearml_agent_out.vjg4k7cj.txt', '/tmp/.clearml_agent_out.vjg4k7cj.txt'
Current configuration (clearml_agent v0.17.1, location: /tmp/.clearml_agent.us8pq3jj.cfg):

agent.worker_id = redacted-desktop:0
agent.worker_name = redacted-desktop
agent.force_git_ssh...

4 years ago
0 Hello! Since Today I Get
Thu Mar 11 17:52:45 2021       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.56       Driver Version: 460.56       CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |   ...
4 years ago
0 Hello! Since Today I Get
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: linux-64
_libgcc_mutex=0.1=conda_forge
_openmp_mutex=4.5=1_llvm
absl-py=0.12.0=pypi_0
aiostream=0.4.2=pypi_0
attrs=20.3.0=pypi_0
blas=1.0=mkl
bzip2=1.0.8=h7b6447c_0
ca-certificates=2020.10.14=0
cached-property=1.5.2=pypi_0
cachetools=4.2.1=pypi_0
certifi=2020.6.20=py37_0
chardet=4.0.0=pypi_0
clearml=0.17.4=pypi_0
cloudpickle=1.6.0=py_0
cudatoolkit=11.1.1=h6406543_8
cycler...
4 years ago
0 Hi Everyone, Quick Question Regarding Minio And Logging:

So I suppose there is a bug in ClearML.

3 years ago