CloudySwallow27
Moderator
9 Questions, 30 Answers
  Active since 10 January 2023
  Last activity 8 months ago

Reputation: 0
Badges (1): 26 × Eureka!
0 Votes 7 Answers 1K Views
Hi, I'm using the autoscaler and getting the error Process terminated by user even though I did not terminate anything. This error occurs randomly during tra...
2 years ago
0 Votes 2 Answers 990 Views
Hi, I am following the programmatic orchestration example here: https://clear.ml/docs/latest/docs/guides/automation/task_piping . My question is, when I setu...
2 years ago
0 Votes 15 Answers 1K Views
2 years ago
0 Votes 14 Answers 973 Views
Hi, trying to spin up a ClearML agent and getting this error: ERROR: Could not find a version that satisfies the requirement pywin32==303 (from -r /tmp/cach...
2 years ago
0 Votes 5 Answers 990 Views
Hi, when I use the autoscaler to start jobs, I noticed some of them randomly abort in the middle of the jobs and give the following error: Process failed, ex...
2 years ago
0 Votes 2 Answers 1K Views
Hello, I am using the autoscaler to start jobs. Previously, everything was working. However, now I get this error: Using cached repository in "/home/ubuntu/....
2 years ago
0 Votes 6 Answers 1K Views
Hi, I see that debug samples are taking up a huge amount of space. I want to limit the amount of debug images which are stored. I see there is an option for ...
2 years ago
0 Votes 5 Answers 635 Views
Hey! I am running a web-app on a ClearML agent (from a GCP queue) on its localhost ( None ). How can I view the app over the internet?
8 months ago
0 Votes 4 Answers 977 Views
Hi, I am successfully starting multiple tasks automatically, but they don't train to completion; they start training and then at some point they give me this e...
2 years ago
0 Hi, I See That Debug Samples Are Taking Up A Huge Amount Of Space. I Want To Limit The Amount Of Debug Images Which Are Stored. I See There Is An Option For That Here:

Huh, I really like how easy it is w/ the automated TB. Is there a way to still use the auto_connect but limit the amount of debug images?
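(For reference, a minimal sketch of one way this is often approached: keep the TensorBoard auto-logging via auto_connect_frameworks in Task.init and cap the stored samples through configuration. The project/task names are placeholders, and the sdk.metrics.file_history_size key mentioned in the comment is an assumption about clearml.conf, not something confirmed in this thread.)

from clearml import Task

# Keep automatic TensorBoard capture, but only for the frameworks actually needed.
task = Task.init(
    project_name="my_project",      # placeholder
    task_name="training_with_tb",   # placeholder
    auto_connect_frameworks={"tensorboard": True, "matplotlib": False},
)

# Assumed knob: in clearml.conf, sdk.metrics.file_history_size caps how many debug
# samples are kept per title/series, e.g. sdk { metrics { file_history_size: 5 } },
# so older images get overwritten instead of piling up.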

2 years ago
0 Hi, Trying To Spin Up A ClearML Agent And Getting This Error:

Y'all thought of everything. This fixed it! Having another issue now, but will post separately.

2 years ago
0 Hi, Trying To Spin Up A ClearML Agent And Getting This Error:

Yes, I think the problem is that it's trying to recreate the environment the task was spun up on - which was a Windows machine - on a Linux EC2 instance.

2 years ago
0 Hi, Trying To Spin Up A ClearML Agent And Getting This Error:

No, 64 bit. But do you mean the PC where I am spinning the task up, or the machine where I am running the task?

2 years ago
0 Hello, I Am Using The Autoscaler To Start Jobs. Previously, Everything Was Working. However, Now I Get This Error:

I'm thinking it may have something to do with:
Using cached repository in "/home/ubuntu/.clearml/vcs-cache/ai_dev.git.42a0e941ddbf5c69216f37ceac2eca6b/ai_dev.git"
We tried to reset the machines but the cache is still there. Any idea how to clear it?
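(Side note: a rough sketch of clearing that cache by hand; the path comes from the log line above, and the cleanup is plain Python rather than a ClearML API.)

import shutil
from pathlib import Path

# Remove the agent's cached clones so the next run clones fresh instead of reusing
# "Using cached repository in ~/.clearml/vcs-cache/..." (path taken from the log above).
vcs_cache = Path.home() / ".clearml" / "vcs-cache"
if vcs_cache.exists():
    shutil.rmtree(vcs_cache)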

2 years ago
0 Hi, When I Use The Autoscaler To Start Jobs, I Noticed Some Of Them Randomly Abort In The Middle Of The Jobs And Give The Following Error:

Got it. The instance logs showed:
/var/log/syslog.1:May 5 03:25:27 ip-172-31-37-234 kernel: [53387.840425] python invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
/var/log/syslog.1:May 5 03:25:27 ip-172-31-37-234 kernel: [53387.840442] oom_kill_process+0xe6/0x120
I assume this is something I have to fix on my end (or increase instance memory). Does ClearML also happen to have solutions for this?
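(As an aside, a small illustrative check, plain Python plus the psutil package and nothing ClearML-specific, that can be dropped into the training loop to watch host memory before the oom-killer fires.)

import psutil

# Log available host RAM; if this keeps shrinking toward zero during training, the
# oom-kill above is expected, and a larger instance (or smaller batches / fewer
# data-loader workers) is the usual fix.
mem = psutil.virtual_memory()
print(f"available RAM: {mem.available / 1e9:.2f} GB of {mem.total / 1e9:.2f} GB")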

2 years ago
0 Hi, I'm Using The Autoscaler And Getting The Error

TimelyPenguin76 not sure what you mean by "as a service or via the apps", but we are self-hosting it. Does that answer the question?

Also, not sure what you mean by which "clearml version". How do we check this? The clearml python package is 1.1.4. Is that what you wanted?

2 years ago
0 Hi

Awesome! Any way to hear the talk w/o registering for the whole conference?

2 years ago
0 Hi

Got it, registration is indeed free 🙂

2 years ago
0 Hi, Trying To Spin Up A ClearML Agent And Getting This Error:

Yes, on my Windows machine I am running:
cloned_task = Task.clone(source_task=base_task, name="Auto generated cloned task")
Task.enqueue(cloned_task.id, queue_name='test_queue')
I see the task successfully start in the ClearML server. In the installed packages section it includes pywin32 == 303 even though that is not in my requirements.txt.

In the results --> console section, I see the agent is running and trying to install all packages, but then stops at pywin32. Some lines from t...

2 years ago
0 Hi, I Am Successfully Starting Multiple Tasks Automatically, But They Don't Train To Completion; They Start Training And Then At Some Point They Give Me This Error:

Relatedly, I just noticed that the GPU is not starting. This was in the logs:
2022-04-07 20:59:54.464854: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
Do we need to request a specific instance w/ CUDA preinstalled, or does ClearML take care of it?
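(For what it's worth, a minimal sanity check using the same TensorFlow already in the log above, run at the start of the task, to confirm whether any GPU is visible at all.)

import tensorflow as tf

# An empty list here matches the CUDA_ERROR_NO_DEVICE message above: either the
# instance has no GPU, or the CUDA driver is not visible inside the environment.
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))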

2 years ago
0 Hi, When I Use The Autoscaler To Start Jobs, I Noticed Some Of Them Randomly Abort In The Middle Of The Jobs And Give The Following Error:

We are using self-hosted ClearML w/ the following versions:

Worker CLEARML-AGENT version 1.1.2
The autoscaler instance Clearml-AGENT version: 1.2.3
ClearML WebApp: 1.2.0-153 Server: 1.2.0-153 API: 2.16
python pip package 1.3.2

2 years ago
0 Hi, Trying To Spin Up A ClearML Agent And Getting This Error:

Is there a way to explicitly make it install only certain packages, or at least stick to the requirements.txt file rather than the actual environment?
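(For reference, a sketch of the approach usually suggested for this, assuming the installed clearml version exposes Task.force_requirements_env_freeze and Task.add_requirements; both calls are assumptions here and would need to run before Task.init on the machine that creates the task.)

from clearml import Task

# Assumed API: force the task to use requirements.txt instead of the packages
# detected in the local (Windows) environment.
Task.force_requirements_env_freeze(force=True, requirements_file="requirements.txt")

# Alternatively (also an assumed helper), pin individual packages explicitly:
# Task.add_requirements("tensorflow", "2.8.0")

task = Task.init(project_name="my_project", task_name="base_task")  # placeholder names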

2 years ago
0 Hello, How Can I Increase The Number Of Debug Samples To Be Recorded In One Training?

Thanks AgitatedDove14. Does this go in the local clearml.conf file w/ each user's credentials, or in the conf file for the server?

2 years ago
0 Hey! I Am Running A Web-App On A ClearML Agent (From A GCP Queue) On Its Localhost (

Is there a way to get the instance's external IP address from ClearML? I would've thought it would be in the info tab, but it's not.
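(Not a ClearML API, but a hedged sketch of asking the standard GCP metadata service for the external IP from inside the instance itself; the access-configs path is the conventional GCP location and should be double-checked.)

import urllib.request

# Query the GCP metadata server from inside the instance; this endpoint is an
# assumption based on the usual GCP layout, not something taken from ClearML.
url = ("http://metadata.google.internal/computeMetadata/v1/"
       "instance/network-interfaces/0/access-configs/0/external-ip")
req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})
print(urllib.request.urlopen(req, timeout=2).read().decode())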

8 months ago
0 Hi, I'm Using The Autoscaler And Getting The Error

We were able to find an error from the autoscaler agent:

Stuck spun instance dynamic_worker:clearml-agent-autoscale:p2.xlarge:i-015001a93e0910a09 of type clearml-agent-autoscale

2022-04-19 19:16:58,339 - clearml.auto_scaler - INFO - Spinning down stuck worker: 'dynamic_worker:clearml-agent-autoscale:p2.xlarge:i-015001a93e0910a09

2 years ago
0 Hello, How Can I Increase The Number Of Debug Samples To Be Recorded In One Training?

Also, this would change it globally. Is there a way to set it for specific jobs and metrics?

2 years ago