MagnificentBear85
Moderator
7 Questions, 24 Answers
  Active since 08 June 2023
  Last activity 24 days ago

Reputation: 0
Badges: 1
22 × Eureka!
0 Votes
4 Answers
956 Views
Is there some example of how to develop an HPO in a pipeline setup where each hyperparameter setup is its own step again? Should we first mimic a base task ...
one year ago
0 Votes
2 Answers
173 Views
2 months ago
0 Votes
3 Answers
613 Views
Hi there, is there a way to save a model simply to the fileserver such that the MODEL URL will point there and not to a local disk (I am running in docker co...
7 months ago
0 Votes
5 Answers
168 Views
Hi guys, I have a (potentially very stupid) but important problem. I moved the server to a new machine and hooked up the fileshare that we use for storage. I...
one month ago
0 Votes
5 Answers
99 Views
Hi, potentially very silly and simple question, but I'm trying to run the cleanup_service.py in my services queue. However, it is not deleting any task but p...
one month ago
0 Votes
13 Answers
181 Views
Hi everyone, I am updating the self-hosted server to a public IP. However, all my datasets cannot be downloaded anymore. I followed instructions from here , ...
2 months ago
0 Votes
3 Answers
173 Views
My current training setup is a hyperparameter optimization using the TPESampler from Optuna. For configuration we use Hydra. There is a very nice plugin that...
one month ago
0 Is There Some Example Of How To Develop An HPO In A Pipeline Setup Where Each Hyperparameter Setup Is Its Own Step Again? Should We First Mimic A Base Task For Example?

Thanks for responding quickly. For this specific use case I need a regression sklearn model (trained in 10-fold CV) that I want to hyperoptimize using optuna. As my datasets are updated regularly, I'd like to define all of this in a pipeline such that I can easily run everything again once the data is changed.
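For reference, the 10-fold split itself can be sketched without sklearn or Optuna; `k_fold_indices` below is an illustrative helper (not part of any library), showing the index bookkeeping each hyperparameter trial would repeat:

```python
def k_fold_indices(n_samples, n_splits=10):
    """Split range(n_samples) into n_splits contiguous (train, test) index lists."""
    # Distribute the remainder so fold sizes differ by at most one sample.
    fold_sizes = [n_samples // n_splits + (1 if i < n_samples % n_splits else 0)
                  for i in range(n_splits)]
    folds, start = [], 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        folds.append((train, test))
        start += size
    return folds

# Each hyperparameter setup would then fit on `train` and score on `test`
# for every fold, and report the average score back to the optimizer.
```

Each pipeline step (one hyperparameter setup) would loop over these folds and average its validation score.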

one year ago
0 My Current Training Setup Is A Hyperparameter Optimization Using The TPESampler From Optuna. For Configuration We Use Hydra. There Is A Very Nice Plugin That Lets You Define The Hyperparameters In The Config Files Using The

Yeah, both of them. The HPO, though, requires everything to be defined in Python code. The Hydra config is parsed and stored nicely, but it isn't recognized as describing an HPO.

one month ago
0 Is There Some Example Of How To Develop An HPO In A Pipeline Setup Where Each Hyperparameter Setup Is Its Own Step Again? Should We First Mimic A Base Task For Example?

I'm now thinking I need a main process that first runs a base_template task so that everything gets initialized properly. In the same process, start the HPO, which will add subtasks to the queue. This main process (itself also a task) will then wait until all other tasks (i.e. the hyperparameter setups) have completed before wrapping up and reporting back.
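Assuming this controller pattern is the goal, a minimal stdlib sketch looks like the following; a thread pool stands in for the ClearML queue, and `run_trial` / `controller` are illustrative names, not ClearML APIs:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_trial(params):
    # Stand-in for one hyperparameter-setup subtask; returns its score.
    return sum(params.values())

def controller(trial_params):
    # Stand-in for the main task: after a one-off "base_template" init step,
    # fan out one subtask per hyperparameter setup, then block until all finish.
    results = {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(run_trial, p): name
                   for name, p in trial_params.items()}
        for fut in as_completed(futures):
            results[futures[fut]] = fut.result()
    # Every subtask has completed here: wrap up and report the best setup back.
    return max(results, key=results.get)

best = controller({"a": {"lr": 1, "depth": 2}, "b": {"lr": 3, "depth": 5}})
```

In a real deployment the submit/wait calls would be replaced by enqueueing ClearML tasks and polling their status, but the blocking-until-done structure is the same.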

one year ago
0 Hi Everyone, I Am Updating The Self-Hosted Server To A Public IP. However, All My Datasets Cannot Be Downloaded Anymore. I Followed Instructions From

Oh yeah, one more thing. The initial link you sent me contains the snippet that is written to file using cat, but for me it only works as a simple echo on a single line. If I copy from the website, it inserts weird end-of-line characters that mess it up (at least that's my hypothesis), so you might want to consider putting a warning on the website or updating the instructions to the one below

echo 'db.model.find({uri:{$regex:/^http:\/\/10\.0\.0\.12:8081/}}).forEach(function(e,i) { e.uri = e.uri.r...
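The substitution that snippet performs, rewriting stored URIs from the old fileserver address to the new one, can be sketched in Python for clarity; the old host is taken from this thread, the new-prefix placeholder and the function name are illustrative:

```python
import re

# Anchored at the start of the string, mirroring the ^-anchored $regex
# in the mongo snippet above.
OLD = re.compile(r"^http://10\.0\.0\.12:8081")

def rewrite_uri(uri, new_prefix="http://<new-public-ip>:8081"):
    # Replace the old fileserver address only when it is the URI prefix,
    # leaving any other occurrence of the old address untouched.
    return OLD.sub(new_prefix, uri)

rewrite_uri("http://10.0.0.12:8081/Esti/model.pkl")
# -> "http://<new-public-ip>:8081/Esti/model.pkl"
```

The anchoring matters: without `^`, a URI that merely mentions the old address somewhere in its path would also be rewritten.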
2 months ago
0 Hi Everyone, I Am Updating The Self-Hosted Server To A Public IP. However, All My Datasets Cannot Be Downloaded Anymore. I Followed Instructions From

Could it be that here, in "Failed getting object 10.0.0.12:8081/Esti/", the URL is missing the 'http' part? Do I also have to replace all those occurrences?

2 months ago
0 Hi, Potentially Very Silly And Simple Question, But I'm Trying To Run The

Hi @<1523701070390366208:profile|CostlyOstrich36> - I'm using WebApp: 1.16.2-502 • Server: 1.16.2-502 • API: 2.30.

29 days ago
0 Hi, Potentially Very Silly And Simple Question, But I'm Trying To Run The

You mean the one that created the task? I probably should have added to the problem description that I'm able to delete the task manually, also using the SDK.
I'll elaborate on the setup.
I'm deploying server in the recommended way with very minor changes. The relevant portions of the yamls:

agent-services:
  networks:
    - backend
  container_name: clearml-agent-services
  image: allegroai/clearml-agent-services:latest
  deploy:
    restart_policy:
      condition: on-failure
...
29 days ago
0 Hi Everyone, I Am Updating The Self-Hosted Server To A Public IP. However, All My Datasets Cannot Be Downloaded Anymore. I Followed Instructions From

Thanks for the quick and helpful answer @<1722061389024989184:profile|ResponsiveKoala38> ! It works, at least in the sense that I can see my artifacts are updated. However, my datasets are still at the wrong address. How do I update those as well?

2 months ago
0 Hi Everyone, I Am Updating The Self-Hosted Server To A Public IP. However, All My Datasets Cannot Be Downloaded Anymore. I Followed Instructions From

Awesome, thanks very much for this detailed reply! This indeed seems to have updated every URL.
One note: I had to pass the mongo host as --mongo-host None

2 months ago
0 Hi Everyone, I Am Updating The Self-Hosted Server To A Public IP. However, All My Datasets Cannot Be Downloaded Anymore. I Followed Instructions From

Of course, you can see it in the error message I already shared, but here is another one just in case.

.venv/bin/python -c "from clearml import Dataset; Dataset.get(dataset_project='Esti', dataset_name='bulk_density')"
2024-10-09 18:56:03,137 - clearml.storage - WARNING - Failed getting object size: ValueError('Failed getting object 10.0.0.12:8081/Esti/.datasets/bulk_density/bulk_density.f66a70c6cda440dd8fdaccb52d5e9055/artifacts/state/state.json (401): UNAUTHORIZED')
2024-10-09 ...
2 months ago
0 Hi Everyone, I Am Updating The Self-Hosted Server To A Public IP. However, All My Datasets Cannot Be Downloaded Anymore. I Followed Instructions From

Ok, even weirder now: the model paths seem updated to 172. but the CSVs I also have as artifacts are still at 10.
Any clues @<1722061389024989184:profile|ResponsiveKoala38> ?

2 months ago
0 Hi, Potentially Very Silly And Simple Question, But I'm Trying To Run The

Does this help in any way @<1523701087100473344:profile|SuccessfulKoala55> ? Should I provide something else instead?

25 days ago
0 Hi Everyone, I Am Updating The Self-Hosted Server To A Public IP. However, All My Datasets Cannot Be Downloaded Anymore. I Followed Instructions From

Ah okay, so this Python script is meant to replace all the other scripts? That makes sense then 🙂

2 months ago