AgitatedDove14
Moderator
48 Questions, 8049 Answers
Active since 10 January 2023
Last activity 5 months ago

Reputation: 0
Badges: 25 × Eureka!
0 Hello, I Am Getting `ValueError: Could Not Get Access Credentials For '

Yes, hopefully they have a different exception type so we could differentiate ... :) I'll check

3 years ago
0 Getting This Error At

Could it be the polling on the Task (I can't remember what the interval is)? It will update its state once every X minutes/seconds.

3 years ago
0 Hi, Can I Run An

RoundMosquito25 actually you can πŸ™‚
# check the state every minute
while an_optimizer.wait(timeout=1.0):
    running_tasks = an_optimizer.get_active_experiments()
    for task in running_tasks:
        task.get_last_scalar_metrics()  # do something here

Baseline reference:
https://github.com/allegroai/clearml/blob/f5700728837188d7d6005726c581c9d74fd91164/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py#L127

one year ago
0 I Am Not Familiar With Pytorch, But Is It Expected That So Many “Models” Are Created? These Are Being Repeated As Well For A Single Task (This Is Training A T5_Model With Transformers):

these are being repeated as well for a single task (this is training a t5_model with transformers):

Seems like something is storing lots of files with torch.save, which ClearML automatically logs.
You can disable the autologging:
task = Task.init(..., auto_connect_frameworks={'pytorch': False})
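
A minimal sketch of the above; the project and task names are placeholders, not from the original thread:

from clearml import Task

# disable automatic logging of PyTorch checkpoints (torch.save / torch.load)
# while leaving the other framework integrations enabled
task = Task.init(
    project_name='examples',      # placeholder
    task_name='t5 training',      # placeholder
    auto_connect_frameworks={'pytorch': False},
)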

3 years ago
0 Hi, Is It Possible To Specify Per Experiment (Task In Clearml) Where The Results (Artifacts) Are Saved?

It is the folder that ClearML creates and the folder we create ourselves to store the predictions

I see... If that is the case, the only solution I can think of is manually uploading the files with StorageManager(...), then getting the URL and registering it as debug media or an artifact:
logger.report_media("image", "type a", iteration=iteration, url=" ")
task.upload_artifact('a link', artifact_object=' ')
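
A minimal sketch of that flow; the local file and bucket path are hypothetical:

from clearml import Task, StorageManager

# upload the file ourselves and get back its remote URL
url = StorageManager.upload_file(
    local_file='predictions/preds.csv',                 # hypothetical local file
    remote_url='gs://my-bucket/predictions/preds.csv',  # hypothetical destination
)

# register the URL on the task, as an artifact and/or as debug media
task = Task.current_task()
task.upload_artifact('predictions', artifact_object=url)
task.get_logger().report_media('predictions', 'csv', iteration=0, url=url)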

3 years ago
0 Hi Everyone, I Have A Question About Using

New RC hopefully solves it. @<1643060801088524288:profile|HarebrainedOstrich43>, could you check if it works for you now?

pip install clearml==1.14.0rc0
8 months ago
0 Hello! Since Today I Get

Wtf? Can you try with = (notice: a single =, not a double ==)?

channels:
- defaults
- conda-forge
- pytorch
dependencies:
- cudatoolkit=11.1.1
- pytorch=1.8.0
3 years ago
0 Hi Team, Me Again! Im Curious If Someone Can Explain To Me Better How Task And Optimisers Integrate With Each Other. In The Example Hyperparameter Optimisation, There Is Both A Task Initialised With

I see what you mean.
an_optimizer = HyperParameterOptimizer(
    base_task_id='39d2c27baa8145929b2e21f686a17046',
    hyper_parameters=[],
    objective_metric_title='epoch_accuracy',
    objective_metric_series='epoch_accuracy',
    objective_metric_sign='max',
    optimizer_class=aSearchStrategy,
    max_iteration_per_job=0,
    total_max_jobs=0,
    auto_connect_task=False,
)
print(an_optimizer.get_top_experiments(top_k=5))

3 years ago
0 When Trying To Run The Server From The Docker Image (`docker-compose -f /opt/clearml/docker-compose.yml up -d` As Instructed In

@<1523722618576834560:profile|ShaggyElk85> nice!
I think that in theory you can run the arm64 images for the DBs, no?

one year ago
0 Avoiding

Hi RoughTiger69

How about using the pipeline decorator as a way to run this logic?
https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py
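
If it helps, a minimal sketch of the decorator approach, following that example; the step and pipeline names here are placeholders:

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['data'])
def load_data():
    # placeholder step logic; each component runs as its own task
    return [1, 2, 3]

@PipelineDecorator.pipeline(name='my pipeline', project='examples', version='0.1')  # placeholders
def my_pipeline():
    data = load_data()
    print(data)

if __name__ == '__main__':
    PipelineDecorator.run_locally()  # debug locally; remove to run on agents
    my_pipeline()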

I think I'm missing the context of where the code is executed....

btw: you can now set the configuration_objects directly when calling add_step πŸ™‚
https://clearml.slack.com/archives/CTK20V944/p1633355990256600?thread_ts=1633344527.224300&cid=CTK20V944

2 years ago
0 Hello Channel, Two Other Related Questions:

I cannot modify an autoscaler currently running

Yes this is a known limitation, and I know they are working on fixing it for the next version

We basically have flask commands allowing to trigger specific behaviors. ...

Oh I see now, I suspect the issue is that the flask command is not executed from within the git project?!

one year ago
0 Hi, I Wanted To Try Model Versioning, Suppose That I'Ve A Model And Want To Have Multiple Versions Of The Same Model And To Be Able To Have Inference On These Models(For Example

Hi @<1671689437261598720:profile|FranticWhale40>
Are you positive the Triton container finished syncing?
Could you provide the docker logs (both the serving and the triton containers)?
Which clearml-serving version are you using?
Could you add a print in the "preprocess" function, just to validate you are reaching the correct model version?
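
For reference, a minimal sketch of such a debug print inside the clearml-serving preprocess class; the structure follows the clearml-serving examples, and the details here are assumptions:

from typing import Any

class Preprocess(object):
    # clearml-serving instantiates this class per model endpoint

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # temporary debug print to validate which endpoint/version handles the request
        print('preprocess called, body:', body)
        return body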

7 months ago
0 Hello, I Have A Problem With Task.Set_Initial_Iteration(0) In Google Colab. After Continuing The Experiment, Gaps Appear On My Graph, But If You Use Colab. I Tried It On My Computer And Everything Is Normal There.

And it works correctly when running on my computer, and if I use colab, then for some reason it has no effect.

I think I'm lost on this one; when running in Colab, is this continuing a previous experiment?
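
For context, a minimal sketch of continuing a previous experiment and resetting the iteration offset; the names and the continue flag are assumptions about the asker's setup:

from clearml import Task

# continue logging into the previous task instead of creating a new one
task = Task.init(
    project_name='examples',    # placeholder
    task_name='colab run',      # placeholder
    continue_last_task=True,
)
# report new scalars starting from iteration 0 instead of after the last reported one
task.set_initial_iteration(0)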

2 years ago
0 Is It Possible To Add A Callback For A Pipeline From A Step?

Think multiple hyper-parameter sections that we need to reference
(under the Task's Configuration tab, the hyperparameters can have multiple sections)
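
A minimal sketch of creating multiple hyper-parameter sections; the section and parameter names are hypothetical:

from clearml import Task

task = Task.init(project_name='examples', task_name='multi section')  # placeholders

# each connect() call with a different name creates its own section
# under the task's Configuration > Hyper Parameters tab
task.connect({'lr': 0.001, 'batch_size': 32}, name='training')
task.connect({'threshold': 0.5}, name='inference')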

3 years ago
0 Hi, I Want To Install A Local Package Using Our Package Index, But I'M Struggling With Trying To Make Pip Trust This Host. If I Want To Install It In My Venv I Can Just Pass The

are you referring to extra_docker_shell_script

Correct

the thing is that this runs before you create the virtual environment, so then in the new environment those settings are no longer there

Actually that is better, because this is what we need in order to set up pip before it is used. So instead of passing --trusted-host, just do:
extra_docker_shell_script: ["echo '[global] \n trusted-host = pypi.python.org pypi.org files.pythonhosted.org YOUR_S...
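
For illustration, a complete version of that idea might look like the following in clearml.conf; the internal host name and pip config path are assumptions, not from the original message:

agent {
    # write a pip.conf before any virtual environment is created,
    # so every subsequent pip invocation trusts these hosts
    extra_docker_shell_script: [
        "mkdir -p ~/.pip",
        "echo '[global]' > ~/.pip/pip.conf",
        "echo 'trusted-host = pypi.python.org pypi.org files.pythonhosted.org my-index.example.com' >> ~/.pip/pip.conf"
    ]
}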

3 years ago
0 Hi. Question About Dataset Upload Errors: When Uploading A

setting max_workers to 1 prevents the error (but, I assume, it may come at the cost of slower sequential uploads).

This seems like a question for GS (Google Storage); maybe we should open an issue there, as their backend does the rate limiting.

My main concern now is that this may happen within a pipeline leading to unreliable data handling.

I'm assuming the pipeline code will have max_workers, but maybe we could have a configuration value so that we can set it across all workers, wdyt?
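
If it helps, a minimal sketch of that workaround (recent clearml versions expose max_workers on upload); the dataset name and local folder are hypothetical:

from clearml import Dataset

dataset = Dataset.create(dataset_name='my dataset', dataset_project='examples')  # placeholders
dataset.add_files('data/')  # hypothetical local folder

# a single worker sidesteps the GCS rate-limit errors,
# at the cost of sequential (slower) uploads
dataset.upload(max_workers=1)
dataset.finalize()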

If
...

one year ago