Examples: query, "exact match", wildcard*, wild?ard, wild*rd
Fuzzy search: cake~ (finds cakes, bake)
Term boost: "red velvet"^4, chocolate^2
Field grouping: tags:(+work -"fun-stuff")
Escaping: Escape characters +-&|!(){}[]^"~*?:\ with \, e.g. \+
Range search: properties.timestamp:[1587729413488 TO *] (inclusive), properties.title:{A TO Z} (excluding A and Z)
Combinations: chocolate AND vanilla, chocolate OR vanilla, (chocolate OR vanilla) NOT "vanilla pudding"
Field search: properties.title:"The Title" AND text
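Combined example (field names taken from the examples above, values illustrative): properties.title:"Pipeline" AND tags:(+work -"fun-stuff") AND properties.timestamp:[1587729413488 TO *]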
AgitatedDove14
Moderator
49 Questions, 8126 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Votes 0 Answers 2K Views
Slack security ... Go figure πŸ˜‰
5 years ago
0 Votes 0 Answers 2K Views
We are at AAAI NY, come look us up :)
5 years ago
0 Votes 0 Answers 2K Views
4 years ago
0 Votes 0 Answers 2K Views
docs are up
5 years ago
0 Votes 0 Answers 2K Views
New video is out πŸ™‚ Cloud Autoscalers are awesome https://www.youtube.com/watch?v=j4XVMAaUt3E
3 years ago
0 Votes 10 Answers 2K Views
Happy Friday everyone ! We have a new repo release we would love to get your feedback on πŸš€ πŸŽ‰ Finally easy FRACTIONAL GPU on any NVIDIA GPU 🎊 Run our nvidi...
one year ago
0 Votes 0 Answers 2K Views
Hello Everyone!
5 years ago
0 Votes 2 Answers 2K Views
Hi
Hi ! trains 0.16.2 is finally out with the new pipelines interface! Check out the new example https://github.com/allegroai/trains/blob/master/examples/pipeli...
5 years ago
0 Votes 1 Answer 1K Views
LSTMeow is back! Bots/Gals/Guys feel free to πŸ‘ None
5 years ago
0 Votes 2 Answers 1K Views
OMG Look who just joined the PyTorch EcoSystem None Yes! it is TRAINS πŸš† πŸŽ‰ 🎈
5 years ago
0 Votes 0 Answers 2K Views
apparently everyone can ...
5 years ago
0 Votes 1 Answer 2K Views
Gals, Guys & :robot_face:, if you want to check out the Hyper-Parameters automation (using Bayesian Optimization Hyper-Band), we have an example on the demo s...
5 years ago
0 Votes 3 Answers 2K Views
we recently released a new version of clearml-session with Persistent Workspace support! πŸš€ πŸŽ‰ Finally you can develop on remote machines with workspace fold...
one year ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 9 Answers 2K Views
Hi
Hi https://github.com/allegroai/trains/releases/tag/0.15.1 / https://github.com/allegroai/trains-server/releases/tag/0.15.1 / https://github.com/allegroai/tr...
5 years ago
0 Votes 3 Answers 2K Views
@<1523703325881536512:profile|ConvolutedSealion94> these are xgboost internal metrics that are automatically picked by clearml
3 years ago
0 Votes 0 Answers 2K Views
Is your server using https?!
5 years ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 1 Answer 1K Views
πŸ™ There is no v1.0 release without a prompt v1.0.1 following it, and we are no different 😊 pip install clearml==1.0.1
4 years ago
0 Votes 0 Answers 2K Views
3 years ago
0 Votes 0 Answers 2K Views
https://allegro.ai/docs
5 years ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 0 Answers 2K Views
Finally
5 years ago
0 Votes 7 Answers 1K Views
Thank you all for taking the time to answer our survey (If you haven't already, we urge you to do so ). Your feedback has a major impact on what we build, do...
5 years ago
0 Votes 0 Answers 2K Views
Gals, Guys & :robot_face: If you want to get some inspiration on building DL Continuous Integration pipelines, I suggest this post (obviously built on top of...
5 years ago
0 Votes 1 Answer 2K Views
This is usually due to enterprise level issued https certificates not part of the local installation (basically any python generated SSL request will fail)
5 years ago
0 Votes 0 Answers 2K Views
Hi Guys/Gals, if you want to check out the latest RC we have 0.15.0rc0 out: pip install trains==0.15.0rc0 and pip install trains-agent==0.15.0rc0. Many of the impr...
5 years ago
0 Votes 0 Answers 2K Views
New RC for trains-agent is out: pip install trains-agent==0.13.2rc1
5 years ago
0 Votes 1 Answer 2K Views
Quick note: v1.3.1 caused PipelineDecorator Tasks to disable the automagic frameworks connection by default; this bug is solved in the latest RC: pip install ...
3 years ago
0 Hello! Does Anyone Know How To Do

try Hydra/trainer.params.batch_size
hydra separates nesting with "."
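A minimal sketch of how that maps to code, assuming a Hydra config with a trainer.params.batch_size entry (the conf/ layout and project/task names here are illustrative); the composed config is logged under the "Hydra" section, with nesting flattened by ".":

import hydra
from omegaconf import DictConfig
from clearml import Task

@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # Task.init hooks into Hydra, so the composed config shows up in the UI
    # as hyperparameters such as "Hydra/trainer.params.batch_size"
    Task.init(project_name="examples", task_name="hydra-batch-size")
    # Editing that UI value on a cloned task overrides what is read here
    print("batch_size:", cfg.trainer.params.batch_size)

if __name__ == "__main__":
    main()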

2 years ago
4 years ago
0 Quick Question.. Is Redis Used As Permanent Data Storage Or Just Cache? Would There Be Any Problems If It Is Restarted And Comes Up Clean?

Hi DisgustedDove53

Is redis used as permanent data storage or just cache?

Mostly cache (I think)

Would there be any problems if it is restarted and comes up clean?

Pretty sure it should be fine, why do you ask ?

4 years ago
0 Hey! I Would Like To Connect To Same Task From Multiple Consumer And Upload Debug Image. Is It Possibile? It Seems Like I Can Connect To The Task. Get The Logger But Nothing Is Uploaded.

Logger.current_logger() will return the logger for the "main" Task.
The "main" Task is the task of this process, a singleton for the process.
All other instances create a Task object. You can have multiple Task objects and log different things to them, but you can only have a single "main" Task (the one created with Task.init).
All the auto-magic stuff is logged automatically to the "main" task.
Make sense?
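A minimal sketch of the non-"main" case, i.e. pushing a debug image into an existing Task from another process (the task ID and series name are placeholders):

import numpy as np
from clearml import Task

# Fetch a handle to an already-created task by ID (this is NOT the process's
# singleton "main" task created by Task.init)
task = Task.get_task(task_id="<existing-task-id>")

# Report through that task's own logger instead of Logger.current_logger()
img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
task.get_logger().report_image(title="debug", series="consumer-1", iteration=0, image=img)
task.get_logger().flush()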

5 years ago
4 years ago
0 Hello. It'd Be Really Helpful If Someone Could Let Me Know Why I Keep Getting "MisconfigurationException('No supported gpu backend found!')" Error. I Am Using "task.execute_remotely(queue_name="default", exit_process=True)". Once It Gets Queued, I Clone I

Hi @<1715175986749771776:profile|FuzzySeaanemone21>

and then run "clearml-agent daemon --gpus 0 --queue gcp-l4" to start the worker.

I'm assuming the docker service cannot spin a container with GPU access, usually this means you are missing the nvidia docker runtime component

one year ago
0 Hello There! I Was Trying To Update The Url For Debug Samples After Migration Of The Server To A New Domain And Was Following The Steps From Here:

Hi @<1684010629741940736:profile|NonsensicalSparrow35>
So sorry I missed this thread πŸ™
Basically your issue is the load balancer that blocks the POST command. You can change that by adding the following line to any clearml.conf:

api.http.default_method: "put"
one year ago
0 Hi Guys, With The New Venv Caching Available In Clearml, I Have The Following Problem: I Force My Pip Requirements To Be:

Since my deps are listed in the dependencies of my setup.py, I don't want clearml to list the dependencies of the current environment

Make sense πŸ™‚
Okay let me check regarding the "." in the venv cache.

4 years ago
0 Does Clearml Have The Ability To Run A Single Experiment Across Multiple Nodes/Gpus In A K8 Cluster?

Actually this is supported by default for any multi-node training framework (torch DDP / OpenMPI etc.).
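As a sketch of a common pattern (not necessarily how ClearML wires it internally): in a torch DDP script launched with torchrun across nodes, let only global rank 0 create and report to the ClearML task so the whole job shows up as one experiment.

import torch.distributed as dist
from clearml import Task

def main():
    # torchrun / the cluster launcher sets RANK, WORLD_SIZE, MASTER_ADDR, etc.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()

    # Only global rank 0 owns the experiment
    task = Task.init(project_name="examples", task_name="ddp-multinode") if rank == 0 else None

    # ... training loop runs on every rank ...
    loss = 0.123  # placeholder value for illustration
    if task is not None:
        task.get_logger().report_scalar("loss", "train", value=loss, iteration=0)

    dist.destroy_process_group()

if __name__ == "__main__":
    main()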

3 years ago
0 Hi Guys. Say That We Train A Model With 10 Epochs, And Suddenly An Interruption Occurs On Epoch 5. How Can We Continue The Training By Using ClearML?

Hi @<1546665666675740672:profile|AttractiveFrog67>

  • Make sure you stored the model's checkpoint (either pass output_uri=True in Task.init or manually upload it)
  • When you call Task.init, pass continue_last_task=True
  • Now you can do last_checkpoint=task.models["output"][-1].get_local_copy() and all you need is to load last_checkpoint (see the sketch below)
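A minimal sketch of those steps together (project/task names are illustrative, and the actual checkpoint load is framework-specific):

from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="train-10-epochs",
    continue_last_task=True,   # re-open the interrupted run instead of starting a new one
    output_uri=True,           # make sure checkpoints are uploaded in the first place
)

# Most recent uploaded checkpoint, e.g. the one saved at epoch 5
last_checkpoint = task.models["output"][-1].get_local_copy()
# model.load_state_dict(torch.load(last_checkpoint))  # then continue training from there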
2 years ago
0 When I Pass Invalid Key To

it fails but with COMPLETED status

Which Task is marked "completed" the pipeline Task or the Step ?

4 years ago
0 Hi! Trying To Run The Following Very Basic Code. The First Few Parts Works As They Should:

2021-07-11 19:17:32,822 - clearml.Task - INFO - Waiting to finish uploads

I'm assuming a very large uncommitted changes πŸ™‚

4 years ago
0 Hi, I Am Creating Pipeline From Function With Dynamically Created Steps, Eg. If I Pass Pipeline Param Tune_Optime='Recall,Precision', My Pipeline Is Creating 2 Tasks/Steps - Each For Trained Model. Everything Is Working Really Nice, When I Start Pipeline

Ad1. Yes, I think this is kind of a bug. Using _task to get pipeline input values is a little bit ugly

Good point, let's fix it 🙂

A new pipeline is built from scratch (all steps etc.), but by clicking "NEW RUN" in the GUI it just reuses the existing pipeline. Is that correct?

Oh I think I understand what happens: the way the pipeline logic is built, the "DAG" is created the first time the code runs; then when you re-run the pipeline step it serializes the DAG from the Task/backend.
Th...

3 years ago
0 Hi Guys, Following Up On This

Hi JitteryCoyote63
The new pipeline is almost ready for release (0.16.2),
It actually contains this exact scenario support.
Check out the example, and let me know if it fits what you are looking for:
https://github.com/allegroai/trains/blob/master/examples/pipeline/pipeline_controller.py

5 years ago
0 Another Question, I Have Written A Code That Includes A Task Scheduler That Calls A Function. That Function Watches A Folder And If There Are Sufficient Images, It Creates And Publishes The Dataset, After Which It Clears The Folder. Problem, For Some Rea

VexedCat68

a Dataset is published, that activates a Dataset trigger. So if every day I publish one dataset, I activate a Dataset Trigger that day once it's published.

From this description it sounds like you created a trigger cycle, am I missing something ?
Basically you can break the cycle by saying, trigger only on New Dataset with a specific Tag (or create the auto dataset in a different project/sub-project).
This will stop your automatic dataset creation from triggering the "orig...
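A sketch of the tag-filtered approach using clearml.automation.TriggerScheduler (argument names should be double-checked against the docs; the project name, tag, and callback body are illustrative, and the callback is assumed to receive the triggering dataset's task ID):

from clearml import Dataset
from clearml.automation import TriggerScheduler

def on_new_raw_dataset(task_id):
    # Build and publish the derived/auto dataset here. Do NOT tag it "raw-upload",
    # otherwise it would re-trigger this function and recreate the cycle.
    ds = Dataset.get(dataset_id=task_id)
    print("triggered by dataset:", ds.name)

scheduler = TriggerScheduler()
scheduler.add_dataset_trigger(
    schedule_function=on_new_raw_dataset,
    trigger_project="raw datasets",
    trigger_on_tags=["raw-upload"],  # only datasets carrying this tag fire the trigger
)
scheduler.start()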

3 years ago
0 I Use

Hi SteadyFox10

I'll use your version instead and put any comment if I find something.

Feel free to join the discussion πŸ™‚ https://github.com/pytorch/ignite/issues/892

Thanks for the output_uri, can I put it in the ~/trains.conf file?

Sure you can πŸ™‚
https://github.com/allegroai/trains/blob/master/docs/trains.conf#L152
You can add it in the trains-agent machine's conf file and/or on your development machine. Notice that once you run ...

5 years ago
0 After I Have Created A Task And Closed It In A Notebook, Any Activity Seems To Trigger Another Task. For Example:

I do it to get project name

you can still get it from the task object (even after closing it)

another place I was using it was to see if I am in a pipeline task

Yes that makes sense, this is one of the use cases (to get access to the Task that is currently running). The bug itself will only happen after closing the Task (it needs to clear the OS variable).
You can either upgrade to 1.0.6rc2 or you can hack/fix it with:
os.environ.pop('CLEARML_PROC_MASTER_ID', None)
os.envi...

4 years ago
0 Hi, I Was Trying Out The Steps On This (

Hi SubstantialElk6 ,
Are you still getting SSL errors ?

4 years ago
0 Hi, Is There A Simple Way To Make

GiganticTurtle0 BTW, this mock example worked out of the box (python 3.6 on Ubuntu):
from typing import Any, Dict, List, Tuple, Union

from clearml import Task
from dask.distributed import Client, LocalCluster

def start_dask_client(
    n_workers: int = None, threads_per_worker: int = None, memory_limit: str = "2Gb"
) -> Client:
    cluster = LocalCluster(
        n_workers=n_workers,
        threads_per_worker=threads_per_worker,
        memory_limit=memory_limit,
    )
    client = Cli...

4 years ago
0 Hi All, I Am Trying To Execute Somewhat Custom Hpo Scheme With Clearml. I Would Want That A Single Running Python Script Will Be Able To Sample The Optimizer, Init A Task And Report The Result Multiple Times. I Didn'T Find Anything Similar In The Docs Or

the unclear part is how do I sample another point in the optimization space from the optimizer

Just so I'm clear on the issue, you want multiple machines to access the internals of the optimizer class ? or Do you just want a way to understand what is the optimizer sampling space (i.e. the parameters and options per parameter) ?

4 years ago
0 Hi, Is There A Simple Way To Make

No worries πŸ™‚
GiganticTurtle0 I'm glad it was solved πŸ‘

4 years ago
4 years ago