AgitatedDove14
Moderator
49 Questions, 8124 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Votes 1 Answers 2K Views
This is usually due to enterprise-level issued HTTPS certificates that are not part of the local installation (basically any Python-generated SSL request will fail)
5 years ago
0 Votes 0 Answers 2K Views
We are at AAAI NY, come look us up :)
5 years ago
0 Votes 0 Answers 2K Views
Is your server using https ?!
5 years ago
0 Votes 0 Answers 2K Views
Gals, Guys & 🤖 If you want to get some inspiration on building DL Continuous Integration pipelines, I suggest this post (obviously built on top of...
5 years ago
0 Votes 2 Answers 1K Views
OMG Look who just joined the PyTorch EcoSystem! Yes! it is TRAINS 🚆 🎉 🎈
5 years ago
0 Votes 0 Answers 2K Views
New RC for trains-agent is out: pip install trains-agent==0.13.2rc1
5 years ago
0 Votes 4 Answers 727 Views
Happy new year everyone! 🥂 🎆 Last minute 🎁 v2.0 is now out, with a new UI design! now finally supporting light & dark mode 🤩 Lots more to come this year...
8 months ago
0 Votes 0 Answers 2K Views
apparently everyone can ...
5 years ago
0 Votes 0 Answers 2K Views
YEY!!!! Download as CSV 🤯
3 years ago
0 Votes 10 Answers 1K Views
Happy Friday everyone ! We have a new repo release we would love to get your feedback on 🚀 🎉 Finally easy FRACTIONAL GPU on any NVIDIA GPU 🎊 Run our nvidi...
one year ago
0 Votes 7 Answers 1K Views
Thank you all for taking the time to answer our survey (If you haven't already, we urge you to do so ). Your feedback has a major impact on what we build, do...
5 years ago
0 Votes 0 Answers 2K Views
https://allegro.ai/docs
5 years ago
0 Votes 2 Answers 2K Views
Hi ClearML v0.17.1 and ClearML-Agent v0.17.0 are now the official packages & repositories 🎉 🎊 👋 🛤️ This new name brings on many changes, mainly replace a...
4 years ago
0 Votes 0 Answers 2K Views
docs are up
5 years ago
0 Votes 3 Answers 1K Views
we recently released a new version of clearml-session with Persistent Workspace support! 🚀 🎉 Finally you can develop on remote machines with workspace fold...
one year ago
0 When Running Jobs, My Pipeline Controller Always Updates To The Latest Git Commit Id But Sometimes My Pipeline Steps Do Not. This Appears To Be Somewhat Random So I Believe It Is Due To Caching. Has Anyone Else Encountered This Or Have Any Idea How To Fix

AdventurousRabbit79 are you passing cache_executed_step=False to the PipelineController ?
https://github.com/allegroai/clearml/blob/332ceab3eadef4997e897d171957975a247a6dc1/clearml/automation/controller.py#L129
Could you send a usage example ?

my pipeline controller always updates to the latest git commit id

This will only happen if the Task the pipeline creates has no specific commit ID, and instead just uses the latest from the git repo. Is this the case ?
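For reference, a minimal sketch of passing the caching flag per step (assuming a recent clearml version; project and task names are placeholders):

from clearml.automation.controller import PipelineController

# placeholder project / task names - adjust to your own setup
pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")

# cache_executed_step=False forces the step to be re-launched instead of
# reusing a previously cached execution, so it picks up the latest commit
pipe.add_step(
    name="step_one",
    base_task_project="examples",
    base_task_name="step one task",
    cache_executed_step=False,
)

pipe.start()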

4 years ago
4 years ago
0 Autoscaler Parallelization Issue: I Have An Aws Autoscaler Set Up With A Resource That Has A Max Of 3 Instances Assigned To The

How did you define the decorator of "train_image_classifier_component" ?
Did you define:
@PipelineDecorator.component(return_values=['run_model_path', 'run_tb_path'], ...)
Notice: two return values
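A minimal sketch of such a component (function body, names and paths are placeholders):

from clearml.automation.controller import PipelineDecorator

# placeholder component: two return values declared, two values returned
@PipelineDecorator.component(return_values=['run_model_path', 'run_tb_path'])
def train_image_classifier_component(dataset_id):
    run_model_path = "/tmp/models/output"   # placeholder path
    run_tb_path = "/tmp/tb/logs"            # placeholder path
    return run_model_path, run_tb_path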

3 years ago
0 I Don't Quite Understand The Way

I understand that it uses time in seconds when there is no report being logged..but, it has already logged three times..

Hmm could it be the reporting started 3 min after the Task started ?

4 years ago
0 Hi There! I'm Trying To Understand How The

I'm thinking of a few plots in my current in-house tooling which are slightly different than the standard charts we look at. For example a custom parallel coordinate chart that can use aggregations, categorical variables, etc.

This can be done by comparing experiments, then checking the Hyper-Parameters tab and selecting graph from the drop-down at the top

So my question in general is pertaining to if I would need to get better at Javascript if I were to make those changes. My guess is ...

5 years ago
0 Hi All! I Tried To Run The

Sure, is it reproducible ?

4 years ago
0 Hi, I Am Using

Hi @<1695969549783928832:profile|ObedientTurkey46>
Use --services-mode in the agent, it will run many Tasks on the same machine; this is usually associated with the services queue, but it can be run on any queue. This way you could have the same machine easily running those multiple "control" tasks.
wdyt?
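As an example (queue name is just a placeholder), launching an agent in services mode could look like:
clearml-agent daemon --queue services --services-mode --docker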

one year ago
2 years ago
0 How To Use

This is odd, Can you send the full Task log? (remove any pass/user/repo that you think is sensitive)

3 years ago
0 Anyone Doing Sagemaker With Clearml - Something Like The K8S Glue But The Tasks Are Pulled Into Sagemaker Training Jobs

I think my main point is, k8s glue on AKS or GKE basically takes care of spinning up new nodes, as the k8s service does that. The AWS autoscaler is kind of a replacement, make sense?

4 years ago
0 Hi, In One Of My Agents With Cuda Version: 11.1 (From Nvidia-Smi), Clearml Agent 0.17.1 Detects Version 100 (I Can See From Experiments Logs:

JitteryCoyote63 the agent.cuda_version (or the CUDA_VERSION env var) tells the agent which pytorch wheel to download. The CUDNN library can be included inside any wheel and will work as long as cuda / cudart exist on the system; for example, pytorch wheels include the cudnn they use. agent.cudnn_version should actually be deprecated, and is not actually used.

For future reference, dependency order:
Nvidia Drivers, CUDA library and CUDA-runtime libraries (libcuda.so / libcudart.so), CUDN...
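A minimal clearml.conf sketch for pinning the wheel selection (values are examples only; as noted, cudnn_version is effectively ignored):

agent {
    # CUDA 11.1 -> "111"; tells the agent which pytorch wheel flavour to download
    cuda_version: 111
    # effectively deprecated / unused
    cudnn_version: 0
}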

4 years ago
0 Another Issue Is The Agent Uses Python 2 For Some Reason Even Though Locally I'm Using Python 3 And The Agent Is Supposed To Use A Python 3 Venv.

Can you send the full log? This is odd, it will by default use the python executable it (the agent) is running with.
Regardless you can specify the python executable to be used here:
https://github.com/allegroai/clearml-agent/blob/bd411a19843fbb1e063b131e830a4515233bdf04/docs/clearml.conf#L44
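For example, a minimal clearml.conf sketch (the interpreter path is a placeholder):

agent {
    # force the python interpreter used to create the venv
    python_binary: "/usr/bin/python3.8"
}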

4 years ago
0 If I Have A Task And A Dataset Is Being Created In A Task, How Can I Get A “Link” That This Dataset Is Created In This Task, Similar To How Model Has The Task Where It Came From

Seems like passing the Task object is not working as expected (I'll make sure it is fixed).
Try:
dataset._task.set_parent(Task.current_task().id)
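In context, a minimal sketch of the workaround (project and dataset names are placeholders):

from clearml import Task, Dataset

task = Task.init(project_name="examples", task_name="create dataset")            # placeholder names
dataset = Dataset.create(dataset_name="my dataset", dataset_project="examples")  # placeholder names
# workaround: link the dataset's backing task to the task that created it
dataset._task.set_parent(Task.current_task().id)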

4 years ago
0 Hello, Is There A Way To Update A Task Diff Programmatically? Eg, I'm Creating A Task Using

it seems it's following the path of the script i'm using to task.create, eg:

The folder it should run in is the script path you are passing (i.e. "script=ep_fn," )
Wrong path would imply that it is not finding the correct repository, is that the case ?
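For illustration, a sketch of Task.create with an explicit repository and in-repo script path (all values are placeholders):

from clearml import Task

task = Task.create(
    project_name="examples",
    task_name="created task",
    repo="https://github.com/your-org/your-repo.git",   # placeholder repo
    branch="main",
    script="path/inside/repo/train.py",                 # path relative to the repo root
)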

3 years ago
0 Any Info On The Lifecycle Of Datasets Downloaded To $Home/.Clearml/Cache/Storage_Manager/Datasets Via Get_Local_Copy I Have A Task Running And I Was Watching The Above Path And Datasets Were Being Downloaded And Then They Are All Removed And For A Partic

Hmm, Notice that it does store sym links to parent data versions (to save on multiple copies of the same file). If you call get_mutable_local_copy() you will get a standalone copy
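A minimal sketch (dataset name and target folder are placeholders):

from clearml import Dataset

ds = Dataset.get(dataset_project="examples", dataset_name="my dataset")
# unlike get_local_copy(), this writes a standalone copy (no symlinks into the cache)
local_path = ds.get_mutable_local_copy(target_folder="/data/my_dataset_copy")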

4 years ago
0 Please Tell Me, Is The Limit Of 10 Copies For Comparison, Is It Ideological Or Can It Be Changed Somehow?

Hi Guys,
I hear you guys, and I know this is planned, but it probably got bumped down in priority.
I know the main issue is the "Execution Tab" comparison, the rest is not an issue.
Maybe a quick hack to only compare the first 10 in the Execution, and remove the limit on the others? (The main issue with the execution is the git-diff / installed packages comparison that is quite taxing on the FE)
Thoughts ?

2 years ago
0 I Have A Second Question As Well, Is It Possible To Disable Any Parts Of The Automagical Logging? In My Project I Use Both Config And Argparse. It Works By Giving Path To A Config File As A Console Argument And Then Allow The User To Adjust Values With Mo

Hi UnsightlyShark53 I think you are absolutely right, there is no reason for the trains.errors.UsageError: ArgumentParser.parse_args() ... Error.
As you mentioned, if auto_connect_arg_parser=False is passed, it should just ignore what it picked automatically.
I will make sure the error is resolved. I will also make sure you will still be able to connect the argparse manually with task.connect(parser) after the Task has been created. Thanks for the reference! I took a look o...
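A rough sketch of the manual flow described above (project and argument names are placeholders):

from argparse import ArgumentParser
from clearml import Task

parser = ArgumentParser()
parser.add_argument("--config", type=str, default="config.yaml")   # placeholder argument

# disable the automatic argparse hook, then connect the parser explicitly
task = Task.init(project_name="examples", task_name="manual argparse",
                 auto_connect_arg_parser=False)
args = parser.parse_args()
task.connect(parser)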

5 years ago
0 Regarding The “Classic” Datasets (Not Hyper Datasets): Is There An Option To Do Something Equivalent To Dvc’S “

Hi RoughTiger69

but still get the semantics of knowing when an (external) file changed?

How would you know it changed?
This implies you have a way to verify hash, which means you download the data , no?

3 years ago
0 Hello! I'M Trying To Test The (Unpublished) Feature That Should Help Me To Deal With Running Cloned Pipelines From Different Commits/Branches. I Found This Commit:

That works AND the feature works!

YEY

Quick follow up question, is there any way to abort a pipeline and all of the tasks it ran?

Hmm yes, currently if you abort the pipeline it has no "time" to abort the running Tasks (the DAG itself will stop, because the pipeline controller was aborted, but the running Tasks will continue).
In order to have better support, we need to add a previously requested feature for "abort" callback. This is actually not as straight forward as it sound...

4 years ago
0 Hi, When Using

# somehow set docker_args and docker_bash_setup_script equivalent??
task.set_base_docker(...)
# somehow setup repo and branch to download to remote instance before running
This is automatically detected based on your local commit/branch as well as uncommitted changes
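A hedged sketch of what that could look like (image, arguments and setup script are placeholders; assuming a clearml version where set_base_docker accepts these keyword arguments):

from clearml import Task

task = Task.init(project_name="examples", task_name="remote run")   # placeholder names
# rough equivalent of docker_args / docker_bash_setup_script
task.set_base_docker(
    docker_image="nvidia/cuda:11.1.1-runtime-ubuntu20.04",                   # placeholder image
    docker_arguments="--ipc=host -e MY_ENV=1",                               # docker_args equivalent
    docker_setup_bash_script=["apt-get update", "apt-get install -y git"],   # bash setup equivalent
)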

3 years ago
0 Any Plans For Log Space For Hyperparameter Support (Log Argument)? This Is Supported Config Space

Hi RobustRat47
What do you mean by "log space for hyperparameter", what would be the difference? (Notice that on the graph itself you can switch to log scale when viewing in the UI)
Or are you referring to the hyper parameter optimization, allowing you to add log space ?

4 years ago