AgitatedDove14
Moderator
48 Questions, 8051 Answers
  Active since 10 January 2023
  Last activity 6 months ago

Reputation: 0

Badges: 25 × Eureka!
0 Votes 0 Answers 1K Views
YummyWhale40 awesome thanks!
4 years ago
0 Votes 6 Answers 436 Views
Hi
Hi 🤖, humans! We have the new documentation site up and running 🎉 None 🎊 This is still a work in progress, so we keep the previous version alive...
3 years ago
0 Votes 1 Answer 949 Views
Gals, Guys & 🤖, if you want to check out the Hyper-Parameters automation (using Bayesian Optimization Hyper-Band), we have an example on the demo s...
4 years ago
0 Votes 0 Answers 1K Views
New RC for trains-agent is out: pip install trains-agent==0.13.2rc1
4 years ago
0 Votes 0 Answers 990 Views
Gals, Guys & 🤖, if you want to get some inspiration on building DL Continuous Integration pipelines, I suggest this post (obviously built on top of...
4 years ago
0 Votes 0 Answers 1K Views
New releases: pip install trains==0.13.3 https://github.com/allegroai/trains/releases/tag/0.13.3 pip install trains-agent==0.13.2 https://github.com/allegroai/...
4 years ago
0 Votes 0 Answers 1K Views
We are at AAAI NY, come look us up :)
4 years ago
0 Votes 0 Answers 989 Views
2 years ago
0 Votes 0 Answers 894 Views
3 years ago
0 Votes 0 Answers 1K Views
YEY!!!! Download as CSV 🤯
2 years ago
0 Votes 0 Answers 1K Views
I would guess connectivity issues; the TLS error is probably an inaccurate Python response (I mean, in a way it is also a TLS error, but I would imagine this has more...
4 years ago
0 Votes 1 Answer 1K Views
This is usually due to enterprise-level issued HTTPS certificates that are not part of the local installation (basically any Python-generated SSL request will fail)
4 years ago
0 Votes 1 Answer 985 Views
Quick note: v1.3.1 caused PipelineDecorator Tasks to disable the automagic frameworks connection by default; this bug is solved in the latest RC: pip install ...
2 years ago
0 Votes 0 Answers 1K Views
Is it a one time thing? or recurring?
4 years ago
0 Votes 0 Answers 1K Views
Finally
4 years ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 0 Answers 1K Views
3 years ago
0 Votes 0 Answers 1K Views
Lol, I wonder what the adblock rule was ;)
4 years ago
0 Votes 0 Answers 1K Views
https://allegro.ai/docs
4 years ago
0 Votes 0 Answers 1K Views
New video is out 🙂 Cloud Autoscalers are awesome https://www.youtube.com/watch?v=j4XVMAaUt3E
2 years ago
0 Votes 0 Answers 1K Views
Hi Guys! I have great news, we finally fully implemented support for continuing previously trained models 🎉 Here is a quick example (this is torch, but any ...
4 years ago
0 Votes 0 Answers 1K Views
Hi Guys/Gals, if you want to check out the latest RC, we have 0.15.0rc0 out: pip install trains==0.15.0rc0 pip install trains-agent==0.15.0rc0 Many of the impr...
4 years ago
0 Votes 0 Answers 1K Views
Slack security ... Go figure 😉
4 years ago
0 Votes 0 Answers 1K Views
docs are up
4 years ago
0 Votes 10 Answers 497 Views
Happy Friday everyone! We have a new repo release we would love to get your feedback on 🚀 🎉 Finally easy FRACTIONAL GPU on any NVIDIA GPU 🎊 Run our nvidi...
8 months ago
0 Votes 0 Answers 1K Views
https://m.facebook.com/story.php?story_fbid=2484620658505570&id=1620822758218702&refid=52&tn=-R
4 years ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 1 Answer 506 Views
LSTMeow is back! Bots/Gals/Guys feel free to 👍 None
4 years ago
0 Votes 2 Answers 988 Views
Hi
Hi! trains 0.16.2 is finally out with the new pipelines interface! Check out the new example https://github.com/allegroai/trains/blob/master/examples/pipeli...
4 years ago
0 Votes 3 Answers 1K Views
This will close it: Task.current_task().close() I think we should rename completed() because it just marks the Task as completed on the backend but does not ac...
3 years ago
0 Just A Quick Question: How Can I Pull Off The Scaler Data Json From Server Without Downloading Them One By One?

is there a way that i can pull all scalars at once?

I guess you mean from multiple Tasks? (if so, then the answer is no, this is on a per-Task basis)

Or, can i get experiments list and pull the data?

Yes, you can use Task.get_tasks to get a list of task objects, then iterate over them. Would that work for you?
https://clear.ml/docs/latest/docs/references/sdk/task/#taskget_tasks
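
For illustration, a minimal sketch of that loop (the project name and task-name filter here are hypothetical, and it assumes a recent clearml SDK):

from clearml import Task

# hypothetical project name and task-name pattern
tasks = Task.get_tasks(project_name="MyProject", task_name="train")
all_scalars = {}
for t in tasks:
    # get_reported_scalars() returns the scalars the task reported, as a nested dict
    all_scalars[t.id] = t.get_reported_scalars()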

3 years ago
0 Hi! Is There Something Happening With The

Hmm I think this was the fix (only with TF2.4), let me check a sec

3 years ago
0 Hi Quick Question. If I Use Clearml-Data To Upload A Dataset To A Remote Folder Which Is Mounted At, Say, /Mnt/Something/Data, When I Use Dataset.Get_Local_Copy(), It Looks Like It Is Unzipping That Data Also In The Remote Folder And Thus Returning The A

Hi StormyOx60
Yes, by default it assumes any "file://" or local files are accessible (which makes sense, because if they are not, it will not be able to download them).

is there some way to force it to download the dataset to a specified location that is actually on my local machine?

You can specify that a certain folder is not "local"; in that case it will copy the zip locally and unzip it.
Is this what you are after?
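
For illustration, a minimal sketch of getting a dataset copy into a folder you choose (the dataset identifiers and target folder are hypothetical, and it assumes get_mutable_local_copy is available in your clearml version):

from clearml import Dataset

# hypothetical dataset identifiers
ds = Dataset.get(dataset_project="MyProject", dataset_name="my_dataset")
# get_local_copy() returns a (possibly cached) read-only path
cached_path = ds.get_local_copy()
# get_mutable_local_copy() copies and extracts the data into the folder you specify
local_path = ds.get_mutable_local_copy(target_folder="/home/me/data_copy")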

3 years ago
0 Hi I Saw This On The Clearml-Agent Docs But Other Than The Docker Image, I'M Not Sure How To Integrate This With Clearml Py And Clearml-Server. Please Advise.

Hi SubstantialElk6
Yes, this is the queue the glue will pull jobs from and push into the k8s. You can create a new queue from the UI (go to the Workers & Queues page, then the Queue tab, and press "create new").
Ignore it 🙂 this is if you are using config maps and need TCP routing to your pods.
As you noted, this is basically all the arguments you need to pass for (2). Ignore them for the time being.
This is the k8s overrides to use if launching the k8s job with kubectl (basically --override...

3 years ago
0 Does Clearml Have The Ability To Run A Single Experiment Across Multiple Nodes/Gpus In A K8 Cluster?

Actually this is the default for any multi-node training framework (torch DDP / OpenMPI, etc.).

one year ago
0 Base_Template_Keras_Simply.Py

DeliciousBluewhale87 could you send the full log of the Task?

3 years ago
0 Hi, I Would Like To Pass In Some Pip Arguments That Clearml-Agent Would Include When Setting Up The Venv On The Containers. How Should I Specify This? The Argument In Question Are --Trusted-Host And --Find-Links . I Need Them As I'Ve Installed A Pypi Repo

The --template-yaml allows you to use a full k8s YAML template (the overrides are just overrides, which do not include most of the configuration options; we should probably deprecate it).

3 years ago
0 It Is Possible To Attach To An

Hi GiganticTurtle0
Sure, OutputModel can be manually connected:
model = OutputModel(task=Task.current_task())
model.update_weights(weights_filename='localfile.pkl')
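
Expanded into a self-contained sketch (the project, task and file names are hypothetical):

from clearml import Task, OutputModel

# hypothetical project/task names
task = Task.init(project_name="examples", task_name="manual model registration")
# manually connect an output model to the task and upload its weights
model = OutputModel(task=task)
model.update_weights(weights_filename="localfile.pkl")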

3 years ago
0 What Is The Recommended Way To Stop The Execution Of A Specific Agent? This Command Doesn'T Allow Me To Specify The Agent Ip I Want To Stop:

Hmmm, that is a good use case to have (maybe we should have --stop take an argument?)
Meanwhile you can do
$ clearml-agent daemon --gpus 0 --queue default
$ clearml-agent daemon --gpus 1 --queue default
then to stop only the second one:
$ clearml-agent daemon --gpus 1 --queue default --stop
wdyt?

3 years ago
0 Is It Possible To Add A Callback For A Pipeline From A Step?

That is awesome!
If you feel like writing a bit about the use-case and how you solved it, I think AnxiousSeal95 will be more than happy to publish something like that 🙂

3 years ago
0 Did Someone Here Already Try The

Yes, the mechanisms under the hood are quite complex; the automagic does not come for "free" 🙂
Anyhow, your perspective is understood. And as you mentioned, I think your use case might be a bit less common. Nonetheless we will try to come up with a solution (probably an argument for Task.init so you could specify a few more options for the auto package detection).

3 years ago
0 Did Someone Here Already Try The

But Task.create is used by Task.init

Surprisingly, no 🙂

3 years ago
0 Did Someone Here Already Try The

There is a git issue for selecting "pip freeze" / auto analyze, we could add "use requirements.txt"
wdyt?

3 years ago
0 Hi! I Was Wondering Regarding This Issue:

WittyOwl57 this is what I'm getting on my console (notice: new lines! Not a single one overwritten, as I would expect):
Training: 10%|█ | 1/10 [00:00<?, ?it/s]
Training: 20%|██ | 2/10 [00:00<00:00, 9.93it/s]
Training: 30%|███ | 3/10 [00:00<00:00, 9.89it/s]
Training: 40%|████ | 4/10 [00:00<00:00, 9.87it/s]
Training: 50%|█████ | 5/10 [00:00<00:00, 9.87it/s]
Training: 60%|██████ | 6/10 [00:00<00:00, 9.88it/s]
Training: 70%|███████ | 7/10 [00:00<00...

3 years ago
0 Hi, Is There A Simple Way To Make

GiganticTurtle0
I'm assuming here that self.dask_client.map(read_and_process_file, filepaths) actually does the multi process/node processing. The way it needs to work, it has to store the current state of the process and then restore it on any remote node/process. In practice this means pickling the local variables (Task included).
First I would try to use a standalone static function for the map; Dask might be able to deduce it does not need to pickle anything, as it is standalone.
A...
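
As an illustration of the "standalone function" suggestion, a minimal sketch using dask.distributed only (no ClearML involved; the file paths are hypothetical):

from dask.distributed import Client

# a standalone, module-level function: Dask only has to pickle the function itself,
# not the surrounding object state (e.g. a ClearML Task held in self)
def read_and_process_file(path):
    with open(path) as f:
        return len(f.read())

client = Client()  # local cluster, for illustration
futures = client.map(read_and_process_file, ["a.txt", "b.txt"])  # hypothetical paths
results = client.gather(futures)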

3 years ago
0 Hi, I Had A Task Successfully Completed. Then I Cloned It And Enqueued It Again Without Any Changes. But The Task Ends Up With An Error. Here'S The Logs, Not Sure What Went Wrong.

SubstantialElk6
Regarding cloning the executed Task:
In the pip requirements syntax, "@" is a hint that tells pip where to find the package if it is not preinstalled.
Usually when you find the @ /tmp/folder it means the package was preinstalled (usually pre-installed in the docker).
What is the exact scenario that caused it to appear? (this was always the case, before v1 as well)
For example, the zipp package is installed from PyPI by default and not from a local temp file.
Your fix b...
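
To illustrate the two requirement forms (a minimal sketch; the version and path are hypothetical):

# normal PyPI requirement - pip resolves and downloads it
zipp==3.1.0
# "@" direct reference - pip is told exactly where to find the package,
# e.g. a local wheel left behind by a docker pre-installation
zipp @ file:///tmp/build/zipp-3.1.0-py3-none-any.whl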

3 years ago
0 Hi, When Using

# somehow set docker_args and docker_bash_setup_script equivalent??
task.set_base_docker(...)
# somehow setup repo and branch to download to remote instance before running
This is automatically detected based on your local commit/branch, as well as uncommitted changes.
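
For illustration, a minimal sketch of the code-side equivalent (the image, arguments and setup commands are hypothetical, and the parameter names assume a recent clearml SDK):

from clearml import Task

# hypothetical project/task names
task = Task.init(project_name="examples", task_name="remote docker config")
# roughly equivalent to docker_args / docker_bash_setup_script, set from code
task.set_base_docker(
    docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04",
    docker_arguments="--ipc=host",
    docker_setup_bash_script=["apt-get update", "apt-get install -y git"],
)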

2 years ago
0 Question About Using S3 As Artifact Storage - Do We Need To Setup S3 Credentials On Every System That Is Using Those Artifacts (E.G. In Clearml-Agent Where Model Upload Happens, Or In A Prediction Service, That Needs To Download The Latest Model)

Hi FiercePenguin76
So currently the idea is that you have full control over per-user credentials (i.e. stored locally). Agents (depending on how they are deployed) can have shared credentials (with AWS the easiest is to push them into the OS environment).
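
For illustration, a minimal sketch of the environment-variable approach on an agent machine (values elided; the variable names are the standard AWS ones):

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_DEFAULT_REGION=us-east-1
clearml-agent daemon --queue default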

3 years ago
0 Is It Possible To Add A Callback For A Pipeline From A Step?

Is task.parent something that could help?

Exactly 🙂 something like:
# my step is running here
the_pipeline_task = Task.get_task(task_id=task.parent)
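
Slightly expanded (a minimal sketch; it assumes this code runs inside a step launched by a pipeline controller):

from clearml import Task

# inside a running pipeline step
step_task = Task.current_task()
# the step's parent is the controller task that launched it
the_pipeline_task = Task.get_task(task_id=step_task.parent)
print(the_pipeline_task.name)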

3 years ago