AgitatedDove14
Moderator
48 Questions, 8044 Answers
  Active since 10 January 2023
  Last activity 5 months ago

Reputation: 0
Badges: 25 × Eureka!
0 Votes
9 Answers
935 Views
Hi https://github.com/allegroai/trains/releases/tag/0.15.1 / https://github.com/allegroai/trains-server/releases/tag/0.15.1 / https://github.com/allegroai/tr...
4 years ago
0 Votes
6 Answers
373 Views
Hi :robot_face:, humans! We have the new documentation site up and running 🎉 None 🎊 This is still a work in progress, so we keep the previous version alive...
3 years ago
0 Votes
3 Answers
945 Views
This will close it: Task.current_task().close()
I think we should rename completed() because it just marks the Task as completed on the backend but does not ac...
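For illustration, a minimal sketch of the two calls discussed above (project/task names are placeholders; mark_completed() is the current name of the completed() call mentioned here):

from clearml import Task

task = Task.init(project_name="examples", task_name="close vs completed")
# ... training code ...

# Marks the Task as completed on the backend (current name of completed()):
# task.mark_completed()

# Closes the Task object in the current process:
Task.current_task().close()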
3 years ago
0 Votes
0 Answers
1K Views
YummyWhale40 you are saying the example code is not working when running with the demo server? Also I think I was able to view your experiment on the demo se...
4 years ago
0 Votes
0 Answers
1K Views
Is your server using https?!
4 years ago
0 Votes
0 Answers
961 Views
3 years ago
0 Votes
0 Answers
1K Views
4 years ago
0 Votes
0 Answers
919 Views
4 years ago
0 Votes
10 Answers
437 Views
Happy Friday everyone ! We have a new repo release we would love to get your feedback on πŸš€ πŸŽ‰ Finally easy FRACTIONAL GPU on any NVIDIA GPU 🎊 Run our nvidi...
6 months ago
0 Votes
1 Answers
1K Views
This is usually due to enterprise-level issued HTTPS certificates that are not part of the local installation (basically any Python-generated SSL request will fail)
4 years ago
0 Votes
0 Answers
937 Views
4 years ago
0 Votes
2 Answers
924 Views
Hi ! trains 0.16.2 is finally out with the new pipelines interface! Check out the new example https://github.com/allegroai/trains/blob/master/examples/pipeli...
3 years ago
0 Votes
0 Answers
943 Views
2 years ago
0 Votes
0 Answers
1K Views
docs are up
4 years ago
0 Votes
0 Answers
859 Views
4 years ago
0 Votes
0 Answers
1K Views
I would guess connectivity issues; the TLS error is probably an inaccurate response from Python (I mean, in a way it is also a TLS error, but I would imagine this has more...)
4 years ago
0 Votes
0 Answers
1K Views
🎊 🍾 Happy new year ! πŸŽ† πŸŽ‡ We wanted to thank you all for the great feedback, contribution and general support you guys give us. It is truly fulfilling to ...
3 years ago
0 Votes
0 Answers
981 Views
4 years ago
0 Hi, Is There A Way To Pull ClearML Datasets To A Mounted PV Instead Of The Pod's Local Directory.

When you set up the pod, make sure you mount the ClearML local cache folder to the PV, basically /root/.clearml/cache/

one year ago
0 Hi Guys, Any Plan To Integrate The

We already redesigned the implementation so it should be quite easy to extend to GCP and Azure, what are you planning ?

4 years ago
0 Hi All, Is There A Way To Schedule The Tasks From The Queue Onto The Gpu Instances Based On Factors Such As Gpu Utilisation, Number Of Cpu Cores Present, Free Memory Or Custom Parameters Such As Priority Of The Task, Estimated Time Etc?

I am trying to see if the user can submit a list of resource requirements (e.g. 4 GPUs, 12 cores, 100 GB disk space)

This will be quite easy to implement using the ClearML k8s glue: just use user-properties and change the template based on them. I can point to where you need to modify the code.
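
For illustration, a minimal sketch of attaching such hints as user properties (the property names below are made up, not an existing convention):

from clearml import Task

task = Task.init(project_name="examples", task_name="train with resource hints")
# Attach scheduling hints that a modified k8s glue could read when picking a pod template
task.set_user_properties(gpus="4", cpu_cores="12", disk_gb="100")

# On the glue side, read them back (value_only returns a plain {name: value} dict)
props = task.get_user_properties(value_only=True)
requested_gpus = int(props.get("gpus", "1"))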

3 years ago
0 Good Morning, I'm Wondering If Someone Has Any Advice/Experience Configuring ClearML-Agent To Include Private Packages From AWS CodeArtifact? So Far I Know I Have To Edit The

SuperficialGrasshopper36 regarding the CodeArtifact:
I think the easiest will be to have a bash script that authenticates against CodeArtifact with the aws command at the beginning of each docker spin-up. This can be done by adding it to:
https://github.com/allegroai/clearml-agent/blob/81edd2860fbc09e2a179985d8315ffaba851dcd7/docs/clearml.conf#L136
For example:
extra_docker_shell_script: ["apt-get install -y aws_cli_or_something", "aws cli authenticate me command"]
wdyt?

3 years ago
0 Hi, I Am Trying To Pull API Data From The /tasks.get_all Endpoint

You should have the metric :monitor:gpu with the variant gpu_0_utilization.
Since I see you have none of those, that points to no GPU driver...
Could that be?
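
If it helps, a small sketch (the task ID is a placeholder) that checks for those GPU monitoring scalars via the SDK rather than the raw /tasks.get_all endpoint:

from clearml import Task

task = Task.get_task(task_id="<your_task_id>")
# {title: {series: {"last": ..., "min": ..., "max": ...}}} - keys follow the metric/variant names above
scalars = task.get_last_scalar_metrics()
gpu = scalars.get(":monitor:gpu", {})
if "gpu_0_utilization" in gpu:
    print("gpu_0_utilization (last):", gpu["gpu_0_utilization"]["last"])
else:
    print("No GPU utilization reported - likely no GPU driver on the machine")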

one year ago
0 Hey Guys, I Have Set Up A Clearml Pipeline For My Simple Isolation Forest Model. But I Have Been Receiving This Error.

If you are using the "default" queue for the agent, note that you might need to run the agent with --services-mode to allow multiple pipeline components to run on the same machine

one year ago
0 Hi Again. Is There Any Way To Have Trains-Agent Do A 'Docker Build' On The Dockerfile In The Repository It Pulls And Then Run That Image? I Know I Can Specify The Base Image Trains-Agent Runs The Task In And That Will Get Pulled/Run At Execution Time, But

Hi RobustGoldfish9 ,

I'd much rather just have trains-agent just automatically build the image defined there than have to build the image separately and make it available for all the agents to pull.

Do you mean there is no docker image in the artifactory built based on your Dockerfile ?

3 years ago
0 Hi, I Am Trying To Setup Multi-Node Training With PyTorch DistributedDataParallel. DDP Requires A Launch Script With A Set Of Parameters To Be Run On Each Node. One Of These Parameters Is Master Node Address. I Am Currently Using The Following Scheme:

This task is picked up by first agent; it runs DDP launch script for itself and then creates clones of itself with task.create_function_task() and passes its address as argument to the function

Hi UnevenHorse85
Interesting use case, just for my understanding, the idea is to use ClearML for the node allocation/scheduling and PyTorch DDP for the actual communication, is that correct ?

passes its address as argument to the function

This seems like a great solution.

the queu...

3 years ago
0 Hi There, I've Encountered A Problematic Behavior In Python. When Defining An Argument A Default Value Of

Hi PompousBeetle71
I remember it was an issue, but it was solved a while ago. Which Trains version are you using?

4 years ago
0 Hello There, I Am Trying To Organize The DL Code Into A Monorepo, The Repo Will Have A Section Of Shared Packages That Will Be Used By Other Packages That Are The Actual Training Projects. Let's Say That I Install The Shared Libs With Pip In Editable Mod

Hi SkinnyPanda43

Let's say that I install the shared libs with pip in editable mode on my development environment, how will the clearml-agent handle those libraries if I submit a job

So installing packages from local folders with "-e" is in general ill-advised.
But using a full git path should work out of the box. For example, if you run pip install git+https://github.com/user/repo.git then the agent will be able to install it on the remote machine. The main challenge...

2 years ago
0 Hi, We Have A Use Case That We Would Like To Upload A Local Folder Into The Cloud

Hi OutrageousSheep60

AS-IS

  • without compressing or breaking it up into chunks.

So for that I would suggest manually archiving it and uploading it as an external link?
Or are you saying you want to control the compression used by the Dataset class?
https://github.com/allegroai/clearml/blob/72d9b22e0d27f317a364acfeacbcf5c70f852e8c/clearml/datasets/dataset.py#L603
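
A minimal sketch of the manual-archive approach (bucket path and names are placeholders):

import shutil
from clearml import Dataset, StorageManager

# 1. Archive the local folder as a single file (no chunking by the Dataset class)
archive_path = shutil.make_archive("my_folder", "zip", "/path/to/local_folder")

# 2. Upload the archive to your own storage as-is
remote_url = StorageManager.upload_file(
    local_file=archive_path, remote_url="s3://my-bucket/archives/my_folder.zip")

# 3. Register it as an external link on a dataset (no re-upload / re-compression)
ds = Dataset.create(dataset_project="examples", dataset_name="my_folder_as_is")
ds.add_external_files(source_url=remote_url)
ds.upload()
ds.finalize()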

one year ago
0 Hi

Oh, then no, you should probably do the opposite πŸ™‚
What is the flow like now? (meaning what are you using kubeflow for and how)

2 years ago
0 Hey, Great Product! I've Installed Trains Agent On A Python3 Venv, But When I Run A Script On The Worker, It Calls Python2 Instead Of Python 3. How To Change It?

VivaciousWalrus99 any chance the original Task was executed with Python 2?
What do you have for:
ls -la /cs/usr/gal.hyams/.trains/venvs-builds/3.7/bin/

3 years ago
0 I Would Like To Use ClearML Together With Hydra Multirun Sweeps, But I'm Having Some Difficulties With The Configuration Of Tasks.

Hmm @<1523701279472226304:profile|SoreHorse95> this is a good point, I think you are correct we need to fix that:

  • Could you open a GitHub issue so this is not forgotten?
  • As a workaround I would use clone=True, then after the call I would call task.close() on the original task, wdyt?
one year ago
0 Is Trains Adaptable For Federated Learning Scenarios?

Hi LazyLeopard18 ,
So long story short, yes it does.
Longer version: to really accomplish full federated learning with control over data at "compute points" you need some data abstraction layer. Without a data abstraction layer, federated learning is just averaging derivatives from different locations; this can be easily done with any distributed learning framework, such as Horovod, PyTorch Distributed, or TF Distributed.
If what you are after is, can I launch multiple experiments with the sam...

4 years ago
0 Hi There, I Have A Package Called

IrritableGiraffe81 could it be the pipeline component is not importing pandas inside the function? Notice that a function decorated as a pipeline component becomes stand-alone; this means that if you need pandas you need to import it inside the function. The same goes for all the rest of the packages used.
When you are running with run_locally or debug_pipeline you are using your local env, as opposed to the actual pipeline where a new env is created inside the repo.
Can you send the entire p...
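
For illustration, a minimal sketch of a component that imports pandas inside the function body (project and file names are placeholders):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["n_rows"])
def load_data(csv_path):
    # Imported inside the component: it runs as a stand-alone task,
    # so module-level imports from the calling script are not available here
    import pandas as pd
    return len(pd.read_csv(csv_path))

@PipelineDecorator.pipeline(name="pandas import example", project="examples", version="0.0.1")
def run(csv_path="data.csv"):
    n_rows = load_data(csv_path)
    print("rows:", n_rows)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # or debug_pipeline() to run everything in-process
    run()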

2 years ago
0 Hi All! Is There Any Simple Way To Use

Hi @<1556450111259676672:profile|PlainSeaurchin97>

Is there any simple way to use

argparse

to pass a clearml task name?

need to call

args = task.connect(args)

.

noooo πŸ™‚ there is no need to do that, the arguments are automatically detected
see for yourself

args = parse_args()
task = Task.init(task_name=args.task_name)
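
Putting it together, a self-contained sketch (argument names are illustrative) showing that the parsed arguments are picked up without an explicit task.connect(args):

import argparse
from clearml import Task

parser = argparse.ArgumentParser()
parser.add_argument("--task-name", default="my experiment")
parser.add_argument("--lr", type=float, default=0.001)
args = parser.parse_args()

# The argparse values are logged automatically once Task.init() is called
task = Task.init(project_name="examples", task_name=args.task_name)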
one year ago
0 I'm Trying To Use

Follow up: I see that if I move an Experiment to a new project, it does not copy the associated model files and must be done manually. Once I moved the models to the new project, the query works as expected.

Correct πŸ™‚
Nice catch!

3 years ago
0 Hi! I'm Using Func

Hi DepressedFish57

In my case download each part takes ~5 second, and unzip ~15.

We ran into that, and the new version will employ a multithreading approach for the unzip (meaning the unzipping will happen in the background)

2 years ago
0 Hi! I Was Wondering Regarding This Issue:

from time import sleep
from clearml import Task
import tqdm

task = Task.init(project_name='debug', task_name='test tqdm cr cl')
print('start')
for i in tqdm.tqdm(range(100)):
    sleep(1)
print('done')

The above example code will output a line every 10 seconds (with the default console_cr_flush_period=10), can you verify it works for you?

3 years ago
0 Hi, I Started A Trains-Agent (0.15) In Services Mode (Full Command:

Hi JitteryCoyote63 a few implementation details on the services-mode, because I'm not certain I understand the issue.
The docker agent (running in services mode) will pick a Task from the services queue, set up the docker for it, spin it up, and make sure the Task starts running inside the docker (once it is running inside the docker you will see the service Task registered as an additional node in the system, until the Task ends). Once that happens the trains-agent will try to fetch the...

4 years ago
0 Hello! I'm Trying To Test The (Unpublished) Feature That Should Help Me To Deal With Running Cloned Pipelines From Different Commits/Branches. I Found This Commit:

Hi CleanPigeon16
Put the specific git reference into the "installed packages" section.
It should look like:
... git+ ...
(No need for the specific commit, you can just take the latest)

3 years ago