AgitatedDove14
Moderator
48 Questions, 8051 Answers
  Active since 10 January 2023
  Last activity 7 months ago

Reputation: 0

Badges (1): 25 × Eureka!
0 Votes 3 Answers 448 Views
@<1523703325881536512:profile|ConvolutedSealion94> these are xgboost internal metrics that are automatically picked up by clearml
2 years ago
0 Votes 2 Answers 1K Views
Hi
Hi! trains 0.16.2 is finally out with the new pipelines interface! Check out the new example https://github.com/allegroai/trains/blob/master/examples/pipeli...
4 years ago
0 Votes 2 Answers 465 Views
OMG Look who just joined the PyTorch EcoSystem None Yes! it is TRAINS 🚆 🎉 🎈
4 years ago
0 Votes 0 Answers 1K Views
New video is out 🙂 Cloud Autoscalers are awesome https://www.youtube.com/watch?v=j4XVMAaUt3E
2 years ago
0 Votes 1 Answer 1K Views
Quick note: v1.3.1 caused PipelineDecorator Tasks to disable the automagic frameworks connection by default; this bug is solved in the latest RC: pip install ...
2 years ago
0 Votes 0 Answers 1K Views
Slack security ... Go figure 😉
4 years ago
0 Votes 6 Answers 485 Views
Hi
Hi :robot_face:, humans! We have the new documentation site up and running 🎉 None 🎊 This is still a work in progress, so we keep the previous version alive...
3 years ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 3 Answers 559 Views
we recently released a new version of clearml-session with Persistent Workspace support! 🚀 🎉 Finally you can develop on remote machines with workspace fold...
8 months ago
0 Votes 1 Answer 441 Views
🙏 Please skip the clearml python package v1.0.1 and just move on to v1.0.2 😊 apologies for the inconvenience 🙂 pip install clearml==1.0.2
3 years ago
0 Votes 0 Answers 1K Views
Lol, I wonder what the adblock rule was ;)
4 years ago
0 Votes 0 Answers 933 Views
3 years ago
0 Votes 0 Answers 1K Views
YummyWhale40 awesome thanks!
4 years ago
0 Votes 0 Answers 1K Views
https://allegro.ai/docs
4 years ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 0 Answers 1K Views
Hi Guys/Gals, If you want to check out the latest RC we have 0.15.0rc0 out: pip install trains==0.15.0rc0 pip install trains-agent==0.15.0rc0 Many of the impr...
4 years ago
0 Votes 0 Answers 1K Views
YEY!!!! Download as CSV 🤯
2 years ago
0 Votes 1 Answer 1K Views
This is usually due to enterprise-level issued HTTPS certificates that are not part of the local installation (basically any Python-generated SSL request will fail)
4 years ago
0 Hello, Everyone! I Have A Question Regarding Clearml Features. We Run Into The Situation When Some Of The Agents That Are Working On A Hpo Die Due To Variable Reasons. Some Workers Go Offline Or Resources Need Temporarily Be Detached For Other Needs. Thu

The main reason we need the above-mentioned functionality is that there are some experiments that need to run for a long time, let's say weeks.

Good point!

We need to temporarily pause (kill, or something else) the running HPO task and reassign the resource for other needs.

Oh I see now....

Later, when the more important experiments have been completed, we can continue the HPO task from the same state.

Quick question: when you say the HPO Task, you mean the HPO controller logic Task...

2 years ago
0 I Would Like To Use Clearml Together With Hydra Multirun Sweeps, But I’M Having Some Difficulties With The Configuration Of Tasks.

Hmm @<1523701279472226304:profile|SoreHorse95> this is a good point, I think you are correct, we need to fix that:

  • Could you open a GitHub issue so this is not forgotten?
  • As a workaround I would use clone=True, then after the call I would call task.close() on the original task, wdyt?
one year ago
0 Hello Again, How Can I Use The

Hi JumpyDragonfly13, just making sure, do you have an agent running on a remote machine?
Can you have a direct TCP connection to the remote machine (the default port it will use is 10022)?

3 years ago
0 How Can I Clone A Task And Execute_Remotely The Cloned Task With Exit_Process=False. It Currently Kills The Notebook Kernel. If I Say Exit_Process=False, It Says Clone Cannot Be False. Why The Restriction? What To Do In A Notebook To Run A Task Remotely

more like testing especially before a pipeline

Hmm yes, that makes sense.
Any chance you can open a github issue on it?
Let me see if I understand, basically, do not limit the clone on execute_remotely, right?
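For reference, a hedged sketch of the notebook pattern being discussed, assuming the current restriction that exit_process=False requires a cloned task (queue and names are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="notebook experiment")  # placeholder names
# exit_process=False keeps the notebook kernel alive, which is why the task has to be cloned
task.execute_remotely(queue_name="default", clone=True, exit_process=False)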

When did this PipelineDecorator come? Looks interesting

A few days ago (I think)
It is very cool! Check out the full object proxy interaction on the actual pipeline logic. This might be better for your workflow: https://github.com/allegroai/clearml/blob/c85c05ef6aaca4e...

3 years ago
0 Hi All! Is There A Way For Trains To Recognize The Cli Arguments When Using

GrievingTurkey78 what's the repository link you see in the UI? Does it start with ssh:// or https:// ?
Did you add git_user/git_pass to the trains.conf of the trains-agent? (If you did, it should replace any ssh:// link with an https:// user/pass link.)

4 years ago
0 Hi, I Am Trying To Setup The Path To Trains.Conf File Programatically And Having Trouble.. We Tried Using Os.Environ['Trains_Config_File'] = Path, And Also Other Variations Of Overriding The Trains.Backend_Config.Defs But Nothing Seem To Work.. When Creat

Programmatically, before importing the package, set os.environ['TRAINS_CONFIG_FILE'] = '~/my_new_trains.conf'
BTW: What's the use case for doing so?
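A minimal sketch of the order of operations described above, assuming TRAINS_CONFIG_FILE is read when the package is imported (project and task names are placeholders):

import os
os.environ['TRAINS_CONFIG_FILE'] = os.path.expanduser('~/my_new_trains.conf')  # set before the import below

from trains import Task  # import only after the environment variable is set

task = Task.init(project_name='examples', task_name='custom config file')  # placeholder names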

thanks for helping again

My pleasure :)

3 years ago
0 Hi, Relating To The

add_external_files with a very large number of URLs that are not in the same S3 folder, without running into a usage limit due to the state.json file being updated a lot?

Hi ShortElephant92
What do you mean the state.json is updated a lot?
I think it is updated every time you call add_external_files, but add_external_files can get a folder to scan, which would be more efficient. How are you using it?
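A hedged sketch of the folder-scan variant mentioned above; the dataset, project, and bucket/prefix names are placeholders:

from clearml import Dataset

# register a whole S3 prefix in a single call instead of one call per URL
ds = Dataset.create(dataset_name="external_links_demo", dataset_project="examples")  # placeholder names
ds.add_external_files(source_url="s3://my-bucket/my-folder/")  # hypothetical bucket/prefix
ds.upload()
ds.finalize()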

2 years ago
0 Hi I Have Most Probably A Beginner Question About Loading The Data In Pycharm And Later On In Google Colab From A Dataset From Clearml. I Used From Page:

try:

import os
from clearml import Dataset

...

dataset_path = Dataset.get(
    dataset_name=dataset_name,
    dataset_project=dataset_project,
    alias="0013_Dataset"
).get_local_copy()
dataset_path = os.path.join(dataset_path, "data.yaml")

...
11 months ago
0 Hey Clearml Team, We Created An Account, Setup Our Data Pipeline, And Now We Can'T Get Back In. Nothing Is In The Project. Can Someone From Support Reach Out To Help?

We created an account, setup our data pipeline, and now we can't get back in. Nothing is in the project. Can someone from support reach out to help?

Hi @<1545216077846286336:profile|DistraughtSquirrel81>
You mean in the SaaS? (app.clearml.ml) or is it a local installation?
If this is the SaaS, could it be the data is on a different workspace? (You can switch workspace and refresh the page.)

one year ago
0 Hi, With Clearml-Agent 1.5.1, I Tried To Run An Experiment Within A Docker With Image Python3:8 And It Failed Executing The Task While Trying To Call Python3.9. I Am Not Sure Why It'S Using Python3.9, Since The Agent.Default_Python Is 3.8 And The Image Is

packages are updated, and I don't know which python version I get, + changing the python version of the OS is not really recommended

Wait I'm confused, this is inside a container, no?

and the python version running my code should not depend of the python version running the clearml-agent (especially for experiments running in containers)

Generally speaking you are correct, but some packages will not have the same version for all python versions

Specifically in this case I think...

one year ago
0 Pytorch-Lightning-Bols.Loggers.Trainslogger

YummyWhale40 no idea what the pytorch-lightning guys did there. Let me check the actual code.

4 years ago
0 Hi, Guys! I’M Trying To Connect Clearml To My Task And Getting Strange Error: After

DepressedChimpanzee34
I might have an idea, based on the log you are getting LazyCompletionHelp instead of str.
Could it be you installed hydra bash completion?
https://github.com/facebookresearch/hydra/blob/3f74e8fced2ae62f2098b701e7fdabc1eed3cbb6/hydra/_internal/utils.py#L483

3 years ago
0 Hi I Have Most Probably A Beginner Question About Loading The Data In Pycharm And Later On In Google Colab From A Dataset From Clearml. I Used From Page:

If I access the dataset on the same location directly it works fine:

Wait, I'm confused, how is it the dataset is there? Did it download the dataset?

are you saying this line for example will fail? (assuming you actually have a dataset by that name)

data_path = Dataset.get(dataset_name="002_Datenset_MASAM_for_fintuning", alias="002_Datenset_MASAM_for_fintuning").get_local_copy()
11 months ago
0 When Launching A Task To Trains Agent, I'M Having Trouble Getting The Imports From Other Files Working Correctly. For Instance, If My Task Imports A Function From Another File Within The Same Git Repo [

would I have to execute each task in the pipeline locally (but still connected to trains),

Somehow you have to have the pipeline step Task in the system; you can import it from code, or you can run it once, then the pipeline will clone it and reuse it. Am I missing something?

4 years ago
0 With

In Azure VMSS, there is a method called "Custom Data", which is basically a way of passing things to be executed

I know that it is on the to-do list to add "azure_autoscaler", which is basically a sibling to the aws_autoscaler.
With the same idea of the "custom data" as an initial bash script:
You can check here:
https://github.com/allegroai/clearml/blob/4a2099b53c09d1feaf0e079092c9e075b43df7d2/clearml/automation/aws_auto_scaler.py#L54

3 years ago
0 Hey, I'M Trying To Run The Aws Autoscaler And Pull A Docker Image From Ecr (Private Repository). I'M Currently Getting The Error:

Those variables are not passed to the remote instance; they are used by the AWS autoscaler to launch it, so there is no need to pass them.
I think the easiest is to add them to the "extra_vm_bash_script" as well
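A hedged sketch of what that might look like, assuming the autoscaler configuration accepts "extra_vm_bash_script" as a multi-line string (all values are placeholders):

# hypothetical value for the autoscaler's extra_vm_bash_script setting
extra_vm_bash_script = "\n".join([
    "export AWS_ACCESS_KEY_ID=<your-access-key-id>",
    "export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>",
    "export AWS_DEFAULT_REGION=<your-region>",
])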

3 years ago
0 Another Question Is If I Have A Conda Env Available On My Workers Systemwide.. Can I Use That Env Directly When Running Tasks With

PompousParrot44
It should still create a new venv, but inherit the packages from the system-wide (or specific venv) installed packages. Meaning it will not reinstall packages you already installed, but it will give you the option of just replacing a specific package (or installing a new one) without reinstalling the entire venv.

4 years ago
0 Hi All! I Have A Question About Pipelines. My Pipeline Consists Of Several Steps:

"sub nodes" inside pipeline, in my opinion, makes them much more useful, in sense that all the steps are visible.

Yeah I really like this idea... continuing this thread, would it also make sense to have a Task object per "sub-node" and run the sub-nodes as subprocesses of the parent Node? I'm thinking this sounds like a combination of both local pipeline execution and remote pipeline execution.
wdyt?

2 years ago
0 For Any Early Adopters, Who Also Want To Give Us Feedback - Both Good And Bad, Please Feel Free To Try The Clearml-Serving Beta

This is sitting on top of the serving engine itself, acting as a control plane.
Integration with GKE is being worked on (basically KFServing as the serving engine)

3 years ago
0 Hello Folks! We Have Started Using Clearml In Kubernetes. The Trainings Are Run In K8S With Help Of K8Sintegration And Some Custom Coding. Now For The Clearml-Session Tasks, A Port-Forward Should Be Done Each Time If I Need To Access The Jupyter Notebook

Hi DisgustedDove53

Now for the clearml-session tasks, a port-forward should be done each time if I need to access the Jupyter notebook UI for example.

So basically this is why the k8s glue has --ports-mode.
Essentially you set up a k8s service (doing the ingest TCP ports), then the template.yaml that is used by the k8s glue should specify said service. Then clearml-session knows how to access the actual pod via the parameters the k8s glue sets on the Task.
Make sense?

3 years ago
0 Hi, Trying To Spin Up A Clearml Agent And Gettting This Error:

the latter is an ec2 instance

and the agent fails to install on the EC2 machine?

2 years ago
0 , This Is A Great Tool For Visualizing All Your Experiments. I Wanted To Know That When I Am Logging Scalar Plots With Title As Train Loss And Test Loss They Are Getting Diplayed As Train Loss And Test Loss In The Scalar Tab. I Wanted That The Title Shoul

It will not create another 100 tasks, they will all use the main Task. Think of it as them "inheriting" it from the main process. If the main process never created a task (i.e. no call to Task.init) then they will create their own tasks (i.e. each one will create its own task and you will end up with 100 tasks).
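A hedged sketch of the behaviour described above (project and task names are placeholders, and it assumes the worker processes are forked from the main process):

from multiprocessing import Pool

from clearml import Task

def worker(i):
    # subprocesses reuse ("inherit") the task created in the main process
    Task.current_task().get_logger().report_scalar("loss", "worker", value=float(i), iteration=i)

if __name__ == "__main__":
    task = Task.init(project_name="examples", task_name="multiprocess demo")  # placeholder names
    with Pool(4) as p:
        p.map(worker, range(8))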

4 years ago