AgitatedDove14
Moderator
48 Questions, 8051 Answers
Active since 10 January 2023
Last activity 7 months ago

Reputation: 0
Badges: 25 × Eureka!
0 If I Am Using The Demo Servers, Do I Need To Do Something Special To Use

FYI: these days TensorBoard (a standalone package) has become the standard even for PyTorch; you can actually import it from torch.
There is an example here:
https://github.com/allegroai/trains/blob/master/examples/frameworks/pytorch/pytorch_tensorboard.py
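A minimal sketch of that pattern (project/task names here are just placeholders):

from trains import Task
from torch.utils.tensorboard import SummaryWriter  # SummaryWriter now ships with torch

# trains picks up the TensorBoard reports automatically once the task is initialized
task = Task.init(project_name="examples", task_name="pytorch tensorboard")

writer = SummaryWriter("runs")
for step in range(10):
    writer.add_scalar("loss", 1.0 / (step + 1), step)
writer.close()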

HealthyStarfish45 did you manage to solve the report_image issue ?
BTW: you also have
https://github.com/allegroai/trains/blob/master/examples/reporting/html_reporting.py
https://github.com/allegroai/trains/blob/master/examples/reporting/...

4 years ago
0 I Am Running Trains=0.16.4 Python==3.7.5, And Notice That The "Log" Page Sometimes Didn't Capture The Console Log From My Program. Is This A Known Issue, Anyone Have Experienced Similar Behavior?

This works.
great!

So it is still in master and should be included in 1.0.5?

Correct, an RC will be released soon with this fix included

3 years ago
0 Hello Everybody, Is It Possible To Download My Python Code From Clearml Server?

@<1615519322766053376:profile|DrainedOctopus19> if your code is a single file (which was stored on the clearml server), then it is stored on the Task:

from clearml import Task

task = Task.get_task("task UID here")
# this should be your entire code
print(task.data.script.diff)
one year ago
0 Hi Everyone, Additional Arguments To The Script Execution, Is It Possible? How Can It Be Done? So At The Moment When My Script Is Being Executed The

PompousBeetle71 a few questions:
Is this like using PyTorch distributed, only manually? Why don't you call trains.init in all the subprocesses? We had a few threads on that; it seems like a recurring question, so I'll make sure we have an example on GitHub. Basically trains will take care of passing the arg-parser commands to the subprocesses, and also of the torch node settings. It will also make sure they all report to the same experiment. What do you think?
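For reference, a rough, untested sketch of that direction (names are illustrative); calling Task.init from a subprocess of an already-initialized task should attach to the same experiment, though exact behavior may vary by version:

import torch.multiprocessing as mp
from trains import Task

def worker(rank):
    # in a subprocess of the main task, Task.init should return the same experiment
    task = Task.init(project_name="examples", task_name="distributed run")
    task.get_logger().report_scalar("rank", "value", value=float(rank), iteration=0)

if __name__ == "__main__":
    Task.init(project_name="examples", task_name="distributed run")
    mp.spawn(worker, nprocs=2)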

4 years ago
0 Continuing On

Also, can the image not be pulled from dockerhub but used from the local build instead?

If you have your docker configured to pull from a local artifactory, then the agent will do the same 🙂 (it calls the docker command just like you do)

agent.default_docker.arguments: "--mount type=bind,source=$DATA_DIR,target=/data"

Notice that the example above uses the default docker arguments.
If you want the mount to always be there, use extra_docker_arguments (see the sketch after the link below):
https://github.com/...
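For example, something along these lines in the agent section of the conf file (the mount itself is just an illustration; check the linked docs for the exact key and format):

agent {
    # always bind-mount the data directory into every task container
    extra_docker_arguments: ["--mount", "type=bind,source=/data,target=/data"]
}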

3 years ago
0 Hi, I'M Trying To Deploy Clearml On Gke On Google Cloud Via Helm Using App Version 1.0.2 And Chart Version 2.0.2+1. I'M Seeing The Following

Hi StaleHippopotamus38

I imagine I could make the changes specified in the warning to

/etc/security/limits.conf

Yep, seems like an Elasticsearch memory issue, but I think the helm chart takes care of it.
You can see a reference in the docker compose:
https://github.com/allegroai/clearml-server/blob/09ab2af34cbf9a38f317e15d17454a2eb4c7efd0/docker/docker-compose.yml#L41

3 years ago
0 Hi, Plotting A Debug Sample With A

Thanks VirtuousFish83 !
This is great

4 years ago
0 When I Try To Create Experiment In The Ui All I See Is This Dialogue

Does the clearml module parse the python packages?

Yes, it analyzes the installed packages based on the actual imports you have in the code.

If I'm using a private pypi artifact server, would I set the PIP_INDEX_URL on the workers so they could retrieve those packages when that experiment is cloned and re-ran?

Correct 🙂 the agent basically calls pip install on those packages, so if you configure it with PIP_INDEX_URL it should just work like any other pip install
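For example (the index URL is a placeholder):

PIP_INDEX_URL="https://pypi.internal.example/simple" clearml-agent daemon --queue default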

2 years ago
0 When I Try To Create Experiment In The Ui All I See Is This Dialogue

and the agent default runtime mode is docker correct?

Actually the default is venv mode; to run in docker mode, add --docker to the command line
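For example (queue name and image are placeholders):

clearml-agent daemon --queue default --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04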

So I could install all my system dependencies in my own docker image?

Correct, inside the docker it will inherit all the preinstalled packages, but it will also install any missing ones (based on the Task requirements, i.e. the "installed packages" section)

Also what is the purpose of the

aws

block in the clearml.c...

2 years ago
0 When I Try To Create Experiment In The Ui All I See Is This Dialogue

How does a task specify which docker image it needs?

Either in the code itself with task.set_base_docker, via the CLI, or in the UI when you clone an experiment (everything becomes editable)
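In code it would look roughly like this (the image name is just an example; older versions take the image as a single positional string):

from clearml import Task

task = Task.init(project_name="examples", task_name="docker demo")
# request a specific base docker image for remote execution
task.set_base_docker("nvidia/cuda:11.8.0-runtime-ubuntu22.04")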

2 years ago
0 Hi, Plotting A Debug Sample With A

Hi VirtuousFish83 ,
Is it throwing an exception? Are you seeing the plot in the UI but the title is incorrect?

4 years ago
0 How Can I Ensure Tasks In A Pipeline Have The Same Environment As The Pipeline Itself? It Seems A Bit Counter-Intuitive That The Pipeline (Executed Remotely) Captures The Local Environment, But The Tasks (Executed Remotely) Do Not Use That Same Environmen

Then the type hints are not removed from helper and the code immediately crashes when being run

Oh yes I see your point, that does make sense (btw removing the type hints will solve the issue)
Regardless, let me make sure this is solved

one year ago
0 What Is The Method For Packages Exploration When Using Conda? Agent Is Set To 'Conda' Mode. We Upload A Task From A Local Conda Env That (Obviously) Has Some Pip Packages As Well. When We Enqueue The Task To Run Remotely, Not All Conda Packages Are Instal

Let me try to add some color to this package analysis process.
Basically clearml will try to statically analyze the code (i.e. look for import/from statements).
Then it will list the detected packages in pip requirements.txt format under "installed packages".
When running inside a conda environment, it will check which packages were installed via "conda install" (instead of pip install) and mark them internally. This process ensures that when the clearml-agent is running with the conda package manager, it "knows" whic...

2 years ago
0 Hi Everybody, I'm Trying To Run An Experiment Inside A Docker And I Get: Repository Cloning Failed: Command '['Git', 'Checkout', 'Commit-Id', '--Force']' Returned Non-Zero Exit Status 128. (I Set Git_User And Git_Pass) Anyone Know How To Solve? I Tried

Hi SparklingElephant70

Anyone know how to solve?
I tried git push before,

Can you send the entire log? Could it be that the requested commit ID does not exist on the remote git (for example force push deleted it) ?

2 years ago
0 Hi, I'm Trying Out The

CleanPigeon16 , just making sure, docker is installed and configured on the host machine (i.e. Azure machine)?

3 years ago
0 Hi Everyone, I Was Working With Model Serving And Monitoring, And Wanted To Know About Monitoring Aspects/Usage In Serving. I Actually Wanted To Know About Exactly What All Queries Related To The Serving Can Be Done, Like What All Are Important Metric Mon

A few examples here:
None

Grafana model performance example:

    browse to the Grafana instance
    login with: admin/admin
    create a new dashboard
    select Prometheus as data source
    add a query: 100 * increase(test_model_sklearn:_latency_bucket[1m]) / increase(test_model_sklearn:_latency_sum[1m])
    change type to heatmap, and select on the right hand-side under "Data Format" s...
one year ago
0 Hey, I'M Trying To Run The Aws Autoscaler And Pull A Docker Image From Ecr (Private Repository). I'M Currently Getting The Error:

Hi CleanPigeon16
I think now the issue is missing git credentials, did you pass git_user / git_pass to the AWS autoscaler ?

3 years ago
0 Can You Please Tell Me How To Make The Agent Use The Docker Env By Default? Instead Of Creating Venv It Already Has All The Necessary Environment And Libraries Installed

Can you please tell me how to return the folder where the script should run?

add it to the python path

PYTHONPATH="/src/project"
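e.g. when launching the script (the script name here is just a placeholder):

PYTHONPATH="/src/project" python train.py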
one year ago
0 Hey, Our Elastic Search Just Randomly Crashed On Our Self-Hosted K8S Deployment. On Debugging, It Looks Like Indices Are Corrupt. Any Suggestions Of How We Might Solve This?

Hi @<1535069219354316800:profile|PerplexedRaccoon19>

On debugging, it looks like indices are corrupt.

ishhhhh, any chance you have a backup?

2 months ago
0 Hi. Help

Hi PanickyMoth78

I had several pipeline components getting it and uploading files to it concurrently.

Should not be a problem

I've attached its log file, which only mentions skipping one file (a warning)

So what exactly is the error you are getting?

2 years ago
0 Hello,

Hi WickedElephant66

Setting the pipeline controller with pipeline_execution_queue as None

is actually launching the pipeline controller on your "dev" machine, not sure why you have two of them?

2 years ago