AgitatedDove14
Moderator
49 Questions, 8126 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Votes 1 Answers 2K Views
Quick note: v1.3.1 caused PipelineDecorator Tasks to disable the automagic framework connection by default; this bug is fixed in the latest RC pip install ...
3 years ago
0 Votes 0 Answers 2K Views
YEY!!!! Download as CSV 🤯
3 years ago
0 Votes 1 Answers 2K Views
Gals, Guys & :robot_face:, if you want to check out the Hyper-Parameter automation (using Bayesian Optimization Hyper-Band), we have an example on the demo s...
5 years ago
0 Votes 0 Answers 2K Views
Finally
5 years ago
0 Votes 0 Answers 2K Views
Hi Guys/Gals, if you want to check out the latest RC we have 0.15.0rc0 out: pip install trains==0.15.0rc0 pip install trains-agent==0.15.0rc0 Many of the impr...
5 years ago
0 Votes 4 Answers 917 Views
Happy new year everyone! 🥂 🎆 Last minute 🎁 v2.0 is now out, with a new UI design! Now finally supporting light & dark mode 🤩 Lots more to come this year...
10 months ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 0 Answers 2K Views
Lol, I wonder what the adblock rule was ;)
5 years ago
0 Votes 0 Answers 2K Views
apparently everyone can ...
5 years ago
0 Votes 0 Answers 2K Views
Hi Guys! I have great news, we finally fully implemented support for continuing previously trained models 🎉 Here is a quick example (this is torch, but any ...
5 years ago
0 Votes 1 Answers 2K Views
This is usually due to enterprise-level issued HTTPS certificates that are not part of the local installation (basically any Python-generated SSL request will fail)
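(A hedged clearml.conf sketch of one possible workaround, assuming the enterprise CA cannot be added to the Python certificate store; this is an assumption for illustration, not part of the original answer:)
api {
    # assumption: disable SSL certificate verification for ClearML API calls;
    # the proper fix is adding the enterprise CA to the local trust store
    verify_certificate: false
}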
5 years ago
0 Votes 0 Answers 2K Views
Slack security ... Go figure 😉
5 years ago
0 Votes 0 Answers 2K Views
Hello Everyone!
5 years ago
0 Votes 2 Answers 1K Views
OMG Look who just joined the PyTorch EcoSystem None Yes! it is TRAINS 🚆 🎉 🎈
5 years ago
0 Votes 2 Answers 2K Views
Hi
Hi ClearML v0.17.1 and ClearML-Agent v0.17.0 are now the official packages & repositories 🎉 🎊 👋 🛤️ This new name brings on many changes, mainly replace a...
4 years ago
0 Votes 1 Answers 1K Views
🙏 There is no v1.0 release without a prompt v1.0.1 following it, and we are no different 😊 pip install clearml==1.0.1
4 years ago
0 Votes 0 Answers 2K Views
Gals, Guys & :robot_face: If you want to get some inspiration on building DL Continuous Integration pipelines, I suggest this post (obviously built on top of...
5 years ago
0 Votes 0 Answers 2K Views
Is your server using https?!
5 years ago
0 Votes 0 Answers 2K Views
docs are up
5 years ago
0 Hello. I'm Interested In Dynamic Gpu Feature. But I Can't Find Any Information On How It Works. Can You Help Me With It? Is It Possible To Try It Somewhere?

ItchyJellyfish73
Unfortunately this needs backend support and is only available in the enterprise version. What is your use case for it? (It was designed to allow out-of-the-box bare-metal multi-GPU dynamic allocation; think a DGX with 8 GPUs where, instead of spinning down agents when you want to change the queue->num-gpu mapping, you can do it on the fly.)

4 years ago
0 Hi, Coming Back With The Venv Caching: With The Following Setting:

replace it with:
git+
No need for the repository name, this will ensure you always reinstall it (again, a pip feature)
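A hedged sketch of what such a requirement line could look like (the repository URL below is a hypothetical placeholder, not the one from the thread):
# requirements.txt (hypothetical example)
# a bare "git+" URL, with no "package-name @" prefix, so pip reinstalls it on every run
git+https://github.com/example-org/example-package.git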

4 years ago
0 Hello, I Have A Question I Am Trying To Connect Clearml To Local Minio But For Some Reason The Host Configuration Is Dropped Out, I Tried To Go In To The Code To Print Out The Config Throughout The Different Steps And I Get This [S3Bucketconfig(Bucket='In

and what is --storage s3//:inference ?
if you are using minio it should be something like None
Notice you have to specify the IP:port otherwise it thinks it is an AWS endpoint
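A hedged clearml.conf sketch for a local MinIO bucket (every value below is a placeholder, including the IP:port):
sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "192.168.1.12:9000"    # placeholder MinIO IP:port
                    bucket: "inference"          # placeholder bucket name
                    key: "minio-access-key"      # placeholder
                    secret: "minio-secret-key"   # placeholder
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}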

2 years ago
0 Hello! I'm Trying To Make A Simple Eval.Py Script That Will Go Pull The Best Model Of A Given Experiment, Load It Locally And Evaluate It On Whatever Data I Give. Question 1: Is There A Standard Way Documented Somewhere To Do This? Question 2: I'm Loadin

Wait, that makes no sense to me. The API from python and the API from the UI are getting the same data from the backend ...
What are you getting with:
from clearml import Task
task = Task.get_task(task_id=<put task id here>)
print(task.models)
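For context, a slightly fuller sketch of the same idea (the task id below is a placeholder, and the get_local_copy() step is an assumption about fetching the uploaded weights, not part of the original answer):
from clearml import Task

# placeholder task id; replace with the experiment you want to inspect
task = Task.get_task(task_id='aabbccdd11223344aabbccdd11223344')

# models automatically logged on the task, grouped into 'input' and 'output'
output_models = task.models['output']
print(output_models)

# assumption: download the last output model's weights to a local file
local_weights = output_models[-1].get_local_copy()
print(local_weights)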

3 years ago
0 Hello All, We’Re Trying To Use

Are any files uploaded? Like artifacts etc?

2 years ago
0 When My Remote Task Is Installing The Python Dependencies

Could it be something else is missing and hence the import fails ?

3 years ago
0 Hi Everyone! Is There A Way To Specify The Working Directory In A Pipeline Component? I'm Using Pipelines From Decorators, I Can Set The Repo Url Just Fine, But I'm Running Everything From A Subfolder, And The Working Dir Is Set To

Hi @<1570220858075516928:profile|SlipperySheep79>

Is there a way to specify the working dir from the decorator

not directly, but why would that change anything? I mean the component code will be created in the git root, and you can still access files inside the subfolders

from .subfolder import something

what am I missing?

one year ago
0 Hi! I Am Using The Modelcheckpoint Callback From Tensorflow To Save The Best Model. When The Experiment Finishes If I Go On The Server To Experiment > Artifacts > Output Model I Can See The Model And Subsequently By Clicking On It The Weights. How Can I

I mean what is the actual link?
File:// is a path to a file.
If your machine cannot access that path you get an error.
For example:
file:///home/user/file.bin
translates to /home/user/file.bin
If you do not have the file /home/user/file.bin on your machine you get an error.
GrievingTurkey78 make sense ?
Note that by default trains / clearml will not upload your weights file anywhere; only if you set "output_uri" to a specific location will it do that.
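A minimal sketch of that last point (the project name, task name and destination URL below are placeholders, not from the original answer):
from clearml import Task

# with output_uri set, weights saved by the framework (e.g. the ModelCheckpoint
# callback) are uploaded to the given storage instead of remaining a local file:// link
task = Task.init(
    project_name='examples',              # placeholder
    task_name='train with model upload',  # placeholder
    output_uri='s3://my-bucket/models',   # placeholder; the ClearML file server URL also works
)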

4 years ago
0 Hi

p.s. clearml v0.17.1 is out, fixing the missing link to clearml-task 😥

4 years ago
0 Hello, I'm Trying To Save A Keras Model As A Task Artifact, And Then Upload It From Another Task. Does Anyone Know The Syntax For That? What I've Seen Is Not Quite Working.

You can always log it manually:
from clearml import InputModel
input_model = InputModel.import_model(weights_url='/tmp/keras_example/weight.6.hdf5')
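As a rough follow-up sketch, one way the other task could then consume it (the connect() and get_weights() calls are assumptions based on the standard ClearML model API, not quoted from the thread):
from clearml import Task, InputModel

task = Task.init(project_name='examples', task_name='use keras model')  # placeholder names
input_model = InputModel.import_model(weights_url='/tmp/keras_example/weight.6.hdf5')

# assumption: register the model as this task's input, then fetch a local copy of the weights
input_model.connect(task=task)
weights_path = input_model.get_weights()
print(weights_path)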

4 years ago
0 Hi All, Is It Possible To Control The Number Of Steps Of The Pipeline During Run Time. Eg. If User Wants #N Parallel Steps In The Pipeline

. but when we try to do a "New Run" from UI, it tries to follow the DAG of previous run (the run with all child nodes skipped) and the new run fails too.

This is odd, is this reproducible ? what's the clearml python package version ?

2 years ago
0 Hi, I Shifted My Clearml Setup To An On-Premise Disconnected Env, Which Has A Pip Repo Setup. I Noted This Warning,

SubstantialElk6 could you try with the latest (just released)?
pip install clearml-agent==0.17.2
Then if possible, could you attach the full log of the agent's execution (Task->results->Console)?

4 years ago
0 Hi Everybody, I'm Running Experiments Inside A Docker Which Includes Multiple Python Instances, Some Of Them Are Inside Conda Environments. How Can I Specify The Agent To Use A Specific Conda Environment Inside The Docker?

How can I specify the agent to use a specific conda environment inside the docker?

Hi CrookedWalrus33
By default it will pick the highest python in the PATH.
Then if you have a python version (in PATH) that matches the one requested on the Task, it will look for it.
Do you want to limit it to a specific python binary?
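If the goal is to pin one specific interpreter, a minimal clearml.conf sketch (the path below is a hypothetical example of a conda environment inside the docker):
agent {
    # force the agent to build the task environment from this exact interpreter
    python_binary: "/opt/conda/envs/myenv/bin/python"   # hypothetical path
}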

3 years ago
0 Does Artifact Track Per File Base? What If Only Some File Is Updated, Does It Knows Only Uploading The New Files? Also, Wonder What Is The Best Way To Setup Storage For Teams To Share? (Not Prefer Using Cloud As Network Cost Can Be Significant Since We Do

EnviousStarfish54 regarding the file server, you have one built into the trains-server, and this will be the default location to store all artifacts. You can also use external solutions like S3, GS, Azure, etc.
Regarding the models, any model store / load is automatically logged as long as you are using one of the supported frameworks (TF, Keras, PyTorch, scikit-learn)
If you want your model to be automatically uploaded, just add output_uri:
task = Task.init('examples', 'model', output_uri='http://trai...
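On the artifact side of the question, a small hedged sketch (the names and paths are placeholders):
from clearml import Task

task = Task.init(project_name='examples', task_name='artifacts demo')  # placeholder names

# a local folder is packaged and uploaded as a single artifact object;
# it is not tracked or synced on a per-file basis
task.upload_artifact(name='training_data', artifact_object='/path/to/data_folder')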

5 years ago
0 Happy Friday Everyone ! We Have A New Repo Release We Would Love To Get Your Feedback On

How does it work with k8s?

You need to install the clearml-glue and then, on the Task, request the container. Notice you need to preconfigure the glue with the correct Job YAML

one year ago
0 Hi All! Is There Any Simple Way To Use
  • Yes, Task.init should be called on each subprocess (because torch forks them before they are patched)
  • I think the main issue is that we patch the argparse on the Subprocess (this is assuming you did not manually parse non-argv arguments)
  • If you can create a mock test I think we can work around the issue, as long as the way you spin it is the standard pytorch distributed way
2 years ago
0 Hi, I'm Trying To Use

Ohh, if this is the case then it kind of makes sense to store on the Task itself. Which means the Task object will have to store it, and then the UI will display it :(
I think the actual solution is a vault, per user, which would allow users to keep their credentials on the server, and the agent to pass those to the Task when it spins it, based on the user. Unfortunately the vault feature is only available on the paid/enterprise version (with RBAC etc.).
Does that make sense?

3 years ago
0 Hi, I Was Some How Able To Get A Project Running Yesturday, However Now I Am Unable To Get It Running, I Keep Getting An Failed Getting Token Error

i keep getting an failed getting token error

MiniatureCrocodile39 what's the server you are using ?

4 years ago
0 Hi, I Am Trying To Run Experiment From Clearml Web Ui. I Did Experiment Copy, Enqueue, But In The Execution Log I See That It Runs Command

As long as you import clearml in the main script, it should work. Regarding the Nvidia container, it should not interfere with any running processes; the only issue is the memory limit. BTW, any reason not to spin an agent on a dedicated machine? What is the GPU used for on the clearml server machine?
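For reference, a hedged sketch of spinning an agent on a dedicated machine (the queue name is a placeholder):
pip install clearml-agent
clearml-agent init                               # writes ~/clearml.conf with your server credentials
clearml-agent daemon --queue default --docker    # 'default' is a placeholder queue name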

4 years ago
0 Hi (Again... Sorry For Asking So Many Questions) Question About Using Google Cloud Storage In A Clearml Agent Running In Aws Ec2 Instance. My

in Your Additional ClearML Configuration (which is basically clearml.conf configuration)
Add the following:
environment {
  GOOGLE_APPLICATION_CREDENTIALS="~/gs.cred"
}
files {
  gsc {
    contents: "<this is your GCP storage credentials file>"
    path: "~/gs.cred"
  }
}
Reference:
https://github.com/allegroai/clearml-agent/blob/a5a797ec5e5e3e90b115213c0411a516cab60e83/docs/clearml.conf#L421
https://github.com/allegroai/clearml-agent/blob/a5a797ec5e5e3e90b115213c0411a...

3 years ago
0 Hi Everyone, Additional Arguments To The Script Execution, Is It Possible? How Can It Be Done? So At The Moment When My Script Is Being Executed The

PompousBeetle71 the code is executed without arguments; at run-time trains / trains-agent will pass the arguments (as defined on the task) to the argparser. This means that you get the ability to change them and also type checking 🙂

PompousBeetle71 if you are not using argparser how do you parse the arguments from sys.argv? manually?
If that's the case, post parsing, you can connect a dictionary to the Task and you will have the desired behavior
task.connect(dict_with_arguments...
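A minimal sketch of that pattern (the argument names and values are hypothetical):
from clearml import Task

task = Task.init(project_name='examples', task_name='manual args')  # placeholder names

# parse sys.argv however you like, then connect the resulting dict;
# when the task is executed by an agent, values edited in the UI override these defaults
args = {'batch_size': 32, 'learning_rate': 0.001}
args = task.connect(args)
print(args)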

5 years ago
0 After I Finish Training A Model, I Want To Call Logger.Report_Scalars To Help Monitor Inferencing Status (We Do A Lot Of Batch) But After The Model Finishes Training, Scalars Are No Longer Accepted By The Task As It Is Considered Completed. Help!

@<1523711619815706624:profile|StrangePelican34> are you saying that after the " with " block the task is marked completed? how is that possible? is this done manually ?

2 years ago