AgitatedDove14
Moderator
49 Questions, 8124 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges (1): 25 × Eureka!
0 Votes 9 Answers 2K Views
Hi https://github.com/allegroai/trains/releases/tag/0.15.1 / https://github.com/allegroai/trains-server/releases/tag/0.15.1 / https://github.com/allegroai/tr...
5 years ago
0 Votes 0 Answers 2K Views
Gals, Guys & 🤖 If you want to get some inspiration on building DL Continuous Integration pipelines, I suggest this post (obviously built on top of...
5 years ago
0 Votes 1 Answer 1K Views
🙏 Please skip clearml python package v1.0.1 and just move on to v1.0.2 😊 apologies for the inconvenience 🙂 pip install clearml==1.0.2
4 years ago
0 Votes 2 Answers 2K Views
Hi ! trains 0.16.2 is finally out with the new pipelines interface! Check out the new example https://github.com/allegroai/trains/blob/master/examples/pipeli...
4 years ago
0 Votes 0 Answers 2K Views
🎊 🍾 Happy new year ! 🎆 🎇 We wanted to thank you all for the great feedback, contribution and general support you guys give us. It is truly fulfilling to ...
4 years ago
0 Votes 0 Answers 2K Views
4 years ago
0 Votes 0 Answers 2K Views
3 years ago
0 Votes 1 Answer 2K Views
This is usually due to enterprise-level issued https certificates that are not part of the local installation (basically any Python-generated SSL request will fail)
5 years ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 0 Answers 2K Views
5 years ago
0 Votes 0 Answers 2K Views
https://allegro.ai/docs
5 years ago
0 Votes 0 Answers 2K Views
Slack security ... Go figure 😉
5 years ago
0 Votes 1 Answer 2K Views
Quick note: v1.3.1 caused PipelineDecorator Tasks to disable the automagic frameworks connection by default; this bug is solved in the latest RC: pip install ...
3 years ago
0 Votes 3 Answers 2K Views
we recently released a new version of clearml-session with Persistent Workspace support! 🚀 🎉 Finally you can develop on remote machines with workspace fold...
one year ago
0 Votes 3 Answers 1K Views
@<1523703325881536512:profile|ConvolutedSealion94> these are xgboost internal metrics that are automatically picked by clearml
2 years ago
0 Votes 0 Answers 2K Views
4 years ago
0 Votes 0 Answers 2K Views
I would guess connectivity issues; the TLS error is probably an inaccurate response from Python (I mean, in a way it is also a TLS error, but I would imagine this has more...
5 years ago
0 Votes 0 Answers 2K Views
Is your server using https?!
5 years ago
0 Votes 6 Answers 2K Views
Hi ! ClearML Server + SDK v1.9.0 is out! 🎉 🚀 🎊 Happy Holidays and Happy New Year! ❇️ 🎇 🎄
2 years ago
0 Can I Run A Random Task From A Queue? Like This

can you get the agent to execute the task on the current conda env without setting up new environment?

Wouldn't that break easily ? Is this a way to avoid dockers, or a specific use case ?

is there any other way to get task from the queue running locally in the current conda env?

You mean including cloning the code etc. but not installing any python packages ?

3 years ago
0 Is Anyone Also Experiencing Network Error During Every Clearml Dataset Download? It'S Been A While And Almost Every Download Fails...

(currently I think the implementation expects that if the download completed, it was successful)

3 years ago
0 Hi, Can You Pls Help Me? I Am Using V 0.14 (Will Update It Soon) And I Got The Following Error: /Usr/Bin/Python3.6: No Module Named Virtualenv Trains_Agent: Error: Command '['Python3.6', '-M', 'Virtualenv', '/Home/Ubuntu/.Trains/Venvs-Builds.2/3.6']' Ret

It should be the last line (or almost) of the log. Is it there? Also, it seems from the log that you are using trains 0.14.3; try with trains 0.15 and let me know if you are still missing packages.

5 years ago
0 Hi, Can Someone Give More Information About What An Api Call Means? Our Team Has Been Charged For 10 Millions Api Calls, But We Struggle To Understand Where They Are Coming From (We Are Only Making Training Tasks). Thanks

Hi @<1556812486840160256:profile|SuccessfulRaven86>
I'm assuming this relates to the SaaS service.
API calls are a way to measure usage: metric reports are bunched into a single call, agent pings / queries are API calls, and so on and so forth.
How many hours did you have training tasks reporting data? How many agents were running? And so on.

2 years ago
0 Hello Everone, I Have Hosted Clearml Server And Trained A Yolov8 Model To Test My Installations. The Model Was Trained Successfully And I Tried To Optimize The Hyderparameters By Using The Sample Code From Clearml But Im Getting Some Error In Doing So An

the parameter datatypes are not being changed when loading them up.

These are the auto-logged parameters inside YOLO, correct?
Just to make sure, you can actually see the value None in the UI, is that correct? (if everything works as expected, you should see an empty string there)

one year ago
0 Hey, So I'M Trying To Upload An Artefact To Clearml’S Fileserver(I Have A Self Hosted Clearml Server Running), I'Ve Uploaded The File Using Storagemanager.Upload_File(Path, Url) And Giving The Url As “

Are Kwargs supported in functions decorated as a pipeline component?

They are, but I think the main issue is the casting; without prior knowledge, everything will be a string.

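A minimal sketch of that casting in practice, assuming a decorator-based pipeline (function, project and parameter names here are hypothetical): values passed through **kwargs arrive as strings, so the component casts them back explicitly.

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["result"])
def scale(value, **kwargs):
    # kwargs are serialized without type information, so they arrive as strings;
    # cast them back explicitly inside the component
    factor = float(kwargs.get("factor", 1))
    repeat = int(kwargs.get("repeat", 1))
    return float(value) * factor * repeat

@PipelineDecorator.pipeline(name="kwargs-demo", project="examples", version="1.0")
def run_pipeline():
    result = scale(2, factor="3.5", repeat="2")
    print("scaled value:", result)

if __name__ == "__main__":
    PipelineDecorator.run_locally()   # run the steps as local functions for testing
    run_pipeline()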
3 years ago
0 I Am Starting To Use Clearml-Data, And I Have A Feature Request - Add A Progress Bar For The Upload Phase / Log Which Files Are Uploaded / Add Upload Speed Currently When Uploading Large Amounts Of Data, We Get An Obscure Message Of

The issue is upload progress reporting for HTTP uploads (object-storage uploads do report progress). Basically, the HTTP upload is a POST with urllib, which does not support upload callbacks for progress reporting. If you have an idea here, we will gladly add it (as you mentioned, it can be quite annoying to have to open a network manager to verify the upload is progressing).

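Purely as an illustration of the callback idea (not ClearML's implementation), a file-like wrapper can surface progress for HTTP clients that stream the request body from a file object; all names below are hypothetical.

import os

class ProgressReader:
    """Illustrative wrapper: reports bytes read while an HTTP client streams the body."""

    def __init__(self, path, callback):
        self._fh = open(path, "rb")
        self._total = os.path.getsize(path)
        self._sent = 0
        self._callback = callback

    def __len__(self):
        return self._total          # lets the client set Content-Length

    def read(self, size=-1):
        chunk = self._fh.read(size)
        if chunk:
            self._sent += len(chunk)
            self._callback(self._sent, self._total)
        return chunk

    def close(self):
        self._fh.close()

# usage sketch: pass ProgressReader("big.bin", lambda done, total: print(f"{done}/{total}"))
# as the request body to an HTTP client that accepts streamed file-like objects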
4 years ago
0 Hi! Is There Something Happening With The

ModelCheckpoint('best_model', save_best_only=True)
That worked for me now, what's the diff

4 years ago
0 Hi, And Thanks For The Great System. I'Ve Been Training Using

Hi StickyWhale51
I think this issue is due to some internal race condition, anyhow I think we have an RC out solving it, can you try with:
pip install clearml==1.2.0rc2

3 years ago
0 Hi, Is There A Concept Of An Agent Taking More Then One Job?

pip install clearml-agent==0.17.3rc0

4 years ago
0 Hi Great Trains Community! I Have A Question Regarding Version Control. How Trains Manages Model/Dataset Version Control?

understood trains does not have auto versioning

What do you mean by auto versioning?

Task name is not unique; task ID is unique. You can have multiple tasks with the same name, and you can edit the name post execution.

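A small sketch of the difference through the SDK (project and task names are hypothetical):

from clearml import Task

# the task ID is unique, so fetching by ID is unambiguous
task = Task.get_task(task_id="d41d8cd98f00b204e9800998ecf8427e")

# the task name is not unique; fetching by name returns the most recently updated match
task = Task.get_task(project_name="examples", task_name="training run")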
5 years ago
0 Hello All, Thanks For This Really Cool Software And Community! I Have A Question On

Hmm, that makes sense. Btw, the PYTHONPATH set by the agent would be the working dir listed under the Task, but if you set agent.force_git_root_python_path the agent would also add the root of the git repo to the python path.

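For reference, a minimal sketch of where that setting lives (assuming a boolean value in the agent section of clearml.conf):

agent {
    # also add the git repository root to PYTHONPATH,
    # in addition to the Task's working directory
    force_git_root_python_path: true
}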
2 years ago
0 Hi, I Have Quite A Generic Question. Basically, I Am Picking Your Brains For Any Solution. Our Current Pipeline Has (Clearml-Data, Clearml And Seldon). We Were Looking For Some Workflow Orchestrator To Stitch Them Up. One Scenario:

we can add non-clearml code as a step in the pipeline controller.

Yes 🙂, btw you can kind of already do that with pre/post function callbacks (notice they run from the same scope as the actual pipeline controller).
What exactly did you have in mind to put there?

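A rough sketch of those callbacks on a pipeline step (project and task names, and the exact callback signatures, are assumptions here):

from clearml import PipelineController

def before_step(pipeline, node, parameters):
    # runs in the pipeline controller's scope before the step is launched
    print(f"about to launch {node.name} with {parameters}")

def after_step(pipeline, node):
    # runs in the pipeline controller's scope after the step completes
    print(f"{node.name} finished")

pipe = PipelineController(name="callback-demo", project="examples", version="1.0")
pipe.add_step(
    name="train",
    base_task_project="examples",        # hypothetical base task to clone
    base_task_name="base training task",
    pre_execute_callback=before_step,
    post_execute_callback=after_step,
)
pipe.start_locally()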
4 years ago
0 Another Question: Is It Possible To Specify In Which Directory To Save All The Files That Clearml-Agent Creates (E.G. Cache Files Or Results Of The Currently Running Experiments)

So clearml-init can be skipped, and I provide the users with a template and ask them to append the credentials at the top, is that right?

Correct

What about the "Credential verification" step in clearml-init command, that won't take place in this pipeline right, will that be a problem?

The verification test is basically making sure the credentials were copy pasted correctly.
You can achieve the same by just running the following in your python console:
` from clearml import Ta...

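A sketch of what such a template might look like, with placeholders where the users paste their credentials (server URLs shown for the hosted service; replace them with your own server's endpoints):

# ~/clearml.conf
api {
    web_server: https://app.clear.ml
    api_server: https://api.clear.ml
    files_server: https://files.clear.ml
    credentials {
        "access_key" = "PASTE_ACCESS_KEY_HERE"
        "secret_key" = "PASTE_SECRET_KEY_HERE"
    }
}

And a quick check along those lines in a Python console (a sketch; project and task names are hypothetical):

from clearml import Task

# succeeds and prints the experiment's web UI link only if the credentials are valid
task = Task.init(project_name="setup-check", task_name="credentials check")
task.close()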
4 years ago
0 Another Question: Is It Possible To Specify In Which Directory To Save All The Files That Clearml-Agent Creates (E.G. Cache Files Or Results Of The Currently Running Experiments)

I was hoping that there's a universal flag somewhere. Asking this because I want all the Models and Artifacts to be stored in one place and the users shouldn't have to edit their configuration files.

You mean like make sure all models/artifacts are always uploaded?

4 years ago
0 Hi All, Is It Possible To Control The Number Of Steps Of The Pipeline During Run Time. Eg. If User Wants #N Parallel Steps In The Pipeline

...but when we try to do a "New Run" from the UI, it tries to follow the DAG of the previous run (the run with all child nodes skipped) and the new run fails too.

This is odd, is this reproducible ? what's the clearml python package version ?

2 years ago
0 Is It Possible To Upload A Hyperdataset? Or Can We Only Upload Datasts

I want to store only my raw data in my blob storage, and I want to create a Hyperdataset with all the artifacts, metrics, frames,

Yes that's exactly how it works.

This line adds a reference to raw file (local/remote)
[https://github.com/allegroai/clearml/blob/1b474dc0b057b69c76bc2daa9eb8be927cb25efa[…]es/hyperdatasets/data-registration/register_dataset_wit...

one year ago
0 Is This An Expected Behaviour? Trains Version 0.16.4, Not Able To Upgrade Now To Latest Version But I Doubt This Was Changed

New version will contain much more advanced search (including all the task fields)

are there any more fields in this function with partial matching? for example project? tags?

Yes, they can all be filtered (basically everything you see in the UI).
Notice: tags are strings (you can provide a list of tags), and project is an ID of the project
(use Task.get_project_id, I think)

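For example, a filtered query through the SDK could look like this (project and tag names are hypothetical; whether your call takes a project name or a project ID depends on the function you use):

from clearml import Task

tasks = Task.get_tasks(
    project_name="examples",   # this call accepts a project name; others may expect the ID
    task_name="train",         # partial match on the task name
    tags=["best"],             # tags are plain strings, passed as a list
)
for t in tasks:
    print(t.id, t.name)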
4 years ago
0 Hi, I Am Trying To Understand Clearml-Data And Only Found This Piece Of Article Explaining It.

Hi SubstantialElk6

but in terms of data provenance, its not clear how i can associate the data versions with the processes that created it.

I think DeliciousBluewhale87's approach is what we are aiming for, but with code.
So using clearml-data from the CLI is basically storing/versioning of files (with diff-based storage etc., but still).
What you are after (I think) is using the programmatic Dataset class in your preprocessing code, to create the Dataset from code, this a...

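A minimal sketch of that programmatic flow (dataset name, project and local path are hypothetical):

from clearml import Dataset

# create a new dataset version from inside the preprocessing code
ds = Dataset.create(
    dataset_name="raw-images",
    dataset_project="data/examples",
    parent_datasets=None,        # pass parent dataset IDs to build an incremental version
)
ds.add_files(path="./data/raw")  # local files/folder to version
ds.upload()                      # push the (diff-based) file content to storage
ds.finalize()                    # close this version so it can be used as a parent later
print("created dataset:", ds.id)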
4 years ago
0 If I Clone A Task, I Suppose All Artifacts Are Not Cloned With It, Even If They Are Registered, Right?

Yes that makes total sense to me. How about a GitHub issue on the clearml-docs ?

3 years ago
0 Hi All, Is It Possible To Control The Number Of Steps Of The Pipeline During Run Time. Eg. If User Wants #N Parallel Steps In The Pipeline

I see, so in theory you could call add_step with a pipeline parameter (i.e. pipe.add_parameter etc.)
But currently the implementation is such that if you are starting the pipeline from the UI
(i.e. rerunning it with a different argument), the pipeline DAG is deserialized from the Pipeline Task (the idea that one could control the entire DAG externally without changing the code)
I think a good idea would be to actually allow the pipeline class to have an argument saying always create from cod...

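A sketch of the "DAG built in code" idea the answer describes (names are hypothetical), with the caveat above in mind: when rerun from the UI, the stored DAG is deserialized rather than rebuilt from this loop.

from clearml import PipelineController

N_PARALLEL = 3   # decided in code when the controller runs

pipe = PipelineController(name="n-steps-demo", project="examples", version="1.0")
pipe.add_parameter(name="n_parallel", default=N_PARALLEL)

for i in range(N_PARALLEL):
    pipe.add_step(
        name=f"worker_{i}",
        base_task_project="examples",          # hypothetical base task cloned per step
        base_task_name="base worker task",
        parameter_override={"General/worker_index": i},
    )

pipe.start_locally()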
2 years ago
0 Hi All, Is It Possible To Control The Number Of Steps Of The Pipeline During Run Time. Eg. If User Wants #N Parallel Steps In The Pipeline

@<1523701523954012160:profile|ShallowCormorant89> can you verify it is reproducible in 1.9.3 ? because if it is I'd like to fix that 🙂

will it be possible for us to configure the "new run" button in a way so that it always clones from a particular pipeline ?

What do you mean by "particular pipeline"? By default it will clone the last successful one, and by right-clicking a specific one you can run a copy of that one. What am I missing?

2 years ago
0 Hi All, What Does This Statement Mean ?

Hi @<1523701523954012160:profile|ShallowCormorant89>
This means the system did not detect any "iteration" reporting (think scalars) and it needs a time-series axis for the monitoring, so it just uses seconds from start

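In other words, reporting scalars with an explicit iteration gives the monitoring a proper time-series axis; a minimal sketch (project and task names are hypothetical):

from clearml import Task

task = Task.init(project_name="examples", task_name="scalar demo")
logger = task.get_logger()

for iteration in range(100):
    loss = 1.0 / (iteration + 1)
    # an explicit iteration provides the axis, instead of the seconds-from-start fallback
    logger.report_scalar(title="loss", series="train", value=loss, iteration=iteration)

task.close()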
2 years ago
0 Hi All, Is It Possible To Control The Number Of Steps Of The Pipeline During Run Time. Eg. If User Wants #N Parallel Steps In The Pipeline

yes

argument saying always create from code

can be helpful

@<1523701523954012160:profile|ShallowCormorant89> any chance you can open a github issue on that, just so we do not forget ?

if we can edit the configuration objects of a pipeline, that can be beneficial too, which we're unable to do from the UI

Actually, you already can: after you clone the pipeline, press on Details, then go to the Configuration tab, and edit the pipeline object. The format is HOCON (...

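For orientation only, HOCON syntax looks like this (the keys below are purely illustrative and do not reflect the actual pipeline object's schema):

prepare_data {
  queue: "default"        # strings, numbers, booleans
  parents: []             # lists
  parameters {            # nested blocks; comments are allowed
    "General/dataset_id": "abc123"
  }
}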
2 years ago
0 Encountered An Odd Bug. Upon Attempting To Write Images To Clearml (3D Projected, Matplotlib),

I should mention this is run within a TF v1 session context

This should not be connected.

everything gets stored as intended (to clearML dashboard)

So in Jupyter it works, but from the command line it does not? What's the difference?

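One way to rule out the automagic capture path is to report the figure explicitly; a sketch (project and task names are hypothetical):

import matplotlib
matplotlib.use("Agg")              # no display required when running from the command line
import matplotlib.pyplot as plt
import numpy as np
from clearml import Task

task = Task.init(project_name="examples", task_name="matplotlib demo")

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(*np.random.rand(3, 50))

# explicit reporting instead of relying on the automatic plt.show() capture
task.get_logger().report_matplotlib_figure(
    title="3D projection", series="scatter", figure=fig, iteration=0
)
task.close()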
4 years ago