AgitatedDove14
Moderator
48 Questions, 8051 Answers
  Active since 10 January 2023
  Last activity 7 months ago

Reputation: 0
Badges: 25 × Eureka!
0 Votes 3 Answers 1K Views
Hi, v0.15 is out, πŸŽ‰ πŸš€ Your feedback had a major influence on the features we added πŸ™‚ thank you! A selected list of features: Column resizing / ordering /...
4 years ago
0 Votes 2 Answers 1K Views
Hi, ClearML v0.17.1 and ClearML-Agent v0.17.0 are now the official packages & repositories πŸŽ‰ 🎊 πŸ‘‹ πŸ›€οΈ This new name brings on many changes, mainly replace a...
3 years ago
0 Votes 1 Answer 1K Views
This is usually due to enterprise-level issued HTTPS certificates that are not part of the local installation (basically any Python-generated SSL request will fail)
4 years ago
0 Votes 0 Answers 913 Views
3 years ago
0 Votes 6 Answers 1K Views
Hi! ClearML Server + SDK v1.9.0 is out! πŸŽ‰ πŸš€ 🎊 Happy Holidays and Happy New Year! ❇️ πŸŽ‡ πŸŽ„
one year ago
0 Votes 0 Answers 1K Views
Finally
4 years ago
0 Votes 1 Answer 423 Views
πŸ™ Please skip clearml python package v1.0.1 and just move on to v1.0.2 😊 apologies for the inconvenience πŸ™‚ pip install clearml==1.0.2
3 years ago
0 Votes 3 Answers 1K Views
This will close it: Task.current_task().close() I think we should rename completed() because it just marks the Task as completed on the backend but does not ac...
3 years ago
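A minimal sketch of the distinction mentioned above, assuming the current ClearML SDK where the backend-only call is exposed as mark_completed() (older releases used completed()); the project and task names are purely illustrative:

from clearml import Task

# Hypothetical experiment; project/task names are illustrative only.
task = Task.init(project_name="examples", task_name="close-vs-completed")

# ... training code would run here ...

# close() detaches the task from the current process: it flushes pending
# logs/artifacts and stops the background monitoring threads.
Task.current_task().close()

# By contrast, mark_completed() (completed() in older SDKs) only changes the
# task status on the backend and does not perform the local shutdown above.
# task.mark_completed()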
0 Votes 2 Answers 1K Views
Hi! trains 0.16.2 is finally out with the new pipelines interface! Check out the new example https://github.com/allegroai/trains/blob/master/examples/pipeli...
4 years ago
0 Votes 0 Answers 1K Views
YEY!!!! Download as CSV 🀯
2 years ago
0 Votes 2 Answers 446 Views
OMG Look who just joined the PyTorch EcoSystem None Yes! it is TRAINS πŸš† πŸŽ‰ 🎈
4 years ago
0 Votes 3 Answers 426 Views
ConvolutedSealion94 these are xgboost internal metrics that are automatically picked up by clearml
2 years ago
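The answer above refers to metrics ClearML captures automatically from XGBoost. A rough sketch of such a script, assuming the standard xgboost training API and toy data; the only ClearML-specific step is calling Task.init before training:

import numpy as np
import xgboost as xgb
from clearml import Task

# Illustrative project/task names.
task = Task.init(project_name="examples", task_name="xgboost-metrics")

# Toy data just to make the sketch self-contained.
x = np.random.rand(200, 5)
y = (x.sum(axis=1) > 2.5).astype(int)
dtrain = xgb.DMatrix(x[:150], label=y[:150])
dvalid = xgb.DMatrix(x[150:], label=y[150:])

# The evaluation metrics reported here (e.g. validation logloss) are the
# "xgboost internal metrics" that get picked up automatically.
xgb.train(
    {"objective": "binary:logistic", "eval_metric": "logloss"},
    dtrain,
    num_boost_round=20,
    evals=[(dtrain, "train"), (dvalid, "validation")],
)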
0 Votes 0 Answers 1K Views
New video is out πŸ™‚ Cloud Autoscalers are awesome https://www.youtube.com/watch?v=j4XVMAaUt3E
2 years ago
0 Votes 9 Answers 1K Views
Hi, https://github.com/allegroai/trains/releases/tag/0.15.1 / https://github.com/allegroai/trains-server/releases/tag/0.15.1 / https://github.com/allegroai/tr...
4 years ago
0 Votes 7 Answers 476 Views
Thank you all for taking the time to answer our survey (if you haven't already, we urge you to do so). Your feedback has a major impact on what we build, do...
4 years ago
0 Votes 1 Answer 533 Views
LSTMeow is back! Bots/Gals/Guys feel free to πŸ‘ None
4 years ago
0 Votes 0 Answers 1K Views
Hi Guys/Gals, if you want to check out the latest RC we have 0.15.0rc0 out: pip install trains==0.15.0rc0 pip install trains-agent==0.15.0rc0 Many of the impr...
4 years ago
0 Votes 0 Answers 1K Views
Hi Guys! I have great news, we finally fully implemented support for continuing previously trained models πŸŽ‰ Here is a quick example (this is torch, but any ...
4 years ago
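The example in the post above is cut off. A rough sketch of the continue-training idea against the current ClearML API, assuming an earlier task whose output model we want to resume from; the task ID and the tiny torch module are placeholders:

import torch
from clearml import Task

task = Task.init(project_name="examples", task_name="continue-training")

# Placeholder ID of the earlier experiment that produced the weights.
previous_task = Task.get_task(task_id="<PREVIOUS_TASK_ID>")

# Fetch a local copy of the last output model registered on that task.
weights_path = previous_task.models["output"][-1].get_local_copy()

# Placeholder module; any torch model restored via load_state_dict works,
# as long as it matches the architecture that produced the checkpoint.
model = torch.nn.Linear(10, 1)
model.load_state_dict(torch.load(weights_path))

# ... continue training here; new checkpoints saved with torch.save()
# are registered on the new task automatically ...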
0 Votes 6 Answers 463 Views
Hi πŸ€–, humans! We have the new documentation site up and running πŸŽ‰ None 🎊 This is still a work in progress, so we keep the previous version alive...
3 years ago
0 Votes 3 Answers 539 Views
we recently released a new version of clearml-session with Persistent Workspace support! πŸš€ πŸŽ‰ Finally you can develop on remote machines with workspace fold...
8 months ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 0 Answers 1K Views
Is it a one time thing? or recurring?
4 years ago
0 Votes 0 Answers 1K Views
apparently everyone can ...
4 years ago
0 Votes 0 Answers 1K Views
docs are up
4 years ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 0 Answers 1K Views
Hello Everyone!
4 years ago
0 Votes 1 Answer 1K Views
Quick note: v1.3.1 caused PipelineDecorator Tasks to disable the automagic frameworks connection by default; this bug is solved in the latest RC pip install ...
2 years ago
0 Votes 0 Answers 1K Views
2 years ago
0 Votes 0 Answers 1K Views
4 years ago
0 Votes 0 Answers 1K Views
I would guess connectivity issues; the TLS error is probably an inaccurate response from Python (I mean, in a way it is also a TLS error, but I would imagine this has more...
4 years ago
0 Hello, I Have An Error While Installing Git Dependencies Of Local Package: So Far I Used Task.

Could you post what you see under "installed packages" in the UI ?

3 years ago
0 Hello, I Have An Error While Installing Git Dependencies Of Local Package: So Far I Used Task.

Exactly, that’s my problem: I want to remove it to make sure it is reinstalled (because the version can change)

JitteryCoyote63 yes, this is definitely a pip bug... can you test with the latest pip version, maybe it was fixed? (i.e. git+https:// link)

3 years ago
0 Hello, I Have An Error While Installing Git Dependencies Of Local Package: So Far I Used Task.

With env caching enabled, it won’t reinstall this private dependency, right?

It will, local packages (".") and git packages are always reinstalled even if using venv caching, exactly for that reason πŸ™‚

3 years ago
0 Hello, I Have An Error While Installing Git Dependencies Of Local Package: So Far I Used Task.

Ohh so the setup.py is the one containing these requirements, oops I totally missed that :( let me check what the PEP has to say about that ... (Basically this is not a clearml issue but a pip one...)

3 years ago
0 Hello, I Have An Error While Installing Git Dependencies Of Local Package: So Far I Used Task.

error in my-package setup command:

Okay this seems like an error in the setup.py you have in the "mypackage" folder

3 years ago
0 Hello, I Have An Error While Installing Git Dependencies Of Local Package: So Far I Used Task.

oh dear 😞 if that's the case I think you should open an Issue on pypa/pip , I'm not sure what we can do other than that ...

3 years ago
0 Hi, I’M Using

GrittyKangaroo27 any chance you can open a GitHub issue so this is not forgotten ?
(btw: I think 1.1.6 is going to be released later today, then we will have a few RCs with improvements on the pipeline, I will make sure we add that as well)

2 years ago
0 I'M New To Clearml And I'D Like To Deploy An Inference Service Based On My Trained Model, Something Like What Bentoml Does Wrapping Flask Api... Is There A Way To Do It Within Clearml?

ContemplativeCockroach39 unfortunately not directly as part of clearml 😞
I can recommend the Nvidia Triton serving (I'm hoping we will have the out-of-the-box integration soon)
meanwhile you can manually run it, see docs:
https://developer.nvidia.com/nvidia-triton-inference-server
docker here
https://ngc.nvidia.com/catalog/containers/nvidia:tritonserver

3 years ago
0 Hi. I Have A

I'm still unclear on why cloning the repo in use happens automatically for the pipeline task and not for component tasks.

I think in the pipeline it was the original default, but it turns out for a lot of users this was not their default use case ...
Anyhow you can also pass repo="." which will load + detect the repo in the execution environment and automatically fill it in

2 years ago
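To make the repo="." tip above concrete, here is a minimal sketch using the decorator-based pipeline API; the component names and step logic are purely illustrative:

from clearml.automation.controller import PipelineDecorator

# repo="." asks the agent to detect the repository of the execution
# environment and fill it in for the component task automatically.
@PipelineDecorator.component(return_values=["total"], repo=".")
def sum_step(numbers):
    return sum(numbers)

@PipelineDecorator.pipeline(name="repo-detect-example", project="examples", version="0.1")
def pipeline_logic():
    print(sum_step([1, 2, 3]))

if __name__ == "__main__":
    # Run locally for the sketch; normally the steps execute on agents.
    PipelineDecorator.run_locally()
    pipeline_logic()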
0 Hi Fam! I’M Trying To Get

Hi QuaintPelican38
Can you ssh to {instance_public_ip_address}:10022 (something like ssh -p 10022 user@IP_HERE )?
Basically just getting the password prompt means you are okay.
I suspect that you have some AWS security definition (firewall) that prevents direct access to the instance, could that be?

3 years ago
0 Hi. I'M Running This Little Pipeline:

Is there any better way to avoid the upload of some artifacts of pipeline steps?

How would you pass "huge datasets (some GBs)" between different machines without storing it somewhere?
(btw, I would also turn on component caching so if this is the same code with the same arguments the pipeline step is reused instead of reexecuted all over again)

2 years ago
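A small sketch of both points above (passing data by reference and turning on component caching), using the decorator-based pipeline API; the step names and the CSV payload are illustrative:

from clearml.automation.controller import PipelineDecorator

# cache=True lets the pipeline reuse a previous execution of this step when
# the code and arguments are unchanged, instead of re-running it.
@PipelineDecorator.component(return_values=["data_path"], cache=True)
def produce(rows: int):
    import csv, tempfile
    f = tempfile.NamedTemporaryFile(mode="w", suffix=".csv", delete=False, newline="")
    csv.writer(f).writerows([[i, i * i] for i in range(rows)])
    f.close()
    # Returning only a reference (path/URI) avoids shipping the payload itself.
    return f.name

@PipelineDecorator.component(return_values=["count"])
def consume(data_path: str):
    with open(data_path) as f:
        return sum(1 for _ in f)

@PipelineDecorator.pipeline(name="cache-and-pass-by-reference", project="examples", version="0.1")
def pipeline_logic():
    print(consume(produce(1000)))

if __name__ == "__main__":
    # Local run for the sketch; across machines the returned path would need
    # to point at shared or remote storage instead of a local temp file.
    PipelineDecorator.run_locally()
    pipeline_logic()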
0 Adding

Makes sense to add it to docker run by default if GPUs are mentioned in agent.

I think this is an arch thing, --privileged is not needed on ubuntu flavor, that said you can always have it if you add it here:
https://github.com/allegroai/clearml-agent/blob/178af0dee84e22becb9eec8f81f343b9f2022630/docs/clearml.conf#L149

clearml-agent daemon --gpus 0 --queue default --docker
But docker still sees all GPUs.

Yes --gpus should be enough, are you sure regarding the --privileged flag?

2 years ago
0 Adding

One thing though - I am running agent on behalf of a regular user.

Oh that might be a credentials / docker service issue (i.e. the user might not have the ability to run a docker with --gpus, but as you mentioned, that seems like an arch thing πŸ™‚ )

2 years ago
0 Hi Anyone

The latest image seems to require driver version 460+ on the host
try this one:
https://docs.nvidia.com/deeplearning/triton-inference-server/release-notes/rel_20-12.html#rel_20-12

3 years ago
0 Another Question: How Can I Make Clearml-Agent Use Pre-Installed Version From The Nvidia/Pytorch (

Why can we even change the pip version in the clearml.conf?

LOL mistakes learned the hard way πŸ™‚
Basically too many times in the past pip versions were a bit broken, which is fine if they are used manually and users can reinstall a different version, but horrible when you have an automated process like the agent, so we added a "freeze version" option to give greater control. Make sense?

2 years ago
0 The

Do you think this is better ? (the API documentation is coming directly from the python doc-string, so the code will always have the latest documentation)
https://github.com/allegroai/clearml/blob/c58e8a4c6a1294f8acec6ed9cba81c3b91aa2abd/clearml/datasets/dataset.py#L633

3 years ago
0 The

Optional[Sequence[Union[str, Dataset]]]: None, a list of strings, or a list of Dataset objects (each one is a parent, supporting multiple parents)

3 years ago
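The typing above maps onto Dataset.create() roughly as follows; a short sketch where the project, dataset names, folder, and the second parent ID are placeholders:

from clearml import Dataset

# Each parent can be passed either as a dataset ID string or a Dataset object.
parent_a = Dataset.get(dataset_project="examples", dataset_name="raw-2023")
parent_b_id = "<SECOND_PARENT_DATASET_ID>"

# A child dataset with multiple parents, mixing both accepted forms.
child = Dataset.create(
    dataset_project="examples",
    dataset_name="merged",
    parent_datasets=[parent_a, parent_b_id],
)
child.add_files("./new_files")
child.upload()
child.finalize()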
0 Hi Anyone

Bottom line: the driver version on the host machine does not support the CUDA version you have in the docker container

3 years ago
0 Hello! I Add To Inject The Configuration Into Clearml With

I think it would make sense to have one task per run to make the comparison on hyper-parameters easier

I agree. Could you maybe open a GitHub issue on it, I want to make sure we solve this issue πŸ™‚

3 years ago
0 Hello! I Add To Inject The Configuration Into Clearml With

It's a running number because PL is creating the same TB file for every run

3 years ago
0 Hi Everyone! I Have A Question About The Pipeline Controller: I Would Like To Build A Ml Pipeline Similar To The One At

Hi LovelyHamster1
That is a good point: since the Pipeline kind of assumes the tasks are already in the system, it clones them (leaving you with the original Draft Task).
I think we should add a flag to the pipeline so that if the Task is in draft it will use it (instead of cloning it). Since it seems your pipeline is quite straightforward, I'm not sure you actually need the pipeline controller class, you can perform the entire thing manually, see example here: https://github.com/allegroai/clea...

3 years ago
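Since the example link above is truncated, here is a rough sketch of the usual manual clone-and-enqueue pattern it alludes to; the template task ID, parameter name, and queue name are placeholders:

from clearml import Task

# Placeholder ID of a template (draft) task to run as a pipeline step.
template = Task.get_task(task_id="<STEP_TEMPLATE_TASK_ID>")

# Clone it so the original draft stays untouched, tweak parameters, enqueue.
step = Task.clone(source_task=template, name="step 1 (cloned)")
step.set_parameters({"General/learning_rate": 0.01})  # illustrative parameter
Task.enqueue(step, queue_name="default")

# Wait for the step to finish before launching the next one
# (by default this waits for a completed/stopped state).
step.wait_for_status()
print("step finished with status:", step.get_status())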
0 Hey! I Just Finished The Movie

GiddyPeacock64 Are you sending the jobs from JupyterLab Kale extension ?

EDIT:
Is the pipeline step itself calling Task.init?

3 years ago
0 Hi. I'M Running This Little Pipeline:

We already have the feature-store to save all data, that’s why I don’t need to save it (just a reference to the dataset version).

that makes sense, so why don't you point to the feature store ?

I can have different steps of the pipeline running on different machines. But this is not my use case.

if they are running on the same machine you can basically return a path to the local storage or change the output_uri to the local storage, this will cause them to get serialized to the l...

2 years ago
0 Hi. I'M Running This Little Pipeline:

I could merge some steps, but as I may want to cache them in the future, I prefer to keep them separate

Makes total sense, my only question (and sorry if I'm dwelling too much on it) is how would you pass the data between step 2 and step 3, if this is a different process on the same machine?

2 years ago
0 Hi. I'M Running This Little Pipeline:

Well you do somehow need to pass the data, no?

2 years ago
0 Hi Folks, Is There A Way To Force Clear-Ml Agent With --Docker To

My bad, you have to pass it to the container itself:
https://github.com/allegroai/clearml-agent/blob/a5a797ec5e5e3e90b115213c0411a516cab60e83/docs/clearml.conf#L149
extra_docker_arguments: ["-e", "CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1"]

2 years ago