I think you can force it to be started, let me check (I'm pretty sure you can on an aborted Task).
TartSeal39 please let me know if it works, conda is a strange beast and we do our best to tame it.
Specifically, when you execute manually in a conda env we collect (separately) the conda packages & the python packages (so later we can replicate on both conda & pip, or at least do our best)
Are you running both the development env and the agent with conda?
UnevenDolphin73 if you have the time to help fix / make it work it will be greatly appreciated!
No idea, I just remember it is relatively old
Actually, no. This is to spin up the clearml-server on GCP, not the agent
Hi UnevenDolphin73
Can one compare experiments/tasks from different projects?
Yes, the easiest way is to go to the parent project ("all projects" if they have no common parent), then search for the specific Tasks (i.e. filter or use the search bar), then multi-select them.
wdyt?
Yes, which looks like a lot, but you only need to do that once.
The autoscaler would make (1) redundant (as it would spin instances up/down based on the jobs in the queue)
yup, it's there in draft mode so I can get the latest git commit when it's used as a base task
Yes that seems to be the problem, if it is in draft mode, you have no outputs...
Finally managed; you keep saying "all projects" but you meant the "All Experiments" project instead. That's a good start.
Thanks!
Yes, my apologies, you are correct: "All Experiments"
- Could we add a comparison feature directly from the search results (Dashboard view -> search -> highlight some experiments for comparison)?
Totally forgot about the global search feature, hmm I'm not sure the webapp is in the correct "state" for that, i.e. I think that the selection only works in "table view", which is the "all experiments" flat table
- Could we add a filter on the project name in the "All Experiments" project?
You mean "filter by project" ?
Could we ad...
I meant even just a link to a blank comparison and one can then add the experiments from that view
Just making sure you are aware, once you are in comparison you can always add Tasks (any Task):
Notice you can press "Add experiments", then select any experiment (including from all projects, using the filters)
Notice you need to remove all filters (the red X on the right side of the filter icon)
could be nice to have a direct "task comparison" link in the UI somewhere,
you mean like a "cart" for comparison ? or just to "save the state" so you can move between projects ?
why not let the user start with an empty comparison page and add them from "Add Experiment" button as well?
Apologies, I was not clear. Yes I'm with you, this is a great idea!
DeliciousBluewhale87 basically any solution that is compliant with the S3 protocol will work. An example:
output_uri="s3://<SERVER>:<PORT>/bucket/folder"
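For reference, a minimal sketch of what that looks like in code, assuming a MinIO-style S3-compatible endpoint (the host, port, bucket, project, and task names below are placeholders):
```
from clearml import Task

# Sketch only: host/port/bucket are placeholders for your S3-compatible endpoint.
# Credentials for that endpoint go in the sdk.aws.s3 section of clearml.conf.
task = Task.init(
    project_name="examples",
    task_name="s3-compatible-output",
    # Any S3-compatible storage (e.g. MinIO) can be addressed as s3://host:port/bucket/...
    output_uri="s3://my-storage-host:9000/bucket/folder",
)
```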
Are you sure Nexus supports this protocol ?
I "think" nexus sits on top of a storage solution (like am object storage), meaning we can use the same storage solution Nexus is using.
Just to clarify, we do not support the artifactory protocol Nexus provides for storing models/artifacts. But we do support it as a source for python packages used by the a...
For example, ServerA stores files at /opt/clearml but ServerB stores at /some_path/clearml
As long as you adjust your docker-compose yaml file, it should be just fine
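To illustrate, a hedged sketch of the kind of change meant here, based on the fileserver service of a typical clearml-server docker-compose (only the host-side paths on the left change; your service names and paths may differ):
```
services:
  fileserver:
    volumes:
      # ServerA would use /opt/clearml/... on the left side; ServerB points the
      # same container mounts at its own location instead:
      - /some_path/clearml/logs:/var/log/clearml
      - /some_path/clearml/data/fileserver:/mnt/fileserver
```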
OmegaConf is the configuration; the overrides are in the Hyperparameters "Hydra" section
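To make that concrete, a minimal sketch of a Hydra app with ClearML (the config path, config name, and project/task names are placeholders): the composed OmegaConf shows up as the task's configuration, and the command-line overrides under the "Hydra" hyperparameter section, as described above.
```
import hydra
from omegaconf import DictConfig, OmegaConf
from clearml import Task

# Sketch only: "conf"/"config" and the project/task names are placeholders.
@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # Task.init inside a Hydra app lets ClearML pick up the OmegaConf
    # configuration and the command-line overrides automatically.
    task = Task.init(project_name="examples", task_name="hydra-demo")
    print(OmegaConf.to_yaml(cfg))

if __name__ == "__main__":
    main()
```
Running it with an override such as `python train.py training.lr=0.01` (a made-up parameter, just for illustration) would land that override in the "Hydra" section.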
And you have the exact same folder structure / content, and server A/B give a different set of experiments ?
(is serverB empty, meaning no experiments at all?)
Meaning if I create a sleep endpoint that is async
Hmm are you calling "sleep" or "asyncio.sleep"?
Also are you running the serving service with Gunicorn or Uvicorn?
see here:
None
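For context on why that question matters, a small FastAPI-style sketch (the endpoint paths are made up): a plain `time.sleep` blocks the worker's event loop, while `await asyncio.sleep` yields control so other requests keep being served.
```
import asyncio
import time

from fastapi import FastAPI

app = FastAPI()

# Hypothetical endpoints, only to show the difference being asked about.
@app.get("/blocking_sleep")
async def blocking_sleep():
    time.sleep(5)           # blocks the whole worker / event loop
    return {"slept": 5}

@app.get("/async_sleep")
async def async_sleep():
    await asyncio.sleep(5)  # yields, other requests keep being served
    return {"slept": 5}
```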
Thanks SubstantialElk6 !
Happy new year!
Thanks! Let me check if we can reproduce it. BTW what's your clearml package version?
Any specific use case for the required "draft" mode?
Hi VivaciousBadger56
Basically you can think of MLRun as "amazon lambda service without amazon". It is designed to run a "function" at scale on multiple nodes.
ClearML on the other hand is an MLOps platform. It does experiment tracking, it orchestrates Tasks (think jobs), it does data management, and lastly we recently released serving. These are two different use cases.
Am I making sense here?
I have the agent configured to force install requirements.txt
what do you mean by that?
Hmm that is odd, let me see if I can reproduce it.
What's the clearml version you are using ?
In the "installed packages" section you should have "nvidia-dali-cuda110" In the agent's clearml.conf you should add:extra_index_url: ["
", ]
https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf#L78
Should solve the issue
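For orientation, the setting sits under the agent's package_manager section of clearml.conf (the URL below is a placeholder for the extra index that hosts the DALI wheels); roughly:
```
# agent-side clearml.conf -- sketch, the index URL is a placeholder
agent {
    package_manager {
        extra_index_url: ["https://<extra-index-url>"]
    }
}
```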