If the credentials don't provide access, the calls will simply fail (there's no fallback - default values are only used in place of an empty configuration).
Note that you should explicitly configure all host values, so you don't end up using a specific server for API access and the default demo server for file server access...
HappyDove3 you can get some more insight into the different configuration methods and how to use them here: https://clear.ml/docs/latest/docs/fundamentals/hyperparameters
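As a quick illustration, here's a minimal sketch of one of those methods - connecting a plain dict (the project/task names and parameter values below are just placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='hyperparameter demo')

# connect a dict of hyperparameters - the returned dict reflects any UI overrides when the task is cloned
params = {'learning_rate': 0.001, 'batch_size': 64}
params = task.connect(params)

Argparse arguments are captured automatically, and configuration files can be attached with task.connect_configuration().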
Hi DefeatedCrab47 ,
The examples folder has just been restructured; you can find the example here:
https://github.com/allegroai/trains/blob/master/examples/services/hyper-parameter-optimization/hyper_parameter_optimizer.py
DepressedChimpanzee34 Experience has shown that some mechanism for mitigating the impact of large sets on browser performance is required.
Your 2nd suggestion of adding an in-app search tool for such sections seems completely in line with ClearML's behaviour in other UI sections (e.g. console logs) - It'd be great if you could open a feature request: https://github.com/allegroai/clearml/issues/new/choose
UnevenDolphin73 I think it'd be easier to track as a separate one.
WittyOwl57 The UI shows a detailed repo and package comparison under the "Details"/"Execution" tab (see sample screenshot), whereas auto-logged environment variables are shown under the "HyperParameters" comparison tab.
What do you find missing beyond those?
ItchyJellyfish73 Have you looked at the https://clear.ml/docs/latest/docs/clearml_agent#dynamic-gpu-allocation ?
DepressedChimpanzee34 Thanks for clarifying where the current debug images display falls short for your use case - Extending the filtering to match the behaviour of the scalars sounds like a great idea 🙂
JitteryCoyote63 Not currently there, but it certainly sounds like something to add to the list - Care to open a feature request? https://github.com/allegroai/clearml/issues/new/choose
CooperativeSealion8 For future reference, notice there's a configuration reference available at https://allegro.ai/docs/references/trains_ref/
Thanks for letting us know @<1784392065820397568:profile|SplendidFox3> - The signup for app.clear.ml had indeed broken, but we should now be back on track - Can you complete the registration?
Hi HealthyStarfish45 ,
Since you're discussing the experiment list, I assume that by "fixed view per experiment" you actually mean "per project" (as the list view is across all experiments in the list)?
Under this assumption, note that the view configuration (column sort, custom columns, filters) is also specified in the browser URL. So, until the Trains UI supports in-app per-project view preferences - You can simply bookmark the URL.
Does this help?
WittyOwl57 No worries 🙂 happens to the best!
SharpDove45 you can programmatically control the configured server using https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/task_module/task_task.html?highlight=set_credentials#clearml.task.Task.set_credentials
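For example, a minimal sketch (the host URLs and credentials below are placeholders, and set_credentials() should be called before Task.init()):

from clearml import Task

Task.set_credentials(
    api_host='https://api.clear.ml',
    web_host='https://app.clear.ml',
    files_host='https://files.clear.ml',
    key='<access key>',
    secret='<secret key>',
)

task = Task.init(project_name='examples', task_name='programmatic credentials')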
DepressedChimpanzee34 "a filter similar to the one in the scalars page where you can display a subset of the reported debug images can be useful"
The scalars page provides a metric hide/show control - Is this the one you mean? The debug images page also provides a filter by metric - Depending on your naming policy this can easily be used to focus on more sparsely appearing images.
Otherwise, an example of the filter you had in mind would be appreciated.
Regardless, direct iteration access cou...
WittyOwl57 Is that information available for you on each of the compared experiments when you view them individually?
DefeatedCrab47 For the most part, mlflow can serve basic ML models using scikit-learn. In contrast, Trains was designed with more general-purpose ML/DL workflows in mind, for which there's no "generic" way to serve models: different scenarios can use different input encodings, model results can be represented in a variety of forms, etc.
Consider also that creating an HTTP endpoint for model inference is quite a breeze: there are multiple examples of Flask on top of any DL/ML framework w...
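Just as an illustration (not a Trains feature), a minimal Flask sketch assuming a pickled scikit-learn model stored in 'model.pkl' and a JSON payload of the form {"features": [[...], ...]}:

import pickle
from flask import Flask, jsonify, request

app = Flask(__name__)

# load the trained model once at startup
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

@app.route('/predict', methods=['POST'])
def predict():
    features = request.get_json()['features']
    return jsonify({'predictions': model.predict(features).tolist()})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)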
@<1523706095791509504:profile|FiercePenguin76> The "Log" tab has been renamed "Console" in ClearML 0.17.0 - Thanks for pointing out the outdated description.
@<1580367723722969088:profile|SmoothDuck83> CSV export is only available for table plots
GreasyPenguin14 That's an annoying bug indeed - Thanks for spotting it. If you need to circumvent it before a fix comes out in one of the upcoming releases, you can programmatically use the https://clear.ml/docs/latest/docs/references/api/endpoints#post-projectsupdate e.g.:
from clearml.backend_api.session.client import APIClient
client = APIClient()
client.projects.update(project='<project ID>', description='My new description')
Note you can get your project's ID either from the webapp URL...
UpsetTurkey67 The single set of online documentation ( https://clear.ml/docs/latest/docs ), denotes OSS/Free-SaaS/Paid features as such. For example: https://clear.ml/docs/latest/docs/configs/clearml_conf#configuration-vault
@<1523701157564780544:profile|TenseOstrich47> The storage in question here is what's available on the machine hosting the ClearML server's docker containers (specifically, the ES one).
@<1628927672681762816:profile|GreasyKitten62> When you have specific display considerations, you can implement them through report_table's 'extra_layout' and 'extra_data' parameters
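For instance, a minimal sketch (the layout/data keys shown are illustrative Plotly table options; anything the Plotly table accepts in its 'layout'/'data' can be passed through):

import pandas as pd
from clearml import Task

task = Task.init(project_name='examples', task_name='styled table')
df = pd.DataFrame({'metric': ['accuracy', 'loss'], 'value': [0.93, 0.21]})

task.get_logger().report_table(
    title='results',
    series='run 1',
    iteration=0,
    table_plot=df,
    extra_layout={'title': {'text': 'Run results'}},  # forwarded to the Plotly layout
    extra_data={'columnwidth': [2, 1]},               # forwarded to the Plotly table trace
)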
@<1523701157564780544:profile|TenseOstrich47> This is typically indicative of insufficient server disk space, causing ES to go into read-only mode or to mark active shards as inactive or unassigned (see the FAQ).
The disk watermarks controlling the ES free-disk constraints are defined by default as % of the disk space (so it might look to you like you still have plenty of space, but ES thinks otherwise). You can configure di...
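In case it helps, here's a minimal sketch of adjusting these settings at runtime via the ES REST API (assuming ES is reachable on the ClearML server host at the default port 9200; the threshold values are just examples):

import requests

# switch the percentage-based watermarks to absolute free-space thresholds
requests.put(
    'http://localhost:9200/_cluster/settings',
    json={'transient': {
        'cluster.routing.allocation.disk.watermark.low': '10gb',
        'cluster.routing.allocation.disk.watermark.high': '5gb',
        'cluster.routing.allocation.disk.watermark.flood_stage': '1gb',
    }},
)

# once disk space is freed, lift the read-only block ES placed on the indices
requests.put(
    'http://localhost:9200/_all/_settings',
    json={'index.blocks.read_only_allow_delete': None},
)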
MelancholyElk85 Thanks for calling attention to this. What do you think would have made it easier for you to notice the available extended list content?
I would assume that a "type to match" option would also have helped?
It'd be appreciated if you could open a feature request ( https://github.com/allegroai/clearml/issues/new/choose ) so this can be pushed forward.
DefeatedCrab47 Happy you're finding Trains useful 🙂
"but it definitely has its advantages if TRAINS would support it (early stage Data Science infrastructure)."
No doubt, and I definitely see such a usable example in the cards for Trains' upcoming versions...
Take a look at https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#running-the-pipeline :
By default, pipelines are enqueued for execution by a ClearML Agent. You can explicitly change this behaviour in your code.
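For instance, a minimal sketch (the project/step names are placeholders):

from clearml import PipelineController

pipe = PipelineController(name='example pipeline', project='examples', version='1.0.0')
pipe.add_step(name='stage_data', base_task_project='examples', base_task_name='data task')

# default behaviour: the pipeline logic is enqueued (to the 'services' queue) and run by a ClearML Agent
# pipe.start()

# run the pipeline logic on this machine, while the steps are still dispatched to agents
pipe.start_locally()

# or run everything locally (logic + steps), e.g. for debugging
# pipe.start_locally(run_pipeline_steps_locally=True)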