Hi @<1649946171692552192:profile|EnchantingDolphin84> , what about this example?
None
Add an argparser to change the configuration of the HyperParameterOptimizer class.
What do you think?
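A minimal sketch of that idea, assuming you expose a few optimizer settings via argparse (the flag names and defaults below are hypothetical) and then pass them into HyperParameterOptimizer in your script:

```python
import argparse

def parse_hpo_args(argv=None):
    # Hypothetical flags; the parsed values would be forwarded to
    # clearml.automation.HyperParameterOptimizer(...) in your own code
    parser = argparse.ArgumentParser(description="Configure the HPO run")
    parser.add_argument("--total-max-jobs", type=int, default=10)
    parser.add_argument("--max-concurrent-tasks", type=int, default=2)
    parser.add_argument("--objective-metric", default="validation/accuracy")
    return parser.parse_args(argv)

args = parse_hpo_args(["--total-max-jobs", "5"])
print(args.total_max_jobs, args.objective_metric)
```

This keeps the optimizer settings out of the code, so re-running the script with different budgets doesn't require an edit.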
Hi @<1724960475575226368:profile|GloriousKoala29> , to address your questions:
- No, that is not possible currently. Think of the Datasets feature as a catalogue of data, meaning you can see what data is saved but you can only see what's inside when you pull it locally.
- I'm afraid not, ClearML basically saves links to the data but doesn't directly "look" at the data
What do you get when you call get_configuration_objects() now?
@<1523701132025663488:profile|SlimyElephant79> , it looks like you are right. I think it might be a bug. Could you open a GitHub issue to follow up on this?
As a workaround, you can programmatically set Task.init(output_uri=True) ; this will upload all experiment outputs to whatever is defined as the files_server in clearml.conf .
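For reference, the files_server is set in the api section of clearml.conf ; a typical entry looks like this (the URL below is the default for a local server and is only illustrative):

```
api {
    # With Task.init(output_uri=True), artifacts and models are uploaded here
    files_server: "http://localhost:8081"
}
```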
Hi DeliciousBluewhale87 , yes, I think it does, although ClearML-Serving works as a control plane on top of your serving engine.
Hi @<1523701553372860416:profile|DrabOwl94> , can you check if there are some errors in the Elastic container?
Hi @<1644147961996775424:profile|HurtStarfish47> , to answer your questions:
- Is there an autoscaler available for Azure? I'm afraid not in the self-hosted version. An Azure autoscaler is available only in the Scale/Enterprise licenses.
- I'm interested in the clearml-serving functionalities, would that be suitable for real-time inference on arm64 devices? Yes 🙂
Also, in the link above there is the warning
Do not enqueue training or inference tasks into the services queue. They will put an unnecessary load on the server.
I am not using the dedicated services queue on the server, but I am doing training and inference in the pipeline component.
Steps of a pipeline should have dedicated queues with resources relevant to them.
Hi @<1749965229388730368:profile|UnevenDeer21> , an NFS is one good option. You can also point all agents on the same machine to the same cache folder, or, just as you suggested, point all workers to the same cache on a mounted NFS.
Hi @<1785479228557365248:profile|BewilderedDove91> , I think this is the env variable you're looking for - CLEARML_AGENT_FORCE_CODE_DIR
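A minimal sketch, assuming you set the variable in the process environment before the agent-related code runs (the directory path below is hypothetical):

```python
import os

# CLEARML_AGENT_FORCE_CODE_DIR points the agent at a fixed code directory;
# "/srv/my_code" is a made-up example path
os.environ["CLEARML_AGENT_FORCE_CODE_DIR"] = "/srv/my_code"
print(os.environ["CLEARML_AGENT_FORCE_CODE_DIR"])  # -> /srv/my_code
```

You can equally export it in the shell that launches the agent daemon.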
Everything in None
Hi @<1523701122311655424:profile|VexedElephant56> , do you get the same response when you try to run a script with Task.init() without agent on that machine?
Hi YummyFish22 , how did you arrive at this error message?
FrothyShrimp23 , I think this is more of a product design decision - the idea of a published task is that it cannot be easily changed afterwards. What is your use case for frequently unpublishing tasks? Why publish them to begin with? And why manually?
Hi CloudySwallow27 , regarding - Process terminated by user - Are you running Hyperparam Optimization?
Regarding CUDA - yes, you need CUDA installed (or run it from a docker with CUDA) - ClearML doesn't handle the CUDA installation since this is on a driver level.
Hi @<1833676820357058560:profile|MiniatureGrasshopper70> , I suggest checking out the channel to see anything you can add or fix 🙂
Hi @<1533620191232004096:profile|NuttyLobster9> , can you please elaborate on what you were expecting to get? Can you provide a self-contained snippet that reproduces this?
Or are you just trying to run clearml-agent?
RobustRat47 , you can speed up the connection process by using auto_connect_frameworks to select only the frameworks you're actually using.
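For illustration, auto_connect_frameworks takes a dict mapping framework names to booleans; a sketch of building such a selection (the framework names chosen here are just examples, and in practice the dict is passed as Task.init(..., auto_connect_frameworks=frameworks) ):

```python
# Hypothetical selection: only hook the frameworks actually in use,
# so ClearML skips the instrumentation overhead for the rest
frameworks = {
    "pytorch": True,       # keep model checkpoint logging
    "tensorboard": True,   # keep scalar/plot capture
    "matplotlib": False,   # skip figure capture
    "joblib": False,       # skip joblib model hooks
}
enabled = sorted(name for name, on in frameworks.items() if on)
print(enabled)  # -> ['pytorch', 'tensorboard']
```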
TrickySheep9 , what is the use case? If I understand correctly, do you want to use ClearML's package detection in a script to get the imports, or do you want all the packages in the environment you're running in?
VexedCat68 I think this will be right up your alley 🙂
https://github.com/allegroai/clearml/blob/master/examples/reporting/hyper_parameters.py#L43
Hi AlertCamel57 ,
Exporting/importing data between ClearML servers is not supported in the open source version, as far as I understand.
You can, however, migrate the entire database quite easily by moving /opt/clearml/data (if I'm not mistaken, that's where it sits) to another location. Make sure the server is down while doing so to avoid corruption.
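A hedged sketch of the copy step using Python's stdlib, demonstrated here on temporary directories (in practice the source would be /opt/clearml/data , the destination your backup location, and the server must be stopped first):

```python
import os
import shutil
import tempfile

# Stand-ins for /opt/clearml/data and the backup target; real paths differ
src = tempfile.mkdtemp()
open(os.path.join(src, "mongo_dump.bson"), "w").close()  # fake data file
dst = os.path.join(tempfile.mkdtemp(), "clearml_data")

# copytree preserves the directory layout the server expects on restore
shutil.copytree(src, dst)
print(os.listdir(dst))  # -> ['mongo_dump.bson']
```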
Hi @<1582179661935284224:profile|AbruptJellyfish92> , how do the histograms look when you're not in comparison mode?
Can you provide a self-contained snippet that creates such histograms and reproduces this behavior, please?
Hi @<1558986867771183104:profile|ShakyKangaroo32> , you can do it, but keep in mind that models/artifacts/debug samples are all referenced as links inside Mongo/ES; you'd have to migrate the databases for that.
Hi @<1523701457835003904:profile|AbruptHedgehog21> , what happens if you use a different size?
I think you need to provide the app password for GitHub/Bitbucket instead of your personal password.
You can do it by comparing experiments, what is your use case? I think I might be missing something. Can you please elaborate?