Hi @<1715175986749771776:profile|FuzzySeaanemone21> , can you provide a log of the run? Also, a code snippet that reproduces this behavior on your side?
SmugTurtle78, I'll take a look at it shortly 🙂
containing the correct on-premises S3 settings
Do you mean like an example for MinIO?
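For reference, a minimal sketch of what the on-premises S3 (MinIO) section of clearml.conf can look like — the host, keys, and flags below are placeholders to replace with your own values:

```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # hypothetical MinIO endpoint and keys -- replace with yours
                    host: "minio.example.com:9000"
                    key: "ACCESS_KEY"
                    secret: "SECRET_KEY"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}
```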
Hi MoodySheep3, can you please add a standalone snippet that reproduces this? What version of clearml are you using?
Getting the following error when I try to run this code:
Traceback (most recent call last):
  File "plots-issue.py", line 9, in <module>
    fig = px.pie(df, names='a', facet_col='b')
TypeError: pie() got an unexpected keyword argument 'facet_col'
So during the run the plotly plots are shown on your computer but not in the UI, and they only show up in the UI after the run finishes?
Are your runs long?
Hi @<1533619725983027200:profile|BattyHedgehong22> , does the package appear in the installed packages section of the experiment?
Hi @<1523701062857396224:profile|AttractiveShrimp45> , can you please add the configuration of your HPO app and the log?
Reports are a separate area; it's between the 'Pipelines' and 'Workers & Queues' buttons on the bar on the left 🙂
If it's metrics why not report them as scalars?
https://clear.ml/docs/latest/docs/references/sdk/logger#report_scalar
Hi AttractiveShrimp45 , can you please elaborate on what you mean by KPIs artifact?
My guess is that the other agents are sitting on different machines. Did you verify that the credentials are the same between the different clearml.conf files? Maybe @<1523701087100473344:profile|SuccessfulKoala55> might have an idea
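For reference, the part that has to match across machines is the api section of clearml.conf — a sketch with placeholder server addresses and keys (8080/8008/8081 are the ClearML server default ports):

```
api {
    web_server: "http://my-clearml-server:8080"
    api_server: "http://my-clearml-server:8008"
    files_server: "http://my-clearml-server:8081"
    credentials {
        "access_key" = "AGENT_ACCESS_KEY"
        "secret_key" = "AGENT_SECRET_KEY"
    }
}
```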
Hi @<1523717803952050176:profile|SmoothArcticwolf58> , can you describe a bit about the network between the agent and the new server?
And you're sure that clearml.conf points to the correct server with the right credentials?
You can add basically whatever you want using clearml-serving metrics add ...
It should look something like this
@<1541592204353474560:profile|GhastlySeaurchin98> , I think this is more related to how Optuna works, it aborts the experiment. I think you would need to modify something in order for it to run the way you want
It looks like you're on a self hosted server, the community server is app.clear.ml where you can just sign up and don't have to maintain your own server 🙂
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , HyperDatasets are built mainly for unstructured data, since that problem is more difficult, but all features can also be applied to tabular data. Is there something specific you're looking for?
Make sure to fetch the logger manually and not construct it yourself 🙂
Hi @<1523704461418041344:profile|EnormousCormorant39> , is there any chance this could be indeed network related if it does manage to work sometimes?
Can you add a larger portion of the log with errors?
Also, what type of machines are these? Linux to Linux?
Hi DizzyHippopotamus13 , I'm not sure this is currently possible. Maybe check if there is a GitHub issue about this 🙂
Hi @<1523706700006166528:profile|DizzyHippopotamus13> , you can simply do it in the experiments dashboard in table view. You can rearrange columns, add custom columns according to metrics and hyper parameters. And of course you can sort the columns
Hi @<1570583237065969664:profile|AdorableCrocodile14> , how did you upload the image?
Hi @<1570583237065969664:profile|AdorableCrocodile14> , you can export a report as a PDF 🙂
What host configuration were you using in your last attempts?
What version of clearml, clearml-agent & server are you using?
Hi EmbarrassedSpider34 , what do you get in the log of the experiment you're trying to run? Or do you look at it at the level of the GCP console?
Hi @<1750689997440159744:profile|ShinyPanda97> , I think you can simply move the model to a different project as part of the pipeline
Hi EmbarrassedSpider34, what is your use-case? Isn't the Optimizer object something like a Task object? Since it's a process, I'm not sure you can pickle it. wdyt?
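As a quick illustration of why objects holding live process/thread handles resist pickling (FakeOptimizer is a made-up stand-in, not the actual ClearML Optimizer):

```python
import pickle
import threading

class FakeOptimizer:
    """Hypothetical stand-in: like an object driving a live process,
    it holds a lock handle, which the pickle module refuses to serialize."""
    def __init__(self):
        self._lock = threading.Lock()

try:
    pickle.dumps(FakeOptimizer())
    picklable = True
except TypeError:
    picklable = False

print(picklable)  # False: '_thread.lock' objects cannot be pickled
```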