Hi @<1574931891478335488:profile|DizzyButterfly4> , not sure what you mean. Can you elaborate on what you see vs what you expect to see?
I have no idea, but considering that the version for http://app.clear.ml was updated recently (last week from what I noticed) I'd be guessing that the self hosted server should be right around the corner 😉
JitteryCoyote63 , I was referring to http://app.clear.ml . When you look at the profile page there you will see 3.8; I noticed it recently changed.
Hi!
I think the example here should help you.
https://github.com/allegroai/clearml/blob/master/examples/reporting/pandas_reporting.py#L19
Together with this
https://github.com/allegroai/clearml/blob/master/examples/reporting/artifacts.py
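If it helps, here's a minimal sketch combining the two (the DataFrame contents and project/task names are just placeholders):
` import pandas as pd
from clearml import Task

# Placeholder project/task names
task = Task.init(project_name="examples", task_name="pandas artifact example")

# A small placeholder DataFrame
df = pd.DataFrame({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})

# Upload the DataFrame as an artifact (as in artifacts.py)
task.upload_artifact(name="data frame", artifact_object=df)

# Or report it as a table in the UI (as in pandas_reporting.py)
task.get_logger().report_table(title="table", series="pandas DataFrame", iteration=0, table_plot=df) `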
Tell me if it helped 🙂
I think it depends on your implementation. How are you currently implementing top X checkpoints logic?
Can I assume you're running the agent (in daemon mode) on the same machine where you're running the clearml-agent daemon --stop command?
MelancholyElk85 , I think the upload() function has the parameter you need: output_uri
I played a bit with it and got to the value. OutrageousSheep60 , please tell me if this helps you 🙂
` >>> task.set_user_properties(x=5)
True
>>> y = task.get_user_properties()
>>> y
{'x': {'section': 'properties', 'name': 'x', 'value': '5'}}
>>> y["x"]["value"]
'5' `
Hi @<1523706700006166528:profile|DizzyHippopotamus13> , you can simply do it in the experiments dashboard in table view. You can rearrange columns and add custom columns according to metrics and hyperparameters. And of course you can sort the columns.
Hi TartSeagull57 ,
Which one is fig 1 and which one is fig 2?
How are you logging them?
RoughTiger69 Hi!
Regarding your questions:
1. You can use Task.force_requirements_env_freeze(requirements_file='repo/some_folder/requirements.txt') before your task = Task.init(...) (see the sketch below)
2. You can configure sdk.development.detect_with_pip_freeze=true in your ~/clearml.conf file for full env detection from the environment you're running from
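Here's a minimal sketch of option 1 (project/task names are placeholders):
` from clearml import Task

# Freeze the requirements from the given file *before* Task.init
Task.force_requirements_env_freeze(requirements_file='repo/some_folder/requirements.txt')

# Placeholder project/task names
task = Task.init(project_name='examples', task_name='frozen requirements') `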
Looks like a permissions issue:
nested: IOException[failed to test writes in data directory [/usr/share/elasticsearch/data/nodes/0/indices/mQ-x_DoZQ-iZ7OfIWGZ72g/_state] write permission is required]; nested
@<1574931891478335488:profile|DizzyButterfly4> , what do you feel was lacking from the documentation? A usage example?
DrabSwan66 Hi!
What version are you trying to install? If the machine in question has an issue installing opencv then the agent will most likely fail as well.
When looking at the base task, do you have that metric there?
Results -> Scalars 🙂
Hi @<1594863230964994048:profile|DangerousBee35> , I'm afraid that the self-hosted version and the PRO version are entirely disconnected. There are many more advanced features in the Scale/Enterprise licenses, where you can have a mix of all the features you might be looking for. You can see the different options here - None
No, but I think it would make sense to actually share reports outside of your workspace, similar to experiments. I'd suggest opening a GitHub feature request
Hi @<1643060831954407424:profile|ScrawnyMole16> , you can export your report to PDF and share it with your colleagues 🙂
Hi @<1747428509627715584:profile|CumbersomeDuck6> , you can basically expose it as an argument in the configuration section. For example, using argparse would work very conveniently (see the sketch below).
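A minimal sketch, assuming a hypothetical --batch-size argument (ClearML captures argparse arguments automatically once Task.init runs):
` import argparse
from clearml import Task

# Placeholder project/task names
task = Task.init(project_name="examples", task_name="argparse example")

parser = argparse.ArgumentParser()
# Hypothetical argument - any argparse argument is captured automatically
parser.add_argument("--batch-size", type=int, default=32)
args = parser.parse_args()

# args.batch_size now shows up in the task's configuration section,
# and can be overridden when the task is cloned and run by an agent
print(args.batch_size) `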
@<1719524641879363584:profile|ThankfulClams64> , the Genesis autoscaler feature is currently disabled. You can still use the AWS and GCP autoscalers available though
Also, what happens if you try using only one GPU with pytorch-lightning? Is still nothing reported, i.e. console/scalars?
Hi @<1686547375457308672:profile|VastLobster56> , from my understanding Hadoop is a collection of different utilities. Do you have something specific in mind?
Is the on-prem setup also K8s? The question is: if you run code unrelated to ClearML on EKS, do you still get the same issue?
I think you're right. But it looks like an infrastructure issue related to Yolo
Try setting the following environment variables:
%env CLEARML_WEB_HOST=
%env CLEARML_API_HOST=
%env CLEARML_FILES_HOST=
%env CLEARML_API_ACCESS_KEY=...
%env CLEARML_API_SECRET_KEY=...
and try removing the clearml.conf file 🙂
Hi BoredBat47 , I'm not sure. However, I doubt that any remote agents would pick up such a configuration.
Hi @<1523702932069945344:profile|CheerfulGorilla72> , I think you need to map the relevant folders into the docker container. You can add docker arguments to the task using Task.set_base_docker (see the sketch below).
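A minimal sketch, assuming the docker_image / docker_arguments parameters of set_base_docker (image name and paths are placeholders):
` from clearml import Task

# Placeholder project/task names
task = Task.init(project_name="examples", task_name="docker mount example")

# Mount a host folder into the container; image and paths are placeholders
task.set_base_docker(
    docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04",
    docker_arguments="-v /host/data:/data",
) `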