You're right, I forgot, ClearML-Agent also tries to match a version to something that will work on the system it's running on
TartSeagull57 , you said the problem was with automatic reporting. Can you give an example of how you solved the issue for yourself?
Hi @<1577830989026037760:profile|EnormousGiraffe79> , can you please provide a self-contained code snippet that reproduces this?
SmugTurtle78 , I think so. Can you verify on your end?
I can see that the old Allegro AI Trains server is no longer available:
What do you mean? You mean the AMI?
Regarding AWS deployment - I guess it really depends on your usage. Are you interested in holding the server on an EC2 instance?
Try running with all of them commented out so it will take the defaults
Hi SparklingHedgehong28 , can you please elaborate on the steps you take during the process + how you connect your config to the task?
What is your scenario? Can you elaborate?
What's the docker image that you're using?
In compare view you need to switch to 'Last Values' to see these scalars. Please see screenshot
That's an interesting question. I think it's possible. Let me check 🙂
Hi ExasperatedCrocodile76 , I think you are correct. This is simply an info print out.
UpsetBlackbird87 , can you give me a code snippet with 3 layers to try and play with?
Hi RattyLouse61 ,
I think packages are detected at runtime, and only the packages used directly by the script are shown. When you run with ClearML-Agent, it will log all packages that were used, including dependencies.
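If you want to make sure a specific package shows up in the requirements even though the script doesn't import it directly, something along these lines should work (just a sketch; the package name and version are placeholders):
```python
from clearml import Task

# Explicitly register a package that the script doesn't import directly,
# so it is logged in the task's installed packages as well.
# Needs to be called before Task.init().
Task.add_requirements("somepackage", "1.2.3")  # placeholder package/version

task = Task.init(project_name="examples", task_name="requirements demo")
```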
Hello MotionlessCoral18 ,
Can you please add a log with the failure?
Also one more question: can we have more than one storage option, a secondary storage maybe? If yes, which changes need to be performed?
You can. But that would entail creating a new dataset with output_uri pointing to the new location
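Roughly something like this (bucket path and names are placeholders, and I'm assuming a fairly recent clearml version):
```python
from clearml import Dataset

# Create a new dataset version and upload its files to the secondary storage
dataset = Dataset.create(
    dataset_name="my_dataset",      # placeholder
    dataset_project="datasets",     # placeholder
)
dataset.add_files(path="/data/local_folder")
# Point the upload at the new (secondary) storage location
dataset.upload(output_url="s3://my-secondary-bucket/datasets")
dataset.finalize()
```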
You don't need to do any special actions. Simply run your script from within a repository and ClearML will detect the repo + commit + uncommitted changes
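For example, this is enough at the top of the script (project/task names are placeholders) - the repo URL, branch, commit and uncommitted diff will show up in the task's execution info:
```python
from clearml import Task

# Run this from inside the git repository - ClearML picks up
# the repo, commit and uncommitted changes automatically.
task = Task.init(project_name="examples", task_name="repo detection demo")
```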
DashingKoala39 , you'll need to configure each server individually 🙂
Hi @<1574931903440490496:profile|CrookedBear44> , are you sure you're using the right email? With what email are you registered?
Hi DashingKoala39 , some people prefer to use S3 or other storage solutions instead of the integrated fileserver. You can configure it at the clearml.conf level and at the task level
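At the conf level you can set sdk.development.default_output_uri in clearml.conf , and at the task level you can pass output_uri to Task.init - something like this (bucket path is a placeholder):
```python
from clearml import Task

# Task-level override: artifacts/models for this task go to S3
# instead of the integrated fileserver. Bucket path is a placeholder.
task = Task.init(
    project_name="examples",
    task_name="s3 output demo",
    output_uri="s3://my-bucket/clearml",
)
```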
Hi TrickyFox41 , I think this issue is solved in 1.9.0, please update to the latest version of clearml
Hi TrickyFox41 , I'm sorry for the confusion. It appears that the issue is solved in the unreleased version 1.9.2 of the server that should be coming out in the next few days (Thursday or start of next week).
Hi TrickyFox41 , how did you save the debug samples? What is the URL of the image?
Can you share an example of the part where you save the files and load them? I'm assuming that all files are saved locally?
SwankySeaurchin41 , I think you don't need to connect pipelines. Think of a pipeline as a DAG execution. You can build it any way you want 🙂
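For example, something roughly like this (project/task names are placeholders) - the DAG is defined purely by the parents of each step:
```python
from clearml import PipelineController

# step2 and step3 both depend on step1, step4 waits for both
pipe = PipelineController(name="my pipeline", project="examples", version="1.0")

pipe.add_step(name="step1", base_task_project="examples", base_task_name="prepare data")
pipe.add_step(name="step2", base_task_project="examples", base_task_name="train model a", parents=["step1"])
pipe.add_step(name="step3", base_task_project="examples", base_task_name="train model b", parents=["step1"])
pipe.add_step(name="step4", base_task_project="examples", base_task_name="compare models", parents=["step2", "step3"])

pipe.start()  # or pipe.start_locally() to debug the pipeline logic locally
```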
I think you would need to add some 'pre' steps. You would want to build the package from the repository ( python setup.py bdist_wheel ) and then you can either install it manually via the startup script OR add it as a requirement using the following syntax in the requirements: file:///srv/pkg/mypackage
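i.e. the requirements file could look something like this (the local path is taken from your example, the other entries are just illustration):
```
clearml
torch
# install the local package directly from its source folder
file:///srv/pkg/mypackage
```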
ShinyLobster84 , Hi 🙂
What do you think about having an API to retrieve all the scalars as tables? Do you think it would be useful?
StaleButterfly40 , alternatively you could use auto_connect_frameworks=False
https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
So torch.save won't automatically save the model; however, you also won't get the scalars/metrics automatically.
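If you only want to stop torch.save from being captured but keep the rest of the automatic reporting, I think you can pass a dict instead of False - a sketch (see the Task.init reference above):
```python
from clearml import Task

# Disable only the PyTorch model auto-logging;
# other frameworks and reporting stay automatic.
task = Task.init(
    project_name="examples",
    task_name="manual model logging",
    auto_connect_frameworks={"pytorch": False},
)
```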