This is because Datasets have a new view now. Just under 'Projects' on the left bar you have a button for Datasets 🙂
Only new ones, after you use SDK 1.6.0 🙂
Hi, SmugTurtle78
Hi, is there any manifest of the relevant policies needed for the AWS account (if we are using autoscaling)?
I'm not sure. From my experience the autoscaler requires permissions to spin instances up & down, list instances, check tags, and read machine logs.
You should try running with those permissions. If something is missing, you'll see it in the log 🙂
Also, is there a way to use a GitHub deploy key instead of a personal token?
Do you mean git user/...
YummyLion54 , let me take a look 🙂
I can see that the old Allegro AI Trains server is no longer available:
What do you mean? You mean the AMI?
Regarding AWS deployment - I guess it really depends on your usage. Are you interested in holding the server on an EC2 instance?
Hi CrookedMonkey33 ,
Can you please open the developer tools (F12) and see what is returned when you navigate to the 'projects' page (when you see 41 experiments)
Also go into 'Settings' -> 'Configuration' and verify that you have 'Show Hidden Projects' enabled
Hi, regarding your questions:
If you create and finalize the dataset, it should upload the file contents to the fileserver (or any other storage you configure). The dataset is an object similar to a task - it has a unique ID. You can add metric columns to the experiments table by clicking the little cog wheel at the top right of the table. You can also select multiple experiments and compare them (bottom left on the bar that appears after selecting more than 1 expe...
That's an option. This really depends on your usage - if you want those 'custom parameters' to be accessible by other tasks, save them as artifacts. If you only want visibility, save them as scalars. You have a nice example on usage here: https://github.com/allegroai/clearml/blob/master/examples/reporting/scalar_reporting.py
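For reference, a minimal sketch of both options (project/task names and the parameter values are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="params-demo")

custom_params = {"threshold": 0.75, "window": 10}

# Option 1: artifact - retrievable by other tasks via Task.get_task(...).artifacts
task.upload_artifact(name="custom_parameters", artifact_object=custom_params)

# Option 2: scalars - visible in the UI (and usable as experiment-table columns)
logger = task.get_logger()
for name, value in custom_params.items():
    logger.report_scalar(title="custom", series=name, value=value, iteration=0)
```

Both calls go through the same `Task` object, so you can freely mix the two in one script.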
Hi @<1523703397830627328:profile|CrookedMonkey33> , not sure I follow. Can you please elaborate more on the specific use case?
Currently you can add plots to the preview section of a dataset
Hi CrookedMonkey33 , can you elaborate a bit more on what you want done?
Hi @<1545216070686609408:profile|EnthusiasticCow4> , start_locally() has the run_pipeline_steps_locally parameter for exactly this 🙂
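A short sketch of how this looks (pipeline name, project, and the step function are placeholders):

```python
from clearml import PipelineController


def step_one():
    # placeholder step logic
    return 42


pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")
pipe.add_function_step(name="step_one", function=step_one)

# Runs the controller AND every step in the local process,
# instead of enqueuing steps to remote agents
pipe.start_locally(run_pipeline_steps_locally=True)
```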
assuming that they are set up the same way as the user/secret keys, I guess they would work until they expire 🙂
Hi @<1853608151669018624:profile|ColossalSquid53> , if there is no connectivity to the clearml server, your python script will run regardless. clearml will cache all logs/events and then flush them once connectivity to the server is restored.
Hey, maybe AgitatedDove14 or ExasperatedCrab78 can help
I don't think such a feature exists currently but you could put in a feature request on GitHub 🙂
Hi @<1523708920831414272:profile|SuperficialDolphin93> , simply set output_uri=/mnt/nfs/shared in Task.init
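Something like this (project/task names are placeholders; the sketch assumes /mnt/nfs/shared is mounted on the machine running the task):

```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="nfs-output-demo",
    output_uri="/mnt/nfs/shared",  # models/artifacts are uploaded here
)
```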
Hi @<1523708920831414272:profile|SuperficialDolphin93> , does it run fine if you use a regular worker?
SuperficialDolphin93 , looks like a strange issue. Can you maybe open a github issue for better tracking?
Hi @<1774245260931633152:profile|GloriousGoldfish63> , this feature is pending enablement on the clearml-serving side and will be supported in the next release
You can do it in one API call as follows:
https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksget_all
You can use scroll_id to scroll through the tasks. When you call tasks.get_all you will get a scroll_id back. Use that scroll_id in the following calls to go through the entire database. Considering you have only 2k tasks, you can cover this in 4 scrolls 🙂
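A sketch of the scroll loop against the REST endpoint. The server URL and key/secret are placeholders you'd take from your own clearml.conf, and this assumes key/secret basic auth is accepted directly on the endpoint (your server may instead require fetching a token via auth.login first):

```python
import base64
import json
import urllib.request

# Assumptions: placeholders - replace with your server URL and credentials
API_URL = "https://api.clear.ml/tasks.get_all"
ACCESS_KEY, SECRET_KEY = "YOUR_ACCESS_KEY", "YOUR_SECRET_KEY"


def api_post(body):
    """Send one tasks.get_all request and return its 'data' payload."""
    token = base64.b64encode(f"{ACCESS_KEY}:{SECRET_KEY}".encode()).decode()
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + token,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]


def scroll_all_tasks(post=api_post, page_size=500):
    """Keep calling tasks.get_all, feeding back scroll_id, until exhausted."""
    tasks, scroll_id = [], None
    while True:
        body = {"page_size": page_size}
        if scroll_id:
            body["scroll_id"] = scroll_id
        data = post(body)
        page = data.get("tasks", [])
        tasks.extend(page)
        scroll_id = data.get("scroll_id")
        if len(page) < page_size or not scroll_id:
            break
    return tasks
```

With ~2k tasks and the default page size of 500, the loop above makes 4 calls.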
Hi @<1523708920831414272:profile|SuperficialDolphin93> , I think this is what you're looking for
None
@<1644147961996775424:profile|HurtStarfish47> , you also have the auto_connect_frameworks parameter of Task.init to disable the automatic logging, and then manually log using the Model module to name and register the model (and upload it, of course)
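A sketch of that flow (project/task/model names, the "pytorch" framework key, and the weights file are placeholders; use the framework you actually train with):

```python
from clearml import Task, OutputModel

# Disable automatic model logging for the framework in question
task = Task.init(
    project_name="examples",
    task_name="manual-model-demo",
    auto_connect_frameworks={"pytorch": False},
)

# Manually register and upload the model under the name you want
model = OutputModel(task=task, name="my-custom-model-name")
model.update_weights(weights_filename="model.pt")
```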
Hi @<1644147961996775424:profile|HurtStarfish47> , how do you specify file name for the model regardless of ClearML?
Hi @<1523701842515595264:profile|PleasantOwl46> , I think you can add a PR here - None
Hi @<1719162252994547712:profile|FloppyLeopard12> , not sure I understand what you're trying to do, can you elaborate step by step?
Hi @<1523701842515595264:profile|PleasantOwl46> , in the info section you can see the user name but not the ID (however, it is returned in the API request the webUI sends)
What is your use case?
Why not give an option to provide their user name and then convert it in the code?
I think it's possible there was an upgrade in Elastic. I'd suggest going over the release notes to see if this happened with the server