All artifact links are saved in MongoDB and all debug samples are saved in Elasticsearch. I think you would need to read up on how to change values inside those DBs. I would assume the server would need to be down while such a script is running
I don't think there is a specific API call for that, but you can fetch all the running experiments and then check which users are running them
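Something like this should work with the APIClient, a rough sketch (the status filter is the one I'd try first):

```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# fetch all tasks currently in the running state
running = client.tasks.get_all(status=["in_progress"])
for t in running:
    # 'user' holds the user ID; map it to a name via client.users.get_all() if needed
    print(t.id, t.name, t.user)
```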
Hi ElegantCoyote26 , looks like a connectivity issue. Are you running a self-hosted server?
Hi @<1546303269423288320:profile|MinuteStork43> , how did you set the apiserver in clearml.conf ?
You can configure it in ~/clearml.conf at api.files_server
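For example (the URL is just a placeholder):

```
api {
    # replace with your own files server address
    files_server: "https://files.example.com:8081"
}
```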
Yeah, looks like it reproduces. I suggest opening a GitHub issue to get this fixed 🙂
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , did you have a chance to try out the solution suggested in GitHub and play with it a bit?
Hi @<1755038652741718016:profile|LuckyRobin32> , how are you pointing to the folder?
Did you try what I added? Also, the screenshot is too small; nothing is readable
@<1580005325879119872:profile|SweetCat82> , once an experiment has finished running you can't change its status unless you reset it. I think task.upload_artifact needs to come before your task finishes.
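Roughly like this, a minimal sketch with made-up names:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact demo")
# ... training code ...
# upload while the task is still running, before it completes
task.upload_artifact(name="results", artifact_object={"accuracy": 0.9})
task.close()
```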
How did you try calling it? Fetching the task via SDK and then trying to upload the artifact?
@<1546303277010784256:profile|LivelyBadger26> , it is Nathan Belmore's thread just above yours in the community channel 🙂
GreasyPenguin14 Hi!
I wish I could help but I'm afraid I'll need to ask AnxiousSeal95 for some help with that, please hold tight until he's able to help out 🙂
Hi @<1547752799075307520:profile|ZippyCamel28> , to address your points
- What do you mean by 'reload'?
- You need to go into the project and archive the experiments; then you can delete the project and the archived experiments
- There are some configurations you can play with to report fewer metrics, for example sdk.metrics.plot_max_num_digits (see the snippet below). You should read up on these in the docs. To get an idea of the size of an experiment think of an...
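A minimal clearml.conf sketch (the value is just an assumption, check the docs for the default):

```
sdk {
    metrics {
        # fewer digits per plot point means smaller reported plots
        plot_max_num_digits: 5
    }
}
```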
SubstantialMonkey63 , Hi! What exactly are you looking for? I think you might find some relevant things here: https://github.com/allegroai/clearml/tree/master/examples
I might not be able to get to that but if you create an issue I'd be happy to link or post what I came up with, wdyt?
Taking a look at your snippet, I wouldn't mind submitting a PR for such a cool feature 🙂
Hi @<1623491856241266688:profile|TenseCrab59> , can you elaborate on what you mean by spending this compute on other hyperparameters? I think in theory you could check whether a previous artifact file exists, and then change the parameters & task name from within the code
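Something along these lines, a sketch with hypothetical project/artifact names:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="hparam run")

# hypothetical: fetch the earlier task whose artifact we might reuse
prev = Task.get_task(project_name="examples", task_name="previous hparam run")
if prev is not None and "checkpoint" in prev.artifacts:
    checkpoint_path = prev.artifacts["checkpoint"].get_local_copy()
    # change the parameters & task name from within the code
    task.set_parameter("General/learning_rate", 0.001)
    task.rename("hparam run (warm start)")
```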
Hi @<1524560082761682944:profile|MammothParrot39> , do you mean like an autoscaler?
Hi @<1546303254386708480:profile|DisgustedBear75> , there are a few reasons remote execution can fail. Can you please describe what you were trying to do and please add logs?
Hi FierceRabbit20 , I don't think there is such an option out of the box, but you can simply add it to the machine's startup or create a cron job
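For example, a crontab entry along these lines (assuming you want an agent started on boot; the queue name is a placeholder):

```
# start a ClearML agent on every reboot, listening on the 'default' queue
@reboot clearml-agent daemon --queue default --detached
```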
You can simply run the script from inside the repo once; you can also use execute_remotely to avoid actually running the entire thing
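A minimal sketch (the queue name is an assumption):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="remote run")
# stops local execution here and enqueues the task for an agent to pick up
task.execute_remotely(queue_name="default", exit_process=True)
```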
At least from the log of the agent failure
What do you mean by dashboard of the admin?
Hi @<1523703397830627328:profile|CrookedMonkey33> , not sure I follow. Can you please elaborate more on the specific use case?
Currently you can add plots to the preview section of a dataset
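For example, something like this (names and values are made up):

```python
from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="my dataset")
# report a plot into the dataset's preview section
ds.get_logger().report_histogram(
    title="label distribution",
    series="train",
    values=[120, 80, 40],
)
# then add files, upload and finalize as usual
```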
How did the tasks fail?
Hi @<1690896098534625280:profile|NarrowWoodpecker99> , can you please elaborate on what you mean by limiting code access? You define access to the code via the git credentials in the agent config
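i.e. in the agent's clearml.conf (the values are placeholders):

```
agent {
    git_user: "ci-bot"
    # a personal access token is usually safer than a plain password
    git_pass: "YOUR_TOKEN"
}
```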
SubstantialElk6 , Hi 🙂
Do you mean a working example for Triton integration?
like some details about attributes, dataset size, formats.
Can you elaborate on how exactly you'd be saving this data?
Here, when we define output_uri in Task.init, in which format will the model be saved?
It depends on the framework I guess 🙂
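ClearML uploads whatever file the framework actually writes; output_uri only controls where it is stored. A sketch with a made-up bucket, assuming PyTorch:

```python
import torch
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="output_uri demo",
    output_uri="s3://my-bucket/models",  # hypothetical destination
)
model = torch.nn.Linear(4, 2)
# torch.save writes a .pt file; ClearML picks it up and uploads it as-is
torch.save(model.state_dict(), "model.pt")
```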