Hi @<1546665634195050496:profile|SolidGoose91> , I think this capability exists when running pipelines. The pipeline controller will detect spot instances that failed and will retry running them.
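A minimal sketch of how that could look, assuming the retry_on_failure argument of PipelineController.add_step (project, task, and step names here are placeholders, not from the original thread):

```python
from clearml import PipelineController

# Hedged sketch: "examples" / "prep task" / "data_prep" are placeholder names.
pipe = PipelineController(name="example-pipeline", project="examples")
pipe.add_step(
    name="data_prep",
    base_task_project="examples",
    base_task_name="prep task",
    retry_on_failure=3,  # re-launch the step up to 3 times if its node fails (e.g. spot eviction)
)
pipe.start()
```

This requires a running ClearML server and agent to actually execute.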
Are you using the PRO or the open source auto scaler?
Can you try hitting F12 and seeing if there are any errors in console?
Hi @<1523701295830011904:profile|CluelessFlamingo93> , I think you need this module as a part of repository, otherwise, how will the pipeline know what code to use?
Hi UpsetSheep55 ,
The permissions feature indeed exists only in the enterprise version. There are no examples for it, since it's an enterprise-only feature.
Hi @<1523701083040387072:profile|UnevenDolphin73> , not in the open source
Hi,
I can't seem to reproduce. The steps I tried are as follows:
Compare 2 experiments
Enough scalar graphs to have a scroll bar
Click on the eye to make some graphs disappear, they disappear but no empty spaces are shown. Can you maybe add a screenshot?
Hi @<1523701842515595264:profile|PleasantOwl46> , I think it's a Docker Hub rate limit for non-paying accounts pulling too many images, unrelated to ClearML. You can always host your own container registry as well.
Hi @<1523702932069945344:profile|CheerfulGorilla72> , in Task.init
specify output_uri=
None
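A minimal sketch of what that looks like, assuming an S3 bucket as the destination (the bucket URI is a placeholder; output_uri=True would use the default files server instead):

```python
from clearml import Task

# Hedged sketch: the bucket URI below is hypothetical.
task = Task.init(
    project_name="examples",
    task_name="upload artifacts",
    output_uri="s3://my-bucket/clearml-artifacts",
)
```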
Hi @<1536881167746207744:profile|EnormousGoose35> , you certainly can and should 🙂
I just want to verify that it took effect, because in my experience the method is task.set_base_docker(docker_image="python:3.9-bullseye")
For example:
import argparse
from clearml import Task

task = Task.init(project_name='examples', task_name='PyTorch MNIST train', output_uri=True)
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--ds-name', default="blabla")
args = parser.parse_args()
Not one known to me. Also, it's good practice to implement (think of automation) 🙂
I think it is supported only through the API 🙂
Why not give them an option to provide their username and then convert it in the code?
Hi @<1523703572984762368:profile|SlimyDove85> , conceptually I think it's possible. However, what would be the use case? In the end it would all be abstracted to a single pipeline
I think I've encountered something related to this. Let me take a look at the docs
Hi @<1523702932069945344:profile|CheerfulGorilla72> , it's possible. I see the web UI uses queues.move_task_to_front
I suggest using the webUI as a reference together with developer tools 🙂
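A minimal sketch of calling that endpoint from Python, assuming the SDK's APIClient wrapper (the queue and task IDs are placeholders):

```python
from clearml.backend_api.session.client import APIClient

# Hedged sketch: IDs below are placeholders; this mirrors the
# queues.move_task_to_front call the web UI sends.
client = APIClient()
client.queues.move_task_to_front(queue="<queue-id>", task="<task-id>")
```

Watching the request the UI makes in the browser's developer tools (F12, Network tab) shows the exact payload to replicate.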
Hi @<1570220858075516928:profile|SlipperySheep79> , you can use pre & post execute callback functions that run on the controller. Is that what you're looking for?
Check the pre_execute_callback
and post_execute_callback
arguments of the component.
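A minimal sketch with the decorator syntax, assuming the documented callback signatures of (pipeline, node, parameters) for pre and (pipeline, node) for post; the step itself is a placeholder:

```python
from clearml.automation.controller import PipelineDecorator

# Hedged sketch: callbacks run on the controller, not inside the component.
def pre_cb(pipeline, node, parameters):
    print(f"About to launch {node.name} with {parameters}")

def post_cb(pipeline, node):
    print(f"{node.name} finished")

@PipelineDecorator.component(
    pre_execute_callback=pre_cb,
    post_execute_callback=post_cb,
)
def step(x: int) -> int:
    return x * 2
```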
Hi @<1531807732334596096:profile|ObliviousClams17> , I think for your specific use case it would be easiest to use the API - fetch a task, clone it as many times as needed and enqueue it into the relevant queues.
Fetch a task - None
Clone a task - None
Enqueue a task (or many) - [None](https://clear.ml/docs/latest/docs/references/api/ta...
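Putting the three calls together, a minimal sketch (the task ID and queue names are placeholders):

```python
from clearml import Task

# Hedged sketch: "<task-id>" and the queue names are placeholders.
base = Task.get_task(task_id="<task-id>")  # fetch
for i, queue in enumerate(["queue_a", "queue_b"]):
    clone = Task.clone(source_task=base, name=f"{base.name} #{i}")  # clone
    Task.enqueue(clone, queue_name=queue)  # enqueue into the relevant queue
```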
Hi @<1753589101044436992:profile|ThankfulSeaturtle1> , what sort of materials do you think you're missing?
Hi GrievingDeer61 , you need to create the queue yourself or change the queue that is being used to something you created 🙂
Hi @<1734020162731905024:profile|RattyBluewhale45> , from the error it looks like there is no space left on the pod. Are you able to run this code manually?
Hi @<1595225628804648960:profile|TroubledLion34> , can you please add a log of what you're inputting/getting in clearml-session?
So you're using the community server? Response time really depends on the resources of the machine running the server and the amount of data to filter.
That's weird. Did you run docker-compose down and up properly?