That's unfortunate. Looks like this is indeed a problem 😕 We will look into it and get back to you.
Hi @<1676400486225285120:profile|GracefulSquid84> ! Each step is indeed a ClearML task. You could try using the step ID. Just make sure you pass the ID to the HPO step (you can do that by simply returning Task.current_task().id from the step).
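Something along these lines (just a sketch; the metric names and parameter ranges here are assumptions for illustration):

from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange

# inside the training step: return this step's own task ID
def training_step():
    from clearml import Task
    # ... training code ...
    return Task.current_task().id

# inside the HPO step: receive the ID returned by the training step
def hpo_step(base_task_id):
    optimizer = HyperParameterOptimizer(
        base_task_id=base_task_id,  # the ID returned by training_step
        hyper_parameters=[UniformParameterRange("General/lr", 0.0001, 0.1)],
        objective_metric_title="validation",
        objective_metric_series="loss",
        objective_metric_sign="min",
    )
    optimizer.start_locally()
    optimizer.wait()
    optimizer.stop()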
You're welcome! Feel free to write here again if you believe this might be a ClearML problem
no problem. we will soon release an RC that solves both issues
In the meantime, this should already be fixed on our end. I will ping you when 1.9.1 is out so you can try it!
@<1531445337942659072:profile|OddCentipede48> Looks like this is indeed not supported. What you could do is return the ID of the task that produces the models, then use Task.get_task and get the model from there. Here is an example:
from clearml import PipelineController

def step_one():
    from clearml import Task
    from clearml.binding.frameworks import WeightsFileHandler
    from clearml.model import Framework
    WeightsFileHandler.create_output_model(
        "obj", "file...
Hi @<1523701168822292480:profile|ExuberantBat52> ! During local runs, tasks are not run inside the specified Docker container. You need to run your steps remotely. To do this you need to first create a queue, then run a clearml-agent instance bound to that queue. You also need to specify the queue in add_function_step. Note that the controller can still be run locally if you wish to do that.
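Roughly, the setup looks like this (the queue name and image are just examples):

# In a shell, start an agent that serves the queue (creates it if missing):
#   clearml-agent daemon --queue my_queue --docker

from clearml import PipelineController

def step_one():
    print("running inside the specified docker image")

pipe = PipelineController(name="docker-steps", project="examples", version="1.0.0")
pipe.add_function_step(
    name="step_one",
    function=step_one,
    docker="python:3.10",        # image the step runs in (only when executed by an agent)
    execution_queue="my_queue",  # queue served by the clearml-agent above
)
pipe.start_locally(run_pipeline_steps_locally=False)  # controller local, steps remote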
Hi @<1523708920831414272:profile|SuperficialDolphin93> ! What if you do just controller.start() (to start it locally)? The task should not quit in this case.
Hi @<1597762318140182528:profile|EnchantingPenguin77> ! There is no way to do that as of now
Do you want to remove steps/add steps from the pipeline after it has run, basically? If that is the case, then it is theoretically possible, but we don't expose any methods that would allow you to do that...
What you would need to do is modify all the pipeline configuration entries you find in the CONFIGURATION section (see the screenshot). Not sure if that is worth the effort. I would simply create another version of the pipeline with the added/removed steps

@<1719162259181146112:profile|ShakySnake40> the data is still present in the parent and it won't be uploaded again. Also, when you pull a child dataset you are also pulling the dataset's parent data. dataset.id is a string that uniquely identifies each dataset in the system. In my example, you are using the ID to reference a dataset which would be a parent of the newly created dataset (that is, after getting the dataset via Dataset.get )
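For reference, the pattern looks roughly like this (project and dataset names are placeholders):

from clearml import Dataset

parent = Dataset.get(dataset_project="my_project", dataset_name="parent_dataset")

# the child references the parent by ID; the parent's files are not re-uploaded
child = Dataset.create(
    dataset_project="my_project",
    dataset_name="child_dataset",
    parent_datasets=[parent.id],
)
child.add_files("new_data/")
child.upload()
child.finalize()

# pulling the child also pulls the parent's data
local_copy = Dataset.get(dataset_id=child.id).get_local_copy()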
what about import clearml; print(clearml.__version__)
Hi @<1523701949617147904:profile|PricklyRaven28> ! Thank you for the example. We managed to reproduce. We will investigate further to figure out the issue
check the output_uri parameter in Task.init
UnevenDolphin73 can't you find your task/dataset under the Datasets tab?
Hi FreshParrot56 ! This is currently not supported 🙁
@<1578555761724755968:profile|GrievingKoala83> did you call task.launch_multi_node(4) or task.launch_multi_node(2)? I think the right value is 4 in this case
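Roughly (assuming 4 nodes in total, including the master node):

from clearml import Task

task = Task.init(project_name="examples", task_name="multi-node")
# total number of nodes participating, including this one
config = task.launch_multi_node(4)
print(config)  # per-node info such as the node rank and master address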
Hi @<1884411118219169792:profile|TensePigeon91> ! We can consider that. Feel free to open an issue here: None . A PR would also be appreciated if possible
Hi @<1834401593374543872:profile|SmoggyLion3> ! There are a few things I can think of:
- If you need to continue a task that is marked as completed, you can do
clearml.Task.get_task(ID).mark_stopped(force=True) to mark it as stopped. You can do this in the job that picks up the task and wants to continue it, before calling Task.init, or in a post_execute_callback in the pipeline itself, so the pipeline function marks itself as aborted. For example:
from clearml import Pipeli...
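Continuing that thought, a rough sketch of such a callback (the step name is a placeholder, and I'm assuming the pipeline is built with PipelineDecorator):

from clearml import Task, PipelineController
from clearml.automation.controller import PipelineDecorator

# called after a step finishes; node.executed holds the executed task's ID
def mark_step_stopped(pipeline: PipelineController, node: PipelineController.Node):
    Task.get_task(task_id=node.executed).mark_stopped(force=True)

@PipelineDecorator.component(post_execute_callback=mark_step_stopped)
def my_step():
    ...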
That is very odd. Is the script above all you're running?
Hi @<1523702000586330112:profile|FierceHamster54> ! Looks like we pull all the ancestors of a dataset when we finalize. I think this can be optimized. We will keep you posted when we make some improvements
@<1657556312684236800:profile|ManiacalSeaturtle63> what clearml SDK version are you using? I believe there was a bug related to pipelines not showing in the UI, but that was fixed in clearml==1.14.1
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , I think you are right. We will try to look into this asap
Hi NuttyCamel41 ! Can you please provide a minimal example of what your code and your requirements.txt look like?
Hi @<1859043976472956928:profile|UpsetWhale84> ! Yes, if you specify the output_uri in Task.init to the s3 bucket all artifacts will be stored in s3, including model weights. Also, you can specify upload_uri in OutputModel.update_weights to the s3 location if you are uploading the model locally
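For example (bucket name and file names are placeholders):

from clearml import Task, OutputModel

# everything the task uploads (artifacts and model weights) goes to the bucket
task = Task.init(
    project_name="examples",
    task_name="s3-output",
    output_uri="s3://my-bucket/models",
)

# or, when registering a locally saved model explicitly:
model = OutputModel(task=task)
model.update_weights(
    weights_filename="model.pt",          # local weights file
    upload_uri="s3://my-bucket/models",   # destination for the uploaded weights
)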
Hi @<1533620191232004096:profile|NuttyLobster9> ! PipelineDecorator.get_current_pipeline will return a PipelineDecorator instance (which inherits from PipelineController) once the pipeline function has been called. So
pipeline = PipelineDecorator.get_current_pipeline()
pipeline(*args)
doesn't really make sense. You should likely call pipeline = build_pipeline(*args) instead
Hi @<1534706830800850944:profile|ZealousCoyote89> ! Do you have any info under STATUS REASON? See the screenshot for an example:
The only exception is the models, if I'm not mistaken, which are stored locally by default.
Hi @<1795263699850629120:profile|ContemplativeParrot88> ! Are the scalars in the UI in the optimization tasks (not the base task)?