Hi @<1726047624538099712:profile|WorriedSwan6> ! At the moment, only the function_kwargs and queue parameters accept such references. We will consider supporting them for other fields as well in the near future.
@<1626028578648887296:profile|FreshFly37> can you please screenshot this section of the task? Also, what does your project's directory structure look like?
Yes, you need to call the function every time. The remote run might have some parameters populated which you can use, but the pipeline function needs to be called if you actually want to run the pipeline.
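To illustrate, a minimal sketch (assuming a decorator-based pipeline; the names here are placeholders):
```python
from clearml import PipelineDecorator

@PipelineDecorator.pipeline(name="my pipeline", project="examples", version="1.0.0")
def my_pipeline(dataset_id: str = ""):
    # ... call your component functions / pipeline steps here ...
    pass

if __name__ == "__main__":
    # The pipeline function has to be called every time; on a remote run the
    # populated parameters are used, but nothing executes until this call.
    my_pipeline()
```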
Hi @<1581454875005292544:profile|SuccessfulOtter28> ! The logger is likely outdated. Can you please open a Github issue about it?
Hi @<1674226153906245632:profile|PreciousCoral74> !
> Sadly, Logger.report_matplotlib_figure(…) doesn't seem to log plots. Only the automatic integration appears to behave.
What do you mean by that? report_matplotlib_figure should work. See this example on how to use it: None . If it still doesn't work for you, could you please share a code snippet that could help us track down...
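For reference, a manual report would look roughly like this (project, task, and plot names are placeholders):
```python
import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="manual matplotlib report")

# build any matplotlib figure
fig = plt.figure()
plt.plot([1, 2, 3], [4, 5, 6])

# explicitly report it to the ClearML server under the task's plots
task.get_logger().report_matplotlib_figure(
    title="manual plot", series="series A", figure=fig, iteration=0
)
```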
Hi @<1523701168822292480:profile|ExuberantBat52> ! During local runs, tasks are not run inside the specified Docker container. You need to run your steps remotely. To do this, first create a queue, then run a clearml-agent instance bound to that queue. You also need to specify the queue in add_function_step. Note that the controller can still be run locally if you wish to do so.
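A rough sketch of the setup (queue name, docker image, and step names are placeholders):
```python
from clearml import PipelineController

def train(dataset_id):
    # step body; when enqueued, this runs on the agent, inside the docker image below
    return dataset_id

pipe = PipelineController(name="example pipeline", project="examples", version="1.0.0")

pipe.add_function_step(
    name="train_step",
    function=train,
    function_kwargs={"dataset_id": "some-dataset-id"},
    execution_queue="my_queue",  # queue a clearml-agent is listening to
    docker="python:3.9",         # container the agent will use for this step
)

# run the controller locally while the steps are enqueued and executed remotely
# (an agent started with `clearml-agent daemon --queue my_queue --docker` picks them up)
pipe.start_locally(run_pipeline_steps_locally=False)
```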
Regarding pending pipelines: please make sure a free agent is bound to the queue you wish to run the pipeline in. You can check queue information by accessing the INFO section of the controller (as in the first screenshot), then by clicking on the queue you should see the worker status. There should be at least one worker with a blank "CURRENTLY EXECUTING" entry.
![image](https://clearml-we...
Hi @<1657918706052763648:profile|SillyRobin38> ! If it is compatible with HTTP/REST, you could try setting api.files_server to the endpoint, or sdk.storage.default_output_uri in clearml.conf (depending on your use case).
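The relevant clearml.conf entries might look roughly like this (the endpoint URL is a placeholder, and the exact key layout can differ between config versions):
```
api {
    # storage endpoint used for uploaded files/artifacts
    files_server: "http://my-storage-endpoint:8081"
}
sdk {
    storage {
        default_output_uri: "http://my-storage-endpoint:8081"
    }
}
```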
Hi @<1523721697604145152:profile|YummyWhale40> ! Are you able to upload artifacts of any kind other than models to the CLEARML_DEFAULT_OUTPUT_URI?
how did you install clearml?
@<1657556312684236800:profile|ManiacalSeaturtle63> what clearml SDK version are you using? I believe there was a bug related to pipelines not showing in the UI, but that was fixed in clearml==1.14.1
Hi @<1675675705284759552:profile|NonsensicalAnt77> ! How are you uploading the model weights without using the SDK? Can you please share a code snippet (might be useful in finding why your config doesn't work). Also, what is your clearml version?
Hi!
It is possible to use the same queue for the controller and the steps, but there needs to be at least 2 agents that pull tasks from that queue. Otherwise, if there is only 1 agent, then that agent will be busy running the controller and it won't be able to fetch the steps.
Regarding missing local packages: the step is run in a temporary directory that is different from the directory the script is originally in. To solve this, you could add all the modules/files you are interested in in a...
Hi @<1643060801088524288:profile|HarebrainedOstrich43> ! The rc is now out and installable via pip install clearml==1.14.1rc0
Hi @<1523703107031142400:profile|FlatOctopus65> ! Python 3.9 introduced a breaking change for codebases that parse code containing slices. You can read more about it here: None . Notably:
* The code that produces a Python code from AST will need to handle indexing with tuples specially (see Tools/parser/unparse.py) because d[(a, b)] is valid syntax (although parenthesis are redundant), but d[(a, b:c)] is not.
What you could do is downgrade to...
what about import clearml; print(clearml.__version__)
(We will deprecate continue_on_fail)
Hi @<1676400486225285120:profile|GracefulSquid84> ! Each step is indeed a ClearML task. You could try using the step ID. Just make sure you pass the ID to the HPO step (you can do that by simply returning Task.current_task().id from the step).
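Something along these lines should work (step and queue names are placeholders, and I'm using the ${step_name.return_value} reference syntax for function steps):
```python
from clearml import Task, PipelineController

def train_model():
    # ... training code; this step runs as its own ClearML task ...
    # return this step's task ID so a downstream step can use it
    return Task.current_task().id

def run_hpo(base_task_id):
    # ... e.g. feed base_task_id to the HPO logic as the base/template task ...
    print("optimizing task", base_task_id)

pipe = PipelineController(name="hpo pipeline", project="examples", version="1.0.0")

pipe.add_function_step(
    name="train",
    function=train_model,
    function_return=["task_id"],
    execution_queue="default",
)
pipe.add_function_step(
    name="hpo",
    function=run_hpo,
    # pass the training step's task ID into the HPO step
    function_kwargs={"base_task_id": "${train.task_id}"},
    execution_queue="default",
)

pipe.start_locally()
```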
ok, that is very useful actually
or, rather than str(self), something like:
```python
def __repr__(self):
    return self.__class__.__name__ + "." + self.name
```
should work better
Hi @<1639799308809146368:profile|TritePigeon86> ! Please see continue_behaviour. You should be able to pass the parameter to your parent step. It is not documented yet, but it should be available in the latest version of clearml. See this for some documentation: None
Hi @<1545216070686609408:profile|EnthusiasticCow4> ! Can you please try with clearml==1.13.3rc0? I believe we fixed this issue.
Can you please screenshot the INFO tab on the pipeline controller task?
@<1590514584836378624:profile|AmiableSeaturtle81> ok, I think that your credentials from clearml.conf are actually working now. let's not change them.
Now let's try this simple code:
```python
from clearml import Task
import numpy as np

if __name__ == "__main__":
    # change the output_uri to your files server / storage URI
    task = Task.init(task_name="test4", project_name="test4", output_uri="...")
    image = np.random.randint(0, 256, size=(500, 1000, 3), dtype=np.uint8)
    task.upload_artifact("image", image)
```
You should change the ...
@<1657556312684236800:profile|ManiacalSeaturtle63> can you share how you are creating your pipeline?
Hi @<1693795212020682752:profile|ClumsyChimpanzee88> ! Not sure I understand the question. If the commit ID does not exist remotely, then it can't be pulled. How would you pull the commit to another machine otherwise? Is this possible with your current workflow?
That would be much appreciated
We will add this to the SDK soon
Hi @<1523702000586330112:profile|FierceHamster54> ! Looks like we pull all the ancestors of a dataset when we finalize. I think this can be optimized. We will keep you posted when we make some improvements
@<1719162259181146112:profile|ShakySnake40> the data is still present in the parent and it won't be uploaded again. Also, when you pull a child dataset you are also pulling the dataset's parent data. dataset.id is a string that uniquely identifies each dataset in the system. In my example, you are using the ID to reference a dataset which would be a parent of the newly created dataset (that is, after getting the dataset via Dataset.get).
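A short sketch of that parent/child flow (dataset names, project, and the parent ID are placeholders):
```python
from clearml import Dataset

# get an existing dataset; its ID uniquely identifies it in the system
parent = Dataset.get(dataset_id="<parent-dataset-id>")

# create a child dataset that references the parent; the parent's files are
# not re-uploaded, they are inherited by the child
child = Dataset.create(
    dataset_name="my_dataset_v2",
    dataset_project="examples",
    parent_datasets=[parent.id],
)
child.add_files("new_files/")  # only new/changed files are uploaded
child.upload()
child.finalize()

# pulling the child also pulls the data inherited from its parent
local_path = Dataset.get(dataset_id=child.id).get_local_copy()
```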