I mean this blob is then saved on the fs
It can if you do:
temp_file = task.connect_configuration('/path/to/config/file', name='configuration object is a config file')
Then temp_file is actually a local copy of the text coming from the Task.
When running in manual mode, the content of '/path/to/config/file' is stored on the Task. When running remotely by the agent, the content from the Task is dumped into a temp file and the path to the file is returned in temp_file.
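For reference, a minimal sketch of that flow (project/task names and the config path are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="config from file")

# manual run: the content of the local file is read and stored on the Task
# remote run (agent): the stored content is dumped into a temp file and its path is returned
config_path = task.connect_configuration(
    "/path/to/config/file", name="configuration object is a config file"
)
with open(config_path) as f:
    config_text = f.read()
```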
Let me try to build a minimal reproducible version
Thank you!
JumpyPig73 Do you see all the configurations under the Args section in the "Configuration" Tab ?
(Maybe I'm wrong and the latest RC does Not include the python-fire support)
is it possible to perform debugging operations with pycharm integration using remote session?
Sure, use clearml-session; it will open an SSH connection to the remote machine, then you can use PyCharm
As we can't create keys in our AWS due to infosec requirements
Hmmm
It seems like the configuration is cached in a way even when you change the CLI parameters.
@<1523704461418041344:profile|EnormousCormorant39> nice!
Yes the configuration is cached so that after you set it once you can just call clearml-session again without all the arguments
What was the actual issue ? Should we add something to the printout?
Hi SuperiorDucks36
you have such a great and clear GUI
:)
I personally would love to do it with a CLI
Actually a lot of stuff is harder to get from the UI (like the current state of your local repository etc.). But I think your point stands :) We will start with the CLI, because it is faster to deploy/iterate, then when you guys say this is a winner we will have a wizard in the UI.
What do you think?
Hi ElegantCoyote26
What's the docker / docker-compose version?
What's the OS?
LOL @<1545216070686609408:profile|EnthusiasticCow4>
I assume this is a hidden folder?
For example, datasets are hidden folders that can be viewed if you go to the settings page and turn on "show hidden folders"
Hi PompousParrot44
Let's stick with a single question per thread, it will make my life a lot easier :)
What do you mean by "and not in the terminal directly when executed manually through script"?
trains-agent is (usually) executed as a daemon, pulling jobs and executing them.
The other option is to use it to manually execute a single task.
What am I missing?
try:
None
docker_install_opencv_libs: true
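If it helps, a minimal sketch of where that flag would sit in your clearml.conf, assuming you run the agent in docker mode (section layout per the default clearml.conf template):
```
agent {
    # install the system libraries opencv needs inside the docker container
    docker_install_opencv_libs: true
}
```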
I thought there would be some hooks for deployment where the integration with k8s was also taken care of automatically.
Hi ObedientToad56
Yes you are correct, basically now you have a docker-compose spinning everything up (though you can also spin up a standalone container, mostly for debugging).
We are working on a k8s helm chart so the deployment is easier; it will be based on this docker-compose:
https://github.com/allegroai/clearml-serving/blob/main/docker/docker-comp...
Hi @<1523701066867150848:profile|JitteryCoyote63>
Could you please push the code for that version on github?
oh seems like it is not synced, thank you for noticing (it will be taken care of immediately)
Regrading the issue:
Look at the attached images
None does not contain a specific wheel for cuda117 on x86, they use the pip default one

It seems something is wrong with the server itself...
(BTW: draft means they are in edit mode, i.e. before execution, then they should be queued (i.e. pending) then running then completed)
Hi @<1559711593736966144:profile|SoggyCow20>
How did you configure the clearml.conf? See here for an example:
None
Any chance you actually run the second script with Popen (i.e. calling the python as a subprocess) ?
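Just to clarify what I mean by Popen, something along these lines (the script name is a placeholder):
```python
import subprocess
import sys

# launching the second script as a separate python subprocess
proc = subprocess.Popen([sys.executable, "second_script.py"])
proc.wait()
```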
Hi @<1784754456546512896:profile|ConfusedSealion46>
ClearML server took up so much memory, especially for Elasticsearch
Yeah that depends on how many metrics/logs you have there, but you really have to have at least 8GB RAM
delete old experiments ?
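For example, a rough sketch of cleaning up old experiments with the SDK (project name and filter are placeholders; I would double check the returned list before deleting anything):
```python
from clearml import Task

# fetch completed tasks in a project and delete them together with their artifacts/models
old_tasks = Task.get_tasks(
    project_name="my project",
    task_filter={"status": ["completed"]},
)
for t in old_tasks:
    t.delete(delete_artifacts_and_models=True)
```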
If I try to connect a dictionary of type dict[str, list] with task.connect, when retrieving this dictionary with
Wait, this should work out of the box, do you have any specific example?
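Something like this is what I'd expect to work out of the box (names and values are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="connect dict of lists")

params = {"layers": [64, 128, 256], "tags": ["train", "baseline"]}
# in a remote run the values are taken back from the Task (UI) instead of the code
params = task.connect(params)
print(params["layers"])
```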
Oh I see your point, that makes sense. It should check the state of the Task and force it to aborted so it can be re-enqueued. The issue with reset is that it will clear the previous run's execution, which I think we do not want. Wdyt?
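A rough sketch of what I mean with the SDK (task id and queue name are placeholders; marking it stopped keeps the previous run's outputs, unlike reset):
```python
from clearml import Task

task = Task.get_task(task_id="<task-id>")

# force the task into a stopped/aborted state without resetting it,
# then push it back into an execution queue
task.mark_stopped()
Task.enqueue(task, queue_name="default")
```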
Hi @<1523708920831414272:profile|SuperficialDolphin93>
The error seems like nvml fails to initialize inside the container, you can test it with nvidia-smi and check if that works
Regarding the CUDA version, ClearML serving inherits from the Triton container. Could you try to build a new one with the latest Triton container (I think 25)? The docker compose is in the clearml-serving git repo. Wdyt?
NastySeahorse61 I would try to open in incognito mode (i.e. no cookies etc.), did you also change the address of the server?
Hi MotionlessSeagull22
Hmm, I'm not sure this is possible in the UI.
You can compare multiple experiments and view the images as thumbnails one next to the other, but full view will be a single image...
You can however right click on the image and get a direct link, then open a new tab ... :(
is there a way to increase the size of the text input for fields or a better way to handle lists?
No :)
Maybe an easier way would be to use connect_configuration instead? It will take an entire dict and store it as text (the format is HOCON, which is YAML/JSON compatible, which means it is hard to break when editing)
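For example, a minimal sketch (names and values are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="config object")

config = {"queues": ["default", "gpu"], "limits": {"max_jobs": 10}}
# the dict is stored on the Task as an editable text configuration object;
# in a remote run the (possibly edited) content from the Task is returned instead
config = task.connect_configuration(config, name="my config")
```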