Hi @<1523701323046850560:profile|OutrageousSheep60> , thanks for your message as well. So far I have actually been using exactly these functions, until I noticed the following: when I run a task with these calls, everything works as expected. However, when I run hyperparameter tuning and change some of the hyperparameters so that the additional information (which is not a hyperparameter itself) also changes, that information is not updated. For better understanding, here is my concrete example again: I have 3 parameters/inf...
Hello CostlyOstrich36 , thanks for your question. At the moment I am training an MLP for a regression problem, and in one case I want to store the number of neurons per layer. Note that in my case it is not a hyperparameter, because I calculate the number of neurons from the number of layers and the number of model parameters. Another case is that I want to store some local paths where the models are stored, since I currently don't have any remote storage set up for my models.
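The exact calls referenced here are not shown in this excerpt; purely as an illustration, derived values like these could be stored as user properties rather than hyperparameters — a minimal sketch with hypothetical names, values, and paths:

```python
from clearml import Task

task = Task.init(project_name="mlp-regression", task_name="training")  # hypothetical names

# derived from the number of layers and the parameter budget,
# so deliberately not logged as a hyperparameter
neurons_per_layer = [64, 32, 16]  # example values
task.set_user_properties(neurons_per_layer=str(neurons_per_layer))

# local path where the model checkpoint ends up (no remote storage configured)
task.set_user_properties(model_path=r"C:\models\mlp_run_01.pt")  # hypothetical path
```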
Hi @<1523701087100473344:profile|SuccessfulKoala55> , thanks for your message! 🙂 I am aware that the console is also logged on the server, but I find it suboptimal to dig through the console log for relevant information, and I would rather store the information in a more structured way.
Hi @<1523701435869433856:profile|SmugDolphin23> , thanks for your question. For now I just deleted the requirements.txt and let ClearML track the requirements automatically, and it works. In the long term I would still like to use a requirements.txt, so I will come back to this topic a little later.
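As far as I know, an explicit requirements file can also be registered in code before the task is created — a minimal sketch, assuming `Task.add_requirements` accepts a requirements.txt path (project/task names are hypothetical):

```python
from clearml import Task

# register an explicit requirements file instead of relying on auto-detection;
# this must run before Task.init() (assumption: a requirements.txt path is accepted)
Task.add_requirements("requirements.txt")

task = Task.init(project_name="mlp-regression", task_name="training")  # hypothetical names
```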
Ok, I have a weird update... I shut down and restarted the Docker container just to get fresh logs, and now I am getting the following error message from clearml-serving-triton:
```
clearml-serving-triton | clearml-serving - Nvidia Triton Engine Controller
clearml-serving-triton | Warning: more than one valid Controller Tasks found, using Task ID=433aa14db3f545ad852ddf846e25dcf0
clearml-serving-triton | ClearML Task: overwriting (reusing) task id=350a5a919ff648148a3de4483878...
```
I think you are correct with your guess that the services were not shut down properly. I noticed that some services were still shown as running on the ClearML dashboard. I aborted them all and at least got rid of the error `ValueError: triton-server process ended with error code 1`. But the two errors you named are still there, and I also got these two warnings:
```
clearml-serving-triton | Warning: more than one valid Controller Tasks found, using Task ID=4709b0b383a04bb1a033e99fd325dc...
```
By the way, the example which worked for me in the beginning now also produces the same error: `poll failed for model directory 'test_model_pytorch': failed to open text file for read /models/test_model_pytorch/config.pbtxt: No such file or directory`. So there really seems to be something wrong with the Docker containers.
I got the last bit of my issue solved. I thought it would be easier for a start to provide the `AZURE_STORAGE_KEY` in my 'example.env' in plain text rather than accessing my environment variables, because I was not sure about the syntax. Turns out the syntax is not `AZURE_STORAGE_KEY=mystoragekey123`. Same for `AZURE_STORAGE_ACCOUNT`. Also, the syntax for accessing my environment variables is just the same as in the clear...
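The exact working line is cut off above, so treat this as a sketch only: the plain key=value form did not work here, and the message suggests referencing the host environment variables (the same `${...}` style clearml.conf uses) was the fix — variable names are the real ones the clearml-serving docker-compose expects, everything else is unverified:

```
# example.env sketch - NOT verified; the working syntax is truncated above.
# Plain values like AZURE_STORAGE_KEY=mystoragekey123 did not work;
# referencing host environment variables (clearml.conf-style ${...}) presumably did:
AZURE_STORAGE_ACCOUNT=${AZURE_STORAGE_ACCOUNT}
AZURE_STORAGE_KEY=${AZURE_STORAGE_KEY}
```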
Ok, I have found the issue. 🙌 When I try to serve a model which is saved on Azure (generated by `Task.init(..., output_uri='azure://...')`), I get the `poll failed for model directory 'test_model_pytorch': failed to open text file for read /models/test_model_pytorch/config.pbtxt: No such file or directory` error. A model which was saved on the ClearML server (generated by `Task.init(..., output_uri=True)`) can be served without any problems. For now I am not sure why th...
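For reference, the two variants in question look roughly like this — a minimal sketch; project/task names and the Azure container path are hypothetical:

```python
from clearml import Task

# variant 1: model artifacts go to Azure blob storage (this one fails to serve)
task = Task.init(
    project_name="serving-tests",              # hypothetical
    task_name="train-with-azure-output",       # hypothetical
    output_uri="azure://mycontainer/models",   # hypothetical container/path
)

# variant 2: model artifacts go to the ClearML file server (this one serves fine)
# task = Task.init(..., output_uri=True)
```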
I got it working!! For now I am not sure what did the trick, because I tried a bunch of different things. But I will try to reproduce it and report back in this thread for other users facing this problem. So big thanks for your help, @<1523701118159294464:profile|ExasperatedCrab78> !
Hi @<1523701118159294464:profile|ExasperatedCrab78> , I have a sad update on this issue. It does not seem to be completely solved yet. 😕 But I think I can at least describe it a bit better now:
- Models which are located on the ClearML server (created by `Task.init(..., output_uri=True)`) still run perfectly.
- Models which are located on Azure blob storage cause different problems in different scenarios (which made me think we had resolved this issue):
  - When I start the docker con...
Ok, so I killed all Docker containers (the proposal by ChatGPT did not work for me, but your commands did). The result is that we have one warning less. The warning `clearml-serving-triton | Warning: more than one valid Controller Tasks found, using Task ID=4709b0b383a04bb1a033e99fd325dcbf` seems to be resolved. All remaining errors come up in the clearml-serving-triton service, and this is the log I get:
Hi @<1523701118159294464:profile|ExasperatedCrab78> , thanks for your answer. 🙂 Yes sure! I will create the issue right away.
You're very welcome, thank you again for the great support. :)) I followed the instructions of the clearml-serving README on GitHub. There is one section called 'Optional: advanced setup - S3/GS/Azure access'. Maybe the syntax could be added there? I also saw the additional link on configuring storage access, but that page focuses on setting up the clearml.conf, and I was not sure how and whether I could transfer it to the docker .env file.
Hi @<1523701205467926528:profile|AgitatedDove14> , thanks for your answer! I will check if I can get that working!
Hi @<1523701070390366208:profile|CostlyOstrich36> , thanks for your answer! I just updated the 'azure-storage-blob' package to the newest version and got some strange behaviour. When running the BOHB hyperparameter optimization, only one job is executed and it is not stopped. I aborted the job after 3500 epochs, because I set the max_iteration_per_job parameter to 1000 and the job seemed to run infinitely long. I just downgraded the package back to version 12.14.1 and everything works as b...
Yes, I also find that very weird... I start the hyperparameter optimization via Python code using the HyperParameterOptimizer class of ClearML. Which configurations are you explicitly interested in?
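For context, the setup looks roughly like this — a minimal sketch, not the exact script; the base task ID, parameter ranges, and metric names are hypothetical:

```python
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformIntegerParameterRange
from clearml.automation.hpbandster import OptimizerBOHB  # BOHB backend

task = Task.init(project_name="hpo", task_name="bohb-controller",  # hypothetical names
                 task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id="<base-training-task-id>",  # hypothetical
    hyper_parameters=[
        UniformIntegerParameterRange("General/num_layers", min_value=2, max_value=8),
    ],
    objective_metric_title="validation",     # hypothetical metric
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=OptimizerBOHB,
    max_iteration_per_job=1000,              # the limit that was ignored in the runs above
    max_number_of_concurrent_tasks=2,
    execution_queue="default",
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```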
When comparing the logs of the two HPO tasks, it seems like no logs of the subtasks are reaching the HPO task. So maybe this is the reason for the infinitely long-running subtask? But what does the azure package have to do with that?
This is the HPO task log with azure-storage-blob in version 12.14.1
This is the log of the HPO task with the newest azure-storage-blob version
Hi ExasperatedCrab78 , thanks for your answer! In fact, I used your recommended format for passing input and output size before and changed it during my debugging process. I have just tried again, but got the same error message.
Also thanks for the hint to check the log for warnings; I will do this in a moment.
Hi @<1523701205467926528:profile|AgitatedDove14> , thanks for your answer!
I reached over 1M API calls in about one week of using clearml-serving on one machine, while only calling the deployed model a few hundred times for testing purposes. So I wanted to dig a little deeper into that. Thanks for the channel suggestion, I will repost my question there. :)
Yes, I am running the agent by calling `clearml-agent daemon --queue default` in my virtual environment on my local computer.
```
Current configuration (clearml_agent v1.5.1, location: C:/Users/USER~1/AppData/Local/Temp/.clearml_agent.g6ysfs_g.cfg):

agent.worker_id = HPZBook:0
agent.worker_name = HPZBook
agent.force_git_ssh_protocol = false
agent.package_manager.type = pip
agent.package_manager.pip_version.0 = <20.2 ; python_version < '3.10'
agent.package_manager.pip_version.1 = <22.3 ; python_version >= '3.10'
agent.package_manager.system_site_packages = false
```
Thank you, I did not think about that. It helped a lot! I found out that the problem causing the unicode error was that only the 'python' command was set up on my Windows machine, but not the 'python3' command. This was the exact error, for documentation:
```
DEBUG:clearml_agent.commands.worker:Searching for python3
Traceback (most recent call last):
  File "C:\Users\User\venvs\clearml\lib\site-packages\clearml_agent\helper\process.py", line 204, in normalize_exception
    yield
  File "C:\...
```
Hi @<1523701205467926528:profile|AgitatedDove14> thanks for your answer! 🙂 I think my case is a bit different. I do not want to load a custom model, but a custom object used for preprocessing. So I think the load method would not fit, as the `local_file_name` parameter I get in the load function would lead to the model file. And as far as I can see, there is no mechanism in place to load objects other than the model file inside the Preprocess class, right?
Hi @<1523701205467926528:profile|AgitatedDove14> , I serialized a scikit-learn MinMaxScaler object, fitted on the training data, using pickle. So when serving the model, I would like to load that pickle file in the preprocess script, such that I can perform the same normalization as during training. Unless there is a better practice for applying the same normalization at training and serving time.
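To make the intent concrete — a combined sketch of both sides, not verified against clearml-serving internals; file names, paths, and the request body layout are hypothetical:

```python
# (1) training side: fit the scaler on the training data and pickle it
import pickle
from sklearn.preprocessing import MinMaxScaler

X_train = [[0.0], [5.0], [10.0]]  # placeholder for the real training features
scaler = MinMaxScaler().fit(X_train)
with open("scaler.pkl", "wb") as f:
    pickle.dump(scaler, f)

# (2) serving side (preprocess.py): assuming scaler.pkl sits next to the script
class Preprocess:
    def __init__(self):
        # load the scaler once at startup instead of once per request
        with open("scaler.pkl", "rb") as f:
            self.scaler = pickle.load(f)

    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # apply the exact normalization used during training
        return self.scaler.transform([body["features"]])
```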
Hi @<1523701205467926528:profile|AgitatedDove14> , that is an interesting idea! But wouldn't it be better to load the model in the `load()` function, so that the model doesn't have to be loaded again with every request? Or is there some kind of internal link, so that when the `load()` method is implemented, it is expected that there was a custom model loaded and applied in the
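For reference, the pattern in question would look roughly like this — a sketch only, since the way clearml-serving wires the return value of `load()` internally is not verified here; the pickle-based model is hypothetical:

```python
# sketch of a custom Preprocess using the load() hook (internal wiring unverified)
import pickle

class Preprocess:
    def load(self, local_file_name):
        # called once when the endpoint starts;
        # local_file_name points at the downloaded model file
        with open(local_file_name, "rb") as f:
            self._model = pickle.load(f)
        return self._model

    def process(self, data, state, collect_custom_statistics_fn=None):
        # every request reuses the model loaded above
        return self._model.predict(data)
```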
Hi @<1523701205467926528:profile|AgitatedDove14> , thanks for your answer! Can you tell me how specifically I map my clearml.conf into the containers? By the way, the credentials are already set (and working) in the clearml.conf.
FYI: I just posted the issue on GitHub.