Hi @<1523701205467926528:profile|AgitatedDove14> ,
Thank you for your prompt response.
I am using the functional pipeline API to create the steps, where each step calls a function. My functions are stored in files under the ap_pipeline directory (filters.py, features.py, etc.). These are packaged as part of this repo. The modules are imported inside of clearml_pipeline.py, so it would look something like:
from ap_pipeline.features import func1, func2 ....
This...
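For context, here is a minimal sketch of that setup (the PipelineController wiring and the project/step names are illustrative, not my actual code; func1/func2 and the ap_pipeline modules are the ones mentioned above):

```python
from clearml import PipelineController

# Step functions live inside the repo's ap_pipeline package
from ap_pipeline.features import func1, func2

pipe = PipelineController(
    name="ap_pipeline",      # illustrative names
    project="examples",
    version="0.0.1",
)

# Each pipeline step wraps one of the imported functions
pipe.add_function_step(
    name="step_one",
    function=func1,
    function_return=["data"],
)
pipe.add_function_step(
    name="step_two",
    function=func2,
    function_kwargs=dict(data="${step_one.data}"),
    parents=["step_one"],
)

pipe.start()
```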
The extra_index_url is not even showing..
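For reference, this is where I would expect it to be picked up from; a sketch of my clearml.conf agent section (the index URL is a placeholder):

```
agent {
    package_manager {
        # extra pip index the agent should install from
        extra_index_url: ["https://my.private.pypi/simple"]
    }
}
```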
Thank you for your reply, SuccessfulKoala55. 😄
It is currently set to 1, so I am assuming setting it to 0 would mute the errors from logging?
The current behaviour is that if I keep it set to 1, the services agent automatically shuts down if the access key is not configured. Assuming I set it to 0, the agent services should not shut down anymore, right?
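For reference, this is roughly how I understand the relevant agent-services block in the server's docker-compose (values illustrative; I am assuming the flag is read from the service environment):

```yaml
agent-services:
  environment:
    CLEARML_API_ACCESS_KEY: ${CLEARML_API_ACCESS_KEY:-}
    CLEARML_API_SECRET_KEY: ${CLEARML_API_SECRET_KEY:-}
    SHUTDOWN_IF_NO_ACCESS_KEY: 0   # 1 = shut down when no access key is configured
```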
The community server is working again.
Thanks @<1523701205467926528:profile|AgitatedDove14> restarting the agents did the trick!
I am having the same problem on both the self-hosted and the free community ClearML.
That's what I was thinking. But I am still having issues on the self-hosted version. I think it may be an unrelated issue, though. I will do some debugging and report back.
On a separate note, does clearml have a set of acceptance tests that you usually go through before a release?
I would like to see it used in a clear example as it was intended to be used before giving my opinion on it, if that makes sense
I knew that, I was just happy that we have an updated example 😁
SuccessfulKoala55 That seemed to do the trick, thanks for your help! 😄
I was able to resolve the issue. I am currently using ClearML on WSL2, and my machine is connected to a VPN that allows me to connect to the ClearML instance hosted on AWS. You were right, it was a network issue; I was able to resolve it by modifying my /etc/resolv.conf file.
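For anyone hitting the same thing, roughly what I changed (the nameserver address is specific to my VPN setup; the /etc/wsl.conf part keeps WSL2 from regenerating resolv.conf):

```
# /etc/wsl.conf
[network]
generateResolvConf = false

# /etc/resolv.conf — point DNS at the VPN resolver (address illustrative)
nameserver 10.0.0.2
```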
Yes, I am using a virtualenv that has pandas and clearml installed.
@<1523701070390366208:profile|CostlyOstrich36> I am facing the same issue:
{"meta":{"id":"90841d05dfb1431a8d9dfc6bfdb39f9e","trx":"90841d05dfb1431a8d9dfc6bfdb39f9e","endpoint":{"name":"events.debug_images","requested_version":"2.23","actual_version":"2.7"},"result_code":200,"result_subcode":0,"result_msg":"OK","error_stack":"","error_data":{}},"data":{"metrics":[]}}
I am currently running the scripts on WSL ubuntu
I set my local laptop as an agent for testing purposes. I run the code on my laptop, it gets sent to the server which sends it back to my laptop. So the conf file is technically on the worker right?
So what's the point of the alias? It's not very clear.. Even after specifying an alias I am still getting the following message: Dataset.get() did not specify alias. Dataset information will not be automatically logged in ClearML Server
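For completeness, this is roughly how I am passing it (a sketch; the dataset name and project are placeholders):

```python
from clearml import Dataset

# alias is supposed to register the dataset id under this name in the
# consuming task, so the lineage is logged on the ClearML server
ds = Dataset.get(
    dataset_name="my_dataset",      # placeholder
    dataset_project="my_project",   # placeholder
    alias="training_data",
)
local_path = ds.get_local_copy()
```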
I'm actually trying that as we speak 😛
Not exactly. The dataset gets called in the script using Dataset.get(), and the second dataset is an output dataset created using Dataset.create(), which means that dataset_1 is a parent dataset of dataset_2.
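i.e. something along these lines (a sketch; names are placeholders):

```python
from clearml import Dataset

# dataset_1 is fetched as the input dataset...
dataset_1 = Dataset.get(
    dataset_name="dataset_1",
    dataset_project="my_project",
    alias="dataset_1",
)

# ...and dataset_2 is created with dataset_1 as its parent
dataset_2 = Dataset.create(
    dataset_name="dataset_2",
    dataset_project="my_project",
    parent_datasets=[dataset_1.id],
)
dataset_2.upload()
dataset_2.finalize()
```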
Which would make sense because of the name SHUTDOWN_IF_NO_ACCESS_KEY. The thing is, when I tried setting it to 0, it still shut down.
Hi AgitatedDove14 ,
I am planning to use Terraform to retrieve the secrets from AWS; after I retrieve the user list from the Secrets Manager, I am going to pass them as environment variables.
The reason I am passing them as environment variables is that I couldn't find a way to automatically upload files to AWS EFS from Terraform, since the config file needs to be mounted as an EFS volume in the ECS task definition.
I was able to make the web authentication work while passing the followi...
As you can see, it eventually manages to reach the apiserver; however, it still says that the access key was not provided and that the service will not be started. I get the same behaviour whether I set the flag to 0 or 1.
Just waiting for the changes to be completed
Let me rerun it, so that I can capture it. I am currently running it on AWS Fargate, so I have the logs for that.
Wow, that was fast. Thanks a lot for your prompt response! Will check it out now :D
Right, so I figured out why it was calling it multiple times. Every time a dataset is serialized, it calls the _serialize() method inside the clearml/datasets/dataset.py file, and _serialize() calls self.get(parent_dataset_id), which is the same get() method. This means that the user will always be prompted with the log, even if they are not "getting" a dataset. So any time a user creates, uploads, or finalizes a dataset, they will be prompted with the message...
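Roughly, the call chain I am describing looks like this (paraphrased from what I saw, not the verbatim clearml source):

```python
# clearml/datasets/dataset.py (paraphrased)
class Dataset:
    def _serialize(self):
        ...
        # re-enters the public get() classmethod for the parent dataset,
        # which is where the "did not specify alias" warning is emitted
        parent = self.get(dataset_id=parent_dataset_id)
        ...
```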