That's what I was thinking. But I am still having issues on the self-hosted version. I think it may be an unrelated issue, though. I will do some debugging and report back.
On a separate note, does clearml have a set of acceptance tests that you usually go through before a release?
I knew that, I was just happy that we have an updated example 😁
Which would make sense because of the name SHUTDOWN_IF_NO_ACCESS_KEY. The thing is, when I tried setting it to 0, it still shut down.
Ohh, thanks! Will give it a shot now!
Thanks @<1523701205467926528:profile|AgitatedDove14> restarting the agents did the trick!
I was able to resolve the issue. I am currently using ClearML on WSL2, and my machine is connected to a VPN that allows me to reach the ClearML instance hosted on AWS. You were right, it was a network issue; I was able to resolve it by modifying my /etc/resolv.conf file.
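In case it helps anyone else, the usual WSL2 workaround is along these lines (the nameserver address below is only a placeholder; use whatever DNS server your VPN actually provides):

# /etc/wsl.conf -- stop WSL2 from overwriting resolv.conf on every restart
[network]
generateResolvConf = false

# /etc/resolv.conf -- point DNS at the VPN's resolver (placeholder address)
nameserver 10.0.0.2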
How do you handle private repos in clearml for packages?
I have been trying to contribute as well...
I have created some PRs, in an attempt to improve the current situation. I'm just surprised that currently there is no CI process, and that it's been 2 months since the last release.
Again, I'm more than happy to help and contribute to the overall CI process.
Hi AgitatedDove14,
I am planning to use Terraform to retrieve the secrets from AWS. After I retrieve the user list from Secrets Manager, I am going to pass them as environment variables.
The reason I am passing them as environment variables is that I couldn't find a way to automatically upload files to AWS EFS from Terraform, since the config file needs to be mounted as an EFS volume in the ECS task definition.
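Roughly what I have in mind in Terraform (just a sketch; the secret name, environment variable name, image tag, and sizing are placeholders rather than my actual values):

# Pull the user list from Secrets Manager (secret name is a placeholder)
data "aws_secretsmanager_secret_version" "clearml_users" {
  secret_id = "clearml/web-login-users"
}

# Pass it to the ClearML container as an environment variable
# (the variable name below is illustrative, not necessarily what ClearML expects)
resource "aws_ecs_task_definition" "clearml" {
  family                   = "clearml"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 2048

  container_definitions = jsonencode([
    {
      name      = "clearml-apiserver"
      image     = "allegroai/clearml:latest"
      essential = true
      environment = [
        {
          name  = "CLEARML_WEB_LOGIN_USERS"  # placeholder variable name
          value = data.aws_secretsmanager_secret_version.clearml_users.secret_string
        }
      ]
    }
  ])
}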
I was able to make the web authentication work while passing the followi...
Yes, I am using a virtualenv that has pandas and clearml installed.
I am using the latest version of the ClearML server, and version 1.9.1 of the SDK.
Here is the code that I am currently using:
from clearml import Dataset

if __name__ == "__main__":
    # create clearml data processing task
    dataset = Dataset.create(
        dataset_name="palmer_penguins",
        dataset_project="palmer penguins",
        dataset_tags=["raw"]
    )
    dataset_path = "data/raw/penguins.csv"
    # add the downloaded files to the current dataset
    dataset.add_files(path=dataset_path)
I am currently running the scripts on WSL Ubuntu.
Not exactly. The first dataset gets pulled in the script using Dataset.get(), and the second dataset is an output dataset created with Dataset.create(), which means that dataset_1 is a parent dataset of dataset_2.
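Something along these lines (dataset/project names and paths are placeholders; the parent link is made explicit through parent_datasets):

from clearml import Dataset

# dataset_1: pulled as input for this step, and used as the parent
parent = Dataset.get(
    dataset_name="palmer_penguins",      # placeholder name
    dataset_project="palmer penguins",   # placeholder project
)

# dataset_2: output dataset, with dataset_1 registered as its parent
child = Dataset.create(
    dataset_name="palmer_penguins_processed",  # placeholder name
    dataset_project="palmer penguins",
    parent_datasets=[parent],
)
child.add_files(path="data/processed/penguins.csv")  # placeholder path
child.upload()
child.finalize()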
Thanks @<1523701205467926528:profile|AgitatedDove14>
Thank you so much for your reply, will give that a shot!
Let me rerun it, so that I can capture it. I am currently running it on AWS Fargate, so I have the logs for that.
