@<1523701070390366208:profile|CostlyOstrich36> Yes, sure
import pandas as pd
import yaml
import os
from omegaconf import OmegaConf
from clearml import Dataset

config_path = 'configs/structured_docs.yml'
with open(config_path) as f:
    config = yaml.full_load(f)
config = OmegaConf.create(config)

path2images = config.data.images_folder

def get_data(config, split):
    path2annotation = os.path.join(config.data.annotation_folder, f"sample_{split}.csv")
    data = pd.read_csv(path2annotation)
    return data
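For reference, the `read_csv` call in `get_data` just loads the split's annotation CSV into a DataFrame; a minimal runnable stand-in of that pattern (the column names and rows below are invented for illustration, not taken from the actual dataset):

```python
from io import StringIO

import pandas as pd

# Invented stand-in for an annotation file like sample_train.csv
csv_text = "image,label\nimg_001.png,invoice\nimg_002.png,receipt\n"

# Same call shape as in get_data, reading from memory instead of disk
data = pd.read_csv(StringIO(csv_text))
print(data.shape)  # (2, 2)
```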
Thank you for your response @<1523701205467926528:profile|AgitatedDove14> . I will definitely try the solutions you described above. Could you please advise whether it is possible to execute the "bash.sh" script directly before the environment-setup stages when reproducing the experiment? The repository setup involves downloading resources from AWS. Creating a container that bundles my requirements would solve this, but I am interested in a more flexible approach.
@<1523701205467926528:profile|AgitatedDove14> The bash script downloads the necessary resources from AWS and sets an environment variable:
aws s3 cp ..... --recursive
export PYTHONPATH=" "
All of these commands could be baked into the generated Docker image, but that would require changing the project structure.
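One possibly more flexible option (a sketch, not verified against your setup): ClearML agents can run a bash setup script inside the container before the Python environment is created, and a script can be attached to a task via `Task.set_base_docker(docker_setup_bash_script=...)` (argument name as in recent `clearml` SDK versions; worth checking against your installed version). The bucket and target paths below are placeholders, not the real ones:

```shell
# Setup script run inside the container before the experiment's
# environment is installed. s3://<your-bucket>/resources and
# /opt/resources are placeholders.
aws s3 cp s3://<your-bucket>/resources /opt/resources --recursive
export PYTHONPATH=/opt/resources
```

Note that whether the exported variable is visible to the training process depends on how the agent launches it, so it is worth confirming `PYTHONPATH` inside a test task first.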