For example, if my GitHub repo is project.git and my structure is project/utils/tool.py
So it caches any files under the same project name to ~/.clearml/?
AgitatedDove14 How do I set up a master task to do all the reporting?
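Something like this is what I'm picturing (just a sketch; the project/task names and how the master task ID reaches the workers are my assumptions):
` from clearml import Task

# One "master" task that all the reporting lands in
master = Task.init(project_name="imagery", task_name="reporting-master")
master_id = master.id  # passed to the workers somehow (env var, hyperparameter, ...)

# In each worker task/process: look up the master and report into its logger
reporting_target = Task.get_task(task_id=master_id)
reporting_target.get_logger().report_scalar(
    title="loss", series="worker-1", value=0.42, iteration=1
) `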
I don't know how to get past this. My k8s pods shouldn't need to reach out to the public file server URL.
` Traceback (most recent call last):
  File "sfi/imagery/models/training/ldc_train_end_to_end.py", line 26, in <module>
    from sfi.imagery.models.chip_classifier.eval import eval_chip_classifier
ModuleNotFoundError: No module named 'sfi.imagery.models' `
` SysPath: ['/home/npuser/.clearml/venvs-builds/3.7/task_repository/commons-imagery-models-py/sfi/imagery/models/training', '/home/npuser/.clearml/venvs-builds/3.7/task_repository/commons-imagery-models-py/sfi', '/home/npuser/.clearml/venvs-builds/3.7/task_repository/commons-imagery-models-py', '/usr/lib64/python37.zip', '/usr/lib64/python3.7', '/usr/lib64/python3.7/lib-dynload', '/home/npuser/.clearml/venvs-builds/3.7/lib64/python3.7/site-packages', '/home/npuser/.clearml/venvs-builds/3.7/l...
I think if I use the local service URL, this problem will be fixed.
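For the record, this is roughly what I mean by using the local service URL; the service name and port below are guesses based on our Helm release, not anything ClearML dictates:
` import os

# Point the SDK at the in-cluster fileserver service instead of the public URL,
# before Task.init runs. "clearml-fileserver:8081" is an assumed service name.
os.environ["CLEARML_FILES_HOST"] = "http://clearml-fileserver:8081"

from clearml import Task
task = Task.init(project_name="imagery", task_name="fileserver-routing-check") `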
It seems like https://github.com/allegroai/clearml-helm-charts/blob/main/charts/clearml-agent/values.yaml#L72-L80 doesn't actually do anything, as the values set here aren't applied in the agent template
I don't see any requests
So this is an additional config file with Enterprise? Is this new config file deployable via Helm charts?
SuccessfulKoala55 Darn, so I can only scale vertically?
After proving we can run our training, I would then advise we update our code base.
I got the EFS volume mounted. Curious what the advantage of using the StorageManager would be.
I assumed I would need to upload it and then reference it somehow?
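To make the question concrete, this is the kind of thing I assumed I'd have to do (bucket name and paths are placeholders):
` from clearml import StorageManager

# Upload a local file from the mounted EFS volume and keep the returned remote URL
remote_url = StorageManager.upload_file(
    local_file="/mnt/efs/data/train.csv",
    remote_url="s3://my-bucket/datasets/train.csv",
)

# Later, on any machine, fetch a locally cached copy by that URL
local_path = StorageManager.get_local_copy(remote_url=remote_url) `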
` * Serving Flask app 'fileserver' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
[2022-09-08 13:24:25,822] [8] [WARNING] [werkzeug]  * Running on all addresses.
   WARNING: This is a development server. Do not use it in a production deployment. `
Thanks for looking into this!
I think this is VPN-related now
Err, maybe not, I don't know where it's being fetched
These are the logs from the fileserver pod
Gotcha, I see how that is populated now. So then, if my workers have git credentials, a user can clone that experiment and run it on a worker?
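i.e. something like this, if I understand it right (the task ID and queue name are placeholders):
` from clearml import Task

# Clone an existing experiment and enqueue the clone for an agent to run;
# the agent then clones the git repo using the worker's credentials
template = Task.get_task(task_id="<experiment-id>")
cloned = Task.clone(source_task=template, name="clone of experiment")
Task.enqueue(cloned, queue_name="default") `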
I used the values from the dashboard/configuration/api keys
The task pod (experiment) started reaching out to an IP associated with malicious activity. The IP was associated with 1000+ domain names. The activity was identified in AWS GuardDuty with a high severity level.
If you look lower, it is there: '/home/npuser/.clearml/venvs-builds/3.7/task_repository/commons-imagery-models-py'
` PYTHONPATH: /home/npuser/.clearml/venvs-builds/3.7/task_repository/commons-imagery-models-py/sfi:/home/npuser/.clearml/venvs-builds/3.7/task_repository/commons-imagery-models-py:/home/npuser/.clearml/venvs-builds/3.7/task_repository/commons-imagery-models-py/sfi/imagery/models/training::/home/npuser/.clearml/venvs-builds/3.7/task_repository/commons-imagery-models-py/sfi:/usr/lib64/python37.zip:/usr/lib64/python3.7:/usr/lib64/python3.7/lib-dynload:/home/npuser/.clearml/venvs-builds/3.7/lib6...