Hi @<1523701087100473344:profile|SuccessfulKoala55>, thanks for your response.
I'm not entirely sure about the use of CLEARML_ENV
since I haven't interacted with it before. Could you guide me on what I should set as its value?
Previously, the system was running smoothly. However, I've run into some issues after making certain configuration changes to modify the server permissions. Specifically, I'm curious if these changes might have influenced the agent's permission to access certain...
I'm running this at 2:35 AM, but the job isn't launching after 2:40 AM.
It seems your clearml-agent didn't set up the right git account. Are you sure it's set up in your agent conf?
Hi @<1523701087100473344:profile|SuccessfulKoala55>, it's solved! Thanks for the information about CLEARML_ENV! I had accidentally written the environment variable CLEARML_ENV in every clearml-agent.conf. 🎉
For clearml-agent, if you're looking for clearml.conf, the path is '/root/default_clearml.conf'.
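By the way, in case anyone needs to point a process at that non-default location, here is a minimal sketch (assuming the CLEARML_CONFIG_FILE environment variable, which I believe the SDK and agent honor):

import os

# sketch only: CLEARML_CONFIG_FILE selects which config file clearml loads;
# it must be set before anything from clearml is imported
os.environ['CLEARML_CONFIG_FILE'] = '/root/default_clearml.conf'

from clearml import Task  # now reads the config file set above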
Hi CostlyOstrich36,
Nope, I mean my server does not have pip/conda, so I will go with docker/containers. Is it possible to install clearml-agent inside a python:3.10 container?
I set it like this to init the TaskScheduler:
task_scheduler = TaskScheduler(
    sync_frequency_minutes=5,
    force_create_task_name='controller_feedback',
    force_create_task_project='Automation/Controller',
)
Removing the use_current_task=True param solved it.
Thanks! I just proved it can run on the next day, but not on the same day. I hope it can run on the same day too.
Syncing scheduler
Waiting for next run, sleeping for 5.13 minutes, until next sync.
Launching job: ScheduleJob(name='fetch feedback', base_task_id='', base_function=<function test_make at 0x7f91fd123d90>, queue=None, target_project='Automation/testing', single_instance=False, task_parameters={}, task_overrides={}, clone_task=True, _executed_instances=None, execution_limit_hours=None, r...
Hi @<1523701205467926528:profile|AgitatedDove14>, thanks for the response!
This is my simple code to test the scheduler:
import datetime
from clearml.automation import TaskScheduler

def test_make():
    print('test running', datetime.datetime.now())

if __name__ == '__main__':
    task_scheduler = TaskScheduler(
        sync_frequency_minutes=30,
        force_create_task_name='controller_feedback',
        force_create_task_project='Automation/Controller',
    )
    print('\n[utc_timestamp]...
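For reference, a fuller sketch of what I'm testing; the add_task()/start() calls and their parameter names are my reading of the TaskScheduler docs, so double-check them against your clearml version:

import datetime
from clearml.automation import TaskScheduler

def test_make():
    print('test running', datetime.datetime.now())

if __name__ == '__main__':
    task_scheduler = TaskScheduler(
        sync_frequency_minutes=5,
        force_create_task_name='controller_feedback',
        force_create_task_project='Automation/Controller',
    )
    # register the function job; minute=30 is intended as "every 30 minutes",
    # check the docs for the exact schedule semantics
    task_scheduler.add_task(
        schedule_function=test_make,
        name='fetch feedback',
        target_project='Automation/testing',
        minute=30,
        recurring=True,
    )
    # blocks, syncing with the backend every sync_frequency_minutes
    task_scheduler.start()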
Alright, will try, thanks!
It's painful to fill in the form when there are so many variables to change.
I attach train.py here,
and to run it I do python src/train.py
Hi @<1523701070390366208:profile|CostlyOstrich36>, just want to update:
this was solved by
- removing -f
- changing Task.force_requirements_env_freeze(False, req_path) -> Task.add_requirements(req_path)
- changing my clearml-agent settings
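To make the second bullet concrete, a minimal sketch of the change (req_path and the project/task names are placeholders; note add_requirements must be called before Task.init):

from clearml import Task

req_path = 'requirements.txt'  # placeholder path to the requirements file

# before: freeze/override the environment with a fixed requirements file
# Task.force_requirements_env_freeze(False, req_path)

# after: register the requirements file directly (call before Task.init)
Task.add_requirements(req_path)

task = Task.init(project_name='Automation/testing', task_name='train')  # placeholders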
Hi @<1523701070390366208:profile|CostlyOstrich36>, I think you can try this to run it as standalone:
Hmm, I want to make a custom function that needs the credentials registered in clearml.conf, like AWS S3.
Does clearml-agent have a clearml.conf? What's the path for it? I just tested running with clearml-agent, but /root/clearml.conf was not found.
I want to download the model before I run my inference code. I could actually write a simple script using the clearml-sdk for that, but I'm looking for a CLI-based solution.
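Something like this is the SDK version I have in mind, just as a sketch (the model ID is a placeholder):

from clearml import InputModel

model = InputModel(model_id='<model-id>')  # placeholder for the ID from the UI
weights_path = model.get_local_copy()      # downloads the weights to the local cache
print(weights_path)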
I see, it's solved now using default_output_uri. Thanks!
I need a custom output_uri for some functions because I split the dataset and model artifacts.
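Roughly what I mean by splitting the destinations, as a sketch (project names and bucket URLs are placeholders):

from clearml import Task, Dataset

# model checkpoints / artifacts from this task go to one location
task = Task.init(
    project_name='Automation/testing',  # placeholder
    task_name='train',
    output_uri='s3://bucket/models',    # placeholder
)

# dataset files are uploaded to a different location
dataset = Dataset.create(
    dataset_project='Automation/testing', dataset_name='train-data'
)
dataset.add_files('data/')
dataset.upload(output_url='s3://bucket/datasets')  # placeholder
dataset.finalize()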
Alright, will try. I'm just worried about when the execution mode is docker mode; should I mount /var/run/docker.sock?
My config is the same as in issue #763:
import clearml
from clearml import StorageManager, Dataset
from rich import print

version_clearml = clearml.__version__
manager = StorageManager()
print(f'clearml: {version_clearml}')
try:
    minio_s3_url = 'x/x/x/x/x/x/x'
    print('\n-------------download folder-------------')
    target_folder = manager.download_folder(
        local_folder='tmp',
        remote_url=f' '
    )
except Exception as e:
    print(e)
    print('FAILED: download fold...
Oh okay, so I need to set that to the SSD path, yeah?
Is it this one? Or is there another?
docker_internal_mounts {
    sdk_cache: "/clearml_agent_cache"
    apt_cache: "path/to/ssd/apt-cache"
    ssh_folder: "/root/.ssh"
    pip_cache: "path/to/ssd/clearml-cache/pip"
    poetry_cache: "/mnt/hdd_2/clearml-cache/pypoetry"
    vcs_cache: "path/to/ssd/clearml-cache/vcs-cache"
    venv_build: "path/to/ssd/clearml-cache/venvs-builds"
    pip_download: "path/to/ssd/cle...
Maybe I accidentally installed my custom solution from https://github.com/muhammadAgfian96/clearml/commit/01db9aa40537a6c2f83977220423556a48614c3a at that time, so I said the test passed.
https://github.com/mert-kurttutan/torchview
Maybe you can try this one; you can send the result to the ClearML logger at the end.
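Roughly the idea as a sketch, assuming torchview's draw_graph() and a torchvision model purely for illustration:

from clearml import Task
from torchview import draw_graph
from torchvision.models import resnet18

task = Task.init(project_name='Automation/testing', task_name='model-graph')  # placeholders

model = resnet18()
graph = draw_graph(model, input_size=(1, 3, 224, 224))
# visual_graph is a graphviz Digraph; render it to a PNG file
png_path = graph.visual_graph.render('model_graph', format='png')

# send the rendered architecture image to the ClearML logger
task.get_logger().report_image(
    title='architecture', series='resnet18', iteration=0, local_path=png_path
)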
Hi @<1523701070390366208:profile|CostlyOstrich36>, thanks for the response, and sorry for the late reply.
This is my configuration in YAML. I have difficulty when a param is a list: the form displays long lists in a way that is not easy to read. Do you have a suggestion? Thanks!
download-data:
  dataset_train:
    -
    -
    -
  dataset_test:
    -
    -
    -
train:
  data:
    batch: 4
    input_size: 224
    split:
      t...
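One workaround I'm considering, sketched below: attach the whole YAML as a configuration object instead of flattening it into the hyperparameter form, so lists stay readable (the file path and names are placeholders; connect_configuration also accepts a dict):

from clearml import Task

task = Task.init(project_name='Automation/testing', task_name='train')  # placeholders

# the YAML appears under the task's CONFIGURATION tab with its lists intact
config_path = task.connect_configuration('config/train_config.yaml', name='train_config')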
Hi @<1523701994743664640:profile|AppetizingMouse58>, I have an update on #228. Thanks!
Hi @<1523701070390366208:profile|CostlyOstrich36>,
I attached the complete log.
Here is my structure:
.
├── app
│   ├── backend
│   └── frontend
├── assets
│   ├── demo-app-sample.png
│   └── workflow.png
├── config
│   ├── clearml.conf
│   ├── list_models.py
│   ├── list_optimizer.py
│   ├── __pycache__
│   └── train_config.py
├── docker
│   ├── Dockerfile
│   ├── Dockerfile.app
│   ├── requirements.prod.txt
│   ├── requirements.train.txt
│   └── requirements.txt
├── lightning_logs
├── Mak...
Sorry, but can a ClearML pipeline do this scenario?