It seems that if I access it with my DNS I cannot see it, but if I access it with the IP address I can see it.
Thanks for the response.
from clearml import Task
from clearml.automation import TaskScheduler
from datetime import timedelta, datetime

def my_task():
    task = Task.init(...)
    # do something
    print("do something")
    # sleep 10
    condition = True
    if condition:
        # I want to trigger another task by setting some config
        # in the task, but execute it tomorrow/at some later time,
        # not directly at this moment.
        # here I use
        task_id = task.id
        task.cl...
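For reference, a minimal sketch of the kind of delayed one-time run I mean, using TaskScheduler (queue names and the target time are assumptions, not my actual values):

from clearml.automation import TaskScheduler

# sketch: schedule the task created above to run once at a later time
scheduler = TaskScheduler(sync_frequency_minutes=5)
scheduler.add_task(
    schedule_task_id=task_id,  # task to clone and enqueue when due
    queue='default',           # assumed execution queue
    hour=9, minute=0,          # assumed target time (09:00)
    recurring=False,           # fire once, not periodically
)
scheduler.start_remotely(queue='services')  # the scheduler itself runs on the services queue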
# downloading data from S3
manager = StorageManager()
target_folder = manager.download_folder(
    local_folder='/tmp',
    remote_url=f' '
)

# upload to ClearML
dataset = Dataset.create(
    dataset_project=metadata[2],
    dataset_name=metadata[3],
    dataset_tags=tags,
    output_uri=" "
)
fp_target_folder = os.path.join(target_folder, minio_s3_url)
print('>>...
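For context, a sketch of the steps that would typically follow to actually get the downloaded files into the dataset (reusing the variables above):

# sketch: register the downloaded files, then upload and close the version
dataset.add_files(path=fp_target_folder)
dataset.upload()
dataset.finalize()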
Correct! Thanks AppetizingMouse58!
Oh okay, so I need to set that to the SSD path, yeah?
Is it this one? Or is there another one?
docker_internal_mounts {
    sdk_cache: "/clearml_agent_cache"
    apt_cache: "path/to/ssd/apt-cache"
    ssh_folder: "/root/.ssh"
    pip_cache: "path/to/ssd/clearml-cache/pip"
    poetry_cache: "/mnt/hdd_2/clearml-cache/pypoetry"
    vcs_cache: "path/to/ssd/clearml-cache/vcs-cache"
    venv_build: "path/to/ssd/clearml-cache/venvs-builds"
    pip_download: "path/to/ssd/cle...
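As I understand it, these entries only map paths inside the task container; the host-side cache locations are configured separately in clearml.conf under the agent section. A sketch of those counterparts, with the SSD paths as assumptions:

agent {
    venvs_dir: "/path/to/ssd/clearml-cache/venvs-builds"
    vcs_cache {
        path: "/path/to/ssd/clearml-cache/vcs-cache"
    }
    pip_download_cache {
        path: "/path/to/ssd/clearml-cache/pip-download"
    }
}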
Do you mean I can change this?
files_server:
->
I set it like this to init the TaskScheduler:
task_scheduler = TaskScheduler(
    sync_frequency_minutes=5,
    force_create_task_name='controller_feedback',
    force_create_task_project='Automation/Controller',
)
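As a side note, a sketch of the start call that needs to follow for the scheduler task to actually be created (the queue name is an assumption):

# sketch: launch the scheduler as a remote task on the services queue
task_scheduler.start_remotely(queue='services')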
Hi @<1523701070390366208:profile|CostlyOstrich36>
I attach the complete log.
Here is my structure:
.
├── app
│   ├── backend
│   └── frontend
├── assets
│   ├── demo-app-sample.png
│   └── workflow.png
├── config
│   ├── clearml.conf
│   ├── list_models.py
│   ├── list_optimizer.py
│   ├── __pycache__
│   └── train_config.py
├── docker
│   ├── Dockerfile
│   ├── Dockerfile.app
│   ├── requirements.prod.txt
│   ├── requirements.train.txt
│   └── requirements.txt
├── lightning_logs
├── Mak...
Yup, correct. But the scheduler is not created, I don't know why. Here are my code and the log:
from doctest import Example
from clearml.automation import TriggerScheduler, TaskScheduler
from clearml import Task
import json

def open_json(fp):
    with open(fp, 'r') as f:
        my_dictionary = json.load(f)
    return my_dictionary

def trigger_task_func(task_id):
    print("trigger running...")
    try:
        previous_task = Task.get_task(task_id=task_id)
        print(previous_task.artifact...
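For completeness, a sketch of how a function like this is typically wired into a TriggerScheduler (the trigger name, watched project, and queue are assumptions):

# sketch: call trigger_task_func whenever a watched task completes
trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_task_trigger(
    name='on-task-completed',             # assumed trigger name
    schedule_function=trigger_task_func,  # called with the triggering task's id
    trigger_project='Automation',         # assumed project to watch
    trigger_on_status=['completed'],      # fire when a task completes
)
trigger.start_remotely(queue='services')  # assumed queue for the trigger service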
Do I still need to do this? dataset.upload() dataset.finalize()
I have another question:
if we have already uploaded data to ClearML, how do we add more data?
This is my way right now:
dataset = Dataset.create(
    dataset_project=metadata[2],
    dataset_name=metadata[3],
    description=description,
    output_uri=f" ",
    parent_datasets=[id_dataset_latest]
)
Hi, I have a similar case, but can we schedule a new task here?
def trigger_task_func(task_id):
    print("trigger running...")
    try:
        previous_task = Task.get_task(task_id=task_id)
        print(previous_task.artifacts)
        try:
            fp = previous_task.artifacts['latest_condition'].get_local_copy()
            params = open_json(fp)
            last_index = params.get('last_index')
            day_n = params.get('iteration')
            print("Success Fetching", param...
Hi @<1523701205467926528:profile|AgitatedDove14> ,
Yes, I want to do that, but as far as I know Task.enqueue will execute immediately. I need to execute the task at a specific time, and I see that to do that I need a scheduler, with recurring set to False and a time set.
I tried creating the scheduler, but the scheduler was not created when the function executed.
Hi @<1576381444509405184:profile|ManiacalLizard2>, thanks for the answer, I will try that!
You can specifically use Task.add_requirements
and point it to the path of requirements.txt.
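A sketch of what that looks like, assuming the call is made before Task.init (the file path and project/task names are placeholders):

from clearml import Task

# sketch: force the agent to use a specific requirements file
Task.add_requirements('/path/to/requirements.txt')
task = Task.init(project_name='example', task_name='train')  # assumed names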
I attach train.py here,
and to run it I do python src/train.py
Hi CostlyOstrich36 ,
Nope, I mean my server does not have pip/conda, so I will go for docker/containers. Is it possible to install clearml-agent inside a python:3.10 container?
Alright, will try. I'm just worried about when the execution mode is docker mode, should I mount /var/run/docker.sock?
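A sketch of what I have in mind, assuming the agent container drives the host's Docker through the socket (the image, queue, and mount paths are assumptions):

docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $HOME/clearml.conf:/root/clearml.conf \
  python:3.10 \
  bash -c "pip install clearml-agent && clearml-agent daemon --queue default --docker"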
Hi, @<1523701070390366208:profile|CostlyOstrich36> , yes! Correct! How do I achieve that? It will save my storage.
I see, thanks for the answer. I will read that reference.
Hi @<1523701087100473344:profile|SuccessfulKoala55> , it's solved! Thanks for the information about CLEARML_ENV! I just accidentally wrote the environment variable CLEARML_ENV in every clearml-agent.conf. 🎉
Hi @<1523701070390366208:profile|CostlyOstrich36> , thanks for the response, sorry for the late reply.
This is my configuration in YAML. I'm facing difficulty when there are params in a list: somehow the form displaying a bunch of lists is not easy to read. Do you have a suggestion? Thanks!
download-data:
  dataset_train:
    -
    -
    -
  dataset_test:
    -
    -
    -
train:
  data:
    batch: 4
    input_size: 224
    split:
      t...
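For reference, a sketch of attaching the whole YAML as a configuration object instead of flattened form fields (the file path and project/task names are assumptions):

from clearml import Task

task = Task.init(project_name='example', task_name='train')  # assumed names
# sketch: the file shows up (and stays editable) as one YAML blob in the UI,
# rather than being flattened into individual form fields
config_path = task.connect_configuration('config/train_config.yaml', name='train_config')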
My config is the same as in issue #763.
import clearml
from clearml import StorageManager, Dataset
from rich import print

version_clearml = clearml.__version__
manager = StorageManager()
print(f'clearml: {version_clearml}')

try:
    minio_s3_url = 'x/x/x/x/x/x/x'
    print('\n-------------download folder-------------')
    target_folder = manager.download_folder(
        local_folder='tmp',
        remote_url=f' '
    )
except Exception as e:
    print(e)
    print('FAILED: download fold...
It's painful to fill it in as a form when there are so many variables I want to change.
I ran this at 2:35 AM, but the job did not launch after 2:40 AM.
My current solution is to upload my config to S3, and the pipeline downloads and reads it when executing. But that decreases flexibility.