
Hi, I have a similar case, but can we schedule a new task here?
def trigger_task_func(task_id):
    print("trigger running...")
    try:
        previous_task = Task.get_task(task_id=task_id)
        print(previous_task.artifacts)
        try:
            fp = previous_task.artifacts['latest_condition'].get_local_copy()
            params = open_json(fp)
            last_index = params.get('last_index')
            day_n = params.get('iteration')
            print("Success Fetching", param...
I ran this at 2:35 AM, but the job did not launch after 2:40 AM.
Hi @<1576381444509405184:profile|ManiacalLizard2>, thanks for the answer, I will try that!
Thanks! I just proved it can run on the next day, but not on the same day. I hope it can run on the same day too.
Syncing scheduler
Waiting for next run, sleeping for 5.13 minutes, until next sync.
Launching job: ScheduleJob(name='fetch feedback', base_task_id='', base_function=<function test_make at 0x7f91fd123d90>, queue=None, target_project='Automation/testing', single_instance=False, task_parameters={}, task_overrides={}, clone_task=True, _executed_instances=None, execution_limit_hours=None, r...
Yup, correct. But the scheduler was not created, I don't know why. Here are my code and the log:
from clearml.automation import TriggerScheduler, TaskScheduler
from clearml import Task
import json

def open_json(fp):
    with open(fp, 'r') as f:
        my_dictionary = json.load(f)
    return my_dictionary

def trigger_task_func(task_id):
    print("trigger running...")
    try:
        previous_task = Task.get_task(task_id=task_id)
        print(previous_task.artifact...
Thanks @<1523701205467926528:profile|AgitatedDove14>, right now I just use the trigger to send a notification and do it manually. ClearML is superb!
I set it like this to init the TaskScheduler:
task_scheduler = TaskScheduler(
    sync_frequency_minutes=5,
    force_create_task_name='controller_feedback',
    force_create_task_project='Automation/Controller',
)
Hi AgitatedDove14, will Dataset.get take all the children too?
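For context, a minimal sketch of the call in question, with hypothetical project/dataset names; as far as I know, a dataset's local copy includes the files inherited from its parent versions, while child datasets are separate versions and are not pulled in:
from clearml import Dataset

# fetch a specific dataset version; the names here are hypothetical
ds = Dataset.get(dataset_project='Automation/testing', dataset_name='my-dataset')
folder = ds.get_local_copy()  # this version's files plus those inherited from parents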
I see, okay, thanks.
# downloading data from s3
manager = StorageManager()
target_folder = manager.download_folder(
    local_folder='/tmp',
    remote_url=f' '
)
# upload to clearml
dataset = Dataset.create(
    dataset_project=metadata[2],
    dataset_name=metadata[3],
    dataset_tags=tags,
    output_uri=" "
)
fp_target_folder = os.path.join(target_folder, minio_s3_url)
print('>>...
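The truncated part presumably goes on to register and upload the files; a hedged sketch of the usual continuation, assuming fp_target_folder holds the downloaded data:
dataset.add_files(path=fp_target_folder)  # register the local files with the dataset
dataset.upload()                          # upload them to the configured output_uri
dataset.finalize()                        # close this dataset version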
Hi @<1523701205467926528:profile|AgitatedDove14>,
Yes, I want to do that, but as far as I know Task.enqueue will execute immediately. I need to execute the task at a specific time, and I see that to do that I need a scheduler with recurring set to False and a time set.
I tried to create the scheduler that way, but the scheduler was not created when the function executed.
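A minimal sketch of that one-shot scheduling, assuming the task_scheduler from the earlier snippet; the base task id, queue name, and time slot are hypothetical:
task_scheduler.add_task(
    schedule_task_id='<base-task-id>',  # hypothetical: task to clone and enqueue
    queue='default',                    # hypothetical queue name
    hour=2, minute=40,                  # run once at 02:40
    recurring=False,                    # one-shot instead of a recurring schedule
)
task_scheduler.start()  # blocks, syncing every sync_frequency_minutes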
Hi CostlyOstrich36,
Nope, I mean my server does not have pip/conda, so I will go for docker/containers. Is it possible to install clearml-agent inside a python:3.10 container?
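A minimal sketch of such a container, assuming a queue named 'default' and that clearml.conf is mounted in at runtime:
FROM python:3.10
RUN pip install clearml-agent
# mount your clearml.conf to /root/clearml.conf when running the container
CMD ["clearml-agent", "daemon", "--queue", "default"]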
It seems that if I access it with my DNS name I cannot see it,
but if I access it with the IP address I can see it.
I see. Yeah, my alternative solution right now is just to show the list of options outside of the ClearML UI.
Nope, still looking for a way to set the AWS S3 secret_key without doing clearml-agent init.
My current solution is to upload my config to S3, and the pipeline will download and read it when executing. But that decreases flexibility.
clearml-agent: if you are looking for clearml.conf, the location is '/root/default_clearml.conf'.
I'm running the agent in "pip" mode. I need to fetch certain secret values, which would be best done using Python code. If I incorporate it into the script (repository), others could deduce the path to retrieve the environment or secret value. Storing the environment variables in the clearml.conf isn't very flexible either.
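One hedged workaround sketch: export the secrets only in the environment of the machine (or shell) that launches clearml-agent daemon, and read them at runtime, so neither the repository nor clearml.conf contains them; the variable names below are hypothetical:
import os

# fails loudly if the agent environment does not provide the secret
aws_secret = os.environ['AWS_SECRET_ACCESS_KEY']  # standard boto3 variable
service_token = os.environ['MY_SERVICE_TOKEN']    # hypothetical secret name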
Thanks guys, I will try to learn that first. I will update when executing these ideas. @<1523701087100473344:profile|SuccessfulKoala55> @<1590514584836378624:profile|AmiableSeaturtle81>
Hi @<1523701087100473344:profile|SuccessfulKoala55>,
We have successfully created a sample for the migration. Here are the changes:
- URLs in MongoDB changed from s3:// to azure://
- Elasticsearch changed as you suggested

However, our main focus is that most of our production fetches models from ClearML, and those are configured with s3:// URLs.
There is also an issue/bug in the UI when downloading via Azure. Here are the details: None .
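For the MongoDB URL change mentioned above, a rough sketch of the rewrite; the database/collection/field names (backend.model.uri) are assumptions about the clearml-server schema and should be verified on your deployment before running anything:
from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017')  # hypothetical connection string
models = client['backend']['model']  # assumed database/collection names

for doc in models.find({'uri': {'$regex': '^s3://'}}):
    new_uri = doc['uri'].replace('s3://', 'azure://', 1)
    models.update_one({'_id': doc['_id']}, {'$set': {'uri': new_uri}})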
Hi AgitatedDove14,
Right now I can delete it. I set the configuration to show hidden projects/files in the UI, and it shows the related tasks (e.g. pipeline, dataset) in that project. But yeah, every time I make a project with a subproject inside, there is an [undefined] there. I created the project using code and the UI; the result is the same.
This is my docker-compose conf. I changed all 8XXX ports to 7XXX and also changed /opt/clearml to /mnt/hdd_2/clearml.
version: "3.6"
services:
  apiserver:
    command:
      - apiserver
...
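A sketch of the kind of remapping described, assuming the stock clearml-server compose file (web 8080, API 8008, file server 8081) and the /mnt/hdd_2/clearml data root:
services:
  apiserver:
    ports:
      - "7008:8008"  # host port 7008 -> container port 8008
    volumes:
      - /mnt/hdd_2/clearml/logs:/var/log/clearml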
It seems to only happen if I change the user preference to My Work; if it is set to Team's Work it shows like this:
` {"meta":{"id":"17c6e609ace54bf8bfdf3113c39fd470","trx":"17c6e609ace54bf8bfdf3113c39fd470","endpoint":{"name":"projects.get_all_ex","requested_version":"2.20","actual_version":"1.0"},"result_code":200,"result_subcode":0,"result_msg":"OK","error_stack":"","error_data":{}},"data":{"projects":[{"id":"75d04598197a445ebef533814022c58d","company":{"id":"d1bd92a3b039400cbafc60a7a5b1e52b"},"user":{"id":"a174c4e36b0446a7b3b5dd1ff5261962"},"name":"ex-1","basename":"ex-1","description":"","created":"20...
Hi @<1523701070390366208:profile|CostlyOstrich36>,
I attached the complete log.
Here is my structure:
.
├── app
│   ├── backend
│   └── frontend
├── assets
│   ├── demo-app-sample.png
│   └── workflow.png
├── config
│   ├── clearml.conf
│   ├── list_models.py
│   ├── list_optimizer.py
│   ├── __pycache__
│   └── train_config.py
├── docker
│   ├── Dockerfile
│   ├── Dockerfile.app
│   ├── requirements.prod.txt
│   ├── requirements.train.txt
│   └── requirements.txt
├── lightning_logs
├── Mak...
Hi @<1523701070390366208:profile|CostlyOstrich36>, I think you can try this to run it as standalone:
Hi @<1523701070390366208:profile|CostlyOstrich36>, just want to update,
this was solved by:
- removing -f
- changing Task.force_requirements_env_freeze(False, req_path) -> Task.add_requirements(req_path) (see the sketch below)
- changing my clearml-agent settings
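A minimal sketch of the fix described in the bullets above; req_path is a hypothetical path, and note that Task.add_requirements must be called before Task.init to take effect:
from clearml import Task

req_path = 'docker/requirements.train.txt'  # hypothetical path from the tree above
Task.add_requirements(req_path)  # use the requirements file instead of a frozen env
task = Task.init(project_name='Automation/testing', task_name='train')  # hypothetical names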