Hi AppetizingMouse58 , this is from My Work View
# Payload
{"meta":{"id":"4ff606b50768402495674f4a2b37bdf4","trx":"4ff606b50768402495674f4a2b37bdf4","endpoint":{"name":"projects.get_all_ex","requested_version":"2.20","actual_version":"1.0"},"result_code":200,"result_subcode":0,"result_msg":"OK","error_stack":"","error_data":{}},"data":{"projects":[{"id":"75d04598197a445ebef533814022c58d","company":{"id":"d1bd92a3b039400cbafc60a7a5b1e52b"},"user":{"id":"a174c4e36b0446a7b3b5dd1ff5261962"},...
I see, thanks for the answer. I will read that reference.
` {"meta":{"id":"17c6e609ace54bf8bfdf3113c39fd470","trx":"17c6e609ace54bf8bfdf3113c39fd470","endpoint":{"name":"projects.get_all_ex","requested_version":"2.20","actual_version":"1.0"},"result_code":200,"result_subcode":0,"result_msg":"OK","error_stack":"","error_data":{}},"data":{"projects":[{"id":"75d04598197a445ebef533814022c58d","company":{"id":"d1bd92a3b039400cbafc60a7a5b1e52b"},"user":{"id":"a174c4e36b0446a7b3b5dd1ff5261962"},"name":"ex-1","basename":"ex-1","description":"","created":"20...
Correct! Thanks AppetizingMouse58 !
Also, I found that I cannot delete a project, even though the project shows no experiments.
Hi @<1523701087100473344:profile|SuccessfulKoala55> ,
We have successfully created a sample for the migration. Here are the changes:
- URLs in MongoDB: from s3:// to azure://
- Elasticsearch: as you suggested
However, our main concern is that most of our production fetches models from ClearML, and those are configured with s3:// URLs.
There is an issue/bug in the UI when downloading via Azure. Here are the details: None .
Hi, I have a similar case, but can we schedule a new task here?
def trigger_task_func(task_id):
    print("trigger running...")
    try:
        previous_task = Task.get_task(task_id=task_id)
        print(previous_task.artifacts)
        try:
            fp = previous_task.artifacts['latest_condition'].get_local_copy()
            params = open_json(fp)
            last_index = params.get('last_index')
            day_n = params.get('iteration')
            print("Success Fetching", param...
Yup, correct. But the scheduler is not created, I don't know why. Here are my code and the log:
from doctest import Example
from clearml.automation import TriggerScheduler, TaskScheduler
from clearml import Task
import json

def open_json(fp):
    with open(fp, 'r') as f:
        my_dictionary = json.load(f)
    return my_dictionary

def trigger_task_func(task_id):
    print("trigger running...")
    try:
        previous_task = Task.get_task(task_id=task_id)
        print(previous_task.artifact...
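The snippet above is truncated; a minimal sketch of how the trigger function could be registered and started with TriggerScheduler (the trigger name, watched project, and status list below are placeholders, not taken from the original code):

# Hypothetical continuation: register trigger_task_func and start the scheduler
trigger = TriggerScheduler(pooling_frequency_minutes=3.0)
trigger.add_task_trigger(
    schedule_function=trigger_task_func,    # called with the id of the task that fired the trigger
    name='fetch-feedback-trigger',          # placeholder trigger name
    trigger_project='Automation/testing',   # placeholder project to watch
    trigger_on_status=['completed'],        # fire when a task in that project completes
)
trigger.start()  # or trigger.start_remotely(queue='services') to run it as a service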
Hi @<1523701070390366208:profile|CostlyOstrich36> , thanks for the response, sorry for the late reply.
This is my configuration in YAML. I am facing difficulty when a parameter is a list: the form that displays a long list is not easy to read. Do you have a suggestion? Thanks!
download-data:
  dataset_train:
    -
    -
    -
  dataset_test:
    -
    -
    -
train:
  data:
    batch: 4
    input_size: 224
    split:
      t...
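One possible workaround for the hard-to-read list form (a sketch, assuming the YAML above sits in a file such as config/train_config.yaml, which is a placeholder path): attach the whole file as a configuration object, so it shows up as a single editable text block instead of being flattened into the hyperparameter form.

from clearml import Task

task = Task.init(project_name='Automation/testing', task_name='train')  # placeholder names

# The file appears under CONFIGURATION OBJECTS as plain text,
# keeping the nested lists readable
config_path = task.connect_configuration('config/train_config.yaml', name='train_config')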
Alright, will try. I am just worried: if the execution mode is docker mode, should I mount /var/run/docker.sock?
I see, okay, thanks.
Thanks! I just verified it can run on the next day, but not on the same day. I hope it can run on the same day too.
Syncing scheduler
Waiting for next run, sleeping for 5.13 minutes, until next sync.
Launching job: ScheduleJob(name='fetch feedback', base_task_id='', base_function=<function test_make at 0x7f91fd123d90>, queue=None, target_project='Automation/testing', single_instance=False, task_parameters={}, task_overrides={}, clone_task=True, _executed_instances=None, execution_limit_hours=None, r...
Hi @<1523701070390366208:profile|CostlyOstrich36>
I attach the complete log.
Here is my structure:
.
├── app
│ ├── backend
│ └── frontend
├── assets
│ ├── demo-app-sample.png
│ └── workflow.png
├── config
│ ├── clearml.conf
│ ├── list_models.py
│ ├── list_optimizer.py
│ ├── __pycache__
│ └── train_config.py
├── docker
│ ├── Dockerfile
│ ├── Dockerfile.app
│ ├── requirements.prod.txt
│ ├── requirements.train.txt
│ └── requirements.txt
├── lightning_logs
├── Mak...
Hi @<1523701070390366208:profile|CostlyOstrich36> , just want to update:
this was solved by
- removing -f
- changing Task.force_requirements_env_freeze(False, req_path) -> Task.add_requirements(req_path) (see the sketch after this list)
- changing my clearml-agent settings
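A minimal sketch of the Task.add_requirements change mentioned above (the requirements path and task names are placeholders; add_requirements must be called before Task.init):

from clearml import Task

req_path = 'docker/requirements.train.txt'  # placeholder path to a requirements file

# Passing a requirements file path makes the agent install from that file
# instead of a frozen copy of the local environment
Task.add_requirements(req_path)

task = Task.init(project_name='Automation/testing', task_name='train')  # placeholder names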
Alright, will try, thanks!
I see,
thanks for clarifying. I just want to find other solutions for storing secret values. Right now I store secret values as env settings in clearml.conf on my workers, but it gets complicated when there is a new value: I need to update each worker's conf and redeploy the workers.
I attach train.py here,
and to run it I do: python src/train.py
AgitatedDove14 , yes. I tried 3 times, and it always happens.
Nope, still looking for a way to set the AWS S3 secret_key without doing clearml-agent init.
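One hedged alternative (an assumption to verify, not the only way): ClearML's S3 access goes through boto3, which also honors the standard AWS environment variables, so the keys can be supplied without clearml-agent init, for example:

import os

# Assumption: with no S3 credentials in clearml.conf, boto3 falls back to these env vars
os.environ['AWS_ACCESS_KEY_ID'] = '<access-key>'       # placeholder
os.environ['AWS_SECRET_ACCESS_KEY'] = '<secret-key>'   # placeholder
os.environ['AWS_DEFAULT_REGION'] = 'us-east-1'         # placeholder

from clearml import StorageManager

# Placeholder URL; downloads via boto3 using the credentials above
local_copy = StorageManager.get_local_copy(remote_url='s3://my-bucket/models/model.pt')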
Hi @<1523701205467926528:profile|AgitatedDove14> ,
Yes, I want to do that, but as far as I know Task.enqueue will execute immediately. I need to execute the task at a specific time, and as I understand it, to do that I need a scheduler with the time set and recurring set to False.
I tried to create the scheduler, but the scheduler was not created when the function executed.
Hi @<1523701994743664640:profile|AppetizingMouse58> , I have an update on #228. Thanks!
I see, yeah, my alternative solution right now is just to show the list of options outside of the ClearML UI.
I set it like this for initializing the TaskScheduler:
task_scheduler = TaskScheduler(
    sync_frequency_minutes=5,
    force_create_task_name='controller_feedback',
    force_create_task_project='Automation/Controller',
)
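For the one-shot "run at a specific time, not recurring" case mentioned earlier, a minimal sketch of adding such a job to this scheduler (the wrapper function, task id, time of day, and queue are placeholders, not taken from the original code):

# TaskScheduler calls schedule_function without arguments, so wrap trigger_task_func
def test_make():
    trigger_task_func('<previous-task-id>')  # placeholder task id

task_scheduler.add_task(
    schedule_function=test_make,
    name='fetch feedback',
    target_project='Automation/testing',
    hour=17, minute=30,   # placeholder time of day
    recurring=False,      # run only once instead of every day
)
task_scheduler.start_remotely(queue='services')  # or task_scheduler.start() to run locally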
Hmm, I want to make a custom function that needs credentials registered in clearml.conf, like AWS S3.
Does clearml-agent have a clearml.conf? Where is the path for that? I just tested it, running with clearml-agent, but /root/clearml.conf was not found.
I’m running the agent in ‘pip’ mode. I need to fetch certain secret values, which would be best done using Python code. If I incorporate it into the script (repository), others could deduce the path to retrieve the environment or secret value. Storing the environment variables in clearml.conf isn’t very flexible either.
My case is more like: there is a task/process that is running but somehow takes too long to complete. It could be a connection issue (a missing connection timeout), a problem with the database connection, etc., so the status stays running while the task is trapped in that state.
So I want to force-shutdown such a task to failed when that happens.
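A minimal sketch of one way to do that: a small watchdog that marks tasks as failed once they exceed a time limit (the project name and limit are placeholders, and the use of data.started and mark_failed is an assumption to verify against the SDK):

from datetime import datetime, timedelta
from clearml import Task

MAX_HOURS = 6  # placeholder runtime limit

# Placeholder project; 'in_progress' is the backend status for running tasks
running = Task.get_tasks(
    project_name='Automation/testing',
    task_filter={'status': ['in_progress']},
)

for t in running:
    started = t.data.started  # server-side start time of the task
    if not started:
        continue
    now = datetime.now(started.tzinfo) if started.tzinfo else datetime.utcnow()
    if now - started > timedelta(hours=MAX_HOURS):
        # Force the stuck task into the failed state
        t.mark_failed(status_reason='watchdog: exceeded %dh runtime' % MAX_HOURS, force=True)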