Removing this param will solve it: use_current_task=True
Yup, correct. But the scheduler is not created, I don't know why. Here is my code and the log:
from clearml.automation import TriggerScheduler, TaskScheduler
from clearml import Task
import json

def open_json(fp):
    with open(fp, 'r') as f:
        my_dictionary = json.load(f)
    return my_dictionary

def trigger_task_func(task_id):
    print("trigger running...")
    try:
        previous_task = Task.get_task(task_id=task_id)
        print(previous_task.artifact...
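For context, a hedged sketch of how a trigger function like the one above is typically wired into a TriggerScheduler; the trigger name, watched project, and status filter below are placeholders, not taken from the original snippet:

```python
# Sketch: register trigger_task_func with a TriggerScheduler (placeholder values)
scheduler = TriggerScheduler(pooling_frequency_minutes=3)
scheduler.add_task_trigger(
    name='on-task-completed',                 # hypothetical trigger name
    schedule_function=trigger_task_func,      # called with the triggering task's id
    trigger_project='Automation/Controller',  # hypothetical project to watch
    trigger_on_status=['completed'],          # fire when a watched task completes
)
scheduler.start()  # blocks and polls for matching events
```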
Hi AppetizingMouse58, this is from the My Work view:
# Payload
{"meta":{"id":"4ff606b50768402495674f4a2b37bdf4","trx":"4ff606b50768402495674f4a2b37bdf4","endpoint":{"name":"projects.get_all_ex","requested_version":"2.20","actual_version":"1.0"},"result_code":200,"result_subcode":0,"result_msg":"OK","error_stack":"","error_data":{}},"data":{"projects":[{"id":"75d04598197a445ebef533814022c58d","company":{"id":"d1bd92a3b039400cbafc60a7a5b1e52b"},"user":{"id":"a174c4e36b0446a7b3b5dd1ff5261962"},...
Do I still need to do this? dataset.upload() and dataset.finalize()
I have another question:
if we have already uploaded data to ClearML, how do we add more data to it?
This is my way right now:
dataset = Dataset.create(
    dataset_project=metadata[2],
    dataset_name=metadata[3],
    description=description,
    output_uri=f" ",
    parent_datasets=[id_dataset_latest],
)
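For reference, a minimal sketch of adding files on top of an existing dataset as a new child version; project, dataset name, and the local path are placeholders, not from the original message:

```python
from clearml import Dataset

# Sketch: create a child dataset version, add the new files, then upload and finalize.
parent = Dataset.get(dataset_project='my_project', dataset_name='my_dataset')  # latest version
child = Dataset.create(
    dataset_project='my_project',
    dataset_name='my_dataset',
    parent_datasets=[parent.id],
)
child.add_files(path='data/new_samples')  # register only the newly added files
child.upload()    # push the new files to storage
child.finalize()  # close this version so it can serve as a parent later
```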
Hmm, I want to make a custom function that needs credentials registered in clearml.conf, like AWS S3.
Does clearml-agent have a clearml.conf? What is the path for it? I just tested running with clearml-agent, but /root/clearml.conf was not found.
{"meta":{"id":"37a8d7b26c534f9da20682801f9bd2bd","trx":"37a8d7b26c534f9da20682801f9bd2bd","endpoint":{"name":"projects.get_all_ex","requested_version":"2.20","actual_version":"1.0"},"result_code":200,"result_subcode":0,"result_msg":"OK","error_stack":"","error_data":{}},"data":{"projects":[]}}
Thanks @<1523701205467926528:profile|AgitatedDove14>, right now I just use the trigger to send a notification and do the rest manually. ClearML is superb!
I want to download the model before I run my inference code. I could actually make a simple script using the clearml SDK for that, but I'm looking for a CLI-based solution.
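For reference, the SDK route mentioned above could look roughly like this; a minimal sketch, and the model ID is a placeholder:

```python
from clearml import Model

# Sketch: fetch a registered model's weights file before running inference.
# The model ID below is a placeholder, not a real ID from this thread.
model = Model(model_id='your_model_id_here')
local_weights_path = model.get_local_copy()  # downloads (and caches) the model file
print('model downloaded to:', local_weights_path)
```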
Hi @<1523701070390366208:profile|CostlyOstrich36>,
I use a VPN; with the location set to the US I still cannot access it either.
When I set the location to Germany, it works.
Any idea how to solve this from the user side?
Hi AppetizingMouse58, this is when in the Team's Work view:
Payload
{"id":["75d04598197a445ebef533814022c58d"],"include_stats":true,"check_own_contents":true,"search_hidden":true}
Response
{"meta":{"id":"c4ee9cb1c4594040bd0b44499d5e9970","trx":"c4ee9cb1c4594040bd0b44499d5e9970","endpoint":{"name":"projects.get_all_ex","requested_version":"2.20","actual_version":"1.0"},"result_code":200,"result_subcode":0,"result_msg":"OK","error_stack":"","error_data":{}},"data":{"projects":[{"id":"7...
I see,
thanks for clarifying. I just want to find other solutions for storing secret values. Right now I store secret values as environment variables in clearml.conf on my workers, but it gets complicated when there is a new value: I need to update the workers' conf and redeploy the workers.
Wow, okay, I think I will move all logs/plots/artifacts to my S3 storage. Thanks! Really helpful!
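If helpful, pointing a task's artifact and model uploads at S3 can be done per task via output_uri; a short sketch, where the project, task name, and bucket path are placeholders (console logs and plots still go to the ClearML server):

```python
from clearml import Task

# Sketch: send this task's artifacts/models to S3 instead of the default files server.
task = Task.init(
    project_name='my_project',
    task_name='my_experiment',
    output_uri='s3://my-bucket/clearml',  # artifacts and models are uploaded here
)
```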
I attach train.py here,
and to run it I do: python src/train.py
Nope, still looking for a way to set the AWS S3 secret_key without doing clearml-agent init.
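One hedged option: instead of generating the file interactively with clearml-agent init, provide the agent a clearml.conf that already contains the S3 credentials in the documented sdk.aws.s3 section (for example mounted into, or baked into, the worker image). A sketch of the relevant fragment; all values are placeholders:

```
# Sketch of a clearml.conf fragment (placeholder values)
sdk {
    aws {
        s3 {
            key: "MY_ACCESS_KEY"
            secret: "MY_SECRET_KEY"
            region: "us-east-1"
        }
    }
}
```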
Hi @<1523701205467926528:profile|AgitatedDove14>, thanks for the response!
This is my simple code to test the scheduler:
import datetime
from clearml.automation import TaskScheduler

def test_make():
    print('test running', datetime.datetime.now())

if __name__ == '__main__':
    task_scheduler = TaskScheduler(
        sync_frequency_minutes=30,
        force_create_task_name='controller_feedback',
        force_create_task_project='Automation/Controller',
    )
    print('\n[utc_timestamp]...
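For completeness, a hedged sketch of how the scheduler might then be given the function and started, assuming the add_task(schedule_function=...) signature of recent clearml versions; the schedule name and interval are placeholders, not from the original snippet:

```python
# Sketch: register test_make with the scheduler and start the polling loop (placeholder values)
task_scheduler.add_task(
    schedule_function=test_make,  # run this function on the schedule
    name='test_make_hourly',      # hypothetical schedule name
    minute=0,                     # at minute 0 of every hour
    recurring=True,
)
task_scheduler.start()  # blocks; runs the scheduling loop locally
```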
Also, I found that I cannot delete a project, even though the project shows no experiments.
Sorry, but can a ClearML pipeline handle this scenario?
Hi @<1523701994743664640:profile|AppetizingMouse58>, I have an update on #228. Thanks!
My case is more like this: there is a task/process that is running but somehow takes too long to complete. It can be a connection issue (forgot to set a connection timeout), a problem connecting to the database, etc., so the status stays "running" while the task is trapped in that situation.
So I want to force-shutdown such a task and mark it as failed when that happens.
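A hedged sketch of one way to do that from a small watchdog script; the project name and the 6-hour threshold are placeholders, and the last_update field access is an assumption to verify against the installed clearml version:

```python
from datetime import datetime, timedelta
from clearml import Task

# Sketch: mark long-running (probably stuck) tasks as failed (placeholder values).
stuck_after = timedelta(hours=6)
running_tasks = Task.get_tasks(
    project_name='my_project',
    task_filter={'status': ['in_progress']},  # only tasks currently marked as running
)
for t in running_tasks:
    last_update = t.data.last_update  # server-side timestamp of the last task update (assumption)
    if last_update and datetime.utcnow() - last_update.replace(tzinfo=None) > stuck_after:
        # Depending on the clearml version, a force flag or stopping the task first may be needed.
        t.mark_failed(status_reason='watchdog: exceeded max runtime')
```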
I see, thanks for the answer. I will read that reference.
Thanks guys, I will try to learn that first and will post updates when executing these ideas. @<1523701087100473344:profile|SuccessfulKoala55> @<1590514584836378624:profile|AmiableSeaturtle81> 🙌
Hi @<1523701087100473344:profile|SuccessfulKoala55>,
We have successfully created a sample for the migration. Here are the changes:
- URLs in MongoDB changed from s3:// to azure://
- Elasticsearch updated as you suggested
However, our main concern is that most of our production fetches models from ClearML, and those are configured with s3:// URLs.
There is an issue/bug in the UI when downloading via Azure. Here are the details: None .
https://github.com/mert-kurttutan/torchview
Maybe you can try this one, and send the result to the ClearML logger at the end.
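A hedged sketch of that idea, assuming torchview's draw_graph and ClearML's report_media; the model, input size, and report titles are placeholders:

```python
from clearml import Task
from torchview import draw_graph
import torchvision

# Sketch: render a model graph with torchview and attach it to the ClearML task.
task = Task.init(project_name='my_project', task_name='model_graph_demo')
model = torchvision.models.resnet18()
graph = draw_graph(model, input_size=(1, 3, 224, 224))
image_path = graph.visual_graph.render('resnet18_graph', format='png')  # graphviz render
task.get_logger().report_media(
    title='model graph', series='torchview', local_path=image_path, iteration=0
)
```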