{"meta":{"id":"17c6e609ace54bf8bfdf3113c39fd470","trx":"17c6e609ace54bf8bfdf3113c39fd470","endpoint":{"name":"projects.get_all_ex","requested_version":"2.20","actual_version":"1.0"},"result_code":200,"result_subcode":0,"result_msg":"OK","error_stack":"","error_data":{}},"data":{"projects":[{"id":"75d04598197a445ebef533814022c58d","company":{"id":"d1bd92a3b039400cbafc60a7a5b1e52b"},"user":{"id":"a174c4e36b0446a7b3b5dd1ff5261962"},"name":"ex-1","basename":"ex-1","description":"","created":"20...
https://github.com/mert-kurttutan/torchview
Maybe you can try this one, and send it to the ClearML logger at the end.
My current solution is to upload my config to S3; the pipeline downloads and reads it at execution time. But that decreases flexibility.
Thanks! I just proved it can run on the next day, but not on the same day. I hope it can run on the same day too.
Syncing scheduler
Waiting for next run, sleeping for 5.13 minutes, until next sync.
Launching job: ScheduleJob(name='fetch feedback', base_task_id='', base_function=<function test_make at 0x7f91fd123d90>, queue=None, target_project='Automation/testing', single_instance=False, task_parameters={}, task_overrides={}, clone_task=True, _executed_instances=None, execution_limit_hours=None, r...
You can specifically use Task.add_requirements and point it at the path of your requirements.txt.
I see, it's solved now using default_output_uri. Thanks!
I need a custom output_uri for some functions because I split dataset and model artifacts.
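For reference, a minimal sketch of how I'd wire that up (the path is illustrative, and the clearml import is deferred inside the function so the snippet stays self-contained):

```python
def pin_requirements(req_path: str = "requirements.txt") -> None:
    """Point the agent at an explicit requirements file.

    Sketch only: Task.add_requirements must be called *before* Task.init()
    so the agent installs from this file instead of freezing the current
    environment.
    """
    from clearml import Task  # deferred; needs `pip install clearml`

    Task.add_requirements(req_path)
```

Calling `pin_requirements("/abs/path/requirements.txt")` before `Task.init()` should make the agent install exactly those pins; double-check the behavior against your clearml version.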
Yeah, we cannot do anything with that [undefined]. I get this when I click it.
It seems to only happen if I change the user preference to My Works; if it's set to Team's Work it shows like this.
Hi @<1523701070390366208:profile|CostlyOstrich36> , I mean uv, this: None
It seems that if I access it via my DNS name I cannot see it,
but if I access it via the IP address I can.
I set it like this to init the TaskScheduler:

```python
task_scheduler = TaskScheduler(
    sync_frequency_minutes=5,
    force_create_task_name='controller_feedback',
    force_create_task_project='Automation/Controller',
)
```
Payload
{"id":["75d04598197a445ebef533814022c58d"],"include_stats":true,"check_own_contents":true,"active_users":["a174c4e36b0446a7b3b5dd1ff5261962"],"search_hidden":true}
Response
{"meta":{"id":"8d18a89599db4d899ab40959a39970b3","trx":"8d18a89599db4d899ab40959a39970b3","endpoint":{"name":"projects.get_all_ex","requested_version":"2.20","actual_version":"1.0"},"result_code":200,"result_subcode":0,"result_msg":"OK","error_stack":"","error_data":{}},"data":{"projects":[{"id":"75d0459819...
Hi @<1523701205467926528:profile|AgitatedDove14> ,
Yes, I want to do that, but as far as I know Task.enqueue executes immediately. I need to execute the task at a specific time, and from what I see, to do that I need a scheduler with recurring set to False and a set time.
I tried creating the scheduler, but it was not created when the function executed.
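This is roughly what I'm trying, assuming TaskScheduler.add_task accepts recurring=False together with hour/minute (worth verifying against your clearml version; the import is deferred so the sketch is self-contained):

```python
def schedule_once(task_id: str, queue: str, hour: int, minute: int) -> None:
    """One-shot schedule: run the given task at hour:minute, then never repeat."""
    from clearml.automation import TaskScheduler  # needs `pip install clearml`

    scheduler = TaskScheduler(
        sync_frequency_minutes=5,
        force_create_task_name='controller_feedback',
        force_create_task_project='Automation/Controller',
    )
    scheduler.add_task(
        schedule_task_id=task_id,
        queue=queue,
        hour=hour,
        minute=minute,
        recurring=False,  # my understanding: one-shot instead of a daily repeat
    )
    scheduler.start()  # blocks, syncing every sync_frequency_minutes
```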
Hi @<1523701087100473344:profile|SuccessfulKoala55> , Thanks for your response.
I'm not entirely sure about the use of CLEARML_ENV since I haven't interacted with it before. Could you guide me on what I should set as its value?
Previously, the system was running smoothly. However, I've run into some issues after making certain configuration changes to modify the server permissions. Specifically, I'm curious if these changes might have influenced the agent's permission to access certain...
Hi @<1523701070390366208:profile|CostlyOstrich36> , just want to give an update.
This was solved by:
- removing `-f`
- changing `Task.force_requirements_env_freeze(False, req_path)` -> `Task.add_requirements(req_path)`
- changing my clearml-agent settings
Maybe I accidentally installed my custom solution from this https://github.com/muhammadAgfian96/clearml/commit/01db9aa40537a6c2f83977220423556a48614c3a at that time, so I said the test passed.
Hi @<1523701070390366208:profile|CostlyOstrich36>
I mean we could have a form dropdown for other configurations like hyperparameters (task.connect).
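For context, this is the pattern I mean: parameters connected like this show up as editable fields in the UI (project/task names and values are illustrative; the import is deferred so the sketch stays self-contained):

```python
def run_with_params() -> dict:
    """Sketch: connect a plain dict so each key becomes a UI-editable field."""
    from clearml import Task  # needs `pip install clearml`

    task = Task.init(project_name="Automation/testing", task_name="demo")
    params = {"lr": 0.001, "batch_size": 32}  # illustrative defaults
    task.connect(params)  # values can be overridden from the web UI on clone
    return params
```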
For clearml-agent, if you're looking for clearml.conf, the location is '/root/default_clearml.conf'.
Correct! Thanks AppetizingMouse58 !
Hi @<1523701994743664640:profile|AppetizingMouse58> , I have an update on #228. Thanks!
```python
# downloading data from s3
manager = StorageManager()
target_folder = manager.download_folder(
    local_folder='/tmp',
    remote_url=f' '
)

# upload to clearml
dataset = Dataset.create(
    dataset_project=metadata[2],
    dataset_name=metadata[3],
    dataset_tags=tags,
    output_uri=" "
)
fp_target_folder = os.path.join(target_folder, minio_s3_url)
print('>>...
```
It feels painful to make it a form when there are so many variables to change.
Hi @<1523701070390366208:profile|CostlyOstrich36> , I think you can try this to run it as standalone:
Oh okay, so I need to set that to the SSD path, yeah?
Is it this one? Or is there another?
```
docker_internal_mounts {
    sdk_cache: "/clearml_agent_cache"
    apt_cache: "path/to/ssd/apt-cache"
    ssh_folder: "/root/.ssh"
    pip_cache: "path/to/ssd/clearml-cache/pip"
    poetry_cache: "/mnt/hdd_2/clearml-cache/pypoetry"
    vcs_cache: "path/to/ssd/clearml-cache/vcs-cache"
    venv_build: "path/to/ssd/clearml-cache/venvs-builds"
    pip_download: "path/to/ssd/cle...
```
Alright, will try. I'm just worried about when the execution mode is docker mode; should I mount /var/run/docker.sock?
Hi CostlyOstrich36 ,
Nope, I mean my server does not have pip/conda, so I will go with docker/containers. Is it possible to install clearml-agent inside a python:3.10 container?
My config is the same as in issue #763.
```python
import clearml
from clearml import StorageManager, Dataset
from rich import print

version_clearml = clearml.__version__
manager = StorageManager()
print(f'clearml: {version_clearml}')

try:
    minio_s3_url = 'x/x/x/x/x/x/x'
    print('\n-------------download folder-------------')
    target_folder = manager.download_folder(
        local_folder='tmp',
        remote_url=f' '
    )
except Exception as e:
    print(e)
    print('FAILED: download fold...
```
Yes, as far as I know, if we want to upload a dataset to ClearML we need to provide a local path to the data, and ClearML will upload it to the platform.
My data is not local; it's in an S3 bucket.
Is there a way to point to an S3 URL? My current workflow is to download the data from the S3 bucket to local storage, then upload it to ClearML.
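(If it helps anyone later: my understanding is that Dataset.add_external_files can register S3 URLs as links, so nothing gets downloaded locally. A sketch, with the bucket path and names illustrative and the import deferred so the snippet is self-contained:)

```python
def register_s3_dataset(project: str, name: str, s3_url: str) -> str:
    """Sketch: create a dataset whose files stay in S3 (links only)."""
    from clearml import Dataset  # needs `pip install clearml`

    ds = Dataset.create(dataset_project=project, dataset_name=name)
    ds.add_external_files(source_url=s3_url)  # e.g. "s3://bucket/prefix/"
    ds.upload()    # uploads only the link metadata, not the bytes
    ds.finalize()
    return ds.id
```

Check the add_external_files arguments against your clearml version before relying on this.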