Hi AgitatedDove14, will Dataset.get
take all the child datasets too?
you can specifically use Task.add_requirements
and point it to the path of a requirements.txt
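for example (a minimal sketch; the path is a placeholder, and note it must be called before Task.init):
from clearml import Task

# register an explicit requirements file instead of the auto-detected packages;
# must be called before Task.init (path is a placeholder)
Task.add_requirements('/path/to/requirements.txt')
task = Task.init(project_name='examples', task_name='pinned-requirements')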
https://github.com/mert-kurttutan/torchview
maybe you can try this one, and send the result to the ClearML logger at the end.
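a rough sketch of what I mean (assuming a torch model and that graphviz is installed; project/task names are placeholders):
import torch.nn as nn
from torchview import draw_graph
from clearml import Task

task = Task.init(project_name='examples', task_name='model-graph')
model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))

# draw_graph returns a ComputationGraph; .visual_graph is a graphviz Digraph
graph = draw_graph(model, input_size=(1, 8))
png_path = graph.visual_graph.render(filename='model_graph', format='png')

# send the rendered architecture image to the ClearML logger
task.get_logger().report_image(title='model', series='graph', local_path=png_path)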
I see, thanks for the answer, I will read that reference.
Hi SmugDolphin23, I have tried 1.8.4rc1, and yeah, it's working! Thanks!
alright, will try, thanks!
# downloading data from s3
manager = StorageManager()
target_folder = manager.download_folder(
    local_folder='/tmp',
    remote_url=f' '
)

# upload to clearml
dataset = Dataset.create(
    dataset_project=metadata[2],
    dataset_name=metadata[3],
    dataset_tags=tags,
    output_uri=" "
)
fp_target_folder = os.path.join(target_folder, minio_s3_url)
print('>>...
from src.net import Classifier
ModuleNotFoundError: No module named 'src'
hmm yeah, I think it's not possible to share the whole script here.
my config is the same as in issue #763
import clearml
from clearml import StorageManager, Dataset
from rich import print

version_clearml = clearml.__version__
manager = StorageManager()
print(f'clearml: {version_clearml}')
try:
    minio_s3_url = 'x/x/x/x/x/x/x'
    print('\n-------------download folder-------------')
    target_folder = manager.download_folder(
        local_folder='tmp',
        remote_url=f' '
    )
except Exception as e:
    print(e)
    print('FAILED: download fold...
maybe I accidentally installed my custom solution from https://github.com/muhammadAgfian96/clearml/commit/01db9aa40537a6c2f83977220423556a48614c3a at that time, so I said the test passed.
hi, I have a similar case, but can we schedule a new task here?
def trigger_task_func(task_id):
    print("trigger running...")
    try:
        previous_task = Task.get_task(task_id=task_id)
        print(previous_task.artifacts)
        try:
            fp = previous_task.artifacts['latest_condition'].get_local_copy()
            params = open_json(fp)
            last_index = params.get('last_index')
            day_n = params.get('iteration')
            print("Success Fetching", param...
yes, as far as I know, if we want to upload a dataset to ClearML, we need to provide a local_path to the data, and then ClearML uploads it to the platform.
my data is not local, but in an s3 bucket.
is there a way to point to an s3 url? my current workflow is to download my data from the s3 bucket to local, then upload it to ClearML.
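something like this would be ideal (just a sketch; Dataset.add_external_files seems to register remote links without downloading, and the project/name/bucket path are placeholders):
from clearml import Dataset

# register s3 links in a dataset without downloading them first
dataset = Dataset.create(dataset_project='examples', dataset_name='s3-links')
dataset.add_external_files(source_url='s3://my-bucket/data/')
dataset.upload()    # only the dataset state is uploaded, files stay on s3
dataset.finalize()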
I see,
thanks for clarifying. I just want to find other solutions for storing secret values. Right now I store secret values as env vars in clearml.conf on my workers, but it gets complicated when there is a new value: I need to update the workers' conf and redeploy the workers.
yup, correct. But the scheduler is not created, I don't know why. Here are my code and the log
from clearml.automation import TriggerScheduler, TaskScheduler
from clearml import Task
import json

def open_json(fp):
    # load a json file into a dict
    with open(fp, 'r') as f:
        my_dictionary = json.load(f)
    return my_dictionary

def trigger_task_func(task_id):
    print("trigger running...")
    try:
        previous_task = Task.get_task(task_id=task_id)
        print(previous_task.artifact...
do I still need to do this? dataset.upload() and dataset.finalize()
I have another question:
if we have already uploaded data to ClearML, how do we add data to it?
this is my way right now:
dataset = Dataset.create(
    dataset_project=metadata[2],
    dataset_name=metadata[3],
    description=description,
    output_uri=f" ",
    parent_datasets=[id_dataset_latest]
)
Hi @<1523701205467926528:profile|AgitatedDove14>,
Yes, I want to do that, but as far as I know Task.enqueue will execute immediately. I need to execute the task at a specific time, and I see that to do that I need a scheduler: set recurring to False and set the time.
I tried to create that scheduler, but the scheduler is not created when the function executes.
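what I'm trying to create is roughly this (a sketch; the queue names and task id are placeholders):
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
scheduler.add_task(
    schedule_task_id='<task-id>',  # placeholder: the task to clone and run
    queue='default',
    hour=9, minute=0,              # run at 09:00
    recurring=False,               # one-shot, not a recurring schedule
)
# the scheduler itself must keep running, e.g. on the services queue
scheduler.start_remotely(queue='services')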
Hi @<1523701087100473344:profile|SuccessfulKoala55> , Thanks for your response.
I'm not entirely sure about the use of CLEARML_ENV
since I haven't interacted with it before. Could you guide me on what I should set as its value?
Previously, the system was running smoothly. However, I've run into some issues after making certain configuration changes to modify the server permissions. Specifically, I'm curious if these changes might have influenced the agent's permission to access certain...
Hi @<1523701070390366208:profile|CostlyOstrich36>
I mean, can we have a form dropdown for other configurations, like hyperparameters (task.connect)?
oh okay, so I need to set that to the SSD path, yeah?
is it this one? or is there another one?
docker_internal_mounts {
    sdk_cache: "/clearml_agent_cache"
    apt_cache: "path/to/ssd/apt-cache"
    ssh_folder: "/root/.ssh"
    pip_cache: "path/to/ssd/clearml-cache/pip"
    poetry_cache: "/mnt/hdd_2/clearml-cache/pypoetry"
    vcs_cache: "path/to/ssd/clearml-cache/vcs-cache"
    venv_build: "path/to/ssd/clearml-cache/venvs-builds"
    pip_download: "path/to/ssd/cle...
wow, okay, I think I will move all logs/plots/artifacts to my s3 storage. Thanks! Really helpful!
Hi CostlyOstrich36,
nope, I mean my server does not have pip/conda, so I will go for docker/containers. Is it possible to install clearml-agent inside a python:3.10 container?
it seems I forgot to run clearml-agent init.
I followed this guide: https://clear.ml/docs/latest/docs/guides/ide/google_colab/
do you mean I can change files_server: from one URL to another?
I see, it's solved now using default_output_uri, thanks!
I need a custom output_uri for some functions because I split dataset and model artifacts.
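i.e. roughly this, with placeholder URIs (one destination for model artifacts, another for dataset files):
from clearml import Task, Dataset

# model artifacts go to one bucket, dataset files to another
task = Task.init(project_name='examples', task_name='train',
                 output_uri='s3://bucket/models')
dataset = Dataset.create(dataset_project='examples', dataset_name='data',
                         output_uri='s3://bucket/datasets')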
I see, yeah, my alternative solution right now is just to show the list of options outside the ClearML UI.
yup,
for example, just choosing between SGD, Adam, and AdamW in an optimizer field
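what I do today is roughly this (a sketch; connected values show up in the UI as plain editable fields, not a dropdown):
from clearml import Task

task = Task.init(project_name='examples', task_name='optimizer-choice')
# 'SGD' is just the default value; the UI shows it as a free-text field
params = task.connect({'optimizer': 'SGD'})  # intended choices: SGD / Adam / AdamW
print(params['optimizer'])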
alright, will try. I'm just worried about when the execution mode is docker mode: should I mount /var/run/docker.sock?
Hi @<1523701087100473344:profile|SuccessfulKoala55> ,
We have successfully created a sample for the migration. Here are the changes:
- URLs for MongoDB changed from s3:// to azure://
- Elasticsearch migrated as you suggested
However, our main focus is that most of our production fetches models from ClearML, and these are configured with s3:// URLs.
There is an issue/bug in the UI when downloading via Azure. Here are the details: None .
Thanks for the response.
from clearml import Task
from clearml.automation import TaskScheduler
from datetime import timedelta, datetime

def my_task():
    task = Task.init(...)
    # do something
    print("do something")
    # sleep 10
    condition = True
    if condition:
        # i want to trigger a run of another task by
        # setting some config in the task, but execute it tomorrow/sometime,
        # not directly at that time.
        # here i use
        task_id = task.id
        task.cl...