
I create the draft in the picture above by calling pipe.create_draft(). But this does not start the execution of the pipeline; it immediately puts it into draft mode (see the sketch below).
Also, for some reason I don't have the ability to copy pipelines. Tell me, is this normal?
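For reference, a minimal sketch of what I'm doing (the project and step names here are placeholders, not my real ones):

from clearml import PipelineController

pipe = PipelineController(name='my_pipeline', project='FieldNet', version='1.0.0')
pipe.add_step(name='train', base_task_project='FieldNet', base_task_name='train_task')

# create_draft() only registers the pipeline as a draft task on the server;
# it does not enqueue or execute anything (pipe.start() would actually run it)
pipe.create_draft()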
CostlyOstrich36 Oh, ok) Thanks!
Maybe you know: is data tagging for viewing summary statistics available in the free version?
@<1523701070390366208:profile|CostlyOstrich36> Yes, sure
import pandas as pd
import yaml
import os
from omegaconf import OmegaConf
from clearml import Dataset

config_path = 'configs/structured_docs.yml'
with open(config_path) as f:
    config = yaml.full_load(f)
config = OmegaConf.create(config)
path2images = config.data.images_folder

def get_data(config, split):
    path2annotation = os.path.join(config.data.annotation_folder, f"sample_{split}.csv")
    data = pd.read_csv(path2an...
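(The truncated part just reads the split CSV; for completeness, a hedged sketch of how I fetch the actual files with the Dataset import above — the config keys are placeholders from my setup:)

# hedged sketch: pull a local, read-only cached copy of the dataset files
dataset = Dataset.get(
    dataset_name=config.data.dataset_name,        # placeholder config key
    dataset_project=config.data.dataset_project,  # placeholder config key
)
local_images = dataset.get_local_copy()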
@<1523701070390366208:profile|CostlyOstrich36>
Yes, it works)
Thank you so much
@<1523701070390366208:profile|CostlyOstrich36> I did it, but I think it's not optimal))
This is how I got information about the FieldNet project:
from clearml import Task, Dataset

all_tasks = Task.get_tasks()
FieldNet_tasks = {}
for task in all_tasks:
    name = task.name
    task_id = task.task_id
    if 'FieldNet' in name:
        if name in FieldNet_tasks:
            # get last dataset version
            task_old_version = Task.get_task(task_id=FieldNet_tasks[name]).get_parameters_as_di...
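(A hedged aside: Task.get_tasks can also filter on the server side, so the loop above could probably avoid scanning every task — task_name acts as a name filter:)

from clearml import Task

# let the server return only tasks whose name matches 'FieldNet'
fieldnet_tasks = Task.get_tasks(task_name='FieldNet')
for task in fieldnet_tasks:
    print(task.name, task.id)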
@<1523701070390366208:profile|CostlyOstrich36>
when I start a new run I can't change the initial parameters
Problem solved:
- removed the limits and the download lifetime everywhere
- increased the limits for the file server
@<1523701070390366208:profile|CostlyOstrich36>
It's strange: during the first remote start I could set the options, but now I can't again. What could this be related to?
@<1523701435869433856:profile|SmugDolphin23>
I rechecked on single files, creating new datasets, and everything works properly. I tried to create a dataset using the original data, and I got the following logs. Could you suggest what could be causing this?
Uploading dataset changes (1497 files compressed to 9.07 MiB) to None
2023-05-12 08:46:03,114 - clearml.storage - ERROR - Exception encountered while uploading Failed uploading object /addudkin2/.dataset...
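(To narrow it down, a hedged sketch of what I can try next: pointing the upload at different storage instead of the default file server — all the names and paths below are placeholders:)

from clearml import Dataset

# hedged sketch: redirect the dataset upload away from the file server
dataset = Dataset.create(dataset_name='my dataset', dataset_project='my project')
dataset.add_files(path='data/')  # placeholder local path
dataset.upload(output_url='s3://my-bucket/clearml-datasets', show_progress=True)
dataset.finalize()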
@<1593051292383580160:profile|SoreSparrow36> @<1578555761724755968:profile|GrievingKoala83>
@<1523701435869433856:profile|SmugDolphin23>
I found that if you go into the details of the pipeline, you can copy it manually and it will go into edit mode, where you can change the parameters manually
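(The programmatic equivalent, as far as I can tell — a hedged sketch where the task id, parameter key, and queue name are placeholders:)

from clearml import Task

# clone the completed pipeline controller task; the clone starts as a draft
template = Task.get_task(task_id='<pipeline_task_id>')  # placeholder id
cloned = Task.clone(source_task=template, name='pipeline rerun')

# edit parameters on the draft, then enqueue it
cloned.set_parameters({'Args/epochs': 20})  # placeholder parameter
Task.enqueue(cloned, queue_name='default')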
CostlyOstrich36 Thanks for the answer!
I use clearml-agent daemon --queue default --cpu-only
@<1523701070390366208:profile|CostlyOstrich36> Yes, we deployed ClearML on our own infrastructure
@<1523701070390366208:profile|CostlyOstrich36> Then when I try to get the dataset I get the following error
Failed getting object size: RetryError('HTTPSConnectionPool(host='files.clearml.dbrain.io', port=443): Max retries exceeded with url: /Labeled%20datasets/.datasets/printed%20multilang%20crops/printed%20multilang%20crops.1e76fd4ad77f4d2790e4acf1c8241c59/artifacts/state/state.json (Caused by ResponseError('too many 503 error responses'))')
Could not download
, err: H...
after running I can no longer set new params
Why does it match with GPU 0 when I only have a CPU?
@<1523701205467926528:profile|AgitatedDove14> The bash script downloads the necessary resources from AWS and sets the environment variable
aws s3 cp ..... --recursive
export PYTHONPATH=" "
All the commands could be added to the generated Docker image, but then you would have to change the project structure
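(A hedged sketch of what I'd like this to look like on the task itself, assuming docker mode — the image, bucket, and paths are placeholders, and I'm not certain docker_setup_bash_script is the right hook:)

from clearml import Task

task = Task.init(project_name='FieldNet', task_name='train')  # placeholder names
# hedged: ask the agent to run extra shell lines inside the container
# before it sets up the python environment
task.set_base_docker(
    docker_image='python:3.9',  # placeholder image
    docker_setup_bash_script=[
        'aws s3 cp s3://my-bucket/resources /opt/resources --recursive',  # placeholder path
        'export PYTHONPATH="/opt/resources"',
    ],
)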
@<1578193574506270720:profile|DashingAlligator28> Removed nginx limits
@<1523701070390366208:profile|CostlyOstrich36> Yes
Does ClearML only have the ability to integrate with AWS?
@<1523701070390366208:profile|CostlyOstrich36> A simple run with the options I changed in the second run
@<1523701070390366208:profile|CostlyOstrich36> Yes, I changed the name in manual mode, where this option is provided, but the name of the block did not change
@<1523701205467926528:profile|AgitatedDove14> Thanks a lot. I meant running a bash script after cloning the repository and setting the environment
@<1578555761724755968:profile|GrievingKoala83> Did you solve this problem?
SmugDolphin23 It works) Thank you so much)
Can you explain this type of Error?
@<1523701070390366208:profile|CostlyOstrich36> While the pipeline is pending I can set up a new run, but after it completes I can't
@<1523701435869433856:profile|SmugDolphin23>
Yes, I see
Thank you for your response @<1523701205467926528:profile|AgitatedDove14> . I will definitely try the solutions you described above. Could you please advise if it is possible to execute the "bash.sh" script directly before the environment setup stages for reproducing the experiment? The repository setup involves downloading resources from AWS. While creating a container that incorporates my requirements would help solve this problem, I am interested in finding a more flexible approach.
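(What I have as a fallback for now is a hedged sketch like the one below: calling bash.sh from the very top of the entry script. I understand this runs after the python environment is installed, not before, which is why I'm asking about an earlier hook — the script path is a placeholder:)

import subprocess

# run the setup script before anything else in the experiment code;
# note this happens after the agent has already installed the environment
subprocess.run(['bash', 'bash.sh'], check=True)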
@<1578555761724755968:profile|GrievingKoala83> I have the same problem with the table and the detailed view