
Thanks. Not yet, but will watch, by all means.
Okay. I see, I didn't clearly understand the structure and logic behind ClearML. I thought that an external git repository had to be set up to keep logs, stats, etc. So all of these are kept on the ClearML host, correct? However, if I want to keep logs in an outer repo, is it possible to configure ClearML to keep all these files there?
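As a side note on where files can be redirected: logs and scalars live on the ClearML server, but artifact/model uploads can be pointed at external storage via the SDK configuration. A hedged `clearml.conf` sketch (the bucket path below is hypothetical; adjust to your storage):

```
# clearml.conf (sketch) — only artifacts/models are uploaded to this URI;
# console logs and scalars still live on the ClearML server itself
sdk {
    development {
        # e.g. "s3://my-bucket/clearml" or a shared folder path (hypothetical)
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```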
AgitatedDove14
Ok, will check this tomorrow. Thank you for your help!
After a few commit-push-pull cycles I got no diff issue on Windows. But I just hit some weird behavior: when stages run in a pipeline, they do not create new files; instead, they cannot run if the files they are supposed to produce were not committed. I don't really understand the logic of this. To be exact:
I have 3 stages, each implemented as a separate script: 1) converting annotations into COCO test.json and train.json files 2) converting ...
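To make stage 1 concrete, here is a minimal stdlib sketch of writing COCO-style train/test annotation files. The top-level field names (`images`, `annotations`, `categories`) follow the COCO format; the shape of the input `records` list is a simplifying assumption made up for the demo:

```python
import json
from pathlib import Path

def write_coco(records, out_path):
    # Build a minimal COCO-style structure from a flat record list.
    # The `records` layout here is hypothetical; only the top-level
    # COCO keys are standard.
    coco = {
        "images": [{"id": i, "file_name": r["file"]} for i, r in enumerate(records)],
        "annotations": [
            {"id": i, "image_id": i, "category_id": r["category"], "bbox": r["bbox"]}
            for i, r in enumerate(records)
        ],
        "categories": [{"id": 0, "name": "object"}],
    }
    Path(out_path).write_text(json.dumps(coco, indent=2))

# Toy split into the two files mentioned above.
records = [
    {"file": "img0.jpg", "category": 0, "bbox": [0, 0, 10, 10]},
    {"file": "img1.jpg", "category": 0, "bbox": [5, 5, 20, 20]},
]
write_coco(records[:1], "train.json")
write_coco(records[1:], "test.json")
```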
Yes, this works, thank you!
AgitatedDove14
For the classification example (clml_cl_toy), script A is image_augmentation.py, which creates augmented images; script B is train_1st_nn.py (or train_2nd_nn.py, which does the same), which trains an ANN on the augmented images. For the object detection example, script A is represented by two scripts: annotation_conversion_test.py, which creates the file test.json, and annotation_conversion_train.py, which creates the file train.json. These files are use...
AgitatedDove14 Ok, I'll try to do this with clearml-data. However, I've found that I don't understand the logic of where newly generated data (produced by the pipeline) is placed. I think it's a major issue with my code. Also, I should understand this before using clearml-data as well.
Say, script_a.py generates the file test.json in the project folder, and script_b.py should use this file for further processing. When I run them script-by-script, test.json is generated and used OK. However, when I run...
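A likely cause (an assumption on my part, not confirmed in the thread): when an agent executes a pipeline step it works from its own clone of the repo, so a relative path like `test.json` resolves differently than when the script is run from the project folder. A minimal stdlib sketch of making the hand-off between the two scripts explicit (file names taken from the thread; the shared output directory is hypothetical):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical shared output directory, agreed on by both stages instead of
# relying on whatever the current working directory happens to be.
OUT_DIR = Path(tempfile.mkdtemp())

def script_a(out_dir: Path) -> Path:
    """Stage 1: generate test.json (file name from the thread)."""
    path = out_dir / "test.json"
    path.write_text(json.dumps({"images": [], "annotations": []}))
    return path

def script_b(in_path: Path) -> dict:
    """Stage 2: consume the file produced by stage 1."""
    return json.loads(in_path.read_text())

produced = script_a(OUT_DIR)
data = script_b(produced)
```

Passing the path explicitly (or registering the file as an artifact) removes the dependency on where the executing process happens to be started.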
AgitatedDove14 Setting system_site_packages to true on Linux gives the same error ( ERROR: Could not find a version that satisfies the requirement object_detection==0.1 (from -r /tmp/cached-reqsjhs2q2gm.txt (line 7)) (from versions: 0.0.3) ).
AgitatedDove14 I've set system_site_packages: true. Almost succeeded. The current pipeline has the following stages: 1) convert annotations from labelme into COCO format; 2) convert annotations in COCO format and the corresponding images to tfrecords; 3) run MASK RCNN training. The process previously failed on the second stage. After setting system_site_packages: true, the pipeline starts the third stage, but fails with some git issue:
` diff --git a/work/tfrecord/test.record b/work/t...
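For reference, the setting discussed here lives in the agent section of `clearml.conf` on the machine running the agent (a sketch of just the relevant keys):

```
# clearml.conf on the agent machine — let the venvs the agent builds
# see the host's installed packages instead of reinstalling everything
agent {
    package_manager {
        system_site_packages: true
    }
}
```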
AgitatedDove14
No, I meant a different thing. It's not easy to explain, sorry. Let me try. Say I have a project in the folder "d:\object_detection". There I have a script which converts annotations from labelme format to COCO format. This script's name is convert_test.py and it runs a process registered under the same name in ClearML. This script, when run separately from the command prompt, creates a new file in the project folder: test.json . I delete this file, sync the local and remote repos, both...
AgitatedDove14 The fact is that I use docker for running the clearml server both on Linux and Windows. When I run tasks one-by-one from the command line, they run OK; but in this case, clearml doesn't create a venv and runs the tasks in the host environment. When I start tasks in a pipeline, clearml creates a venv for executing the tasks, and there the issue arises.
AgitatedDove14 Great, thanks! Wow, guys, your response, besides being helpful, is so fast; I'm not used to this! 🙂
AgitatedDove14 In the "Results -> Console" tab of the UI, I see that the issue with running object detection on Linux is the following: ERROR: Could not find a version that satisfies the requirement object_detection==0.1 (from -r /tmp/cached-reqsypv09bhw.txt (line 7)) (from versions: 0.0.3)
Is it possible to comment out the line object_detection==0.1 ? Actually, no such version of this or a similar library exists. I guess this requirement is not necessary. Can I turn off the installati...
AgitatedDove14
Linux: resetting the task in the UI and removing object_detection from the list of libraries to be installed for stage 2 (generating tfrecords) and stage 3 (training the NN) helped to pass stage 2 and start stage 3, where training crashed; it seems the system cannot import some files from the 'object_detection' folder.
Windows: I cannot store generated files as configuration on the Task; there are several files to be generated and some may be pretty large, up to a few gigs. Looks like the...
AgitatedDove14 Does it make any sense to change system_site_packages to true if I run ClearML using Docker?
JitteryCoyote63 Is there an example of how the learning pipeline can be triggered (started) by changes in a dataset?
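One possible answer, sketched from memory and not confirmed in this thread: newer clearml releases ship a `TriggerScheduler` in the automation module that can launch a task when a dataset changes. Treat the argument names below as assumptions and verify them against the docs for your clearml version; this also requires a running ClearML server and agent:

```python
# Sketch only — requires the clearml package, a ClearML server, and an agent
# listening on the target queue; names below are best-effort from memory.
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=5)
trigger.add_dataset_trigger(
    name="retrain-on-new-data",             # hypothetical trigger name
    trigger_project="my_datasets",          # hypothetical dataset project
    schedule_task_id="<template-task-id>",  # placeholder: task to clone and enqueue
    schedule_queue="default",
)
trigger.start()  # blocks and polls for new dataset versions
```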
AgitatedDove14
No, I do not use the --docker flag for the clearml agent. In Windows, setting system_site_packages to true allowed all stages in the pipeline to start, but this doesn't work in Linux. I've deleted the tfrecords from the master branch, committed the removal, and set the tfrecords folder to be ignored in .gitignore. Trying to find out which changes are considered to be uncommitted. By cache files I mean the files in the folder C:\Users\Super.clearml\vcs-cache ; based on the error message, cle...
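For reference, the ignore rule described here can be a single `.gitignore` line (folder name taken from the thread):

```
# .gitignore — keep generated tfrecords out of version control
# so the agent's uncommitted-changes check stays clean
tfrecord/
```

One caveat worth knowing: git does not track empty directories, so rather than keeping an empty `tfrecord/` folder in the remote repo, it may be simpler for the generating script to create it at runtime (e.g. `os.makedirs("tfrecord", exist_ok=True)` in Python).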
TimelyPenguin76 Yes, that's a new file; I haven't added it to the repository yet. What I see for the original task under "uncommitted changes" is "no changes logged".
Regarding the diff issue: I just found that the empty folder 'tfrecord', in which the tfrecords should be created, doesn't exist in the gitlab origin repository. Added it there, then pulled the origin. Still having the diff issue, but I'll run a few trials to be sure there's nothing else creating the issue.
As for the "installed packages" list: to create a pipeline, I first run each stage (as a script) from cmd. After all the stages are created and can be seen in the UI, I run the pipeline. As far as I understand, clearml tra...
AgitatedDove14 Just in case, I've created toy examples of the processes I'm running: one for classification, another for object detection. Maybe it will be clearer what I'm trying to get: https://gitlab.com/kuznip/clml_cl_toy , https://gitlab.com/kuznip/clml_od_toy .
Will the record be available?
https://clearml.slack.com/archives/CTK20V944/p1610481348165400?thread_ts=1610476184.162600&cid=CTK20V944
Indeed, that was a cookie issue. After deleting cookies, everything works fine. Thanks. Interestingly enough, I had this issue both in Chrome and FF.
AgitatedDove14 How can the first process corrupt the second, and why doesn't this occur when I run the pipeline from the command line? Just to be precise: I run all the processes as administrator. However, I've tested running the pipeline from the command line in non-administrator mode, and it works fine.
AgitatedDove14 It works!!! Thanks a lot!