
JitteryCoyote63 Is there an example of how the learning pipeline can be triggered (started) by changes in a dataset?
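For reference, a minimal sketch of dataset-triggered execution, assuming clearml's TriggerScheduler is available in the installed SDK version; the project name, task id, and queue below are placeholders:

```python
from clearml.automation import TriggerScheduler

# poll the backend every couple of minutes for new events
trigger = TriggerScheduler(pooling_frequency_minutes=2.0)

# when a new dataset version appears in the given project,
# clone the referenced task (e.g. a pipeline controller) and enqueue it
trigger.add_dataset_trigger(
    name='retrain-on-new-data',
    trigger_project='datasets/my_project',      # placeholder
    schedule_task_id='<pipeline_controller_id>',  # placeholder
    schedule_queue='services',
)

trigger.start()  # blocking loop; usually run as a service task
```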
AgitatedDove14 Great, thanks! Wow, guys, your responses, besides being helpful, come so fast - I'm not used to this! 🙂
TimelyPenguin76 Yes, that's a new file - I haven't added it to the repository yet. What I see for the original task under "uncommitted changes" is "no changes logged".
Okay. I see, I didn't clearly understand the structure and logic behind ClearML. I thought that an external git repository should be set up to keep logs, stats, etc. So, all these are kept on the ClearML host, correct? However, if I want to keep logs on an external repo, is it possible to configure ClearML to keep all these files there?
AgitatedDove14 No, I do not use the --docker flag for clearml-agent. On Windows, setting system_site_packages to true allowed all stages in the pipeline to start - but it doesn't work on Linux. I've deleted the tfrecords from the master branch and committed the removal, and set the tfrecords folder to be ignored in .gitignore. Trying to find out which changes are considered uncommitted. By cache files I mean the files in the folder C:\Users\Super\.clearml\vcs-cache - based on the error message, cle...
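For context, the setting discussed here lives in the agent section of clearml.conf; a minimal sketch (key names per the standard agent config, value illustrative):

```
# clearml.conf (agent side) - sketch, not a full config
agent {
    package_manager {
        # let the per-task virtualenv see packages already installed
        # in the system interpreter (e.g. a preinstalled object_detection)
        system_site_packages: true,
    }
}
```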
AgitatedDove14 Linux: resetting the task in the UI and removing object_detection from the list of libraries to be installed for stage 2 (generating tfrecords) and for stage 3 (training the NN) helped to pass stage 2 and start stage 3, where training crashed - it seems the system cannot import some files from the 'object_detection' folder.
Windows: I cannot store generated files as configuration on the Task - there are several files to be generated and some may be pretty large, up to a few gigs. Looks like the...
AgitatedDove14 For the classification example (clml_cl_toy), script A is image_augmentation.py, which creates augmented images; script B is train_1st_nn.py (or train_2nd_nn.py, which does the same), which trains an ANN based on the augmented images. For the object detection example, script A is represented by two scripts - annotation_conversion_test.py, which creates the file test.json, and annotation_conversion_train.py, which creates the file train.json. These files are use...
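One way to pass such generated JSON files from script A to script B without relying on the git working tree is to register them as task artifacts; a hedged sketch assuming the standard clearml artifact API, with project/task names made up for illustration:

```python
from clearml import Task

# script A: register the generated annotation file as an artifact
task = Task.init(project_name='object_detection',
                 task_name='annotation_conversion_test')
# ... generate test.json ...
task.upload_artifact(name='test_annotations', artifact_object='test.json')

# script B: fetch the artifact from the producing task
producer = Task.get_task(project_name='object_detection',
                         task_name='annotation_conversion_test')
local_json = producer.artifacts['test_annotations'].get_local_copy()
```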
AgitatedDove14 Yes, that's what I have - for me it's weird, too.
AgitatedDove14 Yes, the difference in installed packages is large - the training stage which runs OK has all of the following:
Regarding the diff issue - I just found that the empty folder 'tfrecord', in which the tfrecords should be created, didn't exist on the gitlab origin repository. Added it there, then pulled the origin. Still having the diff issue, but I'll run a few trials to be sure there's nothing else causing it.
As for the "installed packages" list: to create a pipeline, I first run each stage (as a script) from cmd. After all the stages are created and can be seen in the UI, I run the pipeline. As far as I understand, clearml tra...
astunparse==1.6.3
attrs==20.3.0
botocore==1.19.63
cachetools==4.2.1
certifi==2020.12.5
chardet==4.0.0
cycler==0.10.0
Cython==0.29.21
furl==2.1.0
future==0.18.2
gast==0.3.3
google-auth==1.25.0
google-auth-oauthlib==0.4.2
google-pasta==0.2.0
grpcio==1.35.0
h5py==2.10.0
humanfriendly==9.1
idna==2.10
importlib-metadata==3.4.0
jmespath==0.10.0
jsonschema==3.2.0
Keras-Preprocessing==1.1.2
kiwisolver==1.3.1
Markdown==3.3.3
oauthlib==3.1.0
opt-einsum==3.3.0
orderedmultidict==1.0.1
pathlib2==2.3.5
pat...
AgitatedDove14 Ok, will check this tomorrow. Thank you for your help!
After a few commit-push-pulls I got no diff issue on Windows. But I just ran into weird behavior - when stages run in a pipeline, they do not create new files; instead, they cannot run if the files they produce were not committed. I do not really understand the logic of this. To be exact: I have 3 stages, each implemented as a separate script: 1) converting annotations into COCO test.json and train.json files 2) converting ...
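For reference, a minimal sketch of chaining three such stage tasks with clearml's PipelineController; the project, task, and step names are placeholders standing in for the actual stage tasks:

```python
from clearml import PipelineController

# controller that chains the three existing stage tasks
pipe = PipelineController(name='detection-pipeline',
                          project='object_detection', version='1.0')

pipe.add_step(name='convert_annotations',
              base_task_project='object_detection',
              base_task_name='annotation_conversion')
pipe.add_step(name='make_tfrecords', parents=['convert_annotations'],
              base_task_project='object_detection',
              base_task_name='generate_tfrecords')
pipe.add_step(name='train', parents=['make_tfrecords'],
              base_task_project='object_detection',
              base_task_name='train_mask_rcnn')

pipe.start(queue='services')  # the steps run in their own execution queue
```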
AgitatedDove14 Looks like that. First, I've created a toy task running in the "services" queue (you didn't say that, but I guess you assumed it). I haven't found how to specify the queue to run in code ( Task.equeue(task, queue_name='services') returned an error), so I ran toy.py first in the "default" queue, aborted toy.py, and started nntraining in the "default" queue. Then I reset toy.py and enqueued it to the "services" queue. Toy.py failed shortly after. I've also reset both toy.py and nntraining and enqueue...
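For reference, a minimal sketch of enqueuing an existing task to a named queue from code - the SDK classmethod is Task.enqueue (note the spelling); the task id is a placeholder:

```python
from clearml import Task

task = Task.get_task(task_id='<toy_task_id>')  # placeholder id
Task.enqueue(task, queue_name='services')      # classmethod; also accepts a task id
```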
Exactly! To be more specific - the same base_task_id fails if the pipeline is cloned and started from the UI. I've checked the queues for the failed and completed tasks - they are the same (default, gpu-all).
AgitatedDove14 No, I meant a different thing. It's not easy to explain, sorry. Let me try. Say I have a project in the folder "d:\object_detection". There I have a script which converts annotations from labelme format to COCO format. This script's name is convert_test.py and it runs as a process registered under the same name in ClearML. This script, when run separately from the command prompt, creates a new file in the project folder - test.json. I delete this file, sync the local and remote repos, both...
AgitatedDove14 I've set system_site_packages: true. Almost succeeded. The current pipeline has the following stages: 1) convert annotations from labelme into COCO format 2) convert annotations in COCO format and the corresponding images to tfrecords 3) run training of Mask R-CNN. The process previously failed on the second stage. After setting system_site_packages: true, the pipeline starts the third stage, but fails with some git issue:
` diff --git a/work/tfrecord/test.record b/work/t...
AgitatedDove14 The fact is that I use docker for running the clearml server both on Linux and Windows. When I run tasks one-by-one from the command line, they run OK - but in this case, clearml doesn't create a venv and runs the tasks in the host environment. When I start tasks in a pipeline, clearml creates a venv for executing the tasks - and there the issue arises.
AgitatedDove14 Regarding Windows - pyqt5 is installed. That's the result of pip freeze : PyQt5==5.15.2 pyqt5-plugins==5.15.2.2.1.0 PyQt5-sip==12.8.1 pyqt5-tools==5.15.2.3.0.2
Following your link, I've used the last piece of advice and ran pip install PySide2 - I have Python 3.7.7. That didn't help, the issue is the same.
Regarding Linux, I've tried to install object_detection==0.1, but the requirement was already satisfied. I need to note that the whole project is placed i...
AgitatedDove14 In the "Results -> Console" tab of the UI, I see that the issue with running object detection on Linux is the following: ERROR: Could not find a version that satisfies the requirement object_detection==0.1 (from -r /tmp/cached-reqsypv09bhw.txt (line 7)) (from versions: 0.0.3)
Is it possible to comment out the line object_detection==0.1 ? Actually, no such version of this or a similar library exists. I guess that this requirement is not necessary. Can I turn off the installati...
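If the package really isn't needed, one option is to exclude it from the auto-generated requirements before the task is created; a sketch, assuming clearml's Task.ignore_requirements classmethod (the package list can also be edited by hand in the task's "Installed Packages" in the UI):

```python
from clearml import Task

# must be called before Task.init so the auto-generated
# requirements list skips this package
Task.ignore_requirements('object_detection')

task = Task.init(project_name='object_detection',   # placeholder names
                 task_name='generate_tfrecords')
```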
AgitatedDove14 Does it make any sense to change system_site_packages to true if I run clearml using Docker?
Yes, this works, thank you!
No, I have only two agents pulling from different queues:
AgitatedDove14 Ok, I'll try to do this with clearml-data. However, I've found that I don't understand the logic of where newly generated data (produced by the pipeline) is placed. I think it's a major issue with my code. And I should also understand this in order to use clearml-data.
Say script_a.py generates the file test.json in the project folder. script_b.py should use this file for further processing. When I run script-by-script, test.json is generated and used OK. However, when I run...
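For reference, a minimal sketch of handing the file over via clearml-data's Dataset API instead of the working directory; dataset project and name are placeholders:

```python
from clearml import Dataset

# script_a.py: register the generated file as a new dataset version
ds = Dataset.create(dataset_project='object_detection',  # placeholder
                    dataset_name='annotations')
ds.add_files('test.json')
ds.upload()
ds.finalize()

# script_b.py: fetch the latest version into a local cache folder
ds = Dataset.get(dataset_project='object_detection',
                 dataset_name='annotations')
local_dir = ds.get_local_copy()  # read test.json from local_dir
```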
AgitatedDove14 I set system_site_packages to true for Linux - getting the same error ( ERROR: Could not find a version that satisfies the requirement object_detection==0.1 (from -r /tmp/cached-reqsjhs2q2gm.txt (line 7)) (from versions: 0.0.3) ).