AgitatedDove14 Great, thanks! Wow, guys, your responses are not only helpful but incredibly fast. I'm not used to this! 🙂
AgitatedDove14 In the "Results -> Console" tab of the UI, I see that the issue with running object detection on Linux is the following: ERROR: Could not find a version that satisfies the requirement object_detection==0.1 (from -r /tmp/cached-reqsypv09bhw.txt (line 7)) (from versions: 0.0.3)
Is it possible to comment out the line object_detection==0.1? Actually, no such version of this or any similar library exists. I guess this requirement is not necessary. Can I turn off the installati...
AgitatedDove14 The fact is that I use docker for running the clearml server both on Linux and Windows. When I run tasks one-by-one from the command line, they run OK, but in this case clearml doesn't create a venv and runs the tasks in the host environment. When I start tasks in a pipeline, clearml creates a venv for executing the tasks, and that is where the issue arises.
AgitatedDove14 Ok, I'll try to do this with clearml-data. However, I've found that I don't understand the logic of where newly generated data (produced by the pipeline) are placed. I think it's a major issue with my code. Also, I need to understand this in order to use clearml-data as well.
Say, script_a.py generates the file test.json in the project folder. script_b.py should use this file for further processing. When I run them script-by-script, test.json is generated and used OK. However, when I run...
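For illustration, a minimal sketch of how clearml-data could hand test.json from script_a.py to script_b.py instead of relying on the project folder; the dataset project/name are placeholders, not from the actual repos:

```python
# Sketch: passing test.json between pipeline steps via clearml-data (Dataset API).
# Project/dataset names are placeholders.
from clearml import Dataset

# --- in script_a.py, after test.json has been generated ---
ds = Dataset.create(dataset_project="clml_od_toy", dataset_name="annotations")
ds.add_files("test.json")   # register the generated file
ds.upload()                 # push it to the ClearML server / configured storage
ds.finalize()               # freeze this dataset version

# --- in script_b.py ---
local_dir = Dataset.get(
    dataset_project="clml_od_toy", dataset_name="annotations"
).get_local_copy()          # cached, read-only local copy
test_json_path = f"{local_dir}/test.json"
```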
AgitatedDove14 git diff gives nothing; the current local repository is up-to-date with the gitlab origin.
Yes, that is the git repository cache, you are correct. I wonder what happened there?
So far my local and remote gitlab repositories are synchronized, so I suspect that the Failed applying git diff, see diff above error is caused by the cached repository from which clearml tries to run the process. I've cleaned the cache, but it hasn't helped.
The installed packages list is fully editab...
AgitatedDove14
For the classification example (clml_cl_toy), script A is image_augmentation.py, which creates augmented images; script B is train_1st_nn.py (or train_2nd_nn.py, which does the same), which trains an ANN on the augmented images. For the object detection example, script A is represented by two scripts: annotation_conversion_test.py, which creates the file test.json, and annotation_conversion_train.py, which creates the file train.json. These files are use...
AgitatedDove14 I set system_site_packages to true for Linux and am getting the same error ( ERROR: Could not find a version that satisfies the requirement object_detection==0.1 (from -r /tmp/cached-reqsjhs2q2gm.txt (line 7)) (from versions: 0.0.3) ).
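Just to be explicit about what I changed, the relevant block in the agent's clearml.conf looks roughly like this (the comment is mine):

```
agent {
    package_manager {
        # build the task venv on top of the system-wide site-packages
        system_site_packages: true,
    }
}
```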
AgitatedDove14 I've set system_site_packages: true. Almost succeeded. The current pipeline has the following stages: 1) convert annotations from labelme into coco format; 2) convert annotations in coco format and the corresponding images to tfrecords; 3) run Mask R-CNN training. The process previously failed on the second stage. After setting system_site_packages: true, the pipeline starts the third stage, but fails with some git issue:
` diff --git a/work/tfrecord/test.record b/work/t...
AgitatedDove14 Just in case, I've created toy examples of the processes I'm running, one for classification and another for object detection. Maybe this will make clearer what I'm trying to get: https://gitlab.com/kuznip/clml_cl_toy , https://gitlab.com/kuznip/clml_od_toy .
AgitatedDove14
Ok, will check this tomorrow. Thank you for your help!
After a few commit-push-pulls I got no diff issue on Windows. But I just ran into some weird behavior: when stages run in a pipeline, they do not create new files; instead, they cannot be run if the files they produce were not committed. I do not really understand the logic of this. To be exact:
I have 3 stages, each implemented as a separate script: 1) converting annotations into coco test.json and train.json files 2) converting ...
AgitatedDove14
Regarding Windows: pyqt5 is installed. This is the result of pip freeze: PyQt5==5.15.2 pyqt5-plugins==5.15.2.2.1.0 PyQt5-sip==12.8.1 pyqt5-tools==5.15.2.3.0.2
Following your link, I took the last piece of advice and ran pip install PySide2 (I have Python 3.7.7). That didn't help, the issue is the same.
Regarding Linux, I've tried to install object_detection==0.1, but the requirement was already satisfied. I should note that the whole project is placed i...
https://clearml.slack.com/archives/CTK20V944/p1610481348165400?thread_ts=1610476184.162600&cid=CTK20V944
Indeed, that was a cookie issue. After deleting cookies, everything works fine. Thanks. Interestingly enough, I had this issue both in Chrome and FF.
Regarding the diff issue: I just found that the empty folder 'tfrecord', in which the tfrecords should be created, didn't exist in the gitlab origin repository. I added it there, then pulled from origin. I'm still having the diff issue, but I'll run a few trials to be sure there's nothing else creating the issue.
As for the "installed packages" list: to create a pipeline, I first run each stage (as a script) from cmd. After all the stages are created and can be seen in the UI, I run the pipeline. As far as I understand, clearml tra...
AgitatedDove14 Yes, it's running with an agent. I've updated clearml from version 0.17.4 to 0.17.5. Sorry, I didn't note the other libraries that were automatically updated along with the new ClearML version.
However, is there any way to manipulate the packages that will be installed in the venv when running the pipeline? I've tried to run the pipeline on a Linux server (clearml v.0.17.4) and got the following issue:
` Requirement already satisfied: numpy==1.19.5 in /root/.clearml/venvs-builds...
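If it helps, one way the venv packages can apparently be steered from code is Task.add_requirements(), called before Task.init(); a rough sketch, with illustrative package names, versions, and task names:

```python
# Sketch: overriding what the agent will install in the task's venv.
# Must run before Task.init(); names and versions here are illustrative.
from clearml import Task

Task.add_requirements("tensorflow_gpu", "2.2.0")  # pin an exact version
Task.add_requirements("pycocotools")              # add a package pip freeze missed

task = Task.init(project_name="clml_od_toy", task_name="stage2_tfrecords")
```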
AgitatedDove14
Linux: resetting the task in the UI and removing object_detection from the list of libraries to be installed for stage 2 (generating tfrecords) and for stage 3 (training the nn) helped to pass stage 2 and start stage 3, where the training crashed; it seems the system cannot import some files from the 'object_detection' folder.
Windows: I cannot store the generated files as configuration on the Task; there are several files to be generated and some may be pretty large, up to a few gigs. Looks like the...
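For files that are too large for task configuration, a sketch of using task artifacts instead; project, task, and artifact names are placeholders:

```python
# Sketch: storing a large generated file as a task artifact rather than
# as task configuration. Project/task/artifact names are placeholders.
from clearml import Task

task = Task.init(project_name="clml_od_toy", task_name="stage1_convert")
# ... generate test.json ...
task.upload_artifact(name="test.json", artifact_object="test.json")

# in a downstream stage: look up the producing task and fetch the file
producer = Task.get_task(project_name="clml_od_toy", task_name="stage1_convert")
local_path = producer.artifacts["test.json"].get_local_copy()
```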
AgitatedDove14
No, I meant a different thing. It's not easy to explain, sorry. Let me try. Say, I have a project in the folder "d:\object_detection". There I have a script which converts annotations from labelme format to coco format. This script's name is convert_test.py and it runs a process registered under the same name in clearml. This script, when run separately from the command prompt, creates a new file in the project folder: test.json . I delete this file, sync the local and remote repos, both...
AgitatedDove14
No, I do not use the --docker flag for the clearml agent. On Windows, setting system_site_packages to true allowed all stages in the pipeline to start, but it doesn't work on Linux. I've deleted the tfrecords from the master branch, committed the removal, and set the folder for tfrecords to be ignored in .gitignore. I'm trying to find out which changes are considered uncommitted. By cache files I mean the files in the folder C:\Users\Super\.clearml\vcs-cache ; based on the error message, cle...
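The .gitignore entry I added is simply this (the path is taken from the diff above):

```
# keep generated tfrecords out of the repo and out of the agent's git diff
work/tfrecord/
```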
AgitatedDove14 Does it make any sense to change system_site_packages to true if I run clearml using Docker?
AgitatedDove14 Yes, the difference in installed packages is large; the training stage which runs OK has all of the following:
Thanks. Not yet, but I will watch it, by all means.
Okay. I see, I didn't clearly understand the structure and logic behind ClearML. I thought that an external git repository had to be set up to keep logs, stats, etc. So all of these are kept on the ClearML host, correct? However, if I want to keep logs in an external repository, is it possible to configure ClearML to keep all these files there?
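In case it's relevant, I understand that artifacts and models (as opposed to git-tracked code) can be redirected to external storage via output_uri; a sketch with a placeholder bucket:

```python
# Sketch: sending model checkpoints and artifacts to external storage instead
# of the ClearML host's default file server. The URI is a placeholder.
from clearml import Task

task = Task.init(
    project_name="clml_od_toy",
    task_name="train_mask_rcnn",
    output_uri="s3://my-bucket/clearml",  # gs://, azure://, or a shared folder also work
)
```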
These libraries are absent in the option which fails. The only libraries in that option (all of which are present in the correctly working option) are:
absl_py==0.9.0
boto3==1.16.6
clearml==0.17.4
joblib==0.17.0
matplotlib==3.3.1
numpy==1.18.4
scikit_learn==0.23.2
tensorflow_gpu==2.2.0
watchdog==0.10.3