Yes, this works, thank you!
https://clearml.slack.com/archives/CTK20V944/p1610481348165400?thread_ts=1610476184.162600&cid=CTK20V944
Indeed, that was a cookie issue. After deleting cookies, everything works fine. Thanks. Interestingly enough, I had this issue both on Chrome and FF.
Will the record be available?
AgitatedDove14 The fact is that I use docker for running the clearml server both on Linux and Windows. When I run tasks one-by-one from the command line, they run OK - but in this case, clearml doesn't create a venv and runs the tasks in the host environment. When I start tasks in a pipeline, clearml creates a venv for executing the tasks - that's where the issue arises.
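(For reference, a pipeline like this is usually assembled from stage tasks that were already run once, so they exist in the UI. A minimal sketch, assuming placeholder project/task names; the exact constructor arguments depend on the clearml version:)

```python
# minimal sketch - project/task names are placeholders, not from the thread
from clearml.automation import PipelineController

pipe = PipelineController(
    name="detection pipeline",   # newer clearml versions require name/project
    project="clml_od_toy",
    version="0.1",
)
pipe.set_default_execution_queue("default")

pipe.add_step(
    name="stage2_tfrecords",
    base_task_project="clml_od_toy",
    base_task_name="annotation_conversion_train",
)
pipe.add_step(
    name="stage3_training",
    parents=["stage2_tfrecords"],
    base_task_project="clml_od_toy",
    base_task_name="train_mask_rcnn",
)

# each cloned step is enqueued for an agent, which builds a fresh venv for it -
# this is why pipeline runs behave differently from running the scripts directly
pipe.start()
```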
AgitatedDove14
Linux: resetting the task in the UI and removing object_detection from the list of libraries to be installed for stage 2 (generating tfrecords) and for stage 3 (training the NN) helped to pass stage 2 and start stage 3, where training crashed - it seems the system cannot import some files from the 'object_detection' folder.
Windows: I cannot store generated files as configuration on the Task - there are several files to be generated and some may be pretty large, up to a few gigs. Looks like the...
Regarding the diff issue - I just found that the empty folder 'tfrecord', in which the tfrecords should be created, doesn't exist in the gitlab origin repository. I added it there, then pulled from origin. Still having the diff issue, but I'll run a few trials to be sure there's nothing else causing the issue.
As for "installed packages" list. To create a pipeline, I first run each stage (as a script) from cmd. After all the stages are created and can be seen in UI, I run the pipeline. So far I understand, clearml tra...
AgitatedDove14 In "Results -> Console" tab of UI, I see that the issue with running object detection on Linux is the following:ERROR: Could not find a version that satisfies the requirement object_detection==0.1 (from -r /tmp/cached-reqsypv09bhw.txt (line 7)) (from versions: 0.0.3)
Is it possible to comment out the line object_detection==0.1? Actually, no such version of this or a similar library exists. I guess this requirement is not necessary. Can I turn off the installati...
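(One way to stop the agent from installing an auto-detected line like object_detection==0.1 is to take control of the requirements list yourself before Task.init. A sketch, assuming a hand-written requirements.txt that simply omits the package; availability of this call depends on the clearml version:)

```python
# sketch: replace clearml's automatic package detection with an explicit
# requirements file that omits object_detection (file path is hypothetical)
from clearml import Task

# must be called *before* Task.init()
Task.force_requirements_env_freeze(requirements_file="requirements.txt")

task = Task.init(project_name="clml_od_toy", task_name="generate_tfrecords")
```

(Resetting the task in the UI and editing the "Installed Packages" section by hand, as mentioned above, achieves the same thing per task.)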
AgitatedDove14 I've set system_site_packages: true. Almost succeeded. The current pipeline has the following stages: 1) convert annotations from labelme into coco format; 2) convert annotations in coco format and the corresponding images to tfrecords; 3) run MASK RCNN training. The process previously failed on the second stage. After setting system_site_packages: true, the pipeline starts the third stage, but fails with some git issue:
` diff --git a/work/tfrecord/test.record b/work/t...
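(For context, the system_site_packages flag discussed here lives in the agent section of clearml.conf; a sketch of the relevant excerpt:)

```
# ~/clearml.conf (on the agent machine)
agent {
    package_manager {
        # let the task venv see packages already installed
        # on the host / inside the docker image
        system_site_packages: true
    }
}
```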
AgitatedDove14
For the classification example (clml_cl_toy), script A is image_augmentation.py, which creates augmented images; script B is train_1st_nn.py (or train_2nd_nn.py, which does the same), which trains an ANN based on the augmented images. For the object detection example, script A is represented by two scripts - annotation_conversion_test.py, which creates the file test.json, and annotation_conversion_train.py, which creates the file train.json. These files are use...
AgitatedDove14
Regarding Windows - pyqt5 is installed. That's the result of pip freeze:
PyQt5==5.15.2 pyqt5-plugins==5.15.2.2.1.0 PyQt5-sip==12.8.1 pyqt5-tools==5.15.2.3.0.2
Following your link, I've used the last piece of advice and ran pip install PySide2 - I have Python 3.7.7. That didn't help, the issue is the same.
Regarding Linux, I've tried to install object_detection==0.1, but the requirement was already satisfied. Need to note that the whole project is placed i...
AgitatedDove14 Just in case, I've created toy examples of the processes I'm running - one for classification, another for object detection. Maybe it will make it clearer what I'm trying to get: https://gitlab.com/kuznip/clml_cl_toy , https://gitlab.com/kuznip/clml_od_toy .
AgitatedDove14 Does it make any sense to change system_site_packages to true if I run clearml using Docker?
TimelyPenguin76 Yes, that's a new file - I haven't added it to the repository yet. What I see for the original task under "uncommitted changes" is "no changes logged".
JitteryCoyote63 Is there an example of how the learning pipeline can be triggered (started) by changes in a dataset?
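(As a possible direction - it requires a newer clearml than the 0.17 used in this thread, and every name below is a placeholder - clearml's automation module has a TriggerScheduler that can launch a task when a new dataset version appears:)

```python
# sketch: re-run a training task whenever a new dataset is created.
# the task id, queue and project names are placeholders.
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=5)
trigger.add_dataset_trigger(
    schedule_task_id="<template-training-task-id>",  # task to clone and enqueue
    schedule_queue="default",
    trigger_project="clml_od_toy/datasets",          # watch datasets in this project
    name="retrain on new data",
)
trigger.start()  # or trigger.start_remotely() to run the scheduler as a service
```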
AgitatedDove14 Set system_site_packages to true for Linux - still getting the same error (ERROR: Could not find a version that satisfies the requirement object_detection==0.1 (from -r /tmp/cached-reqsjhs2q2gm.txt (line 7)) (from versions: 0.0.3)).
AgitatedDove14 Yes, it's running with an agent. I've updated clearml from version 0.17.4 to 0.17.5. Sorry, I didn't note the other libraries which were automatically updated along with the new ClearML version.
However, is there any way to manipulate the packages which will be installed in the venv when running the pipeline? I've tried to run the pipeline on a Linux server (clearml v0.17.4) and got the following issue:
` Requirement already satisfied: numpy==1.19.5 in /root/.clearml/venvs-builds...
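(If it helps, one way to manipulate that list from code is Task.add_requirements, called before Task.init; the package pin and names below are just examples:)

```python
# sketch: add or pin a package for the venv the agent will build
from clearml import Task

Task.add_requirements("numpy", "1.19.5")  # example pin, must precede Task.init()
task = Task.init(project_name="clml_od_toy", task_name="stage2")
```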
AgitatedDove14 Ok, I'll try to do this with clearml-data. However, I've found that I don't understand the logic of where newly generated data (produced by the pipeline) is placed. I think it's a major issue with my code. And I should also understand this in order to use clearml-data as well.
Say, script_a.py generates the file test.json in the project folder. script_b.py should use this file for further processing. When I run them script-by-script, test.json is generated and used OK. However, when I run...
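(In case it's useful: the usual way to hand a generated file from one pipeline step to the next is to register it as a task artifact instead of relying on the working directory, since each agent runs in its own clone of the repo. A sketch with placeholder project/task names:)

```python
# script_a.py - sketch: register the generated file as a task artifact
from clearml import Task

task = Task.init(project_name="clml_od_toy", task_name="script_a")
# ... code that writes test.json ...
task.upload_artifact(name="test.json", artifact_object="test.json")
```

```python
# script_b.py - sketch: fetch the artifact produced by script_a
from clearml import Task

task = Task.init(project_name="clml_od_toy", task_name="script_b")
producer = Task.get_task(project_name="clml_od_toy", task_name="script_a")
local_json = producer.artifacts["test.json"].get_local_copy()
# ... process local_json ...
```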
AgitatedDove14
Ok, will check this tomorrow. Thank you for your help!
After a few commit-push-pulls I got no diff issue on Windows. But I just ran into some weird behavior - when the stages run in a pipeline, they do not create new files; instead, they cannot run if the files they produce were not committed. I don't really understand the logic of this. To be exact:
I have 3 stages, each implemented as a separate script: 1) converting annotations into coco test.json and train.json files 2) converting ...
astunparse==1.6.3
attrs==20.3.0
botocore==1.19.63
cachetools==4.2.1
certifi==2020.12.5
chardet==4.0.0
cycler==0.10.0
Cython==0.29.21
furl==2.1.0
future==0.18.2
gast==0.3.3
google-auth==1.25.0
google-auth-oauthlib==0.4.2
google-pasta==0.2.0
grpcio==1.35.0
h5py==2.10.0
humanfriendly==9.1
idna==2.10
importlib-metadata==3.4.0
jmespath==0.10.0
jsonschema==3.2.0
Keras-Preprocessing==1.1.2
kiwisolver==1.3.1
Markdown==3.3.3
oauthlib==3.1.0
opt-einsum==3.3.0
orderedmultidict==1.0.1
pathlib2==2.3.5
pat...
AgitatedDove14 Yes, that's what I have - for me it's weird, too.
AgitatedDove14 According to the logs (up to the traceback message), the only difference between those two tasks is the task id and name.
Well, I'm pretty sure that nntraining is executed in the same queue for these two cases:
No, I have only two agents pulling from different queues:
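(For context, that setup looks something like this; the queue names are placeholders:)

```
# two agents, each listening on its own queue
clearml-agent daemon --queue queue_a --detached
clearml-agent daemon --queue queue_b --detached
```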