- Open Access
- Authors: Suresh M B, Honnarathi S, Neelima K, Monisha K
- Paper ID: IJERTV11IS040165
- Volume & Issue: Volume 11, Issue 04 (April 2022)
- Published (First Online): 29-04-2022
- ISSN (Online): 2278-0181
- Publisher Name: IJERT
- License: This work is licensed under a Creative Commons Attribution 4.0 International License
A Comparative Analysis and Prediction of Knee Osteoarthritis Symptoms
Suresh M B
Department of Information Science and Engineering East West Institute of Technology
Bangalore, India
Neelima K
Department of Information Science and Engineering East West Institute of Technology
Bangalore, India
Honnarathi S
Department of Information Science and Engineering East West Institute of Technology
Bangalore, India
Monisha K
Department of Information Science and Engineering East West Institute of Technology
Bangalore, India
Abstract: We describe a fully automated approach for determining knee alignment from full-leg radiographs. YOLOv4, a cutting-edge object detector, was trained to find regions of interest around the hip, knee, and ankle joints in full-leg radiographs. Landmark coordinates within each region of interest were regressed using residual neural networks. The hip-knee-ankle (HKA) angle was determined from the detected landmarks. The accuracy of landmark detection was tested on 180 radiographs by comparison to manually placed landmarks. The accuracy of the HKA angle computation was analyzed on the basis of 2,943 radiographs by comparison to the results of two independent image reading studies (Cooke; Duryea), both publicly accessible via the Osteoarthritis Initiative. Agreement was analyzed using Spearman's Rho.
Keywords: Deep learning, hip-knee-ankle angle, varus, valgus, mechanical axes, osteoarthritis
INTRODUCTION
Knee malalignment has an undesirable effect on the distribution of loads across the joint, resulting in higher contact pressure in the more heavily loaded areas [30]. As a result, knee malalignment can serve as a risk factor for osteoarthritis and cartilage loss, as well as a biomarker for assessing the severity and progression of the disease [2, 7, 31]. Knee alignment is defined as the angle between the mechanical axes of the femur and the tibia, as shown in Fig. 1, and is referred to as the hip-knee-ankle (HKA) angle [9]. The mechanical axis of the femur is defined by a line drawn from the center of the femoral head to the mid-condylar point between the cruciate ligaments (cf. Fig. 1). The mechanical axis of the tibia is a line drawn from the center of the tibial plateau to the center of the tibial plafond [23]. The HKA angle is commonly reported as the angular deviation from a straight angle of 180 degrees: varus deviations are given as negative angles, whereas valgus deviations are given as positive angles.
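To make the axis and sign conventions concrete, the following minimal sketch (Python with NumPy; the point layout, variable names, and sign convention are illustrative assumptions, not the exact implementation used here) computes the HKA angle as a signed deviation from a straight 180° angle:

```python
import numpy as np

def hka_angle(hip_center, knee_femur, knee_tibia, ankle_center):
    """Deviation of the hip-knee-ankle angle from a straight 180 degrees.

    All inputs are 2D points (x, y) in image coordinates. The sign of the
    result depends on image orientation and leg side; this is an illustrative
    convention only, not the exact convention used in the paper.
    """
    femur_axis = np.asarray(knee_femur, float) - np.asarray(hip_center, float)
    tibia_axis = np.asarray(ankle_center, float) - np.asarray(knee_tibia, float)

    # Signed angle between the two mechanical axes.
    cross = femur_axis[0] * tibia_axis[1] - femur_axis[1] * tibia_axis[0]
    dot = float(np.dot(femur_axis, tibia_axis))
    return np.degrees(np.arctan2(cross, dot))

# Example with made-up coordinates; prints the signed deviation in degrees.
print(hka_angle((100, 50), (110, 500), (110, 510), (95, 950)))
```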
The HKA angle was previously measured manually or semi-automatically in anterior-posterior radiographs [7-9, 16, 24]. Sled et al. [32] suggested a landmark-based, computer-assisted approach for determining HKA angles from full-limb radiographs. In their semi-automated approach, the rater is guided to identify a collection of landmarks; the HKA angle is then calculated automatically from the placed landmarks. The technique achieved high accuracy, although it required manual input and trained image readers. In recent years, there has been a general trend toward automated, deep learning-based systems for the analysis of medical images [22]. Gielis et al. [12] presented an automated image processing pipeline to estimate the HKA angle from standard knee radiographs, using random forest regression to predict landmarks.
For HKA angle computation from full-leg radiographs, Nguyen et al. [25] proposed a deep learning algorithm. In a two-level approach, a convolutional neural network (CNN) is used to detect 10 regions of interest (ROIs); the coordinates of anatomical landmarks are then determined from these ROIs using a second CNN. Nguyen's method is fast, with a run time of less than one second; however, it produces an average bias of 0.402° and a mismatch of more than 1.5° in 17.7% of the examined participants when compared to radiologists' evaluations. Pei et al. [28] described a deep learning solution for calculating the HKA angle from full-leg radiographs using automated leg bone segmentation with a CNN-like network.
Nguyen et al. [25], Pei et al. [28], and Gielis et al. [12] provided the first automated approaches for HKA angle computation based on machine learning algorithms. However, the proposed methods (i) frequently exhibit a systematic bias when compared to radiologists' measurements, and (ii) may result in deviations greater than 1.5° in a significant number of examined patients. Furthermore, the corresponding experiments only considered a small number of images and only examined the accuracy of HKA angle computation, not the quality of the underlying anatomical landmark detection. The goal of this study is to use cutting-edge deep learning approaches to calculate the HKA angle from whole-leg radiographs.
We use YOLOv4 [5], a fast object detection technique that divides the input image into subregions and performs object recognition in each one. Such an approach has an advantage over approaches that loop over parts of the image sequentially, such as R-CNN [13], and may result in higher accuracy and reduced run time. YOLOv4 generalizes well and is less likely to fail when applied to different domains or unexpected inputs, since it uses the complete image during training and testing to implicitly encode contextual information about classes as well as their appearance. After determining the ROIs, landmark coordinates are regressed within each region using residual neural networks.
Fig. 1. Left: Examples of legs with Varus malalignment, neutral alignment, and valgus malalignment. The load-bearing axis is shown in red. Right: Computation of knee alignment based on landmarks illustrated by close-up images of the valgus leg. The mechanical axes of the femur (green line) and tibia (blue line) are computed based on landmarks for the hip, knee, and ankle. The center of the femoral head (orange circle) is derived from 6 landmarks placed at the boundary of the femoral head. The landmarks at the femoral notch and tibia spines are directly placed at the distinct anatomical regions. The center of the talus bone is derived from two landmarks defined at the superior medial and lateral edges of the talus. The HKA angle is the angle enclosed by the femoral mechanical axis and the tibia mechanical axis
METHODS
Full-Leg X-Rays from the OAI
The OAI database was used to examine 3,843 full-leg X-Rays. Table 1 provides detailed demographic data. Several X-Rays were removed from this study due to an incomplete field of view (N = 11) or missing image size information in the DICOM metadata (N = 3). If the hip, knee, or ankle of one leg is outside the field of view, that leg is excluded from the study, but the contralateral leg is used (N = 3). All cases that were addressed individually are presented in Supplementary Figure A1 along with subject identifiers.
Knee alignment studies of Cooke and Duryea.
Two independent image reading studies evaluating HKA angles in full-leg radiographs as provided by the OAI were conducted separately by Dr. Cooke and Dr. Duryea at two distinct image reading centers. Cooke's measurements were supported by the OAI and carried out by OAISYS Inc. with the help of Queen's University (Kingston, Ontario) staff using the semi-automated Horizon Surveyor instrument (OAISYS Inc., Perth, Canada). Duryea's measurements were taken independently of the OAI at Dr. Jeff Duryea's laboratory at Brigham and Women's Hospital in Boston, MA, using customized software to help the reader place the landmarks. Cooke and Duryea both examined 6,965 of the 7,683 legs available; 124 legs were assessed by Cooke only, and 594 by Duryea only.
Automated determination of HKA angles by employing YARLA
Our goal is to provide an automated approach for determining the HKA angle in full-leg X-Ray images that follows radiologists' practice. We therefore use a step-by-step approach: locating the corresponding landmarks for each joint, determining the mechanical axes, and finally computing the HKA angle. For the hip, six markers surrounding the femoral head must be identified; these markers are spread evenly along the boundary of the femoral head. For the knee and the ankle, two distinct anatomical markers each must be identified (see Fig. 2).
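As a rough sketch of how the axis endpoints could be derived from these markers (the least-squares circle fit and the midpoint rule are assumptions; the paper only states that the centers are derived from the respective landmarks):

```python
import numpy as np

def femoral_head_center(boundary_points):
    """Approximate the femoral head center from 6 landmarks on its boundary.

    Here the center is obtained from a least-squares circle fit; this is an
    assumption about the exact derivation, not the paper's implementation.
    """
    pts = np.asarray(boundary_points, float)            # shape (6, 2)
    A = np.column_stack([2 * pts, np.ones(len(pts))])   # columns: 2x, 2y, 1
    b = (pts ** 2).sum(axis=1)                          # x^2 + y^2
    cx, cy, _ = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([cx, cy])

def talus_center(medial_edge, lateral_edge):
    """Center of the talus as the midpoint of its superior medial/lateral edges."""
    return (np.asarray(medial_edge, float) + np.asarray(lateral_edge, float)) / 2.0
```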
Several object detectors based on machine learning have been proposed for automated image analysis [35]. Many of them have demonstrated high object detection rates as well as high accuracy in determining bounding boxes around the detected objects. We selected YOLOv4 for ROI identification in our proposed YARLA since it has been demonstrated to be both fast and precise on data from the "Microsoft COCO: Common Objects in Context" challenge [21] as well as data from the PASCAL Visual Object Classes challenges (http://host.robots.ox.ac.uk/pascal/VOC/). Unlike other object detection algorithms, YOLOv4 takes the entire image as input and analyzes it all at once, leading to a short run time and high accuracy owing to the implicit encoding of contextual information. Its structure is unique.
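A minimal, hypothetical sketch of applying a trained YOLOv4 model to a full-leg radiograph via OpenCV's DNN module (the file names, class order, input size, and thresholds are placeholders; the paper does not state which inference framework was used):

```python
import cv2
import numpy as np

# Placeholder files for a Darknet YOLOv4 model trained on hip/knee/ankle ROIs.
net = cv2.dnn.readNetFromDarknet("yolov4_rois.cfg", "yolov4_rois.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

image = cv2.imread("full_leg_xray.png")
class_ids, scores, boxes = model.detect(image, confThreshold=0.5, nmsThreshold=0.4)

roi_names = ["hip", "knee", "ankle"]  # assumed class order
for cls, score, box in zip(np.array(class_ids).flatten(),
                           np.array(scores).flatten(), boxes):
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2  # ROI center used to seed landmark regression
    print(f"{roi_names[int(cls)]}: center=({cx:.0f}, {cy:.0f}), score={score:.2f}")
```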
CNNs have been found effective in accurately regressing landmark coordinates in medical images [19, 26, 27]. Very deep CNNs for classification and regression tasks suffer from the vanishing gradient problem [15]: as the gradient is back-propagated to earlier layers, repeated multiplications can make it vanishingly small. ResNets [14] introduced "identity shortcut connections," which skip one or more layers and make training a deep CNN easier. The recent success of CNNs for landmark detection, as well as the promise of ResNets for improved gradient flow, motivated us to use them for landmark regression within each ROI (hip, knee, and ankle) previously detected by YOLOv4. The respective landmarks are placed at the boundaries of these regions.
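The sketch below shows a residual block with a projection shortcut in the spirit of [14] (PyTorch; the layer sizes and normalization choices are illustrative, not the exact configuration of our ResNets):

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions plus a projection shortcut (1x1 convolution),
    so the skip connection matches the changed number of channels/stride."""

    def __init__(self, in_channels, out_channels, stride=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, stride=1, padding=1),
            nn.BatchNorm2d(out_channels),
        )
        # Projection shortcut: adapts the identity path to the new shape,
        # keeping the gradient path short even when dimensions change.
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 1, stride=stride),
            nn.BatchNorm2d(out_channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))
```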
Due to memory limitations, the input size of the CNNs is limited. We therefore use a three-level ResNet approach that allows the CNNs to focus on the relevant region surrounding the individual landmarks with the greatest amount of detail. At the first level, the centers of the ROIs for each joint, as identified by YOLOv4, are used to define a surrounding area of 170 mm × 170 mm. At the second level, a 135 mm × 135 mm area is extracted at the position predicted by the first level. At the third and final level, a 100 mm × 100 mm region is extracted at the position predicted by the second level.
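A sketch of this coarse-to-fine refinement, assuming the pixel spacing is known from the DICOM metadata (the helper names and the interface of predict_fn are hypothetical):

```python
import numpy as np

def crop_mm(image, center_xy, size_mm, pixel_spacing_mm):
    """Extract a square crop of size_mm x size_mm centered at center_xy (x, y in
    pixels). Returns the crop and its top-left corner in image coordinates."""
    half = int(round(size_mm / pixel_spacing_mm / 2))
    x, y = (int(round(c)) for c in center_xy)
    x0, y0 = max(x - half, 0), max(y - half, 0)
    return image[y0:y0 + 2 * half, x0:x0 + 2 * half], (x0, y0)

def refine_landmark(image, roi_center, pixel_spacing_mm, predict_fn):
    """Three refinement levels around the YOLOv4 ROI center (sizes per the paper).
    predict_fn stands in for the level-specific ResNet and returns the landmark
    position within the crop (x, y in pixels)."""
    center = np.asarray(roi_center, float)
    for size_mm in (170, 135, 100):
        crop, (x0, y0) = crop_mm(image, center, size_mm, pixel_spacing_mm)
        center = np.asarray(predict_fn(crop, size_mm), float) + (x0, y0)
    return center
```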
Table 1. Demographics: In this study, 3,843 X-Rays of the OAI database are analyzed, acquired at four visits 12 months apart (v12, v24, v36, and v48). At the different time points, mainly radiographs of different persons were taken. The majority of legs were assessed by both Cooke and Duryea.
OAI time point | v12 | v24 | v36 | v48
Number of subjects | 1,472 | 1,275 | 919 | 177
Subjects in common with v12 | all | 37 | 14 | 8
Subjects in common with v24 | 37 | all | 22 | 5
Subjects in common with v36 | 14 | 22 | all | 6
Subjects in common with v48 | 8 | 5 | 16 | all
Sex (male; female) | 671; 801 | 565; 710 | 349; 570 | 73; 104
Age [years] | 61.86 ± 9.08 | 63.67 ± 9.22 | 64.27 ± 8.94 | 64.58 ± 9.1
BMI [kg/m²] | 29.75 ± 4.83 | 28.36 ± 4.97 | 28.31 ± 4.94 | 28.57 ± 5.59
Cooke: legs measured | 2,456 | 2,547 | 1,822 | 264
Duryea: legs measured | 2,858 | 2,521 | 1,828 | 352
Legs measured by both studies | 2,942 | 2,549 | 1,838 | 354
Legs measured by Cooke only | 84 | 28 | 10 | 2
Legs measured by Duryea only | 486 | 2 | 16 | 90
Cooke: average HKA angles [°] | -1.37 ± 3.86 | -1.17 ± 3.30 | -0.95 ± 3.02 | -1.06 ± 3.07
Duryea: average HKA angles [°] | -1.41 ± 3.80 | -1.24 ± 3.33 | -1.08 ± 3.03 | -1.27 ± 3.05
Cooke: alignment classes (varus; neutral; valgus; NA) | 1,025; 923; 416; 92 | 1,026; 1,114; 378; 29 | 638; 885; 293; 6 | 97; 125; 42; 0
Duryea: alignment classes (varus; neutral; valgus; NA) | 1,234; 1,138; 486; 0 | 1,031; 1,130; 360; 0 | 660; 901; 267; 0 | 132; 173; 47; 0
Fig. 2. A) Pipeline of YARLA for computation of the HKA angle from full-leg X-Rays: The first step of the algorithm is YOLOv4, which locates ROIs in an image (hip: orange, knee: green, ankle: purple). In the second step, three levels of ResNets are employed for the regression of landmark coordinates. The HKA angle is finally derived from the resulting two axes. B) Flow chart of the ResNet architecture. Six residual blocks with projection shortcuts are employed. The number of filters is indicated in brackets. 3 × 3 convolutions are employed in each block. The last block is followed by a dense layer with 2,000 nodes as well as a dense layer having as many nodes as the number of landmark coordinates.
Experimental setup.
Out of the 3,843 X-Rays used in this study, 900 X-Rays (OAI time point v12) are used as training, validation, and testing data for YOLOv4 and the ResNets (60 percent, 20 percent, 20 percent). All 900 X-Rays have manual landmarks for both legs: six for the femoral head and two each for the knee and the ankle. These landmarks are referred to as LM_ZIB in the following. Our method is trained on right legs only; prior to YOLOv4 training, the left legs are flipped horizontally so that they resemble right legs, which essentially doubles the amount of training data while also reducing the variability in the data. All X-Ray images are in color.
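A minimal sketch of the left-to-right flipping (the landmark layout as an (N, 2) array of (x, y) coordinates is an assumption):

```python
import numpy as np

def flip_left_leg(image, landmarks):
    """Mirror an image horizontally and adjust landmark x-coordinates so that a
    left leg resembles a right leg; landmarks is an (N, 2) array of (x, y)."""
    flipped = image[:, ::-1].copy()
    width = image.shape[1]
    flipped_lm = np.asarray(landmarks, float).copy()
    flipped_lm[:, 0] = (width - 1) - flipped_lm[:, 0]
    return flipped, flipped_lm
```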
Table 2. Methods used for evaluating the performance of YARLA.
YARLA output | Method for evaluation
Landmark location | Average distance
Landmark location | Occlusion heat maps
HKA angle | Bland-Altman plots
HKA angle | Proportion of errors > 1.5°
HKA angle | Intraclass correlation coefficient (ICC)
HKA angle | Non-parametric Spearman's Rho
Class assignments | Agreement
Class assignments | Confusion matrix
Class assignments | Weighted kappa
Figure 3 shows occlusion heat maps for all ResNet landmark regression levels. An "occluder" of size 64 × 64 pixels set to the mean image intensity was moved across the respective X-Ray with a stride of 8 pixels. At each position, the amount of change in the landmark coordinate prediction was evaluated. The magnitudes of the occlusion heat maps were normalized to [0, 1], and values below 0.7 were truncated.
Evaluation of the performance of YARLA.
The methods used to evaluate YARLA's performance are shown in Table 2. The mismatch between the landmarks computed by YARLA and the manually placed ones is analyzed for all 360 legs in the testing data. Occlusion heat maps [34] are used to investigate which regions are most essential for the ResNet's computation of landmark coordinates. With a stride of 8 pixels, an "occluder" is moved across the respective ROI; at each position, it sets the intensities within a 64 × 64 pixel region to the mean image intensity. For each position of the occluder, the amount of change in the prediction is analyzed, and the most critical regions for the ResNet's prediction are qualitatively assessed (see Fig. 3).
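A minimal sketch of this occlusion analysis, assuming a predict function that maps an ROI crop to landmark coordinates:

```python
import numpy as np

def occlusion_heatmap(roi, predict, patch=64, stride=8):
    """Slide a patch x patch occluder (set to the mean image intensity) over the
    ROI and record how much the predicted landmark coordinates change."""
    baseline = np.asarray(predict(roi), float)
    mean_val = roi.mean()
    h, w = roi.shape[:2]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = roi.copy()
            occluded[y:y + patch, x:x + patch] = mean_val
            heat[i, j] = np.linalg.norm(np.asarray(predict(occluded), float) - baseline)
    m = heat.max()
    heat = heat / m if m > 0 else heat          # normalize magnitudes to [0, 1]
    return np.where(heat < 0.7, 0.0, heat)      # truncate values below 0.7
```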
RESULTS
Several aspects of the approach were evaluated in order to assess the quality of YARLA, including the object detector YOLOv4, the precision of the identified landmarks, HKA angle computations, and class assignments.
YOLOv4 success rate
For all legs in the testing data, YOLOv4 successfully identifies the ROIs of the hip, knee, and ankle. For the Angle_OAI cases, YOLOv4 successfully identifies all regions for 5,809 of the 5,818 legs (99.85 percent success rate; see Supplementary Figure B1 for images and subject identifiers of the nine X-Ray images for which YOLOv4 failed).
Automatically detected landmarks' location
The difference between landmark positions determined by YARLA and the manually placed ones in the testing data is on average 1.72 ± 1.00 mm for the center of the femoral head, 1.94 ± 1.33 mm for the femoral notch, 1.63 ± 1.29 mm for the tibia spines, and 1.54 ± 1.33 mm for the center of the talus at the ankle.
Analysis of systematic bias and outliers
Bland-Altman plots were examined to check for systematic bias in the HKA angles determined by Cooke, Duryea, and YARLA, and to identify outliers. In addition, the HKA angles computed from LM_ZIB were assessed. Compared to Cooke and Duryea, LM_ZIB shows an average mismatch of 0.12 ± 0.60° and 0.18 ± 0.46°, respectively. The average discrepancy between YARLA and LM_ZIB is 0.03 ± 0.48°. The disagreements between YARLA and Cooke and between YARLA and Duryea are 0.13 ± 0.65° and 0.21 ± 0.56°, respectively; Cooke and Duryea disagree by 0.07 ± 0.57°. For the Angle_OAI data, the discrepancy between YARLA and the two studies is 0.09 ± 0.73° and 0.18 ± 0.67°, respectively, while Cooke and Duryea show a disagreement of 0.09 ± 0.63°.
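The reported discrepancies correspond to the mean difference and its standard deviation from a Bland-Altman analysis [4]; a minimal sketch of that computation:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias, SD of the differences, and 95% limits of agreement between two
    sets of HKA angle measurements for the same legs."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Example with made-up angles (degrees):
print(bland_altman([-1.2, 0.4, -3.1, 2.0], [-1.0, 0.6, -3.4, 1.8]))
```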
Table 3. Confusion matrices for the testing as well as the Angle_OAI data.

Cooke vs. Duryea (testing data)
Cooke \ Duryea | Varus | Neutral | Valgus
Varus | 99 | 7 | 0
Neutral | 7 | 102 | 6
Valgus | 0 | 5 | 53

Cooke vs. Duryea (Angle_OAI data)
Cooke \ Duryea | Varus | Neutral | Valgus
Varus | 2,003 | 141 | 0
Neutral | 154 | 2,221 | 77
Valgus | 1 | 123 | 737

YARLA vs. Cooke (testing data)
YARLA \ Cooke | Varus | Neutral | Valgus
Varus | 95 | 3 | 0
Neutral | 11 | 107 | 8
Valgus | 0 | 5 | 50

YARLA vs. Cooke (Angle_OAI data)
YARLA \ Cooke | Varus | Neutral | Valgus
Varus | 1,915 | 102 | 1
Neutral | 225 | 2,281 | 135
Valgus | 0 | 83 | 730

YARLA vs. Duryea (testing data)
YARLA \ Duryea | Varus | Neutral | Valgus
Varus | 125 | 3 | 0
Neutral | 13 | 148 | 7
Valgus | 0 | 3 | 58

YARLA vs. Duryea (Angle_OAI data)
YARLA \ Duryea | Varus | Neutral | Valgus
Varus | 2,056 | 68 | 0
Neutral | 235 | 2,487 | 84
Valgus | 2 | 79 | 766
Table 4. Evaluation of non-parametric Spearman's Rho, accuracy of class assignment, and weighted kappa.
Spearman's Rho.
Statistical comparisons of HKA angles between YARLA and the two studies are shown in Table 4. Very high correlations were found for both the testing and Angle_OAI data: for all four raters, Spearman's Rho is at least 0.98, and significant correlations were identified in all cases with p < 0.001.
Agreement is computed for the Angle_OAI data between the automated HKA angle computations of YARLA and those of Cooke and Duryea. For the testing data, additionally, HKA angles were derived from our manually determined landmarks, LM_ZIB, and compared to the results of YARLA, Cooke, and Duryea.
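A sketch of how these agreement measures can be computed with SciPy and scikit-learn (the input arrays are made up, and the kappa weighting scheme is an assumption, since it is not stated here):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score, confusion_matrix

hka_yarla = np.array([-2.1, 0.3, 4.2, -0.8])   # made-up HKA angles (degrees)
hka_cooke = np.array([-1.9, 0.1, 4.0, -1.1])
rho, p_value = spearmanr(hka_yarla, hka_cooke)  # non-parametric correlation

classes_yarla = ["varus", "neutral", "valgus", "neutral"]  # made-up class labels
classes_cooke = ["varus", "neutral", "valgus", "varus"]

# Agreement (accuracy of class assignment), confusion matrix, weighted kappa.
accuracy = np.mean(np.array(classes_yarla) == np.array(classes_cooke))
cm = confusion_matrix(classes_cooke, classes_yarla,
                      labels=["varus", "neutral", "valgus"])
kappa = cohen_kappa_score(classes_yarla, classes_cooke,
                          labels=["varus", "neutral", "valgus"], weights="linear")
```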
Spearman's Rho (testing data)
 | Cooke | Duryea | LM_ZIB | YARLA
Cooke | - | 0.99 (p < 0.001) | 0.99 (p < 0.001) | 0.98 (p < 0.001)
Duryea | 0.99 (p < 0.001) | - | 0.99 (p < 0.001) | 0.99 (p < 0.001)
LM_ZIB | 0.99 (p < 0.001) | 0.99 (p < 0.001) | - | 0.99 (p < 0.001)
YARLA | 0.98 (p < 0.001) | 0.99 (p < 0.001) | 0.99 (p < 0.001) | -

Spearman's Rho (Angle_OAI data)
 | Cooke | Duryea | YARLA
Cooke | - | 0.98 (p < 0.001) | 0.98 (p < 0.001)
Duryea | 0.98 (p < 0.001) | - | 0.98 (p < 0.001)
YARLA | 0.98 (p < 0.001) | 0.98 (p < 0.001) | -

Accuracy of class assignment (testing data)
 | Cooke | Duryea | LM_ZIB | YARLA
Cooke | - | 0.92 | 0.92 | 0.90
Duryea | 0.92 | - | 0.93 | 0.93
LM_ZIB | 0.92 | 0.93 | - | 0.93
YARLA | 0.90 | 0.93 | 0.93 | -

Accuracy of class assignment (Angle_OAI data)
 | Cooke | Duryea | YARLA
Cooke | - | 0.91 | 0.90
Duryea | 0.91 | - | 0.92
YARLA | 0.90 | 0.92 | -

Weighted kappa (testing data)
 | Cooke | Duryea | LM_ZIB | YARLA
Cooke | - | 0.88 | 0.87 | 0.85
Duryea | 0.88 | - | 0.89 | 0.88
LM_ZIB | 0.87 | 0.89 | - | 0.88
YARLA | 0.85 | 0.88 | 0.88 | -

Weighted kappa (Angle_OAI data)
 | Cooke | Duryea | YARLA
Cooke | - | 0.86 | 0.83
Duryea | 0.86 | - | 0.87
YARLA | 0.83 | 0.87 | -
Agreement of class assignment.
For both the testing data and the Angle_OAI data, the accuracy of class assignment between YARLA and all other measurements is equal to or greater than 90%. The highest accuracy (93%) was attained between YARLA and LM_ZIB as well as between YARLA and Duryea (testing data). The lowest accuracy (90%) was reached between YARLA and Cooke (Angle_OAI data).
Confusion matrices.
Confusion matrices for both the testing and Angle_OAI data are provided in Table 3. In the Angle_OAI data, 225 knees with varus malalignment and 135 knees with valgus malalignment were labeled as neutral by YARLA when compared to Cooke, and 235 and 84 when compared to Duryea. In comparison, fewer legs in neutral alignment were mislabeled as malaligned (185 misclassifications vs. Cooke and 147 vs. Duryea).
Weighted kappa.
Between YARLA and the two other evaluations there is almost perfect agreement, as indicated by weighted kappa values greater than 0.80 (Table 4). The highest kappa (0.88) is achieved between YARLA and LM_ZIB as well as between YARLA and Duryea (testing data). The lowest kappa (0.83) is reached between YARLA and Cooke (Angle_OAI data). When comparing the testing data with the Angle_OAI data, the kappa for YARLA vs. Cooke decreases (0.85 vs. 0.83) and increases slightly for YARLA vs. Duryea (0.85 vs. 0.87).
Run time.
On average, YOLOv4 takes 0.6 seconds per X-Ray to identify the ROIs in both legs. The application of the three ResNet levels takes an average of 2.1 seconds for both legs. In total, it takes around 3 seconds on average to compute the HKA angles for both legs in one X-Ray.
REFERENCES
[1] G. Eason, H. Akoglu, User's guide to correlation coefficients, Turkish Journal of Emergency Medicine 18 (3) (2018) 91-93.
[2] H. Alizai, F.W. Roemer, D. Hayashi, M.D. Crema, D.T. Felson, A. Guermazi, An update on risk factors for cartilage loss in knee osteoarthritis assessed using MRI-based semiquantitative grading methods, European Radiology 25 (3) (2015) 883-893.
[3] J.J. Bartko, The intraclass correlation coefficient as a measure of reliability, Psychological Reports 19 (1) (1966) 3-11.
[4] J.M. Bland, D. Altman, Statistical methods for assessing agreement between two methods of clinical measurement, The Lancet 327 (8476) (1986) 307-310.
[5] A. Bochkovskiy, C.-Y. Wang, H.-Y.M. Liao, YOLOv4: Optimal speed and accuracy of object detection, arXiv preprint arXiv:2004.10934 (2020).
[6] N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, S. Zagoruyko, End-to-end object detection with transformers, in: European Conference on Computer Vision, Springer, 2020, pp. 213-229.
[7] D.T. Cooke, L. Harrison, B. Khan, A. Scudamore, A.M. Chaudhary, Analysis of limb alignment in the pathogenesis of osteoarthritis: a comparison of Saudi Arabian and Canadian cases, Rheumatology International 22 (4) (2002) 160-164.
[8] T. Cooke, R. Scudamore, J. Bryant, C. Sorbie, D. Siu, B. Fisher, A quantitative approach to radiography of the lower limb. Principles and applications, The Journal of Bone and Joint Surgery, British Volume 73 (5) (1991) 715-720.
[9] T.D.V. Cooke, E.A. Sled, R.A. Scudamore, Frontal plane knee alignment: a call for standardized measurement, Journal of Rheumatology 34 (9) (2007) 1796-1801.
[10] T. Falk, D. Mai, R. Bensch, Ö. Çiçek, A. Abdulkadir, Y. Marrakchi, A. Böhm, J. Deubner, Z. Jäckel, K. Seiwald, et al., U-Net: deep learning for cell counting, detection, and morphometry, Nature Methods 16 (1) (2019) 67-70.
, C. Gupta, C. Knight, B. Kainz, D. Rueckert, Fast multiple landmark localisation using a patch-based iterative network, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2018, pp. 563-571.
[21] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, C.L. Zitnick, Microsoft COCO: Common objects in context, in: European Conference on Computer Vision, Springer, 2014, pp. 740-755.
[22] G. Litjens, T. Kooi, B.E. Bejnordi, A.A.A. Setio, F. Ciompi, M. Ghafoorian, J.A. Van Der Laak, B. Van Ginneken, C.I. Sánchez, A survey on deep learning in medical image analysis, Medical Image Analysis 42 (2017) 60-88.
[23] R. Moyer, W. Wirth, J. Duryea, F. Eckstein, Anatomical alignment, but not goniometry, predicts femorotibial cartilage loss as well as mechanical alignment: data from the Osteoarthritis Initiative, Osteoarthritis and Cartilage 24 (2) (2016) 254-261.
[24] G. Neumann, D. Hunter, M. Nevitt, L. Chibnik, K. Kwoh, H. Chen, T. Harris, S. Satterfield, J. Duryea, et al., Location specific radiographic joint space width for osteoarthritis progression, Osteoarthritis and Cartilage 17 (6) (2009) 761-765.
[25] T.P. Nguyen, D.-S. Chae, S.-J. Park, K.-Y. Kang, W.-S. Lee, J. Yoon, Intelligent analysis of coronal alignment in lower limbs based on radiographic image with convolutional neural network, Computers in Biology and Medicine (2020) 103732.
[26] J.M. Noothout, B.D. de Vos, J.M. Wolterink, T. Leiner, I. Išgum, CNN-based landmark detection in cardiac CTA scans, arXiv preprint arXiv:1804.04963 (2018).
[27] C. Payer, D. Štern, H. Bischof, M. Urschler, Regressing heatmaps for multiple landmark localization using CNNs, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2016, pp. 230-238.
[28] Y. Pei, W. Yang, S. Wei, R. Cai, J. Li, S. Guo, Q. Li, J. Wang, X. Li, Automated measurement of hip-knee-ankle angle on the unilateral lower limb X-rays using deep learning, Physical and Engineering Sciences in Medicine (2020) 1-10.
[29] J. Redmon, S. Divvala, R. Girshick, A. Farhadi, You only look once: Unified, real-time object detection, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 779-788.
[30] O. Schipplein, T. Andriacchi, Interaction between active and passive knee stabilizers during level walking, Journal of Orthopaedic Research 9 (1) (1991) 113-119.
[31] L. Sharma, J. Song, D. Dunlop, D. Felson, C.E. Lewis, N. Segal, J. Torner, T.D.V. Cooke, J. Hietpas, J. Lynch, et al., Varus and valgus alignment and incident and progressive knee osteoarthritis, Annals of the Rheumatic Diseases 69 (11) (2010) 1940-1945.
[32] E.A. Sled, L.M. Sheehy, D.T. Felson, P.A. Costigan, M. Lam, T.D.V. Cooke, Reliability of lower limb alignment measures using an established landmark-based method with a customized computer software program, Rheumatology International 31 (1) (2011) 71-77.
[33] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna, Rethinking the inception architecture for computer vision, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818-2826.
[34] M.D. Zeiler, R. Fergus, Visualizing and understanding convolutional networks, in: European Conference on Computer Vision, Springer, 2014, pp. 818-833.
[35] Z.-Q. Zhao, P. Zheng, S.-T. Xu, X. Wu, Object detection with deep learning: A review, IEEE Transactions on Neural Networks and Learning Systems 30 (11) (2019) 3212-3232.