<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<meta name="description" content="">
<meta name="author" content="">
<title>Sapient - CCTV Analysis AI</title>
<!-- Bootstrap core CSS -->
<link href="vendor/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<!-- Custom fonts for this template -->
<link href="vendor/fontawesome-free/css/all.min.css" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Varela+Round" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Nunito:200,200i,300,300i,400,400i,600,600i,700,700i,800,800i,900,900i" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="css/grayscale.min.css" rel="stylesheet">
<link href="css/custom.css" rel="stylesheet">
</head>
<body id="page-top">
<!-- Navigation -->
<nav class="navbar navbar-expand-lg navbar-light fixed-top" id="mainNav">
<div class="container">
<a class="navbar-brand js-scroll-trigger" href="index.html">SAPIENT</a>
<button class="navbar-toggler navbar-toggler-right" type="button" data-toggle="collapse" data-target="#navbarResponsive" aria-controls="navbarResponsive" aria-expanded="false" aria-label="Toggle navigation">
Menu
<i class="fas fa-bars"></i>
</button>
<div class="collapse navbar-collapse" id="navbarResponsive">
<ul class="navbar-nav ml-auto">
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="requirements.html">Requirements</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="research.html">Research</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="hci.html">HCI</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="design.html">Design</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="testing.html">Testing</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="evaluation.html">Evaluation</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="management.html">Management</a>
</li>
</ul>
</div>
</div>
</nav>
<!-- Header -->
<header class="masthead">
<div class="container d-flex h-100 align-items-center">
<div class="mx-auto text-center">
<h1 class="mx-auto my-0 text-uppercase">Testing</h1>
<h2 class="text-white-50 mx-auto mt-2 mb-5">Introducing our testing methods</h2>
<a href="#testing_strategy" class="btn btn-primary js-scroll-trigger">Show me more</a>
</div>
</div>
</header>
<div class="container-fluid bg-light">
<div class="row">
<!--Sidebar menu-->
<div class="col-sm-2">
<br>
<br>
<div id="sidebar" class="sidebar">
<nav class="navbar navbar-shrink">
<ul class="navbar-nav">
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#testing_strategy">Testing Strategy</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#designprinciples">Unit and Integration Testing</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#compatibility">Compatibility</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#performance">Performance</a>
</li>
<li class="nav-item">
<a class="nav-link js-scroll-trigger" href="#user_acceptance">User Acceptance</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="col-sm-10">
<section id="testing_strategy" class="projects-section bg-light">
<div class="container">
<div class="col-lg-12 text-center">
<h1 class="section-heading text-uppercase">Testing Strategy</h1>
<p class="line"></p>
<p class="text-black-100" style="text-align: justify;" >There are two main parts of our application that we have to test:</p>
<br>
<p class="text-black-100" style="text-align: justify;" ><b><u>Web Application</u></b>:
<br>
Here we test compatibility and performance. As mentioned in the design section, we have two web applications: one hosted on an Azure VM and one running locally. We will check whether all of their functionality works in different web browsers. For performance, we will use <b>Firefox</b> to measure how fast the image recognition processes the different input types (image, video and live streaming).
</p>
<br>
<p class="text-black-100" style="text-align: justify;"><b><u>Image Recognition:</u></b></p>
<ul class="text-black-100" style="text-align: justify;">
<li>Posture Recognition</li>
<li>Face Recognition</li>
</ul>
<p class="text-black-100" style="text-align: justify;">In the design section, we can see that only Item Recognition is an API; the others are SDKs. Item Recognition uses the Google Vision API, so there is no need for us to test its accuracy.
<br>
<br>
For Face and Posture Recognition, we will apply unit and integration testing. There we describe how we created the models, selected the best model and tested the selected model.
<br>
<br>
Finally, we will list the compatibility of each recogniser with each input type.</p>
</div>
</div>
</section>
<section id="designprinciples" class="projects-section bg-light">
<div class="container">
<div class="col-lg-12 text-center">
<h1 class="section-heading text-uppercase">Unit and Integration testing</h1>
<p class="line"></p>
<p class="text-black-100" style="text-align: justify;">
We used Python's unittest module while developing the web app. Because our project centres on a back end that analyses video, most unit testing is hard to automate in code under the theory of TDD. The reason is that the video output has to be played back and checked by a human to confirm that the AI information drawn onto the video is correct, and this output is not static at all; it changes with every case.
</p>
<p class="text-black-100" style="text-align: justify;">
Therefore, during development most TDD was carried out by hand rather than in code. The automated unit tests only check that the whole program catches exceptions and can run a video successfully. Most of the functions covered by these unit tests have since been disabled in the real application, because it is not worth keeping unit tests that cannot cover most cases when everything still needs to be checked by hand. The unit tests written in code cover user login, logout and running the video reader.
</p>
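<p class="text-black-100" style="text-align: justify;">As a rough illustration, our coded unit tests look something like the sketch below. The <code>FakeApp</code> class here is a hypothetical stand-in, not our real back end; the actual tests exercise the live login, logout and video-reading routes.</p>

```python
import unittest

# FakeApp is a hypothetical stand-in for the web app back end; the real
# tests call the live login/logout and video-reading routes instead.
class FakeApp:
    def __init__(self):
        self.user = None

    def login(self, name, password):
        if not name or not password:
            raise ValueError("missing credentials")
        self.user = name
        return True

    def logout(self):
        self.user = None
        return True

    def run_video(self, path):
        # The AI annotations themselves are checked by eye; here we only
        # assert that the pipeline accepts a file and terminates cleanly.
        if not path.endswith(".mp4"):
            raise ValueError("unsupported format")
        return "ok"

class SmokeTests(unittest.TestCase):
    def test_login_logout(self):
        app = FakeApp()
        self.assertTrue(app.login("alice", "secret"))
        self.assertTrue(app.logout())
        self.assertIsNone(app.user)

    def test_run_video(self):
        self.assertEqual(FakeApp().run_video("sample.mp4"), "ok")

    def test_bad_input_raises(self):
        with self.assertRaises(ValueError):
            FakeApp().run_video("sample.txt")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```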
<p class="text-black-100" style="text-align: justify;">
The following paragraphs discuss how we carried out unit testing by hand while building the app, and how we used integration testing to assess the overall performance of the system.
</p>
<p class="text-black-100" style="text-align: justify;">
We used integration testing to assess the overall performance, especially the accuracy, of the web app. We found that the accuracy was consistently low; as a result, we stopped short of building the dangerous-situation notification system.
</p>
<h2 align="left"><u>Posture Recognition</u></h2>
<p class="text-black-100" style="text-align: justify;" >As we mentioned, using this SDK requires a trained model. So far, we have trained 25 models with different datasets and classifiers. We gathered three categories of samples for generating datasets: the first uses image samples from open-source photo libraries showing sitting, standing and lying down; the second uses photos we took ourselves with a camera; and the third combines the first two. We found that more samples do not necessarily make the AI more accurate; it depends on how each classifier processes the data and on the quality of the photo samples. However, the sample size has to be greater than 15 for the classification step to run and generate a model.</p>
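<p class="text-black-100" style="text-align: justify;">A minimal sketch of how one such model is trained, assuming scikit-learn's MLPClassifier and pose keypoints flattened into fixed-length feature vectors. The <code>make_samples</code> generator and the 36-dimension layout are illustrative assumptions, not the SDK's real data format.</p>

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_samples(label, centre, n=10):
    # Synthetic stand-ins for real keypoint vectors (e.g. 18 joints x 2 coords).
    return [(centre + rng.normal(0, 0.1, 36), label) for _ in range(n)]

# Labels follow the report's shorthand: 'si' sitting, 'st' standing, 'la' lying.
samples = make_samples("si", 0.0) + make_samples("st", 1.0) + make_samples("la", 2.0)

# The classification step only runs once the sample size exceeds 15.
assert len(samples) > 15, "need more than 15 samples to generate a model"

X = np.array([features for features, _ in samples])
y = np.array([label for _, label in samples])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)  # held-out accuracy, used to rank models
```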
<div class="row">
<div class="column">
<p class="text-black-100" style="text-align: justify;" >After training all the models, we tested them with three types of human-action image, "sitting", "standing" and "lying down", with three images of each type, as shown on the right. The test images are a mix of open-source photos and photos we took ourselves with a camera. In the table, 'y' means the posture was recognised successfully and 'n' means it was not; 'st' means standing, 'si' means sitting and 'la' means lying down. The models named along the top were all trained with the MLPClassifier. As a result, we found that the 'training4' model is the most accurate.</p>
</div>
<div class="column">
<img class="sketch" src="img/testing/model_select.png" style="width: 100%">
</div>
</div>
<br>
<div class="row">
<div class="column">
<p class="text-black-100" style="text-align: justify;">After selecting the model, we used the Gradient Boosted Tree and AdaBoost classifiers to generate models from the same dataset used for 'training4'. Finally, we compared the accuracy of the three classifiers. As a result, the model generated by the MLP classifier has the best overall performance.</p>
</div>
<div class="column">
<img class="sketch"src="img/testing/model_test.png" style="width: 100%">
</div>
</div>
<br>
<p class="text-black-100" style="text-align: justify;">Although none of the models recognised the standing images in this set, when we tried a few further standing images the selected model recognised some of them correctly.</p>
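<p class="text-black-100" style="text-align: justify;">The classifier comparison above can be sketched as below, again assuming scikit-learn. The synthetic dataset stands in for the real 'training4' data, which cannot be reproduced here, so the scores are illustrative only.</p>

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic placeholder data: three well-separated posture classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(centre, 0.15, (12, 36)) for centre in (0.0, 1.0, 2.0)])
y = np.repeat(["si", "st", "la"], 12)

candidates = {
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "Gradient Boosted Tree": GradientBoostingClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}

# Mean 3-fold cross-validation accuracy per classifier.
scores = {name: cross_val_score(clf, X, y, cv=3).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)
```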
<h2 align="left"><u>Face and Item Recognition</u></h2>
<p class="text-black-100" style="text-align: justify;">
We chose a ten-second video to test item and face recognition.
The video contains two people, one of whom is holding a small knife.
We found that faces were detected accurately three times per second within a range of 3 metres,
while the weapon (the small knife) was never recognised once it was more than 5 metres from the camera.
This shows how far AI CCTV still is from real-world application at the moment.
</p>
<p class="text-black-100" style="text-align: justify;">
The accuracy and test cases are discussed further in the following sections.
</p>
</div>
</div>
</section>
<section id="compatibility" class="projects-section bg-light">
<div class="container">
<div class="col-lg-12 text-center">
<h1 class="section-heading text-uppercase">Compatibility</h1>
<p class="line"></p>
<h2 class="section-heading text-uppercase"><u>Web Application</u></h2>
<br>
<h2 align="left"><u>AzureVM Integration</u></h2>
<br>
<p class="text-black-100" style="text-align: justify;">Here we test the compatibility of the web app hosted on the Azure VM with different web browsers:</p>
<table>
<tr>
<th></th>
<th>Image Upload</th>
<th>Webcam Snapshot</th>
<th>Video</th>
</tr>
<tr>
<td style="text-align: center;">Google Chrome</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">No</td>
<td style="text-align: center;">Yes</td>
</tr>
<tr>
<td style="text-align: center;">Safari</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">No</td>
<td style="text-align: center;">Yes</td>
</tr>
<tr>
<td style="text-align: center;">Firefox</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">Yes</td>
</tr>
<tr>
<td style="text-align: center;">Edge</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">Yes</td>
</tr>
<tr>
<td style="text-align: center;">Internet Explorer</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">No</td>
<td style="text-align: center;">Yes</td>
</tr>
</table>
<br>
<p class="text-black-100" style="text-align: justify;">The first row lists the functionalities of the web app. As you can see from the table, only <b>Firefox</b> and <b>Edge</b> can perform all of them. The webcam does not work in the other browsers because the webcam JS library we used requires the website to be served over HTTPS; without it, the page cannot request permission to access the computer's webcam.</p>
<br>
<h2 class="section-heading text-uppercase"><u>Image Recognition</u></h2>
<br>
<table>
<tr>
<th></th>
<th>jpg</th>
<th>mp4</th>
<th>live streaming</th>
</tr>
<tr>
<td style="text-align: center;">Posture Recognition</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">Yes</td>
</tr>
<tr>
<td style="text-align: center;">Face Recognition</td>
<td style="text-align: center;">No</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">Yes</td>
</tr>
<tr>
<td style="text-align: center;">Item Recognition</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">Yes</td>
<td style="text-align: center;">Yes</td>
</tr>
</table>
<br>
<p class="text-black-100" style="text-align: justify;">Here we test the compatibility of the back end with different input types. From the table, we can see that only Face Recognition is incompatible with image input.</p>
</div>
</div>
</section>
<section id="performance" class="projects-section bg-light">
<div class="container">
<div class="col-lg-12 text-center">
<h1 class="section-heading text-uppercase">Performance</h1>
<p class="line"></p>
<br>
<h2 align="left"><u>AzureVM Integration</u></h2>
<p class="text-black-100" style="text-align: justify;" >Here we test the efficiency of the whole image-recognition pipeline with inputs of different types and resolutions from the web application. We use a timer to measure the processing time and take the average of three measurements for each cell.</p>
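<p class="text-black-100" style="text-align: justify;">A sketch of that timing harness: each input is processed three times and the mean wall-clock time is recorded. The <code>dummy_process</code> function is a placeholder for the real recognition pipeline, which cannot be run here.</p>

```python
import time

def time_average(process, payload, runs=3):
    # Run the given processing function `runs` times on the same payload
    # and return the mean elapsed wall-clock time in seconds.
    elapsed = []
    for _ in range(runs):
        start = time.perf_counter()
        process(payload)
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / runs

def dummy_process(payload):
    time.sleep(0.01)  # stands in for the posture/face/item recognition work

average_seconds = time_average(dummy_process, "frame_480p.jpg")
```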
<br>
<table>
<tr>
<th></th>
<th>480p</th>
<th>720p</th>
<th>1080p</th>
</tr>
<tr>
<td style="text-align: center;">jpg</td>
<td style="text-align: center;">5.75sec</td>
<td style="text-align: center;">6.44sec</td>
<td style="text-align: center;">8.36sec</td>
</tr>
<tr>
<td style="text-align: center;">mp4 (3sec)</td>
<td style="text-align: center;">0min 43.6sec</td>
<td style="text-align: center;">1min 54.64sec</td>
<td style="text-align: center;">4min 01.67sec</td>
</tr>
</table>
<br>
<p class="text-black-100" style="text-align: justify;" >The table above shows the time taken to process an image and 3 seconds of video. The images have the same content at different resolutions, and likewise the videos. We use only 3 seconds of video because the image recognition takes a long time to process, with most of the cost coming from Posture Recognition: it analyses every frame of the video, so the higher the resolution, the slower the overall performance.
<br>
<br>
The following is the specification of the Azure VM:
</p>
<ul class="text-black-100" style="text-align: justify;">
<li>CPU: Intel Xeon® E5-2690 v3 2.60GHz processor</li>
<li>Ram: 56GB</li>
<li>SSD: 340GB</li>
<li>GPU: one-half K80 card with 12GB</li>
</ul>
<br>
<br>
<h2 align="left"><u>Local Live Integration</u></h2>
<br>
<table>
<tr>
<th></th>
<th>Posture</th>
<th>Face & Item</th>
</tr>
<tr>
<td style="text-align: center;">Live Streaming</td>
<td style="text-align: center;"><=0.2FPS</td>
<td style="text-align: center;">10FPS</td>
</tr>
</table>
<br>
<p class="text-black-100" style="text-align: justify;">This table shows the overall performance of real-time posture recognition and face/item recognition. Posture recognition demands a lot of computation: it takes at least 5 seconds to process one frame of live-stream or video input. This also makes faces hard to detect, since face recognition needs a smooth, fluent video feed; while the machine is running posture recognition, face recognition struggles because so few frames (at most 0.2 frames per second, i.e. 0.2 FPS) reach it.
<br>
<br>
The following is the specification of the computer running the live-streaming image recognition:
</p>
<ul class="text-black-100" style="text-align: justify;">
<li>CPU: i7 3.5GHz </li>
<li>Ram: 16GB</li>
<li>GPU: Intel Iris plus Graphics 650 1.5GB</li>
</ul>
</div>
</div>
</section>
<section id="user_acceptance" class="projects-section bg-light">
<div class="container">
<div class="col-lg-12 text-center">
<h1 class="section-heading text-uppercase">User Acceptance</h1>
<p class="line"></p>
<p class="text-black-100" style="text-align: justify;">We demonstrated the development of our Recognition API to our TA every week, using the front end built for demo purposes, and also presented it to the client at our end-of-week meetings.
This helped us keep track of our TA's suggestions for improvement and act on them accordingly.</p>
<h2 class="section-heading text-uppercase"><u>Testers</u></h2>
<p class="text-black-100" style="text-align: justify;">Our testers were our TA, the client, some of our coursemates and members of the public, giving us a wide variety of opinions on our project. Talking with our classmates
was valuable for gathering ideas on how they would approach the development. Every suggestion was weighed against our time limit and its complexity.</p>
<!-- <h2 class="section-heading text-uppercase"><u>Test Cases</u></h2> -->
<h2 class="section-heading text-uppercase"><u>Feedback</u></h2>
<p class="text-black-100" style="text-align: justify;">Our TA mentioned that we should add a loading animation while our Recognition API is analysing the input, since
large files take longer to process. Adding this animation helps the user realise that the API is still processing the input and makes our UI look more professional.</p>
<p class="text-black-100" style="text-align: justify;">
When we tested with members of the public, most were excited about the AI CCTV. We captured their faces by taking a short video and trained on the data very quickly;
they were recognised immediately after about three minutes of preparation.
</p>
<p class="text-black-100" style="text-align: justify;">
However, most people were not happy with the posture recognition. First, it looks of limited use at the moment, since it can only detect sitting, standing and lying down.
The other piece of feedback from the testers was that the posture program runs too slowly, taking several seconds to process one frame.
</p>
<p class="text-black-100" style="text-align: justify;">
The testers also said they were worried about the sensitivity of the item recognition, since it can only detect the top 5 most significant items within a frame.
This means small deadly items such as pistols or small knives may be ignored by the recognition system.
</p>
<p class="text-black-100" style="text-align: justify;">
One very positive piece of feedback was that people were impressed by the face recognition: it takes only 3 minutes to record a video and analyse the faces, after which it can detect the people in the frames with more than 50%
accuracy.
</p>
</div>
</div>
</section>
</div>
</div>
</div>
<!-- Footer -->
<footer class="bg-black small text-center text-white-50">
<div class="container">
Copyright © Sapient Platform 2018
</div>
</footer>
<!-- Bootstrap core JavaScript -->
<script src="vendor/jquery/jquery.min.js"></script>
<script src="vendor/bootstrap/js/bootstrap.bundle.min.js"></script>
<!-- Plugin JavaScript -->
<script src="vendor/jquery-easing/jquery.easing.min.js"></script>
<!-- Custom scripts for this template -->
<script src="js/grayscale.min.js"></script>
<script src="js/sidebar.js"></script>
</body>
</html>