Journal Article

Citation

Yeh CC, Jhang KJ, Chang CC. Math. Biosci. Eng. 2019; 17(1): 266-285.

Affiliation

Department of Computer Science and Engineering, National Taiwan Ocean University, Keelung 20224, Taiwan.

Copyright

(Copyright © 2019, American Institute of Mathematical Sciences)

DOI

10.3934/mbe.2020015

PMID

31731351

Abstract

Indoor positioning technologies have gained great interest from both industry and academia. A variety of services and applications can be built on the availability and accessibility of indoor positioning information, for example indoor navigation and various location-based services. Different approaches have been proposed to provide indoor positioning information to users, in which an underlying system infrastructure is usually assumed to be deployed in advance. One common strategy is to deploy a set of active sensor nodes, such as WiFi APs and Bluetooth transceivers, in the indoor environment to serve as reference landmarks. The user's current location can then be obtained directly or indirectly from the active sensor signals collected by the user. Unlike conventional infrastructure-based approaches, which add sensor devices to the environment, we utilize objects already available in the environment as location landmarks. Leveraging widely available smartphones as the user-side equipment and cutting-edge deep-learning technology, we investigate the feasibility of an infrastructure-free intelligent indoor positioning system based on visual information only. The proposed scheme has been verified by a real case study that provides indoor positioning information to users in Taipei Main Station, one of the busiest transportation stations in the world. We use existing pedestrian directional signage as location landmarks, covering all 52 pedestrian directional signs in the testing area. The Google Object Detection framework is applied to detect and recognize the pedestrian directional signs. The experimental results show that the proposed scheme achieves up to 98% accuracy in identifying the 52 pedestrian directional signs across three test data sets that together contain 6,341 test images. Detailed discussions of the system design and the experiments are also presented in the paper.
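
The pipeline summarized above (photograph a sign with a smartphone, detect and classify it with a deep-learning detector, then map the recognized sign to a known position) could be sketched roughly as below. This is not the authors' code; the model path, the sign-to-location table, and the score threshold are illustrative assumptions, and it presumes a detector exported with the TensorFlow Object Detection API (the "Google Object Detection framework" mentioned in the abstract), e.g. a Faster R-CNN fine-tuned on the 52 signs.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

# Hypothetical lookup table: detected sign class ID -> known location label.
SIGN_TO_LOCATION = {
    1: "B1 concourse, near exit M3",
    2: "Main hall, west corridor",
    # ... one entry per directional sign used as a landmark
}

def locate_from_image(image_path, detect_fn, score_threshold=0.5):
    """Run the sign detector on one smartphone photo and return the
    location associated with the highest-scoring detected sign."""
    image = np.array(Image.open(image_path).convert("RGB"))
    input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]  # add batch dim
    detections = detect_fn(input_tensor)

    scores = detections["detection_scores"][0].numpy()
    classes = detections["detection_classes"][0].numpy().astype(int)

    best = int(np.argmax(scores))
    if scores[best] < score_threshold:
        return None  # no sign recognized with enough confidence
    return SIGN_TO_LOCATION.get(classes[best])

# Usage (paths are hypothetical): detect_fn would be a detector exported
# with the TensorFlow Object Detection API.
# detect_fn = tf.saved_model.load("exported_sign_detector/saved_model")
# print(locate_from_image("station_photo.jpg", detect_fn))
```

Because each sign is tied to a fixed, known position, recognizing a sign yields landmark-level positioning without any added infrastructure, which is the core idea of the infrastructure-free scheme described in the abstract.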


Language: en

Keywords

R-CNN ; deep-learning ; indoor positioning ; signage detection ; smart environment
