1. Introduction

Nowadays, mobile phones are more than devices that merely satisfy the need for communication between people. Cameras and other integrated devices are found in almost every smartphone. In addition, there are telephoto, macro and fisheye lenses that can easily be attached to smartphones. Some of these lens kits are presented in Refs. [1, 2]. Fisheye lenses

© 2017 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


that are compatible with mobile phones are one such piece of new equipment. Fisheye lenses integrated with mobile phones are advantageous because they are lightweight and easy to use. Additionally, these lenses are cost efficient compared to conventional fisheye lenses. The characteristics of the Olloclip lens used in this study are presented in Ref. [3]. Cameras on mobile phones are as capable as the compact cameras that we use in our daily lives. Using smartphone cameras for image acquisition instead of conventional cameras has opened a new field of scientific study, and combining smartphone cameras with developing technologies has enabled studies that were not possible before. Chugh et al. [4] present a detailed survey of methods for detecting road conditions. Smartphone sensors are gaining importance in this field, as they are cost effective and also increase scalability. Judging from current research activity, it is certain that this area will gain more importance in the near future. The objective of the research in Ref. [5] is to improve traffic safety by collecting and distributing up-to-date road surface condition information using mobile phones. Perttunen et al. [5] present experimental results from real urban driving data that demonstrate the usefulness of the system. To monitor road and traffic conditions in such a setting, Mohan et al. [6] present Nericell, a system that performs rich sensing by piggybacking on smartphones that users carry with them in the normal course. Mohan et al. [6] focus specifically on the sensing component, which uses the accelerometer, microphone, GSM radio and/or GPS sensors in these phones to detect potholes, bumps, braking and honking. Wagner et al. [7] present two techniques for natural feature tracking in real time on mobile phones, using an approach based on heavily modified state-of-the-art feature descriptors, namely the scale invariant feature transform (SIFT) and Ferns.
Object-wise 3D reconstruction is a cardinal problem in computer vision, with much work having been dedicated to it in recent years. Unlike approaches that use global computation, Prisacariu et al. [8] adopt a local computation method based on the signed distance transform and its derivatives. With this method, 3D renderings are quickly obtained by hierarchical ray casting. The tracker achieves real-time performance on mobile phones and speeds faster than 100 fps on a PC, without requiring GPU acceleration [8]. Tanskanen et al. [9] propose the first dense stereo-based system for live interactive 3D reconstruction on mobile phones. Pan et al. [10] present a novel system that allows the generation of a coarse 3D model of the environment within several seconds on mobile smartphones. The contribution of this work is a novel approach to generate visually appealing, textured 3D models from a set of at least three panoramic images on mobile phones without the need for remote processing [10]. Wagner et al. [11] present a novel method for the real-time creation and tracking of panoramic maps on mobile phones. The maps generated with this technique are visually appealing, very accurate and allow drift-free rotation tracking. Nowadays, smartphones are widely used around the world and are generally equipped with many sensors. Almazan et al. [12] study how powerful the low-cost embedded Inertial Measurement Unit (IMU) and Global Positioning System (GPS) could become for intelligent vehicles. The main contribution is the method employed to estimate the yaw angle of the smartphone relative to the vehicle co-ordinate system. The results show that the system achieves high accuracy, with a typical error of 1%, and is immune to electromagnetic interference [12]. Recently, mobile phones have become increasingly attractive for augmented reality (AR).
The recent advent of GPS and orientation sensors on commodity mobile devices has led to the development of numerous mobile augmented reality (AR) applications and broader public awareness and use of these applications. By using the phone orientation sensor to display the appropriate subset of the panorama, orientation accuracy can be effectively increased and augmentations tightly registered with the background [13]. Kurz and Benhimane [14] presented novel approaches that use the direction of gravity, measured with inertial sensors, to improve different parts of the pipeline of handheld AR applications. Amongst all the possible applications, AR systems can be very useful as visualization tools for structural and environmental monitoring. Porzi et al. [15] presented a successful implementation of an egomotion estimation algorithm on an Android smartphone by porting the tracking module of parallel tracking and mapping (PTAM). In recent decades, many indoor positioning techniques have been researched, and some approaches have even been developed into consumer products. In Ref. [16], two devices, the iPhone 3GS and the iPhone 4, were selected to analyse the usability of their sensors for an inertial navigation system. A precise Inertial Navigation System (INS) cannot be obtained from a strapdown algorithm alone because of the inaccurate and noisy sensors used by both iPhones. In order to enhance the accuracy, several filters were used. Finally, strapdown algorithms were analysed and verified with related testing, and the best filter combination was found for each device [16]. Burgess et al. [17] expand on previous work by using a multi-floor model that takes dampening between floors into account, and optimize a target function consisting of least-squares residuals to find positions for Wi-Fi access points and the smartphone measurement locations. Burgess et al.
[18] have presented a method for simultaneously mapping the radio environment and positioning several smartphones in multi-story buildings.

26 Smartphones from an Applied Research Perspective

Computer vision applications for mobile phones are gaining increasing attention due to several practical needs resulting from the popularity of digital cameras in today's mobile phones. Hadid et al. [19] described the task of face detection and authentication on mobile phones and experimentally analysed a face authentication scheme using Haar-like features with AdaBoost for face and eye detection and a local binary pattern (LBP) approach for face authentication. Shen et al. [20] address the challenges of performing face recognition accurately and efficiently on smartphones by designing a new face recognition algorithm called opti-sparse representation classification (opti-SRC). Sparse representation classification (SRC) is a state-of-the-art face recognition algorithm that has been shown to outperform many classical face recognition algorithms in OpenCV.
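The basic LBP operator mentioned above is simple to sketch. The following is a minimal illustration (not the cited authors' implementation, and the function name is our own): each interior pixel is compared with its eight neighbours, and the comparison bits form an 8-bit code; block-wise histograms of these codes are typically concatenated into the face descriptor.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour local binary pattern for a grayscale image.

    Each interior pixel is compared with its 8 neighbours; neighbours
    that are >= the centre contribute a 1-bit, yielding an 8-bit code.
    """
    img = np.asarray(img, dtype=np.int32)
    # Neighbour offsets traversed clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

patch = np.array([[10, 20, 30],
                  [40, 50, 60],
                  [70, 80, 90]])
print(lbp_3x3(patch))  # → [[120]]
```

The code is rotation-sensitive by construction; practical face authentication pipelines usually add uniform-pattern binning and per-block histograms on top of this raw operator.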

Monitoring the aquatic environment is of great interest for the ecosystem, marine life and human health [21]. An efficient method for monitoring marine debris is the smartphone-based aquatic robot (SOAR), a low-cost robotic system that aims to monitor debris in water environments. It consists of a smartphone and a robotic fish platform: the robotic fish can move through water, while the smartphone is used to capture images [22]. Another system for detecting debris is Samba, an aquatic robot that likewise combines a smartphone with a robotic fish platform to monitor harmful marine debris. Using the camera of the smartphone, Samba can recognize aquatic debris in dynamic and complex environments [22]. Maindalkar and Ansari [23] present the design of an aquatic robot for monitoring aquatic pollutants. An Android smartphone is integrated with the aquatic robot to capture images and to acquire data from different sensors. The implemented design contains a computer vision (CV) algorithm for image processing on the OpenCV platform, and real-time pollutant detection is performed efficiently with this algorithm [23].

Muaremi et al. [24] investigate the potential of a modern smartphone and a wearable heart rate monitor for assessing affect changes in daily life. They use smartphone features and heart rate variability (HRV) measures as predictors for building classification models to discriminate among low, moderate and high perceived stress. As smartphones evolve, researchers are studying new techniques to ease human-mobile interaction. The user interface of a mobile phone can be operated through the eye tracking and blink detection functions of EyePhone. These results are preliminary, but they suggest that EyePhone is a favourable tool for driving mobile applications automatically [25]. The advent of mobile sensing technology provides a potential solution to the challenge of collecting repeated information about both behaviours and situations, such as detecting the type of situation using the sensors built into today's ubiquitous smartphones [26]. Sandstrom et al. [26] focused on using location sensors to learn the semantics of places, so that relationships between place, affect and personality could be examined. Sensor-enabled smartphones are opening a new frontier in the development of mobile sensing applications. The recognition of human activities and context from sensor data using classification models underpins these emerging applications [27]. The key contribution of community similarity networks (CSN) is that they make the personalization of classification models practical by significantly lowering the burden on the user through a combination of crowd-sourced data and networks that measure the similarity between users. Lu et al. [28] present Jigsaw, a continuous sensing engine for mobile phone applications that require continuous monitoring of human activities and context; supporting such continuous sensing applications on mobile phones is very challenging. Lu et al. [29] propose StressSense for unobtrusively recognizing stress from the human voice using smartphones.
Lane et al. [30] discuss the emerging sensing paradigms and formulate an architectural framework for discussing a number of open issues and challenges in the new area of mobile phone sensing research. Rachuri et al. [31] have presented EmotionSense, a novel system for the social psychology study of user emotion based on mobile phones, including the design of novel components for emotion and speaker recognition based on Gaussian mixture models. The driving vision in Ref. [32] is a smartphone service, called MoodSense, that can infer its owner's mood based on information already available in today's smartphones. It is suggested that user mood can be classified into four main types with 91% average accuracy, a result obtained with 3 weeks of research data and basic smartphone usage statistics. Although these results are not decisive, they show the practicability of mood inference without any microphone and/or camera with their bulky power requirements, and without social interaction [32].

Recently, calibration methods using display devices such as monitors, tablets or smartphones have come to the forefront [33]. Gruen and Akca [34] report on first experiences in the calibration and accuracy validation of mobile phone cameras. Ha et al. [33] propose a novel camera calibration method for defocused images using a smartphone, under the assumption that the defocus blur is modelled as a convolution of a sharp image with a Gaussian point spread function (PSF). The effectiveness of the proposed method has been demonstrated in several real experiments using a compact display device such as a smartphone [33]. Delaunoy et al. [35] propose a new approach to estimate the geometric extrinsic calibration of all the elements of a smartphone or tablet (such as the screen and the front and back cameras) by using a planar mirror. Saponaro and Kambhamettu [36] described a method for calibrating a smartphone camera by taking two images at different rotations while tolerating small translations. Ahn et al. [37] aimed to analyse the accuracy of smartphone images in determining the three-dimensional locations of approximated objects before a photo survey system using smartphones is developed, and then to evaluate its usability.
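The blur assumption in Ha et al. [33] can be made concrete. The sketch below illustrates only the generic defocus model (a sharp image convolved with a normalised Gaussian PSF), not the calibration method itself; the function names and the 3-sigma kernel radius are our own choices.

```python
import numpy as np

def gaussian_psf(sigma, radius=None):
    """Normalised 2D Gaussian point spread function (PSF)."""
    if radius is None:
        radius = int(3 * sigma)          # capture ~99.7% of the energy
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def defocus(sharp, sigma):
    """Model a defocused observation as (sharp image) * (Gaussian PSF).

    Direct 2D convolution with zero padding; 'sigma' controls the
    amount of defocus blur. The symmetric kernel makes correlation
    and convolution identical here.
    """
    psf = gaussian_psf(sigma)
    r = psf.shape[0] // 2
    padded = np.pad(sharp.astype(float), r)
    h, w = sharp.shape
    out = np.zeros((h, w))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += psf[dy + r, dx + r] * padded[r + dy:r + dy + h,
                                                r + dx:r + dx + w]
    return out
```

Because the PSF is normalised, a constant image is left unchanged away from the borders; in the calibration setting, sigma itself becomes an unknown estimated alongside the camera parameters.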

Fisheye lenses provide instant wide-angle images from one point with a single camera. Fisheye optics are placed onto charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) cameras without requiring any complex technology. They do not require an external mirror or rotational device; thus, these optics are small in size and do not require any maintenance [38]. They have a very short focal length, which produces a hemispherical image [39]. By using fisheye lenses, a large area of any surrounding space can be captured in a single photograph. Therefore, fisheye lenses are useful in many applications. In addition to high-quality landscape and interior visualizations (e.g. ceiling frescos of historical buildings) in commercial demonstrations or internet presentations, fisheye images are also beneficial for measurement operations [40].

The first fisheye lens was created by Hill in 1924 [41], but fisheye lenses were long not preferred in photogrammetric measurements since they provide images with large distortions and do not conform to the central projection. Using images obtained from fisheye lens imaging systems in photogrammetric measurement and modeling processes has become popular in recent years with the help of developments in software and hardware technologies, and a significant increase has since been seen in the volume of scientific research on this subject. Recently, there have been several academic studies presenting the benefits of fisheye lenses. Fisheye cameras are finding an increasing number of applications in surveillance, robotic vision, automotive rear-view imaging systems, etc., because of their wide-angle properties [42]. Fisheye lens cameras have also been used for sky observations [43], visual sun compass creation [44] and sun-path diagram derivation [45]. Beekmans et al. [46] present a complete approach for stereo cloud photogrammetry using hemispheric sky imagers. This approach combines calibration, epipolar rectification and block-based correspondence search for dense fisheye stereo reconstruction of clouds. A novel panoramic imaging system that uses a curved mirror as a simple optical attachment to a fisheye lens is given in Ref. [47]. Streckel et al. [48] describe a visual markerless real-time tracking system for augmented reality applications; the system uses a firewire camera with a fisheye lens running at 10 fps. Brun et al. [49] present a new mobile mapping system mounted on a vehicle to reconstruct outdoor environments in real time. Yamamoto et al. [50] propose a mobile web map interface that is based on the metaphor of a wired fisheye lens. The user can easily navigate the area surrounding the present location while keeping the focus within the map, enabling users to find the target quickly. Yamamoto et al.
[50] confirmed the advantages of the proposed system by evaluation experiments. The new system will be able to contribute to novel mobile web map services with fisheye views for mobile terminals such as cellular phones. Ahmad and Lima [51] present a cooperative approach for tracking a moving spherical object in three-dimensional space by a team of mobile robots equipped with sensors in a highly dynamic environment. Zheng and Li [52] explore the use of a fisheye camera to achieve scene tunnel acquisition. In Ref. [53], the authors focus on dioptric systems to implement a robot surveillance application for fast and robust tracking of moving objects in dynamic, unknown environments. Another application that uses a fisheye lens is a study examining the use of fisheye lenses as optical sensors on an unmanned aerial vehicle (UAV) platform at Queensland University of Technology in Australia [54]. Grelsson [55] used a fisheye camera for horizon detection in aerial images. Naruse et al. [56] propose a three-dimensional measurement method for underwater objects using a fisheye stereo camera. In Ref. [57], a novel technique is proposed to accurately estimate the global position of a moving car using an omnidirectional camera and an untextured three-dimensional city model. Today, one of the areas that most frequently benefits from fisheye lenses is applications combined with terrestrial laser scanners. Georgantas et al. [58] present a comparison of automatic photogrammetric techniques to terrestrial laser scanning for the three-dimensional modeling of complex interior spaces. The 8 mm fisheye lens that was used allowed the authors to acquire photos with a global view of the scene and thus with textured zones in every image, which is essential for the scale invariant feature transform (SIFT) algorithm. Image analysis tasks such as 3D reconstruction from endoscopic images require compensation for the geometric distortions introduced by the lens system [59]. Hu et al.
[60] propose effective pre-processing techniques to ensure the applicability of face detection tools to highly distorted fisheye images.

Schneider and Schwalbe [61] present the integration of a geometric model of fisheye lenses and a geometric terrestrial laser scanner model in a bundle block adjustment. Fisheye projection functions are designed such that a greater portion of the scene is projected onto the image sensor, at the expense of introducing (often considerable) radial distortion [62]. A fisheye lens camera should be calibrated before it is used in applications that require high accuracy [63]. There are different studies in the literature that focus on the calibration of fisheye lenses. Abraham and Forstner [38] presented rigorous mathematical models for the calibration of a stereo system composed of two fisheye lens cameras and for the epipolar rectification of the images acquired by this dual system.

Arfaoui and Thibault [64] have described a method using a compact calibration object for fisheye lens calibration. The setup generated a robust and accurate virtual calibration grid, and the calibration was performed by rotating the camera around two axes. The experimental results and the comparison with a 3D calibration object showed that the virtual grid method is efficient and reliable [64]. Kim and Paik [65] presented a novel 3D simulation method for fisheye lens distortion in a vehicle rear-view camera. The proposed method creates a geometrically distorted image of an object in 3D space according to the lens specifications. The proposed simulation method can be applied to designing a general optical imaging system for intelligent surveillance as well as a vehicle rear-view backup camera [65]. Torii et al. [66] present a pipeline for camera pose and trajectory estimation, and image stabilization and rectification, for dense as well as wide-baseline omnidirectional images. The experiments with real data demonstrate the use of the proposed image stabilization method. Five image sequences of a city scene captured by a single hand-held fisheye lens camera are used as input [66].

In Ref. [67], a Kodak DSC 14 Pro with a Nikkor 8 mm fisheye lens is calibrated with an equidistant projection. In addition to decentring, symmetric radial and affinity distortion models, precise mathematical models were used, based on the stereographic, equidistant, orthogonal and equisolid-angle projections. Kannala and Brandt [68] propose a generic camera model, suitable for fisheye lens cameras as well as for conventional and wide-angle lens cameras, and a calibration method for estimating the parameters of the model. Fisheye lenses are not perspective lenses: their image resolution is not fixed (univocal), and illumination is not distributed homogeneously [69]. Up to now, many researchers have considered the relationship between the distorted and undistorted radius in the image plane while ignoring the variation of the angle. Zhu et al. [70] present a fisheye camera model based on the refractive nature of the incoming rays and estimate the model parameters without calibration objects using Micusik's method [71]. In photogrammetry, the collinearity mathematical model, based on perspective projection combined with lens distortion models, is generally used in the camera calibration process. However, fisheye lenses are designed to follow different spherical projection models, namely the stereographic, equidistant, orthogonal and equisolid-angle projections [63]. The calibration results of a Fuji-Finepix S3pro camera with a Bower-Samyang 8 mm lens were assessed with the help of precise mathematical models. The Bower-Samyang 8 mm is cheaper than other fisheye lenses and, unlike the others, is based on stereographic projection [63].
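The spherical projection models named above differ only in how the incidence angle θ of an incoming ray (measured from the optical axis) is mapped to a radial distance r on the image plane. As a rough comparison, the standard textbook formulas can be evaluated side by side; the focal length below is an arbitrary illustrative value, not a calibration result from any cited study.

```python
import numpy as np

f = 8.0  # focal length in mm, e.g. an 8 mm fisheye lens

# Radial image distance r as a function of incidence angle theta (rad).
models = {
    "perspective":   lambda t: f * np.tan(t),         # conventional lens
    "stereographic": lambda t: 2 * f * np.tan(t / 2),
    "equidistant":   lambda t: f * t,
    "equisolid":     lambda t: 2 * f * np.sin(t / 2),
    "orthogonal":    lambda t: f * np.sin(t),
}

for name, r in models.items():
    print(f"{name:14s} r(45 deg) = {r(np.radians(45)):6.3f} mm")
```

The comparison makes the design trade-off visible: the perspective mapping diverges as θ approaches 90 degrees, while all four fisheye mappings remain finite, which is exactly how a hemispherical field of view fits onto a finite sensor at the cost of radial distortion.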

and robust tracking of moving objects in dynamic, unknown environments. Another application that uses a fisheye lens is research examining the use of fisheye lenses as optical sensors on an unmanned aerial vehicle (UAV) platform at the Queensland University of Technology in Australia [54]. Grelsson [55] used a fisheye camera for horizon detection in aerial images. Naruse et al. [56] propose a three-dimensional measurement method for underwater objects using a fisheye stereo camera. In Ref. [57], a novel technique is proposed to accurately estimate the global position of a moving car using an omnidirectional camera and an untextured three-dimensional city model. Today, one of the areas that most frequently benefits from fisheye lenses is applications combined with terrestrial laser scanners. Georgantas et al. [58] present a comparison of automatic photogrammetric techniques with terrestrial laser scanning for three-dimensional modelling of complex interior spaces. The 8 mm fisheye lens they used allowed photos to be acquired with a global view of the scene, and thus with textured zones in every image, which is essential for the scale invariant feature transform (SIFT) algorithm. Image analysis tasks such as 3D reconstruction from endoscopic images require compensation of the geometric distortions introduced by the lens system [59]. Hu et al. [60] propose effective pre-processing techniques to ensure the applicability of face detection tools to highly distorted fisheye images.

Schneider and Schwalbe [61] present the integration of a geometric model of fisheye lenses and a geometric terrestrial laser scanner model in a bundle block adjustment. Fisheye projection functions are designed such that a greater portion of the scene is projected onto the image sensor, at the expense of introducing (often considerable) radial distortion [62]. A fisheye lens camera should be calibrated before being used in applications that require high accuracy [63]. There are different studies in the literature that focus on the calibration of fisheye lenses. Abraham and Förstner [38] presented rigorous mathematical models for the calibration of a stereo system composed of two fisheye lens cameras and for the epipolar rectification of the images acquired by this dual system.

Arfaoui and Thibault [64] described a method using a compact calibration object for fisheye lens calibration. The setup generated a robust and accurate virtual calibration grid, and the calibration was performed by rotating the camera around two axes. The experimental results and a comparison with a 3D calibration object showed that the virtual grid method is efficient and reliable [64]. Kim and Paik [65] presented a novel 3D simulation method for fisheye lens distortion in a vehicle rear-view camera. The proposed method creates a geometrically distorted image of an object in 3D space according to the lens specifications. The proposed simulation method can be applied to designing a general optical imaging system for intelligent surveillance as well as a vehicle rear-view backup camera [65]. Torii et al. [66] present a pipeline for camera pose and trajectory estimation, and image stabilization and rectification for dense as well as wide-baseline omnidirectional images. Their experiments with real data demonstrate the use of the proposed image stabilization method; five image sequences of a city scene captured by a single hand-held fisheye lens camera are used as input [66].

In Ref. [67], a Kodak DSC 14 Pro with a Nikkor 8 mm fisheye lens is precisely calibrated with an equidistant projection in combination with decentring, symmetric radial and affinity distortion models.
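The calibration approaches surveyed above typically combine a base fisheye projection with additional symmetric radial distortion terms that are estimated, for example, in a bundle adjustment. The following is a minimal, hypothetical sketch of such a forward model (an equidistant base with odd-order polynomial terms); the function name, focal length and coefficient values are illustrative assumptions, not parameters of any lens discussed here:

```python
import math

def project_equidistant(point, f, cx, cy, k1=0.0, k2=0.0):
    """Project a 3D point (camera coordinates, z > 0) to pixel coordinates
    with an equidistant fisheye model plus symmetric radial distortion
    terms k1, k2 (all values illustrative)."""
    x, y, z = point
    theta = math.atan2(math.hypot(x, y), z)   # angle between ray and optical axis
    # distorted radial distance: r = f * (theta + k1*theta^3 + k2*theta^5)
    r = f * (theta + k1 * theta**3 + k2 * theta**5)
    phi = math.atan2(y, x)                    # azimuth is preserved by the model
    return (cx + r * math.cos(phi), cy + r * math.sin(phi))

# a point 30 degrees off-axis, with hypothetical camera parameters
u, v = project_equidistant((math.tan(math.radians(30)), 0.0, 1.0),
                           f=700.0, cx=960.0, cy=540.0, k1=-0.01)
```

In a real calibration, f, the principal point (cx, cy) and the distortion coefficients would be the unknowns estimated from image observations of a test field.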


Most fisheye lenses are technically based on equidistant or equisolid-angle projection. Lenses with equisolid-angle projection geometry were constructed first, followed by diagonal fisheye lenses; with these, the distortion at the image edges is more significant than with fisheye lenses based on equidistant projection. Orthographic projection geometry can only be realized with sophisticated optical construction, and stereographic projection is not practically realizable [67]. Among the other models proposed, an important one is the equidistant model. This model proposes that the distance between an image point and the centre of radial distortion is proportional to the angle between the corresponding three-dimensional point, the optical centre and the optical axis [72]. Equidistant fisheye lenses are often used for scientific measurement where the measurement of angles is necessary; hence such a lens is also sometimes referred to as an equiangular fisheye lens [73]. Perhaps the most common model is the equidistance projection [68]. Friel et al. [74] use the equidistance projection equation to describe the radial distortion, as lenses of this type are typically among the most common and inexpensive. The work described in Ref. [74] shows that it is possible to carry out automatic calibration of fisheye lenses using information derived from real-world automotive scenes, and to obtain calibration data to a high degree of accuracy.
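The proportionality between image radius and incidence angle can be made concrete by comparing the equidistant mapping r = f·θ with the perspective (pinhole) mapping r = f·tan θ. The sketch below uses an illustrative focal length, not a parameter of any lens discussed in this chapter:

```python
import math

def radius_equidistant(f: float, theta: float) -> float:
    """Equidistant model: image radius grows linearly with the incidence
    angle theta (radians) between the incoming ray and the optical axis."""
    return f * theta

def radius_perspective(f: float, theta: float) -> float:
    """Perspective model for comparison: radius grows with tan(theta)
    and diverges as theta approaches 90 degrees."""
    return f * math.tan(theta)

f = 8.0  # illustrative focal length in mm (hypothetical)
for deg in (10, 45, 80):
    theta = math.radians(deg)
    print(deg, round(radius_equidistant(f, theta), 2),
          round(radius_perspective(f, theta), 2))
```

The comparison shows why equidistant lenses can image rays near 90° off-axis at a finite radius, whereas the perspective radius grows without bound, and why measured radii can be converted directly to angles.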

The main purpose of this study is to test fisheye lens equipment used with mobile phones. Mobile phone imaging with such additional hardware has become increasingly popular, not only outdoors but also in indoor applications; the hardware properties of these wide-angle optics will therefore be relevant to photogrammetric documentation with mobile phone imaging in the near future. Since fisheye lenses integrated with mobile phones are lightweight, easy to use and cost efficient, we examine whether the fisheye lens and mobile phone combination can be used photogrammetrically, and if so, with what results. In this study, standard calibrations of the 'Olloclip 3 in one' fisheye lens used with an iPhone 4S mobile phone and the 'Nikon FC-E9' fisheye lens used with a Nikon Coolpix8700 are compared based on the equidistant model. The results of these calibrations are analysed by means of photogrammetric bundle block adjustment. The geometric properties of these wide-angle lenses are of particular importance in photogrammetric measurement assessment, and this study suggests a precalibration process for such hardware in a photogrammetric test field. Although there are many geometric camera calibration publications in the literature, none of them compares a mobile phone fisheye lens kit with a conventional fisheye lens on the fundamentals of photogrammetric measurement assessment. The results of this photogrammetric process are therefore also compared with those of conventional wide-angle hardware in this paper.

The second section of this chapter briefly describes fisheye projection models, and the third section describes the equidistant model. The fourth section reports an empirical study on the calibration of the iPhone 4S camera with the Olloclip 3 in one fisheye lens and of the Nikon Coolpix8700 camera with the FC-E9 fisheye lens using the equidistant model. The fifth section interprets the experimental results, and the sixth section concludes the study.
