


This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration.

The proposed method consists of three steps: moving human detection, tracking-based automatic camera calibration, and human height estimation with error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics. Large-scale video analysis using multiple cameras is gaining attention in visual surveillance applications. In particular, as the use of object-based video analysis increases, the demand for extraction of object information is growing.

Since object information changes depending on camera parameters such as the installation location, viewing angle, and focal length, various normalized object feature extraction methods have been proposed.

This method estimates human moving trajectories by tracking and recognizing the human motion. Chu and Yang detected a moving object using a background model and estimated the object velocity using an object with a previously known length [ 5 ]. However, this method uses two or more cameras for the depth estimation. In order to estimate 3D information using a single camera, camera calibration methods [ 8 — 11 ] have been proposed.

Arfaoui and Thibault used a diffractive virtual grid to estimate camera parameters for a fish-eye lens camera [ 12 ].

Kual-Zheng proposed an object height estimation method that extracts feature points and estimates vanishing points using a special pattern such as a cubic box [ 16 ]. Since this method uses a special pattern board for the camera calibration, successful analysis is difficult when the size of a face is small.

Zhao and Hu used a pure translation to calibrate a camera [ 19 ], and Li et al. However, this method cannot accurately estimate vanishing points when the background does not include sufficient pairs of parallel lines. User input of the human height is another burden of this method. To solve the abovementioned problems, the proposed method calibrates a camera by detecting and tracking the object region.

In addition, a projective matrix, which is a result of the camera calibration, is applied to the proposed human height estimation method, and then the estimated human heights are accumulated and corrected using the Random Sample Consensus (RANSAC) algorithm. As a result, the proposed method can estimate the normalized human height using an uncalibrated camera for a visual surveillance system. This paper is organized as follows. Section 2 describes the camera projective model, and Section 3 presents the proposed camera calibration and human height estimation algorithms.

Experimental results are shown in Section 4 , and Section 5 concludes the paper. An object is projected onto a two-dimensional image with different sizes depending on the distance between the object and a camera. In order to estimate the human height using a single camera, the projective relationship between the 3D space information and the 2D image plane is needed.

The pin-hole camera projective model [ 22 ] is given as $s\,\mathbf{m} = \mathbf{K}\,[\mathbf{R} \mid \mathbf{t}]\,\mathbf{M}$, where $\mathbf{M}$ represents the coordinate in the 3D space, the matrix $\mathbf{K}$ contains the intrinsic camera parameters, $\mathbf{R}$ represents the camera rotation matrix, $\mathbf{t}$ represents the camera translation vector, $\mathbf{m}$ represents the coordinate in the 2D image plane, and $s$ represents the scale factor.

The camera intrinsic parameter matrix $\mathbf{K}$ is determined by the focal length $f$, the principal point $(u_0, v_0)$, the skewness $\gamma$, and the aspect ratio $\alpha$ as
$$\mathbf{K} = \begin{bmatrix} f & \gamma & u_0 \\ 0 & \alpha f & v_0 \\ 0 & 0 & 1 \end{bmatrix}.$$
To simplify the camera calibration process, the proposed method assumes that the skewness is zero, the principal point is the center of the image, and the aspect ratio is one. In the same manner, the camera roll is assumed to be zero, and the camera translation is assumed to be only along the vertical axis (the camera height). Using the vanishing points and lines, Liu et al. estimated the camera parameters. In order to estimate the physical size of an object in the 3D space using the object size in the 2D image, the proposed method detects the moving human to estimate the 3D space information.
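As a concrete illustration of this projective model, the following minimal Python/NumPy sketch builds the simplified intrinsic matrix under the assumptions above and projects a 3D point into the image; the symbol names and all numeric values are illustrative choices, not values from the paper.

```python
import numpy as np

def intrinsic_matrix(f, image_size):
    """Simplified intrinsics assumed in the text: zero skew, unit aspect
    ratio, and principal point at the image centre."""
    w, h = image_size
    return np.array([[f, 0.0, w / 2.0],
                     [0.0, f, h / 2.0],
                     [0.0, 0.0, 1.0]])

def project_point(K, R, t, X_world):
    """Pin-hole projection s*m = K [R | t] M of a 3D point onto the image."""
    x_cam = R @ np.asarray(X_world, dtype=float) + t   # camera coordinates
    m = K @ x_cam                                      # homogeneous image point
    return m[:2] / m[2]                                # divide out the scale s

# Illustrative camera: rotated about the x-axis by 20 degrees, 3 m translation.
theta = np.deg2rad(20.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
t = np.array([0.0, 0.0, 3.0])
K = intrinsic_matrix(f=800.0, image_size=(1280, 720))
print(project_point(K, R, t, [0.5, 4.0, 0.0]))
```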

To estimate the human height, the proposed method assumes that the foot position lies on a flat ground plane. As a result, the foot position in the 2D image plane is inversely projected into the 3D space to obtain the human height information.

The proposed human height estimation algorithm is an extended version of the method of Jung et al. Figure 1 shows the block diagram of the proposed human height estimation method, whose main quantities are the current input frame, the detected moving human region, the human tracking region, the projective matrix, the per-frame human height estimate, and the error-corrected height estimate.

The proposed method first detects a moving human to estimate its height. If the detected human region includes background pixels or misses some part of the human body, an accurate estimation of the human height is difficult. For this reason, the proposed method generates a background using the Gaussian mixture model (GMM) [ 25 , 26 ] and then detects and labels the foreground image.

The regions that do not have enough pixels in the foreground image are removed to reduce noise. The detected foreground regions include not only single human regions but also groups of humans, possibly with nonhuman objects, which makes human tracking difficult and, as a result, increases the human height estimation error. For that reason, the proposed method classifies each region according to whether it is a single human region or not. The proposed classification method uses the combined histogram of oriented gradients and local binary pattern (HOG-LBP) descriptor and a support vector machine- (SVM-) based human detection method [ 27 ].
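The sketch below shows how such a detection-and-classification stage could be assembled. It substitutes OpenCV's MOG2 background subtractor and its stock HOG+SVM pedestrian detector for the paper's GMM model and HOG-LBP+SVM classifier, and the video path and noise threshold are placeholders.

```python
import cv2

MIN_REGION_PIXELS = 500          # assumed noise threshold, not from the paper

bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("surveillance.mp4")   # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Foreground mask from the GMM background model; drop shadow pixels (127).
    fg = bg_model.apply(frame)
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]

    # Label connected foreground regions and remove small (noisy) ones.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    regions = [stats[i, :4] for i in range(1, n)
               if stats[i, cv2.CC_STAT_AREA] >= MIN_REGION_PIXELS]

    # Stock HOG+SVM pedestrian detector (stand-in for HOG-LBP + SVM).
    people, _ = hog.detectMultiScale(frame, winStride=(8, 8))

    # Classify each region: exactly one detection inside -> single human (red),
    # otherwise non-single-human (black).
    for (x, y, w, h) in regions:
        hits = sum(1 for (px, py, pw, ph) in people
                   if x <= px + pw / 2 <= x + w and y <= py + ph / 2 <= y + h)
        color = (0, 0, 255) if hits == 1 else (0, 0, 0)
        cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), color, 2)

    cv2.imshow("region classification", frame)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```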

Using the detected human information, each foreground region is classified into one of two types. The first type is a single human region that contains exactly one human.

The second type is a non-single-human region that contains either no human or multiple humans. Figure 2 shows the moving human region detection and classification results. Figures 2(b) and 2(c), respectively, show the foreground image and the human detection result for the video frame shown in Figure 2(a). Figure 2(d) shows the region classification result, where single human and non-single-human regions are represented by red and black boxes, respectively.

The proposed method tracks the human and estimates the height in a video using the detected single human region. Although the Kalman filter tracker [ 28 ] is a popular stochastic tracking method, it assumes a linear motion model and therefore cannot reliably track a nonlinearly moving object. To solve this problem, the proposed method uses a particle filter tracker [ 29 ]. In a surveillance video, human appearance, such as size and shape, changes while the human is walking. For this reason, the model-based tracking method [ 30 ] models the target human using a color histogram to deal with the dynamic characteristics of the moving human.

In the proposed method, the HSV color histogram is used to represent the human region to reduce the sensitivity to illumination changes. The particle filter tracking result may include stochastic error and does not tightly enclose the entire human region.
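A minimal sketch of such an HSV colour-histogram appearance model is given below; the bin counts and the Bhattacharyya-distance-based particle weighting are assumptions rather than the paper's exact choices.

```python
import cv2
import numpy as np

def hsv_histogram(frame_bgr, box):
    """Normalized H-S colour histogram of a rectangular human region."""
    x, y, w, h = box
    patch = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([patch], [0, 1], None, [16, 16], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def particle_weight(frame_bgr, particle_box, target_hist):
    """Weight a particle by colour similarity to the tracked human's model
    (Bhattacharyya distance; the likelihood shape below is an assumption)."""
    d = cv2.compareHist(hsv_histogram(frame_bgr, particle_box), target_hist,
                        cv2.HISTCMP_BHATTACHARYYA)      # 0 means identical
    return float(np.exp(-20.0 * d * d))
```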

Moreover, if the number of particles is increased to reduce the tracking error, the time complexity also increases. To solve these problems, the proposed method refines the tracked human region by matching each tracked human with the detected moving human regions: for the ith tracked human, the matching score of the jth moving human region detected using the background model is the number of pixels shared by that moving region and the tracking region of the ith human, normalized by the number of pixels in the moving region.
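The sketch below implements one plausible version of this matching rule, scoring each tracker-region pair by the overlap normalized by the moving region's pixel count; the minimum acceptance ratio is an assumed value.

```python
import numpy as np

def overlap_ratio(track_box, region_mask):
    """Pixels shared by the tracking rectangle and one detected moving region,
    normalized by the number of pixels in the moving region."""
    x, y, w, h = track_box
    shared = int(region_mask[y:y + h, x:x + w].sum())
    return shared / max(int(region_mask.sum()), 1)

def match_trackers_to_regions(track_boxes, region_masks, min_ratio=0.3):
    """Assign each tracker the moving region it overlaps most.
    min_ratio is an assumed acceptance threshold."""
    matches = {}
    for i, box in enumerate(track_boxes):
        ratios = [overlap_ratio(box, mask) for mask in region_masks]
        if ratios and max(ratios) >= min_ratio:
            matches[i] = int(np.argmax(ratios))   # tracker i -> region index
    return matches
```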

After matching, the proposed method creates additional trackers for unmatched single human regions. Figure 3 shows the human tracking results of the proposed method. In Figure 3, the red box represents the particle filter tracking result for the moving human, and the white box represents the optimal rectangular region that encloses the detected human region. The normalized human height can be estimated in meters in the 3D space by estimating the camera parameters. For the automatic camera calibration, the vanishing points and line should be estimated, conventionally using parallel lines in the image.

However, this approach cannot calibrate the camera if the background structure does not contain a sufficient number of parallel lines. For automatic calibration without using parallel lines, the proposed calibration method uses the moving human information [ 23 ]. Both the vanishing points and the horizontal line can be estimated using the detected foot and head positions.

The vertical vanishing point can be estimated as the intersection of the foot-to-head lines, each of which connects the foot and head points of the corresponding human region. The horizontal line is estimated using two or more horizontal vanishing points, and a horizontal vanishing point is estimated as the intersection of the foot-to-foot and head-to-head lines.

The foot-to-foot and head-to-head lines connect, respectively, the foot points and the head points observed at two different positions. Figure 4 illustrates the human-based vanishing point and line estimation process. The proposed method computes the foot point in the 3D space for the normalized human height estimation using multiple videos acquired by different cameras. To calculate the foot point in the 3D space, the foot point in the 2D image is inversely projected into the 3D space.
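The vanishing point geometry above reduces to line intersections in homogeneous coordinates, as in the following sketch; the foot and head coordinates are made-up example values.

```python
import numpy as np

def hom(p):
    """2D image point -> homogeneous coordinates."""
    return np.array([p[0], p[1], 1.0])

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross(hom(p), hom(q))

def intersection(l1, l2):
    """Intersection of two homogeneous lines, returned as a 2D point."""
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# Two observations of the same walking person: (foot, head) image points.
foot1, head1 = (420.0, 560.0), (430.0, 380.0)
foot2, head2 = (610.0, 500.0), (617.0, 350.0)

# Vertical vanishing point: intersection of the two foot-to-head lines.
v_vert = intersection(line_through(foot1, head1), line_through(foot2, head2))

# One horizontal vanishing point: intersection of the foot-to-foot and
# head-to-head lines; two or more such points define the horizontal line.
v_horiz = intersection(line_through(foot1, foot2), line_through(head1, head2))
print(v_vert, v_horiz)
```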

The back-projected 3D point lies on the line that connects the human foot point in the 3D space with the corresponding point on the image sensor. Since the camera height is estimated with respect to the ground plane that contains the human foot points, the 3D foot point can be obtained by normalizing the inversely projected point along the vertical axis so that it lies on the ground plane.

As a result, the foot point in the 3D space is calculated from the foot point in the 2D image and the projective matrix by inversely projecting the 2D foot point and normalizing the result with respect to its vertical-axis coordinate. The reference head point in the 3D space can then be estimated by translating the 3D foot point in the direction perpendicular to the ground plane. Using the reference head point in the 3D space, the corresponding reference head point in the 2D image is obtained by projecting it with the projective matrix.
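One concrete way to realize this back-projection is sketched below, assuming a world frame in which the ground plane is Z = 0; P denotes the 3×4 projective matrix from the calibration step, and the default reference height of 1.8 m is a placeholder rather than the paper's value.

```python
import numpy as np

def ground_point_from_pixel(P, pixel):
    """Back-project a 2D foot pixel through the 3x4 projective matrix P and
    intersect the viewing ray with the ground plane Z = 0 (assumed frame)."""
    M, p4 = P[:, :3], P[:, 3]
    C = -np.linalg.inv(M) @ p4                                  # camera centre
    d = np.linalg.inv(M) @ np.array([pixel[0], pixel[1], 1.0])  # ray direction
    s = -C[2] / d[2]                                            # reach Z = 0
    return C + s * d

def project(P, X):
    """Project a 3D point with the projective matrix P."""
    m = P @ np.append(X, 1.0)
    return m[:2] / m[2]

def reference_head_pixel(P, foot_pixel, reference_height=1.8):
    """Reference head point in the image: lift the 3D foot point vertically by
    the reference height (1.8 m is an assumed placeholder) and re-project it."""
    foot_3d = ground_point_from_pixel(P, foot_pixel)
    head_3d = foot_3d + np.array([0.0, 0.0, reference_height])
    return project(P, head_3d)
```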

Using the reference head point, the human height can be estimated as $h = h_r\,(y_h - y_f)/(y_r - y_f)$, where $h$ represents the estimated human height, $h_r$ the reference height, $y_r$ the y-axis coordinate of the reference head point in the 2D image, $y_h$ the y-axis coordinate of the human head point in the 2D image, and $y_f$ the y-axis coordinate of the human foot point in the 2D image.
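A sketch of this relation follows: the reference height is scaled by a ratio of image-plane vertical offsets. The formula is an assumed form consistent with the quantities listed above, and the default reference height of 1.8 m is a placeholder.

```python
def estimate_height(y_foot, y_head, y_ref_head, reference_height=1.8):
    """Human height from image-plane y coordinates (assumed ratio form):
    scale the reference height by (y_head - y_foot) / (y_ref_head - y_foot)."""
    return reference_height * (y_head - y_foot) / (y_ref_head - y_foot)

# Example: the detected head lies a little below the reference head point,
# so the estimate comes out a little below the 1.8 m placeholder reference.
print(round(estimate_height(y_foot=560.0, y_head=395.0, y_ref_head=380.0), 2))
```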

In this work, the reference height of 1. Figure 5 shows the human height estimation model using reference height. The accuracy of human height estimation depends on the detected human region. To reduce the human height estimation error, the proposed method accumulates the estimated human heights in each frame and corrects the errors using the RANSAC algorithm.

The first step randomly samples a subset of the accumulated height estimates and computes their average height. The next step computes the sum of squared differences (SSD) between this average height and each estimated human height. The first and second steps are repeated a fixed number of times to obtain the error-corrected height.
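The RANSAC-style correction described above could be sketched as follows; the sample size, iteration count, and inlier tolerance are assumed values, not taken from the paper.

```python
import numpy as np

def ransac_height(heights, n_iter=100, sample_size=5, inlier_tol=0.05, rng=None):
    """Robust height estimate from per-frame measurements (RANSAC-style).

    Repeats the two steps described above: average a random subset, then
    score every measurement by its squared difference from that average
    and keep the largest consensus set."""
    rng = np.random.default_rng() if rng is None else rng
    heights = np.asarray(heights, dtype=float)
    best_inliers = heights                       # fallback: use everything
    best_count = 0
    for _ in range(n_iter):
        sample = rng.choice(heights, size=min(sample_size, len(heights)),
                            replace=False)
        mean = sample.mean()
        sq_diff = (heights - mean) ** 2          # per-frame SSD terms
        inliers = heights[sq_diff < inlier_tol ** 2]
        if len(inliers) > best_count:
            best_count, best_inliers = len(inliers), inliers
    return best_inliers.mean()                   # error-corrected height

# Example: noisy per-frame estimates with a few outliers from occlusion.
measured = [1.72, 1.70, 1.74, 1.71, 1.40, 1.73, 2.05, 1.69]
print(round(ransac_height(measured), 2))
```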

The proposed human height estimation results are shown in this section. The test video was acquired using an uncalibrated camera looking down at the ground plane at the height in between 2. In addition, the Performance Evaluation of Tracking and Surveillance (PETS) dataset [ 32 ] was used to test the proposed algorithm. Figure 6 shows the result of human height estimation using the proposed method. Although the size of the human in the 2D image differs across scenes, the normalized height is correctly estimated in all scenes using the prespecified height of the reference object for the camera calibration.

Figure 7 shows the results of error correction of the estimated height. Figure 7(a) shows the human height estimation error caused by a change in the human pose. This error is corrected by the proposed method, as shown in Figure 7(b). Figure 8(a) shows the human height estimation error caused by an occlusion.

The height estimation error of the occluded human is reduced by the proposed method, as shown in Figure 8(b). Figure 9 shows the estimation results for multiple human heights. As shown in Figure 9(a), the heights of well-separated humans are estimated individually. In Figures 9(b), 9(c), and 9(d), some humans are adjacent to each other and form merged multihuman regions.