Tomokazu Sato, Ph.D.
Professor, Faculty of Data Science, Shiga University

March 1977: Born in Tanabe, Wakayama Prefecture, Japan
1995.4 - 1999.3: Bachelor's course student
1999.4 - 2001.3: Master's course student
2001.4 - 2003.3: Doctoral course student
2003.4 - 2011.4: Assistant professor, Nara Institute of Science and Technology, Japan
2010.3 - 2011.3: Visiting researcher, CMP, Czech Technical University in Prague
2011.5 - 2017.12: Associate professor, Nara Institute of Science and Technology, Japan
2018.1 - : Professor, Shiga University
Research interests:
Structure from motion for omni-directional video
Abstract: A multi-camera type of omni-directional camera has the advantages of high resolution and almost uniform resolution in every viewing direction. In this research, an extrinsic camera parameter recovery method for a moving omni-directional multi-camera system (OMS) is proposed. First, we discuss the perspective n-point (PnP) problem for an OMS, and then describe a practical method for estimating extrinsic camera parameters from multiple image sequences obtained by an OMS. The proposed method is based on using both the structure-from-motion and the PnP techniques.
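As an illustration of the PnP step, the sketch below uses OpenCV's solvePnP for a single pre-calibrated perspective camera of the rig. This is only a simplified example, not the proposed method, which formulates the problem over rays observed by all cameras of the OMS.

```python
# Minimal sketch of the PnP step for one pre-calibrated camera of the rig.
# The full OMS method generalizes this to rays observed by all cameras.
import numpy as np
import cv2

def estimate_extrinsics(object_points, image_points, K):
    """Estimate a camera pose (R, t) from 3-D/2-D correspondences.

    object_points: (N, 3) known 3-D positions, N >= 4
    image_points:  (N, 2) their projections in the image
    K:             (3, 3) intrinsic matrix (pre-calibrated)
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, dtype=np.float64),
        np.asarray(image_points, dtype=np.float64),
        K, None, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec               # world-to-camera transform
```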
Depth estimation for omni-directional video
Abstract: This paper proposes a method for estimating depth from long-baseline image sequences captured by a pre-calibrated moving omni-directional multi-camera system (OMS). Our idea for estimating an omni-directional depth map is very simple: counting interest points in the images is integrated into the framework of conventional multi-baseline stereo. Even with this simple algorithm, depth can be determined without computing similarity measures such as SSD and NCC that have been used for traditional stereo matching. The proposed method achieves depth estimation that is robust against image distortions and occlusions, with lower computational cost than the traditional multi-baseline stereo method. These advantages are well suited to the characteristics of omni-directional cameras.
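The counting idea can be sketched as below for a simplified pinhole setting (variable names and the pixel radius are illustrative assumptions): each candidate depth is scored by how many of the other images have a detected interest point near the reprojection of the hypothesized 3-D point, and the best-supported depth wins.

```python
# Simplified, pinhole-style sketch of counting-based multi-baseline depth:
# a depth hypothesis is supported when detected interest points fall near
# its reprojection in the other images; no SSD/NCC cost is computed.
import numpy as np

def depth_by_counting(center, ray, cameras, keypoints, depths, radius=2.0):
    """center:    reference camera center, shape (3,)
    ray:       unit viewing direction of the reference pixel, shape (3,)
    cameras:   list of (K, R, t) for the other images
    keypoints: list of (M_i, 2) arrays of interest-point positions
    depths:    1-D array of candidate depths
    Returns the candidate depth supported by the most interest points."""
    votes = np.zeros(len(depths), dtype=int)
    for j, d in enumerate(depths):
        X = center + d * ray                       # hypothesized 3-D point
        for (K, R, t), kps in zip(cameras, keypoints):
            x = K @ (R @ X + t)                    # project into this image
            if x[2] <= 0:
                continue                           # behind the camera
            uv = x[:2] / x[2]
            if np.any(np.linalg.norm(kps - uv, axis=1) < radius):
                votes[j] += 1                      # an interest point agrees
    return depths[int(np.argmax(votes))]
```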
3D modeling from video images
Abstract: In this paper, we propose a dense 3-D reconstruction method that first estimates the extrinsic camera parameters of a hand-held video camera and then reconstructs a dense 3-D model of the scene. In the first process, extrinsic camera parameters are estimated by automatically tracking a small number of predefined markers with known 3-D positions together with natural features. Then, several hundred dense depth maps obtained by multi-baseline stereo are combined in a voxel space. We can accurately acquire a dense 3-D model of an outdoor scene from several hundred input images captured by a hand-held video camera.
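A hedged sketch of the depth-map fusion step is given below: back-projected depth samples vote into a voxel grid, and well-supported voxels form the dense model. The grid layout and vote threshold are assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative sketch of merging many depth maps in a common voxel space:
# every depth sample casts a vote into the voxel it falls in, and
# well-supported voxels form the final dense model.
import numpy as np

def fuse_depth_maps(depth_samples, bounds, voxel_size, min_votes=3):
    """depth_samples: (N, 3) 3-D points back-projected from all depth maps
    bounds:        (min_xyz, max_xyz) of the reconstruction volume
    voxel_size:    edge length of one voxel
    Returns a boolean occupancy grid of voxels with enough support."""
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dims = np.ceil((hi - lo) / voxel_size).astype(int)
    votes = np.zeros(dims, dtype=np.int32)
    idx = np.floor((depth_samples - lo) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < dims), axis=1)   # keep in-volume samples
    for i, j, k in idx[inside]:
        votes[i, j, k] += 1
    return votes >= min_votes
```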
Interactive 3D modeling with AR support
Abstract: In most conventional methods, users need some skill in controlling the camera movement to obtain a good 3-D model. In this study, we propose an interactive 3-D modeling interface that requires no special skills. The interface consists of "indication of camera movement" and "preview of the reconstruction result." In experiments for subjective evaluation, we verify the usefulness of the proposed 3-D modeling interface.
Extrinsic camera parameter estimation using vision and GPS
Abstract: This paper describes a method for estimating extrinsic camera parameters using both feature points on an image sequence and sparse position data acquired by GPS. Our method is based on a structure-from-motion technique, but it is enhanced by using GPS data so as to minimize the accumulation of estimation errors. Moreover, the position data are also used to remove mis-tracked features. The proposed method allows us to estimate extrinsic parameters without accumulated errors even from an extremely long image sequence.
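The idea can be illustrated by a combined cost that sums the usual reprojection error with a penalty tying camera positions to the sparse GPS fixes; the weight w_gps and all data structures below are assumptions for illustration rather than the paper's exact formulation.

```python
# Illustrative sketch of a vision+GPS cost: reprojection error for all
# tracked features plus a position penalty at the frames with GPS fixes.
import numpy as np

def project(K, R, t, X):
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def combined_cost(poses, points3d, observations, gps_fixes, w_gps=1.0):
    """poses:        dict frame -> (K, R, t)
    points3d:     dict point_id -> (3,) world position
    observations: list of (frame, point_id, (2,) measured pixel)
    gps_fixes:    dict frame -> (3,) GPS position of the camera center"""
    cost = 0.0
    for frame, pid, uv in observations:          # reprojection term
        K, R, t = poses[frame]
        cost += np.sum((project(K, R, t, points3d[pid]) - uv) ** 2)
    for frame, p_gps in gps_fixes.items():       # GPS term (sparse frames)
        K, R, t = poses[frame]
        center = -R.T @ t                        # camera center in world coords
        cost += w_gps * np.sum((center - p_gps) ** 2)
    return cost
```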
Real-time image mosaicing
Abstract: This paper presents a real-time video mosaicing system, which is one of the practical applications of mobile vision. To realize video mosaicing on an actual mobile device, our method automatically tracks image features on the input images and estimates 6-DOF camera motion parameters with a fast and robust structure-from-motion algorithm. A preview of the mosaic image being generated is also rendered in real time to support the user. Our system is basically designed for flat targets, but it also has the capability of 3-D video mosaicing, in which an unwrapped mosaic image can be generated from a video sequence of a curved document.
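For the flat-target case, the stitching step can be sketched with a frame-to-frame homography, as below. The actual system estimates full 6-DOF motion with structure from motion; this OpenCV-based sketch is only an assumed simplification.

```python
# Simplified homography-based mosaicing sketch for a flat target:
# track corners between frames, estimate the inter-frame homography,
# and warp each new frame into the mosaic.
import numpy as np
import cv2

def frame_to_frame_homography(prev_gray, cur_gray):
    """Homography mapping the current frame onto the previous one."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_cur, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray,
                                                  pts_prev, None)
    good = status.ravel() == 1
    H, _ = cv2.findHomography(pts_cur[good], pts_prev[good], cv2.RANSAC, 3.0)
    return H

def update_mosaic(mosaic, frame, H_accum, H_cur):
    """Accumulate homographies and warp the new frame into the mosaic."""
    H_accum = H_accum @ H_cur            # current frame -> mosaic coordinates
    warped = cv2.warpPerspective(frame, H_accum,
                                 (mosaic.shape[1], mosaic.shape[0]))
    mask = warped.sum(axis=2) > 0        # overwrite only where pixels exist
    mosaic[mask] = warped[mask]
    return mosaic, H_accum
```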
Feature-landmark-based geometric registration
Abstract: In this research, the extrinsic camera parameters of video images are estimated from correspondences between pre-constructed feature landmarks and image features. In order to achieve real-time camera parameter estimation, the number of matching candidates is reduced by using priorities of landmarks that are determined from previously captured video sequences.
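A hedged sketch of the priority idea: only the landmarks matched most often in previously captured sequences are kept as matching candidates, and the pose is then solved from the resulting correspondences. The data layout, ratio test, and candidate limit below are illustrative assumptions.

```python
# Illustrative sketch of pruning matching candidates by landmark priority
# before pose estimation with PnP + RANSAC.
import numpy as np
import cv2

def estimate_pose_with_priorities(landmarks, descriptors_img, keypoints_img,
                                  K, max_candidates=200):
    """landmarks: list of dicts with 'pos' (3,), 'desc' (D,), 'priority'
    (higher = matched more often in previously captured sequences)."""
    # keep only the highest-priority landmarks as matching candidates
    candidates = sorted(landmarks, key=lambda l: l['priority'],
                        reverse=True)[:max_candidates]
    obj_pts, img_pts = [], []
    for lm in candidates:
        # nearest-descriptor match against the current frame's features
        d = np.linalg.norm(descriptors_img - lm['desc'], axis=1)
        j = int(np.argmin(d))
        if d[j] < 0.7 * np.partition(d, 1)[1]:   # simple ratio test
            obj_pts.append(lm['pos'])
            img_pts.append(keypoints_img[j])
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.float32(obj_pts), np.float32(img_pts), K, None)
    return rvec, tvec
```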
Image inpainting using an energy function
Abstract: Image inpainting is a technique for removing undesired visual objects from images and filling the missing regions with plausible textures. In this paper, in order to improve the image quality of the completed texture, the objective function is extended by allowing brightness changes of sample textures and by introducing spatial locality as an additional constraint. The effectiveness of these extensions is successfully demonstrated by applying the proposed method to one hundred images and comparing the results with those obtained by conventional methods.
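A sketch of one way such an extended energy can look is shown below: a patch-based data term, a term penalizing large brightness changes of the sample texture, and a term preferring spatially close source patches. The weights and exact form are assumptions, not the paper's objective function.

```python
# Illustrative patch-based inpainting energy with the two extensions
# described above: a brightness-change term and a spatial-locality term.
import numpy as np

def patch_energy(target_patch, source_patch, alpha,
                 target_xy, source_xy, w_bright=0.1, w_local=0.01):
    """Energy of explaining `target_patch` (in the missing region) by
    `source_patch` scaled by the brightness factor `alpha`.

    target_patch, source_patch: (P, P) grayscale patches
    alpha:                      scalar brightness change of the sample texture
    target_xy, source_xy:       patch-center coordinates, shape (2,)
    """
    data = np.sum((target_patch - alpha * source_patch) ** 2)   # appearance
    bright = (alpha - 1.0) ** 2          # penalize large brightness changes
    local = np.sum((np.asarray(target_xy) - np.asarray(source_xy)) ** 2)
    return data + w_bright * bright + w_local * local           # spatial locality
```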
Inpainting for 3-D models
Abstract: 3-D mesh models generated with a range scanner or from video images often have holes due to occlusions by other objects and by the object itself. This paper proposes a novel method to fill the missing parts of such incomplete models. The missing parts are filled by minimizing an energy function that is defined based on the similarity of local shape between the missing region and the rest of the object. The proposed method can generate complex and consistent shapes in the missing region.
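The energy can be sketched as follows under an assumed patch-descriptor formulation: each local shape patch of the candidate fill is scored by its distance to the most similar patch sampled from the intact part of the surface. The descriptor representation is an illustrative assumption.

```python
# Illustrative shape-similarity energy for hole filling: a candidate fill is
# good when its local patches resemble patches found on the intact surface.
import numpy as np

def shape_similarity_energy(fill_patches, model_patches):
    """fill_patches:  (N, D) local-shape descriptors in the missing region
                      (e.g. flattened local height maps of the candidate fill)
    model_patches: (M, D) descriptors sampled from the rest of the object
    Lower energy = the filled shape looks locally like the existing surface."""
    energy = 0.0
    for f in fill_patches:
        d2 = np.sum((model_patches - f) ** 2, axis=1)
        energy += d2.min()       # best-matching patch from the intact surface
    return energy
```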
Omni-directional telepresence system
Abstract: This paper describes a novel telepresence system that enables users to walk through a photorealistic virtualized environment by actually walking. To realize such a system, a wide-angle, high-resolution movie is projected on an immersive multi-screen display to present the virtualized environment to users, and a treadmill is controlled according to the user's detected locomotion. In this study, we use an omni-directional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.
My doctoral thesis: "Reconstruction of 3-D Models of Outdoor Scenes Based on Estimating Extrinsic Camera Parameters from Multiple Image Sequences", NAIST-IS-MT9951049, March 2003.
Complete list of published papers: Click this link to see all publications (Japanese papers are included).