Abstract
Creating natural-looking human animations is a challenging and
time-consuming task, even for skilled animators. As manually generating
such motions is very costly, tools for accelerating this process
are highly desirable, in particular for pre-visualization or animations
involving many characters. In this work a novel method for
fully automated data-driven texturing of motion data is presented.
Based on a database containing a large unorganized collection of
motion samples (mocap database), we are able either to transform
a given "raw" motion according to the characteristic features
of the motion clips included in the database (style transfer) or to
complete a partial animation, e.g. by adding the motion of the upper
body if only the legs have been animated (motion completion).
By choosing an appropriate database, different artistic goals
can be achieved, such as making a motion more natural or more stylized.
In contrast to existing approaches such as the seminal work by Pullen
and Bregler [2002], our method is capable of dealing with arbitrary
motion clips without manual steps, i.e. without annotation,
segmentation or classification. As indicated by the examples, our
technique is able to synthesize smooth transitions between different
motion classes if a large mocap database is available. The results
are plausible even for a very coarse input animation that is missing
the root translation.
1 Overview
The basic idea of our method is to take advantage of motion samples
from large databases to improve a given motion. To this end,
for each frame of the input motion, matching motion segments
of a few frames in length are retrieved from the mocap database.
For efficient retrieval, a technique called Online Lazy Neighborhood
Graph (OLNG) is employed [Tautges et al. 2011]. In essence, this
method identifies global temporal similarities based on local
neighborhoods in pose space; a simplified sketch of this step is given
below. In a second step, a new motion is synthesized from the
input and the prior information in the database using multi-grid
optimization techniques. Our implementation assumes a skeleton-based
pose representation with joints and bones. However, since the method
is directly applicable to other motion data (e.g. positional marker
data), this constitutes no general limitation of our approach. In the
following, the individual steps of our pipeline are discussed in more detail.
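To make the retrieval step more concrete, the following is a minimal,
self-contained sketch of the idea in Python, assuming the database is one
long array of flattened joint-position frames. The function name
retrieve_segments, the greedy path extension and all parameter values are
illustrative assumptions; the actual OLNG of [Tautges et al. 2011]
maintains such paths lazily in a graph structure rather than with this
simple loop.

import numpy as np
from scipy.spatial import cKDTree

def retrieve_segments(db_poses, input_poses, k=16, max_step=3):
    """For each input frame, find k pose-space neighbours in the database
    and greedily chain them into temporally coherent candidate segments."""
    tree = cKDTree(db_poses)                 # spatial index over all poses
    _, knn = tree.query(input_poses, k=k)    # (T, k) candidate frame indices
    paths = [[int(j)] for j in knn[0]]       # one path per neighbour of frame 0
    for t in range(1, len(input_poses)):
        extended = []
        for path in paths:
            # keep only neighbours that continue this path forward in time
            succ = [int(j) for j in knn[t] if 0 < j - path[-1] <= max_step]
            if succ:
                extended.append(path + [min(succ)])
        paths = extended or [[int(j)] for j in knn[t]]  # restart if none survive
    return paths  # lists of database frame indices, one per surviving path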
Preprocessing. In a preprocessing step, all mocap data from the
prior database is first normalized with respect to global position and
orientation [Krüger et al. 2010]. Based on the normalized positional
data of all available joints, we then build an efficient spatial indexing
structure (kd-tree) that is required for the OLNG. In addition, linear
marker velocities as well as accelerations are stored; these quantities
are needed for the subsequent prior-based motion synthesis. A sketch of
this stage is given below.
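As an illustration, here is a hedged sketch of this stage under
simplifying assumptions: each frame stores 3D joint positions with the
root at joint 0, y is the up axis, and a per-frame heading angle is
already known. The helper names normalize_frame and build_index are
hypothetical; the actual normalization of [Krüger et al. 2010] is more
involved than removing the root's ground-plane offset and heading.

import numpy as np
from scipy.spatial import cKDTree

def normalize_frame(joints, heading):
    """joints: (J, 3) positions; heading: root yaw in radians, y is up."""
    p = joints - joints[0] * np.array([1.0, 0.0, 1.0])  # drop ground-plane offset
    c, s = np.cos(-heading), np.sin(-heading)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # undo the yaw
    return p @ R.T

def build_index(frames, headings, fps=120.0):
    """Build the kd-tree plus velocity/acceleration tables for retrieval."""
    poses = np.stack([normalize_frame(f, h).ravel()
                      for f, h in zip(frames, headings)])
    vel = np.gradient(poses, 1.0 / fps, axis=0)  # linear joint velocities
    acc = np.gradient(vel, 1.0 / fps, axis=0)    # joint accelerations
    return cKDTree(poses), poses, vel, acc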
Motion synthesis. We use an energy minimization formulation of the
kind frequently used in data-driven computer animation. Our
specific choice of the energy terms to be minimized most closely
resembles the one used in [Tautges et al. 2011]. Here, the objective
function consists of a control term E_control, which measures the
distance between the synthesized and the given joint positions included
in the feature set, as well as a pose prior E_pose and motion priors
E_smooth and E_motion, which enforce the positions, accelerations
and velocities of the joints to be comparable to the examples retrieved
from the database. The objective function is minimized using gradient
descent. In addition, to avoid skating artifacts, footprint constraints
are enforced by an inverse kinematics approach.
To improve the robustness of our method and to speed up the
optimization, we employ a multi-scale approach.
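A minimal sketch of such an objective follows, assuming the motion is a
(T, D) array of flattened joint positions and the retrieval step supplies
per-frame reference poses, velocities and accelerations. The term
definitions, the weights and the use of SciPy's L-BFGS-B solver (standing
in for the gradient descent and the multi-scale schedule described above)
are illustrative assumptions, not the authors' implementation; the
IK-based footprint constraints are omitted.

import numpy as np
from scipy.optimize import minimize

def make_objective(ctrl_idx, ctrl_pos, prior_pose, prior_vel, prior_acc,
                   shape, w=(1.0, 0.1, 0.1, 0.1), dt=1.0 / 120.0):
    def objective(x):
        X = x.reshape(shape)
        V = np.gradient(X, dt, axis=0)  # joint velocities of the estimate
        A = np.gradient(V, dt, axis=0)  # joint accelerations
        e_control = np.sum((X[:, ctrl_idx] - ctrl_pos) ** 2)  # fit given joints
        e_pose = np.sum((X - prior_pose) ** 2)     # pose prior from retrieval
        e_smooth = np.sum((A - prior_acc) ** 2)    # acceleration prior
        e_motion = np.sum((V - prior_vel) ** 2)    # velocity prior
        return (w[0] * e_control + w[1] * e_pose
                + w[2] * e_smooth + w[3] * e_motion)
    return objective

def synthesize(X0, ctrl_idx, ctrl_pos, prior_pose, prior_vel, prior_acc):
    obj = make_objective(ctrl_idx, ctrl_pos, prior_pose,
                         prior_vel, prior_acc, X0.shape)
    res = minimize(obj, X0.ravel(), method="L-BFGS-B")  # quasi-Newton descent
    return res.x.reshape(X0.shape)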
2 Results
To test the effectiveness of our approach, we conducted several
experiments covering three scenarios that might occur in practice:
Motion completion: For a given motion, missing joints are synthesized.
In our case, an animation of the lower body was used as
input to our method, and a plausible upper-body motion was created.
Motion texturing: Here, a rough, low-quality motion (e.g. obtained
by interpolating a few keyframes) is transformed into a detailed
full-body animation. We transformed rough walking and jumping-jack
motions with stiff limbs and no root movement into realistic full-body
animations.
Style transfer: Here, characteristic features of one individual are
transferred to another within the same motion class. More precisely,
we took a complex walking sequence and adapted this motion to
match the style of a different subject. This was achieved by using
a database containing only motion samples from the respective
subject.
3 Conclusion and Future Work
In this work, a general framework for automated data-driven motion
texturing, completion and style transfer for human motions was
sketched. Our approach works reasonably well across different motion
classes that previously could only be handled with extensive user
interaction.
Our method requires a mocap database containing motions that are
suitable for processing the given clip; the results therefore strongly
depend on the prior information stored in the database. Investigating
the impact of using different databases is of fundamental importance
and requires further work.
References
KRÜGER, B., TAUTGES, J., WEBER, A., AND ZINKE, A. 2010. Fast
local and global similarity searches in large motion capture
databases. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics
Symposium on Computer Animation, Eurographics Association,
Aire-la-Ville, Switzerland, SCA '10, 1–10.
PULLEN, K., AND BREGLER, C. 2002. Motion capture assisted
animation: texturing and synthesis. ACM Trans. Graph. 21
(July), 501–508.
TAUTGES, J., ZINKE, A., KRÜGER, B., BAUMANN, J., WEBER, A.,
HELTEN, T., MÜLLER, M., SEIDEL, H.-P., AND EBERHARDT, B.
2011. Motion reconstruction using sparse accelerometer data.
ACM Trans. Graph. 30 (May), 18:1–18:12.