Views: 3273 | Replies: 12

[Other] Data-driven Texturing of Human Motions



#1 · Posted 2011-12-28 09:11:44
Abstract

Creating natural-looking human animations is a challenging and time-consuming task, even for skilled animators. As manually generating such motions is very costly, tools for accelerating this process are highly desirable, in particular for pre-visualization or animation involving many characters. In this work, a novel method for fully automated data-driven texturing of motion data is presented. Based on a database containing a large unorganized collection of motion samples (mocap database), we are able either to transform a given "raw" motion according to the characteristic features of the motion clips included in the database (style transfer) or to complete a partial animation, e.g. by adding the motion of the upper body if only the legs have previously been animated (motion completion). By choosing an appropriate database, different artistic goals can be achieved, such as making a motion more natural or more stylized. In contrast to existing approaches such as the seminal work by Pullen and Bregler [2002], our method is capable of dealing with arbitrary motion clips without manual steps, i.e. steps involving annotation, segmentation, or classification. As indicated by the examples, our technique is able to synthesize smooth transitions between different motion classes if a large mocap database is available. The results are plausible even in the case of a very coarse input animation missing root translation.

1 Overview

The basic idea of our method is to take advantage of motion samples from large databases to improve a given motion. To this end, for each frame pose of the input motion, matching motion segments of a few frames in length are retrieved from the mocap database. For efficient retrieval, a technique called the Online Lazy Neighborhood Graph (OLNG) is employed [Tautges et al. 2011]. In essence, this method is able to identify global temporal similarities based on local neighborhoods in pose space. In a second step, using multigrid optimization techniques, a new motion is synthesized based on the input and the prior information from the database. For our implementation, a skeleton-based pose representation with joints and bones is assumed. However, since the method is directly applicable to other motion data (e.g. positional marker data), this constitutes no general limitation of our approach. In the following, the individual steps of our pipeline are discussed in more detail.
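The per-frame retrieval step can be sketched in miniature. The snippet below performs a brute-force k-nearest-neighbour lookup in pose space; the real system replaces this with a kd-tree and the OLNG for efficiency, and the 30-dimensional random "poses" are purely illustrative stand-ins for mocap features.

```python
import numpy as np

def knn_pose_retrieval(database, query, k=5):
    """Brute-force k-nearest-neighbour lookup in pose space.

    database: (N, D) array of pose feature vectors.
    query:    (D,) feature vector of the current input frame.
    Returns indices of the k closest database poses and their distances.
    """
    d2 = np.sum((database - query) ** 2, axis=1)  # squared distances to all poses
    idx = np.argsort(d2)[:k]                      # indices of the k closest
    return idx, np.sqrt(d2[idx])

# Toy database: 1000 random 30-D "poses" (illustrative, not real mocap data).
rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 30))
query = db[42] + 0.01 * rng.standard_normal(30)   # frame 42, slightly perturbed
idx, dist = knn_pose_retrieval(db, query)
```

In the full method, the per-frame neighbour sets are not used independently: the OLNG links neighbours of consecutive frames, so that temporally coherent motion segments, rather than isolated poses, are retrieved.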

Preprocessing. In a preprocessing step, all mocap data from the prior database is first normalized with respect to global position and orientation [Krüger et al. 2010]. Based on the normalized positional data of all available joints, we then build an efficient spatial indexing structure (kd-tree), which is required for the OLNG. In addition, linear marker velocities as well as accelerations are stored. These quantities are needed for the subsequent prior-based motion synthesis.
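A much-simplified version of this preprocessing might look as follows. The root-plane normalization below is a stand-in for the full position and orientation normalization of [Krüger et al. 2010], and the 120 Hz frame rate is an assumption, not a value from the paper.

```python
import numpy as np

def preprocess_clip(joint_pos, dt=1.0 / 120.0):
    """Normalize a clip and precompute derivative features.

    joint_pos: (T, J, 3) joint positions over T frames (joint 0 = root).
    Returns root-relative positions plus linear velocities and accelerations,
    i.e. the quantities stored alongside the kd-tree index.
    """
    root = joint_pos[:, 0:1, :].copy()
    root[..., 1] = 0.0                  # keep height, remove ground-plane offset
    pos = joint_pos - root              # root-relative positions per frame
    vel = np.gradient(pos, dt, axis=0)  # finite-difference joint velocities
    acc = np.gradient(vel, dt, axis=0)  # joint accelerations
    return pos, vel, acc

# Toy clip: every coordinate of every joint drifts linearly with the frame index.
T, J = 8, 4
clip = np.tile(np.arange(T, dtype=float)[:, None, None], (1, J, 3))
pos, vel, acc = preprocess_clip(clip, dt=1.0)
```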

Motion synthesis. We use an energy minimization formulation of a kind frequently used in data-driven computer animation. Our specific choice of the energy terms to be minimized most closely resembles the one used in [Tautges et al. 2011]. Here, the objective function consists of three kinds of terms: a control term Econtrol that measures the distance between synthesized and given joint positions included in the feature set; a pose prior Epose; and motion priors Esmooth and Emotion enforcing that positions, velocities, and accelerations of joints are comparable to the examples retrieved from the database. The objective function is minimized using gradient descent. In addition, to avoid skating artifacts, footprint constraints are enforced by an inverse kinematics approach.

(Author contact: kruegerb@cs.uni-bonn.de)
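The structure of this optimization can be illustrated with a toy one-dimensional analogue. The weights, the 1-D "trajectory", and the single smoothness prior below are hypothetical simplifications; the actual objective operates on full poses and uses the separate Epose, Esmooth, and Emotion terms described above.

```python
import numpy as np

def synthesize(x0, targets, prior, w_ctrl=10.0, w_smooth=1.0, w_pose=0.1,
               steps=2000, lr=0.05):
    """Gradient descent on a toy 1-D analogue of the objective:

    E = w_ctrl   * sum over constrained frames of (x[t] - targets[t])^2  (control)
      + w_smooth * sum of (x[t+1] - x[t])^2                              (smoothness)
      + w_pose   * sum of (x[t] - prior[t])^2                            (pose prior)
    """
    x = x0.copy()
    mask = ~np.isnan(targets)              # control constraints on some frames only
    for _ in range(steps):
        g = 2.0 * w_pose * (x - prior)     # gradient of the pose-prior term
        g[mask] += 2.0 * w_ctrl * (x[mask] - targets[mask])
        d = np.diff(x)
        g[:-1] -= 2.0 * w_smooth * d       # gradient of the smoothness term,
        g[1:] += 2.0 * w_smooth * d        # split over both ends of each diff
        x -= lr * g
    return x

# Sparse control: only the first and last frame are constrained; the database
# prior (here a smooth ramp) fills in everything in between.
T = 20
targets = np.full(T, np.nan)
targets[0], targets[-1] = 0.0, 1.0
prior = np.linspace(0.0, 1.0, T)
x = synthesize(np.zeros(T), targets, prior)
```

The same balance appears in the full method: sparse or noisy input constraints anchor the solution, while the database priors pull the remaining degrees of freedom toward plausible poses and velocities.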

To improve the robustness of our method and to speed up the optimization, we employ a multi-scale approach.
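The coarse-to-fine idea can be sketched with a similar toy 1-D objective: solve a temporally downsampled version of the problem first, then upsample the result as the starting guess for the next finer level. All sizes, weights, and iteration counts here are illustrative.

```python
import numpy as np

def relax(x, targets, mask, iters=200, lr=0.1):
    """A few gradient steps of a toy objective: pull x toward the given
    targets where they are defined while penalizing frame-to-frame jumps."""
    for _ in range(iters):
        g = np.zeros_like(x)
        g[mask] += 2.0 * (x[mask] - targets[mask])
        d = np.diff(x)
        g[:-1] -= 2.0 * d
        g[1:] += 2.0 * d
        x = x - lr * g
    return x

def multiscale_solve(targets, levels=3):
    """Coarse-to-fine solve. Assumes len(targets) == k * 2**(levels-1) + 1
    so the levels align under simple stride-2 downsampling."""
    mask = ~np.isnan(targets)
    x = None
    for lvl in reversed(range(levels)):        # coarsest level first
        step = 2 ** lvl
        t_lvl, m_lvl = targets[::step], mask[::step]
        if x is None:
            x = np.zeros_like(t_lvl)           # cold start at the coarsest level
        else:                                  # warm start: upsample coarse result
            x = np.interp(np.arange(len(t_lvl)), np.arange(len(x)) * 2, x)
        x = relax(x, t_lvl, m_lvl)
    return x

T = 17                                         # 4 * 2**2 + 1, aligns across levels
targets = np.full(T, np.nan)
targets[0], targets[-1] = 0.0, 1.0
x = multiscale_solve(targets)
```

Each level needs only a few cheap iterations because it starts from the upsampled solution of the previous one, which is the usual payoff of multigrid-style schemes.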

2 Results

To test the effectiveness of our approach, we ran several tests covering three different scenarios that might occur in practice:

Motion completion: For a given motion, missing joints are synthesized. In our case, an animation of the lower body was used as input to our method, and a plausible upper-body motion was created.

Motion texturing: In this case, a rough, low-quality motion (e.g. from interpolating a few key frames) is transformed into a detailed full-body animation. We transform rough walking and jumping-jack motions with stiff limbs and no root movement into realistic full-body animations.

Style transfer: Here, characteristic features of one individual are transferred to another within the same motion class. More precisely, we took a complex walking sequence and adapted this motion to match the style of a different subject. This was achieved by using a database containing only motion samples from the respective subject.

3 Conclusion and Future Work

In this work, a general framework for automated data-driven motion texturing, completion, and style transfer for human motions was sketched. Our approach works reasonably well across different motion classes that previously could only be handled with massive user interaction.

Our method requires a mocap database containing motions that are suitable for processing a given clip. Thus, the results strongly depend on the prior information stored in the database. Investigating the impact of using different databases is of fundamental importance and requires further work.

References

KRÜGER, B., TAUTGES, J., WEBER, A., AND ZINKE, A. 2010. Fast local and global similarity searches in large motion capture databases. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA '10), Eurographics Association, Aire-la-Ville, Switzerland, 1–10.

PULLEN, K., AND BREGLER, C. 2002. Motion capture assisted animation: texturing and synthesis. ACM Trans. Graph. 21 (July), 501–508.

TAUTGES, J., ZINKE, A., KRÜGER, B., BAUMANN, J., WEBER, A., HELTEN, T., MÜLLER, M., SEIDEL, H.-P., AND EBERHARDT, B. 2011. Motion reconstruction using sparse accelerometer data. ACM Trans. Graph. 30 (May), 18:1–18:12.
