Skeletal Animation
Everything related to skeletal animation is collected here.
http://en.wikipedia.org/wiki/Skeletal_animation
Skeletal animation is a technique in computer animation, particularly in the animation of vertebrates, in which a character is represented in two parts: a surface representation used to draw the character (called the skin) and a hierarchical set of bones used for animation only (called the skeleton).
This technique is used by constructing a series of ‘bones,’ sometimes referred to as rigging. Each bone has a three dimensional transformation (which includes its position, scale and orientation), and an optional parent bone. The bones therefore form a hierarchy. The full transform of a child node is the product of its parent transform and its own transform. So moving a thigh-bone will move the lower leg too. As the character is animated, the bones change their transformation over time, under the influence of some animation controller.
Each bone in the skeleton is associated with some portion of the character’s visual representation. Skinning is the process of creating this association. In the most common case of a polygonal mesh character, the bone is associated with a group of vertices; for example, in a model of a human being, the ‘thigh’ bone would be associated with the vertices making up the polygons in the model’s thigh. Portions of the character’s skin can normally be associated with multiple bones, each one having a scaling factor called a vertex weight, or blend weight. The movement of skin near the joints of two bones can therefore be influenced by both bones.
For a polygonal mesh, each vertex can have a blend weight for each bone. To calculate the final position of the vertex, each bone transformation is applied to the vertex position, scaled by its corresponding weight. This algorithm is called matrix palette skinning, because the set of bone transformations (stored as transform matrices) form a palette for the skin vertex to choose from.
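As a toy illustration of matrix palette skinning (not from any particular engine; 2D transforms and made-up bone matrices for brevity), each vertex applies every matrix in the palette and blends the results by its weights:

```python
import math

def rot_tx(angle, tx, ty):
    """3x3 homogeneous 2D transform: rotate by angle, then translate."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def xform_point(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# The "palette": one matrix per bone (made-up values).
palette = [rot_tx(0.0, 0.0, 0.0),           # bone 0: identity
           rot_tx(math.pi / 2, 2.0, 0.0)]   # bone 1: 90° rotation + translation
weights = [0.5, 0.5]                        # this vertex is half-owned by each bone

v = (1.0, 0.0)
skinned = (sum(w * xform_point(m, v)[0] for m, w in zip(palette, weights)),
           sum(w * xform_point(m, v)[1] for m, w in zip(palette, weights)))
# skinned -> (1.5, 0.5), halfway between bone 0's result (1,0) and bone 1's (2,1)
```

The vertex never stores transformed positions, only weights and bone indices into the palette, which is what makes the representation compact.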
http://www.cyberkreations.com/kreationsedge/age_id=29
Software Skinning
http://www.cyberkreations.com/kreationsedge/age_id=30
Matrix Palette Skinning Using Vertex Shader 1.1
http://www.hakenberg.de/diffgeo/animation.htm#Skeleton
http://www.gamedev.net/community/forums/topic.aspopic_id=214347
Good tutorials on Skeletal Animation
Bones Tutorial
What are bones
Bones in 3D mimic joints, rather than actual bones. You have a bone in your forearm. When you rotate your elbow joint, the forearm bone moves, along with all the skin and muscle attached to it. A bone in 3D is just a position and an orientation: a joint that the surrounding matter rotates around.
In some modelling packages, the bone appears to have volume. This can be either just a visual aid, showing bones attaching to other bones, or used to automatically attach vertices to the bone.
Bones are laid out in a tree hierarchy. There is one root bone, and all bones are children of that bone, or children of its children. Assuming the hip is the root bone, you might then have three children: left thigh, right thigh, midsection. The thighs will then each have lower legs, which then have feet, which may in turn have several toes. The toes might be jointed. The midsection will go up to an upper chest, split into shoulders and neck. The neck will likely have a child bone for the head. The shoulders will branch out to upper arms, forearms, hands, and fingers.
While the bones actually describe the physical joints, we often name a bone after the bone its joint will affect (when it has only a single child), or after the general area of the body it affects.
Why use bones
Once a character has been rigged up to a skeleton, i.e. its vertices associated with a bone, or several bones, bones become quite useful. By moving a bone, all its child bones move along with it. When you move your forearm, your wrist, hand, and fingers come along for the ride. A bone rotation is often stored as a quaternion, which is just 4 numbers. Keeping track of each vertex for each keyframe would take much more room, so bones are efficient in terms of storage space. Bones also improve the visual quality of the animation. Imagine two keyframes, indicating a start point and a rotation of 90 degrees. Typical vertex tweening would distort the mesh as it moves each vertex linearly between the two keyframes. With bones, we interpolate the angle, not the positions, resulting in an object which keeps its shape and just rotates about the joint.
So, bones give us smaller, better looking animations, that are easier for the animator to animate, and much easier to control in code for advanced users.
What’s this about multiple bones affecting a vertex
Near joints, vertices are often affected by both the bone you’re coming from and the bone you’re heading to. For example, vertices near your knee are affected by both the thigh and the lower leg. This mimics the elasticity of skin. When you rotate a joint 90 degrees, you don’t see a square, jagged crease in your arm; you see a gentle curve easing around the joint.
So, what’s the mathematical theory behind these things
For each bone you have a translation and rotation… maybe a scale, but often not. Each bone is just a transformation, like your world transform. For each vertex you have a number of bones that influence it. Each bone influence has a weight, and the weights must add to 1.0f. For example, a vertex in the middle of your forearm is likely affected only by that one bone, so we associate the vertex with bone n, and weight 1.0f. As we approach the elbow, a vertex is likely affected by bone n and bone n+1. As you pass the joint, the weights shift from favouring bone n to favouring bone n+1… i.e. as we pass the elbow, we lower the influence of the forearm and increase the influence of the upper arm. Typically for hardware-based skinning, we have four bones and three weights. The 4th weight can be inferred as 1.0f minus the other three weights. When first learning, it’s easiest to ignore the weighting and just associate a single bone with each vertex. The weighting is easily added later, once everything else is working.
There are two main parts to the math: the vertex-level math and the bone-level math. What we want to do, eventually, is to take a vertex, move it from its position and orientation relative to where the bone was when you exported your mesh, rotate it around that point to the angle we’d like it at, then move it back relative to where the bone is now. We’d also like each bone to be affected by all its parent bones.
At the bone level, you have two poses. The first describes the pose the vertex data is in. If your model is exported in a sitting position with its hands in its lap, you need to know the translation and rotation of each bone that describes that pose. This is called the reference pose. The other pose describes the pose you’d like the mesh to take… for instance, a keyframe of your walk cycle.
Both of these poses can be described mathematically using the following technique:
As mentioned, a bone is just a transform matrix. Each bone is multiplied by its parent bone’s transform. If you start with your root bone and then go to its children, then their children, you’re fine, as you’ll always process the parent first, then the child. So, start with the hip, then create your left thigh matrix and multiply it by the hip, then create your left lower-leg matrix and multiply that by the left thigh, then create the left foot and multiply that by the left lower leg, etc.
hip = createtransform(hip_pos, hip_rot);
hip_total = hip;
leftthigh = createtransform(leftthigh_pos, leftthigh_rot);
leftthigh_total = hip_total * leftthigh;
…
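The pseudocode above can be sketched in runnable form. This is a minimal 2D illustration with made-up bone offsets, using column-vector matrices so that the parent * child order matches the lines above:

```python
import math

def rot_tx(angle, tx, ty):
    """createtransform(pos, rot): rotate by angle, then translate (2D, 3x3)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def xform_point(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Made-up skeleton: hip at (0, 1), bent 90°; each child bone hangs 0.5 below
# its parent in the parent's local space. The parent is always processed first.
hip_total = rot_tx(math.pi / 2, 0.0, 1.0)
leftthigh_total = mat_mul(hip_total, rot_tx(0.0, 0.0, -0.5))
leftlowerleg_total = mat_mul(leftthigh_total, rot_tx(0.0, 0.0, -0.5))

# A bone's world position is its accumulated transform applied to the origin.
knee = xform_point(leftlowerleg_total, (0.0, 0.0))
# knee -> (1.0, 1.0): the hip's 90° bend swung the whole leg sideways
```

Rotating only the hip moved every descendant bone, which is exactly the “moving a thigh-bone moves the lower leg too” behaviour the hierarchy exists for.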
Ok, so now we’ve got two poses converted to matrices, but what now? Usually, after you’ve created your reference pose, you’ll take each matrix and find its inverse. Store this set of inverse matrices; it’s really the only part of the reference pose we care about. You do this once, when you’ve loaded your mesh, and just refer to it as needed. An inverse matrix does the opposite of the matrix. The inverse of the reference pose can be thought of as “undo the reference pose”. Multiplying a vertex by the inverse reference pose of a bone will move the vertex to be relative to that bone and undo any rotation the bone may have applied.
Now we need to blend the two poses. Again, it’s just matrix multiplication.
hip_xform = invrefpos_hip_total * currentpose_hip_total
leftthigh_xform = invrefpos_leftthigh_total * currentpose_leftthigh_total
…
If we look back at what we were trying to do, you’ll see we’ve done it. That’s it.
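A minimal sketch of these two steps, with made-up 2D bone transforms (column-vector convention, so the composition reads current * inverse-reference rather than the row-vector order written above):

```python
import math

def rot_tx(angle, tx, ty):
    """3x3 homogeneous 2D transform: rotate by angle, then translate."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv_rigid(m):
    """Inverse of a rigid transform [R|t] is [R^T | -R^T t]."""
    (r00, r01, tx), (r10, r11, ty) = m[0], m[1]
    return [[r00, r10, -(r00 * tx + r10 * ty)],
            [r01, r11, -(r01 * tx + r11 * ty)],
            [0.0, 0.0, 1.0]]

def xform_point(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

ref_total = rot_tx(0.0, 1.0, 2.0)           # reference pose: bone at (1,2), unrotated
cur_total = rot_tx(math.pi / 2, 1.0, 2.0)   # current pose: same spot, rotated 90°

# "Undo the reference pose, then apply the current pose."
bone_xform = mat_mul(cur_total, inv_rigid(ref_total))

# A vertex one unit to the bone's right in the reference pose:
skinned = xform_point(bone_xform, (2.0, 2.0))
# skinned -> (1.0, 3.0): the vertex swings 90° around the bone's position
```

If the current pose equals the reference pose, `bone_xform` collapses to the identity and every vertex stays put, which is a handy sanity check when debugging a skinning pipeline.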
At the vertex level, we want to perform the following math.
vertex = (originalpos * bonexform1) * weight1
vertex += (originalpos * bonexform2) * weight2
vertex += (originalpos * bonexform3) * weight3
vertex += (originalpos * bonexform4) * (1 - (weight1 + weight2 + weight3))
In software skinning you can just use as many matrices as you’d like. In hardware skinning, typically all 4 bones are used on each vertex. Unused bones just set their weight to 0 to have no influence.
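A small sketch of the four-bone version with the inferred 4th weight (2D translations stand in for real bone matrices; all numbers are made up):

```python
def trans(tx, ty):
    """3x3 homogeneous 2D translation, standing in for a full bone transform."""
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def xform_point(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

def skin_vertex(v, palette, w1, w2, w3):
    """Four-bone skinning; the 4th weight is inferred, hardware-style."""
    weights = (w1, w2, w3, 1.0 - (w1 + w2 + w3))
    x = y = 0.0
    for m, w in zip(palette, weights):
        px, py = xform_point(m, v)
        x += w * px
        y += w * py
    return (x, y)

palette = [trans(0.0, 0.0), trans(1.0, 0.0), trans(2.0, 0.0), trans(10.0, 0.0)]
# w1 + w2 + w3 == 1, so the inferred 4th weight is 0 and bone 4 has no influence.
v = skin_vertex((0.0, 0.0), palette, 0.5, 0.5, 0.0)
# v -> (0.5, 0.0)
```

Setting the explicit weights so they sum to 1 is exactly how an “unused” 4th bone is given zero influence, as the paragraph above describes.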
Where do bones fit into the pipeline
Right before world transformation, while you’re still in object space.
If doing skinning in software, you can apply the bones and write the vertices to a new chunk of memory, put the data into a dynamic vertex buffer, and render.
If doing skinning in hardware, simply put the skinning code before your world,view,proj code, and change the worldviewproj code to use whatever temporary register you used as their source.
Can bones be mixed with morphing
Yes. In our engine we apply morphs first, then move the vertices with bones. We typically reserve morphing for things like facial animation, and things that are too finicky or detailed to use bones easily.
vs_1_1 gives you a minimum of 96 constants, whereas vs_2_0, vs_2_x, and vs_3_0 give you a minimum of 256 constants. If you are using 4×4 matrices, then each matrix requires 4 constants. This gives you a total of 24 on vs_1_1 and 64 on the others. If you are efficient and use 3×4 matrices, then each matrix requires 3 constants. This gives you a total of 32 on vs_1_1 and 85 on the others.
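The palette sizes above are just the constant count divided by the rows each matrix occupies:

```python
def palette_size(num_constants, rows_per_matrix):
    """How many bone matrices fit in the vertex-shader constant registers."""
    return num_constants // rows_per_matrix

assert palette_size(96, 4) == 24    # vs_1_1 with 4x4 matrices
assert palette_size(256, 4) == 64   # vs_2_0 and up with 4x4 matrices
assert palette_size(96, 3) == 32    # vs_1_1 with 3x4 matrices
assert palette_size(256, 3) == 85   # vs_2_0 and up with 3x4 matrices
```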
The D3D runtime probably reserves a number of constants so it has room for all of the other states that it must store in the constant registers.
Implementing Inverse Kinematics (IK) in Skeletal Animation
Inverse Kinematics is abbreviated IK. Simply put, deriving a child bone’s placement from its parent’s placement plus the child’s relative transform is called Forward Kinematics (FK); IK works the other way: the child bone’s placement is determined first, and from it the placements of the n levels of parent bones up its chain are derived. [See the link for details]
http://school.3dmax8.com/cankao/3dsmax_web/inverse_kinematics_ik.html
Inverse Kinematics (IK)
The idea behind skeletal animation comes from the way our own bodies move (VR is, after all, a simulation of the real world :-)). The animated character’s body (flesh and skin) is a mesh model, and inside the mesh is a skeleton. When the character’s skeleton moves, the body moves along with it. The skeleton is a hierarchy made up of some number of bones, and the arrangement and connectivity of each bone strongly affect how the whole skeleton moves. Each bone carries its own animation data. Associated with each skeleton is a “skin” model, which supplies the geometry (vertices, normals, etc.) and the texture and material information needed to draw the animation. Each vertex has corresponding weights, which define how strongly each bone’s motion influences that vertex. When the character’s pose and global motion are applied to the skeleton, the skin model moves along with it, as shown in the figure below:
The key, then, is generating animation for the skeleton, and this too is done with keyframes. Ordinary keyframe animation keyframes the character’s mesh model directly; skeletal animation instead keyframes the skeleton and then lets the mesh follow it. The two key points of any keyframe animation are choosing the keyframes and interpolating the in-between frames.
There are two basic ways to specify keyframes: forward kinematics (FK) and inverse kinematics (IK). FK uses a set of joint angles to find the position of the end effector; IK finds the set of joint angles that places the end effector at the desired position. FK’s advantages are simple computation and fast execution; its drawback is that the angle and position of every joint must be specified, and because the joints of a skeleton are intrinsically related, setting each joint directly easily produces unnatural, uncoordinated motion. IK’s advantage is that only the positions of the major joints need to be specified, a much lighter burden; its drawback is a more complex computational model, requiring the developer to know mechanics and dynamics, geometry, and vector math.
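To make the FK/IK contrast concrete, here is a toy planar two-link arm (not from the text above; the analytic elbow-down solution uses the law of cosines). FK maps joint angles to the end-effector position, and IK recovers angles that reach a target:

```python
import math

def fk_2link(l1, l2, t1, t2):
    """Forward kinematics: joint angles -> end-effector position."""
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

def ik_2link(l1, l2, x, y):
    """Analytic inverse kinematics (elbow-down solution) via the law of cosines."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    t2 = math.acos(max(-1.0, min(1.0, c2)))   # clamp against rounding error
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

# Round trip: IK finds angles that place the end effector at the target,
# and FK confirms it (target must be within reach, here |p| <= l1 + l2).
t1, t2 = ik_2link(1.0, 1.0, 1.2, 0.5)
tip = fk_2link(1.0, 1.0, t1, t2)   # -> approximately (1.2, 0.5)
```

Even this two-link case has a second (elbow-up) solution, which hints at why full-skeleton IK is the harder of the two problems.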
In-between frame interpolation happens in two steps. (1) Based on the current time, interpolate each bone’s rotation, translation, and so on to form the skeleton for the in-between frame. The interpolation algorithm is generally spherical linear interpolation (SLERP) of quaternions: SLERP is particularly well suited to interpolating between two orientations, avoids the gimbal lock that Euler-angle interpolation suffers from, produces smoother and more continuous rotation, and is compact to express. (2) From the skeleton’s change, interpolate the new position of each vertex of the skin model. For a given bone, the vertex transform of the skin model = the inverse of the bind-pose transform × the posed transform. A vertex may also be influenced by several bones. In that case, for each bone associated with the current vertex, multiply its posed transform matrix by the vertex’s offset vector relative to that bone and by that bone’s influence factor on the vertex (its weight Weight); do this for every associated bone, then sum the results to get the vertex’s new position.
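The SLERP mentioned in step (1) can be sketched as follows, for unit quaternions stored as (w, x, y, z) (a common textbook formulation, not code from this article):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation of unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # flip one input to take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:                   # nearly parallel: lerp, then renormalize
        out = [a + t * (b - a) for a, b in zip(q0, q1)]
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Halfway between identity and a 90° rotation about Z is a 45° rotation about Z.
q_identity = (1.0, 0.0, 0.0, 0.0)
q_90z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_45z = slerp(q_identity, q_90z, 0.5)
# q_45z -> (cos(pi/8), 0, 0, sin(pi/8))
```

Because the interpolation moves at constant angular speed along the arc, the bone rotates uniformly between keyframes, which is the smoothness property the paragraph above refers to.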
From this it is clear that the crux is how to set the positions of the skeleton’s joints and the orientations of its bones (i.e. the skeleton’s pose) at each keyframe. There are two approaches. One is manual placement by an animator, which demands considerable skill: the animator must observe carefully how real people and animals move, or the skeleton’s poses will look unnatural and uncoordinated. The other is based on motion capture: sensors are attached to a person’s joints, and as the person performs various movements, the capture rig records the position data of each joint, from which the skeleton can then be modeled. Because real human motion is captured, the resulting animation is very natural and realistic, but the capture equipment is expensive; probably only a handful of well-funded game companies in China have purchased it.
Many 3D model formats support skeletal animation today, such as Microsoft’s .X format, MilkShape’s MS3D format, Half-Life’s MDL format, and id Software’s MD5 format. I plan to study the MS3D format first, since its specification is publicly documented, fairly easy to read, and widely used. Of course, the first step is to study the underlying techniques of skeletal animation in depth and build a solid foundation!
Skeletal Animation Algorithms on Programmable Graphics Processors, and Their Comparison
Keywords: skeletal animation; programmable graphics processor; computer graphics; skinning; mesh; unified
Compared with earlier animation methods, skeletal animation has the advantage of a small memory footprint, but at the cost of more computation and therefore lower rendering efficiency. Meanwhile, programmable GPU technology has been widely studied in computer graphics in recent years. This work therefore proposes implementing skeletal animation on the programmable GPU, using the latest programmable processing pipeline. Skeletal animation is implemented on the programmable GPU in several different ways, and their performance is analyzed and compared. By drawing on the GPU’s powerful computing capability to take over the vertex-update work of skeletal animation, the method greatly improves skeletal animation’s rendering efficiency.
Introduction
Improving the image quality and rendering efficiency of computer animation while reducing the space it occupies has become an important problem in its use. Image quality is proportional to the space occupied and inversely related to rendering efficiency: the more detailed the animation, the more space it takes and the slower it renders. How to reduce an animation’s footprint and raise its rendering efficiency while preserving image quality is therefore a topic worth studying. An earlier 3D animation technique called vertex animation interpolates between the data of two keyframes according to time to obtain the animation data for that moment. Its advantages are that it is simple to implement and cheap to compute, but it also brings problems: vertex animation consumes a large amount of memory, and the mesh easily deforms during interpolation. These problems led to a new method, skeletal animation, which is especially well suited to animating humans and other vertebrates. As a newer animation technique, skeletal animation has been combined with many others, such as forward and inverse kinematics and the motion-capture techniques common in film. Wu Fu-che et al. proposed a skeleton-construction method called the domain connected graph. This method preserves topological information and, unlike the medial axis transform, is not sensitive to perturbations of the model; applied to complex models in particular, it yields results that match human perception well, making the animation look more realistic. The advent of skeletal animation was a great advance for game rendering engines, substantially improving the efficiency of character rendering. A rendering engine comprises skeletal animation, particle systems, LOD (level of detail), lighting, shadows, and many other components; as one part of the rendering engine, improving skeletal animation’s performance should be considered from the perspective of the whole system.
As one of the key pieces of hardware for game rendering, the GPU (graphics processing unit) has steadily grown in processing power and programmability, and how to let the GPU take over more work from the CPU has become a hot research topic in computer graphics and related fields. Wu Enhua et al. surveyed work implementing radiosity, ray tracing, collision detection, and fluid simulation on the GPU, and proposed methods for performing algebraic computation with GPU pixel shaders. Shen Xiao et al. proposed a real-time shadow-generation algorithm based on programmable graphics hardware, improving shadow rendering quality and preventing aliasing. Yang Xiao et al. proposed a GPU-based method for processing shadow volumes in real time, greatly improving the efficiency of shadow generation over previous work. Aaron et al. proposed a GPU-based data-structure library that lets GPU programmers use different data structures for different algorithms. Christoph et al. proposed a GPU-based real-time mesh simplification method, porting the simplification algorithm to the GPU, together with a general-purpose computing data structure suited to stream-processor architectures. For skeletal animation, moving some of the computation to the GPU clearly promises a large efficiency gain for the animation and indeed for the whole rendering engine. Kipfer et al. proposed a GPU-based real-time large-scale particle system and built a general particle-system engine that also implements inter-particle collision and visibility sorting. Shiue et al. proposed a GPU-based real-time subdivision method; its real-time GPU subdivision kernel generates subdivision meshes at different subdivision depths, so that all the computation is handled by GPU shaders and all the main features of the subdivision algorithm can be realized.
1. Fundamentals
1.1 Skeletal animation
Skinned skeletal animation is also simply called skeletal animation. With skeletal animation techniques, all kinds of lifelike animated characters can be created; among them, human skeletal animation is the most widely used. In general, skeletal animation is represented in two parts: one is a hierarchical series of bones, usually called the skeleton; the other is the skin stretched over that skeleton.
The bone structure is a series of connected bones forming a hierarchy called the bone hierarchy. One of the bones, called the root bone, is the key to the whole structure: the entire skeleton grows out of it, and all other bones are attached to it as either its children or its siblings, forming the bone structure shown in Figure 1. To search the skeleton, one generally first finds the root bone that forms it and then traverses from the root until the desired bone is found.
In skeletal animation, building the skeleton alone is not enough. To make the bones move, transformations are generally applied to them, which makes the model move. After a bone is transformed, the whole skeleton must be updated to produce the new pose. Bone updates must follow certain rules; Figure 2 shows the rules that bone updates obey.
In Figure 2, the update of the whole skeleton starts from the root bone and continues until every bone has been updated. Each bone’s transformation affects its sibling bones and child bones, and within a certain range this influence keeps propagating onward.
Bones and the skeleton are only a conceptual structure used to control the model’s motion; they are associated with the model’s mesh. The bones control the mesh, and so the model moves. This model mesh is the so-called skinned mesh.
In a skinned mesh, all mesh vertices are associated with bones. Therefore, as long as the bones are transformed and the corresponding transforms are then applied to the associated mesh vertices, the model can be made to move.
In skeletal animation, every vertex is associated with bones, and a bone’s motion carries along the vertices attached to it. How strongly a bone influences a vertex is determined by each bone’s weight coefficient, and the weights of all bones influencing a given vertex sum to 1.
Skeletal animation saves a great deal of memory and gives good control over the mesh’s deformation. But these benefits come at a price: the smaller memory consumption is traded for a larger amount of computation. Given the computing power of modern CPUs and the development of general-purpose GPU computing, this hardly seems a big problem anymore; part of the computation can even be moved onto the GPU, saving more CPU capacity for other work.
1.2 Programmable shader technology
In the past, GPU shaders came in two kinds: vertex shaders and pixel shaders. The two kinds sit at different stages of the rendering pipeline and process vertex data and pixel data respectively. Each kind could only process its own data; even an idle shader could not be used to help with the other kind’s processing.
Under the unified architecture, processing work can be shifted to the vertex-processing stage as needed. Although the unified shader architecture merges the two kinds of shaders, the vertex-processing and pixel-processing stages still exist in the rendering pipeline. Besides the original vertex and pixel stages, the new GPU architecture also adds the geometry shader. The new processing pipeline is shown in Figure 3.
2. Skeletal animation on the programmable GPU
2.1 Analysis of the main data computations in skeletal animation
Because CPUs and GPUs are built differently, they differ in how they compute and store data. A CPU-based algorithm therefore cannot simply be copied for use on the GPU; it must first be analyzed and simplified. The data computations in skeletal animation fall mainly into four areas: the correspondence between bones and vertices, the correspondence between bone offset matrices and the skeleton, skeleton updates, and vertex updates.
Computing on the GPU differs greatly from computing on the CPU. In the vertex-processing stage the GPU applies the same algorithm to every vertex it processes, and likewise to every pixel in the pixel-processing stage. Moreover, on the GPU each vertex’s or pixel’s computation is independent: the current result cannot be used for the next vertex or pixel, and results cannot be shared between vertices or between pixels. The GPU is therefore better suited to tasks with relatively simple logic, so when designing GPU-based algorithms one should keep the program as simple as possible.
With the GPU’s computational characteristics understood, the four kinds of computation are analyzed in turn below:
· Bone-to-vertex correspondence: bones and vertices are created separately, but during creation we must bind the bones and vertices together, so that when we change a bone, its vertices move along with it.