Online Simulation of Emotional Interactive Behaviors with Hierarchical Gaussian Process Dynamical Models
Nick Taubert, Andrea Christensen, Dominik Endres, Martin A. Giese
Section for Computational Sensomotorics, Dept. of Cognitive Neurology, Hertie Institute for Clinical Brain Research and Center for Integrative Neuroscience, University Clinic Tübingen, Otfried-Müller-Str. 25, 72076 Tübingen, Germany

Figure 1: Motion sequences of synthesized emotional handshakes. From left to right: neutral, angry, happy and sad. Different emotions are associated with different postures and ranges of joint movements.

© 2012 ACM. This is the authors' version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version will be published in the proceedings of the ACM Symposium on Applied Perception (SAP) 2012.

Abstract

The online synthesis of stylized interactive movements with high levels of realism is a difficult problem in computer graphics. We present a new approach for learning structured dynamical models for the synthesis of interactive body movements that is based on hierarchical Gaussian process latent variable models. The latent spaces of this model encode postural manifolds and the dependency between the postures of the interacting characters. In addition, our model includes dimensions representing emotional style variations (neutral, happy, angry, sad) and individually specific motion style. The dynamics of the state in the latent space is modeled by a Gaussian Process Dynamical Model, a probabilistic dynamical model that can learn to generate arbitrary smooth trajectories in real time. The proposed framework offers a large degree of flexibility, in terms of the definition of the model structure as well as the complexity of the learned motion trajectories. In order to assess the suitability of the proposed framework for the generation of highly realistic motion, we performed a "Turing test": a psychophysical study in which human observers classified the emotions and rated the naturalness of generated and natural emotional handshakes. Classification results for both stimulus groups were not significantly different, and for all emotional styles except neutral, participants rated the synthesized handshakes as equally natural as animations with the original trajectories. This shows that the proposed method generates highly realistic interactive movements that are almost indistinguishable from natural ones. As a further extension, we demonstrate the capability of the method to interpolate between different emotional styles.
CR Categories: I.2.9 [Artificial Intelligence]: Robotics - Kinematics and dynamics; I.2.6 [Artificial Intelligence]: Learning - Parameter learning; G.3 [Mathematics of Computing]: Probability and Statistics - Markov processes; G.3 [Mathematics of Computing]: Probability and Statistics - Probabilistic algorithms; J.4 [Computer Applications]: Social and Behavioral Sciences - Psychology; I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism - Animation; I.5.1 [Pattern Recognition]: Models - Statistical

Keywords: kinematics and dynamics, interaction techniques, animation, machine learning, probabilistic algorithms, gradient methods, nonlinear programming

1 Introduction

Accurate probabilistic models of interactive human motion are important for many technical applications, including computer animation, robotics, motion recognition, and the analysis of data in neuroscience and motor control. A particularly important problem in these fields is the online synthesis and simulation of human body motion, since models often have to react to inputs or to the behavior of the user, e.g. when real humans are immersed in an environment with virtual characters or humanoid robots. Alternatively, such movements might have to be fitted online to sensory inputs, for example when tracking people in computer vision. Such applications are difficult to address with methods for the offline synthesis of human motion, e.g. by motion capture and subsequent filtering or interpolation between stored trajectories [Bruderlin and Williams 1995; Wiley and Hahn 1997]. Ideal for many applications are representations in terms of dynamical systems, which can be directly embedded in control architectures or real-time processing loops for people tracking. The development of such dynamical models is difficult because of the high dimensionality of human motion data (e.g. 159 dimensions for the motion capture data used in this paper). However, the
intrinsic dimensionality of such data is typically rather small. Thus, dimensionality reduction techniques offer a way to preserve the relevant variability in a low-dimensional latent space, and these methods are an important part of most human motion models. Gaussian process latent variable models (GP-LVMs) have been proposed as a powerful approach for data modeling through dimensionality reduction [Lawrence 2004; Lawrence 2005]. The resulting low-dimensional latent representations are suitable for generalization and the extraction of characteristic features from few training examples. GP-LVMs are able to represent smooth nonlinear continuous mappings from the latent space to the data space and make it possible to model complex data manifolds in relatively low-dimensional latent spaces. Consequently, these methods have been used for the modeling of human movements in computer graphics and robotics. Since GP-LVMs are generative probabilistic models, they can serve as the building blocks of hierarchical architectures [Bishop and Tipping 1998; Lawrence and Moore 2007] to model conditional dependencies. In this paper we use them for the modeling of the kinematics of coordinated movements of multiple actors in an interactive setting.

2 Background

Statistical motion models have been used for the modeling and synthesis of human motion data (see e.g. [Grochow et al. 2004; Chai and Hodgins 2005]), for the editing of motion styles (e.g. [Li et al. 2002; Lau et al. 2009]), and for style interpolation [Brand and Hertzmann 2000a; Ikemoto et al. 2009]. However, most of these techniques result in offline models that are not suitable for modeling reactions to external inputs or to input from other characters in the scene. The dominant approach for the construction of dynamic models is physics-based animation. Early approaches were based on simple optimization principles, such as the minimization of energy [Witkin and Kass 1988; Fang and Pollard 2003]. However, it turned out that the generation of realistic and stylized complex human body motion is difficult. Recently, such physics-based animation has been combined with the use of motion capture data and learning techniques in order to improve its quality (e.g. [Popovic and Witkin 1999; Safonova et al. 2004; Sulejmanpasic and Popovic 2005; Ye and Liu 2008]). A few approaches have used statistical methods to learn dynamical systems that generate human motion [Brand and Hertzmann 2000a; Hsu et al. 2005; Wang et al. 2008], where the major problem is to develop methods that are accurate enough to capture the details of human motion and at the same time generalize well to similar situations. Gaussian process latent variable models have been used in computer graphics for the modeling of kinematics and motion interpolation (e.g. [Grochow et al. 2004; Mukai and Kuriyama 2005; Ikemoto et al. 2009]) and for the learning of low-dimensional dynamical models [Ye and Liu 2010]. Here, we try specifically to devise a flexible method for learning hierarchical models of this type that can capture dependencies between multiple interacting characters. The modeling of realistic motion with emotional style is a classical problem in computer graphics [Brand and Hertzmann 2000b; Rose et al. 1998; Unuma et al. 1995; Wang et al. 2006]. It is essential to capture the emotional movement with high accuracy, because the perception of emotion depends more on the motion than on the body shape [Atkinson et al. 2004].
Even though neural activities differ when real and virtual stimuli are shown to human observers [Perani et al. 2001; Han et al. 2005], the perception of emotional content is highly robust and for the most part independent of the character's body [McDonnell et al. 2008]. However, motion representations without body shapes are generally perceived as less emotionally intense than in the case where body shape is present [McDonnell et al. 2009]. Here, we develop a method that allows us to model emotional interactive motion with high accuracy in the context of real-time animation systems. For this purpose, we extend our previous hierarchical model [Taubert et al. 2011] in two directions: we introduce a dynamical nonlinear mapping similar to the GPDM to model (and learn) the dynamics of interactive movements, and we introduce style variables to modulate the emotional content and represent the personal style of the actors. The nonlinear dynamical mapping replaces the Hidden Markov Model (HMM) which we used in [Taubert et al. 2011] for the generation of time series. The advantage of this mapping over the HMM is increased accuracy, which e.g. allows for the modeling of hand contact during handshakes in a top-down fashion. This was very hard to achieve with the HMM. We exemplify our approach on emotional handshakes between two individuals. We validate the realism of the movements of the developed statistical model by psychophysical experiments and find that the generated patterns are hard to distinguish from natural interaction sequences.

3 Gaussian Process Latent Variable Model

Gaussian process latent variable models are a special class of nonlinear latent variable models that map a low-dimensional latent space onto the data [Bishop 1999]. Here, a high-dimensional data set $Y = [\mathbf{y}_1, \dots, \mathbf{y}_N]^T \in \mathbb{R}^{N \times D}$ is represented by instances of low-dimensional latent variables $X = [\mathbf{x}_1, \dots, \mathbf{x}_N]^T \in \mathbb{R}^{N \times q}$, where $N$ is the length of the data set, $D$ is the dimension of the data space and $q$ is the dimension of the latent space. Data points are generated from points in the latent space $\mathbf{x}$ by applying a function $f(\mathbf{x})$ and adding isotropic Gaussian noise $\varepsilon$,

$$\mathbf{y} = f(\mathbf{x}) + \varepsilon, \quad f(\mathbf{x}) \sim \mathcal{GP}\big(m_Y(\mathbf{x}), k_Y(\mathbf{x}, \mathbf{x}')\big), \tag{1}$$

where $f(\mathbf{x})$ is drawn from a Gaussian process with mean function $m_Y(\mathbf{x})$ and kernel function $k_Y(\mathbf{x}, \mathbf{x}')$. We assume a zero mean function $m_Y(\mathbf{x}) = 0$ and use a nonlinear radial basis function (RBF) kernel [Rasmussen and Williams 2008] to obtain a strong dimensionality reduction and smooth trajectories in latent space. Furthermore, the variance term for $\varepsilon$ can be absorbed into this kernel via the noise precision $\gamma_3$:

$$k_Y(\mathbf{x}, \mathbf{x}') = \gamma_1 \exp\!\left(-\frac{\gamma_2}{2}\,\|\mathbf{x} - \mathbf{x}'\|^2\right) + \gamma_3^{-1}\,\delta_{\mathbf{x},\mathbf{x}'}, \tag{2}$$

where $\gamma_1$ is the output scale and $\gamma_2$ the inverse width of the RBF term. Let $K_Y$ denote the $N \times N$ kernel covariance matrix, obtained by applying the kernel function (eqn. (2)) to each pair of data points. Then the likelihood of the latent variables and the kernel parameters $\boldsymbol{\gamma} = (\gamma_1, \gamma_2, \gamma_3)$ can be written as

$$p(Y \mid X, \boldsymbol{\gamma}) = \prod_{d=1}^{D} \mathcal{N}(\mathbf{y}_{:,d} \mid \mathbf{0}, K_Y),$$

where $K_Y$ depends on $\boldsymbol{\gamma}$ and $X$. Learning the GP-LVM is accomplished by minimizing the negative log-posterior of the latent variables and the kernel parameters via scaled conjugate gradients (SCG) [Lawrence and Moore 2007],

$$L = -\ln\big(p(Y \mid X, \boldsymbol{\gamma})\,p(X)\,p(\boldsymbol{\gamma})\big) = \frac{DN}{2}\ln 2\pi + \frac{D}{2}\ln|K_Y| + \frac{1}{2}\,\mathrm{tr}\!\left(K_Y^{-1} Y Y^T\right) + \frac{1}{2}\sum_{n=1}^{N}\mathbf{x}_n^T\mathbf{x}_n + \sum_j \ln\gamma_j. \tag{3}$$

We choose an isotropic Gaussian prior over the latent variables and a scale-free prior over the kernel parameters, since we have no reason to believe otherwise.
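For concreteness, the following NumPy sketch evaluates the RBF kernel of eq. (2) and the negative log-posterior of eq. (3). It is an illustration under our own naming conventions (rbf_kernel, gplvm_neg_log_posterior), not the authors' implementation, and the gradient-based SCG optimization itself is omitted.

```python
import numpy as np

def rbf_kernel(X, Xp, gamma):
    """Eq. (2): k_Y(x, x') = g1 * exp(-(g2/2) * ||x - x'||^2) + (1/g3) * delta(x, x')."""
    g1, g2, g3 = gamma
    sq = np.sum(X**2, 1)[:, None] + np.sum(Xp**2, 1)[None, :] - 2.0 * X @ Xp.T
    K = g1 * np.exp(-0.5 * g2 * sq)
    if X is Xp:                                  # white-noise term only on the diagonal
        K += np.eye(len(X)) / g3
    return K

def gplvm_neg_log_posterior(X, Y, gamma):
    """Eq. (3), up to additive constants: data term + latent prior + kernel prior."""
    N, D = Y.shape
    K = rbf_kernel(X, X, gamma)
    L = np.linalg.cholesky(K)
    Kinv_Y = np.linalg.solve(L.T, np.linalg.solve(L, Y))     # K_Y^{-1} Y
    data_term = (0.5 * D * N * np.log(2.0 * np.pi)
                 + D * np.sum(np.log(np.diag(L)))            # (D/2) ln|K_Y|
                 + 0.5 * np.sum(Y * Kinv_Y))                  # (1/2) tr(K_Y^{-1} Y Y^T)
    prior_X = 0.5 * np.sum(X**2)                              # isotropic Gaussian prior on X
    prior_gamma = np.sum(np.log(gamma))                       # scale-free prior on gamma
    return data_term + prior_X + prior_gamma

# toy usage: 50 poses of a 159-dimensional skeleton, 6-dimensional latent space
rng = np.random.default_rng(0)
Y = rng.standard_normal((50, 159))
X = 0.1 * rng.standard_normal((50, 6))
print(gplvm_neg_log_posterior(X, Y, gamma=np.array([1.0, 1.0, 100.0])))
```

In a full implementation this objective would be minimized jointly over X and gamma, e.g. with a conjugate-gradient optimizer, as described above.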
4 Emotional interaction model

In order to learn the interactions between pairs of actors we devised a hierarchical model based on GP-LVMs with RBF kernels and a Gaussian Process Dynamical Model (GPDM) [Wang et al. 2007b] with a linear+RBF kernel for the temporal evolution, see Figure 2. Furthermore, we introduce style variables [Wang et al. 2007a] to capture the emotional content (e) and the individual motion style of the actors (c). Our model is comprised of three layers, which are learned jointly. This mode of learning includes both the bottom-up and top-down contributions at each hierarchy level.

Figure 2: Graphical model representation. A couple C consists of two actors {1; 2}, who performed R trials of handshakes with N time steps per trial and emotional style e. y_{1;2}: observed joint angles and their latent representations x_{1;2} for each actor. The x_{1;2} are mapped onto the y_{1;2} via a shared function f(x_z) which has a Gaussian process prior. i: latent interaction representation, mapped onto the individual actors' latent variables by a function g(i) which also has a Gaussian process prior. The dynamics of i are described by a second-order dynamical model with a mapping function h(i_{<n}, c_{<n}, e_{<n}), where <n indicates time steps before n, here n-1 and n-2. Prior mean and kernel functions have been omitted for clarity.

4.1 Bottom Layer

In the bottom layer of our model, each actor {1; 2} is modeled by a GP-LVM. The observed joint angles $Y$ ($D = 159$) of one individual actor are represented by a 6-dimensional latent variable $X$. The $Y$ were treated as independent of actor identity, trials, emotional styles and time by sharing the mapping function in eq. (1) for the two GP-LVMs (see fig. 2),

$$\mathbf{y}_z = f(\mathbf{x}_z) + \varepsilon, \quad f(\mathbf{x}_z) \sim \mathcal{GP}\big(0, k_Y(\mathbf{x}_z, \mathbf{x}_z')\big), \quad z \in \{1, 2\}. \tag{4}$$

This approach forces the GP-LVM to learn a latent representation which captures the variation w.r.t. these variables (in particular, variation across emotional style, actor style and time). The negative joint log-probability for the two GP-LVMs in the bottom layer is

$$L = -\ln\big(p(Y_1 \mid X_1, \boldsymbol{\gamma})\,p(Y_2 \mid X_2, \boldsymbol{\gamma})\,p(X_1, X_2)\,p(\boldsymbol{\gamma})\big). \tag{5}$$

Note that the shared mapping function (eq. (4)) is responsible for the dependency of both observations on the same kernel parameters $\boldsymbol{\gamma}$.
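A minimal sketch of the bottom-layer objective of eq. (5), assuming the RBF kernel of eq. (2): the only point it illustrates is that both actors' GP-LVMs are evaluated with one shared kernel parameter vector gamma. All function names are our own, not the authors' code.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """RBF kernel of eq. (2) on a single latent set, noise term on the diagonal."""
    g1, g2, g3 = gamma
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    return g1 * np.exp(-0.5 * g2 * sq) + np.eye(len(X)) / g3

def gp_nll(X, Y, gamma):
    """-ln p(Y | X, gamma) for one actor."""
    N, D = Y.shape
    K = rbf_kernel(X, gamma)
    _, logdet = np.linalg.slogdet(K)
    return (0.5 * D * N * np.log(2.0 * np.pi) + 0.5 * D * logdet
            + 0.5 * np.trace(np.linalg.solve(K, Y @ Y.T)))

def bottom_layer_nll(X1, Y1, X2, Y2, gamma):
    """Eq. (5): the two actors share the mapping, i.e. the same kernel parameters gamma.
    The Gaussian prior on (X1, X2) is later replaced by the top-down term of eq. (12)."""
    prior_X = 0.5 * (np.sum(X1**2) + np.sum(X2**2))
    prior_gamma = np.sum(np.log(gamma))
    return gp_nll(X1, Y1, gamma) + gp_nll(X2, Y2, gamma) + prior_X + prior_gamma

# toy usage: two actors, 40 frames each, shared gamma
rng = np.random.default_rng(1)
Y1, Y2 = rng.standard_normal((40, 159)), rng.standard_normal((40, 159))
X1, X2 = 0.1 * rng.standard_normal((40, 6)), 0.1 * rng.standard_normal((40, 6))
print(bottom_layer_nll(X1, Y1, X2, Y2, gamma=np.array([1.0, 1.0, 100.0])))
```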
4.2 Interaction Layer

The latent representations $(X_1, X_2)$ of the joint angle data of the bottom layer form the 12-dimensional observation variable in the interaction layer. The mapping is represented by a GP-LVM and a 6-dimensional latent variable $I$. The negative log-probability of the interaction layer model can be written as

$$L = -\ln\big(p(X_1, X_2 \mid I, \boldsymbol{\beta})\,p(I)\,p(\boldsymbol{\beta})\big).$$

We use a nonlinear RBF kernel in this layer, with parameters $\boldsymbol{\beta}$. Furthermore, we want to model style modulation in this latent space, i.e. the characteristics of the couples of actors and their emotions. We make use of the multifactor model [Wang et al. 2007a], where a kernel matrix is constructed by an element-wise product of kernel matrices generated by different kernel functions. The element-wise multiplication of the matrices has a logical AND effect in the resulting kernel matrix and is very useful for binding a specific style to pairs of latent points. For example, we enforce that points corresponding to different emotions do not correlate. In this layer, we therefore introduce new latent variables in 1-of-K encoding [Bishop 2007]: an emotional style (out of K possible styles) is represented by a unit vector along an axis of the latent space, e.g. $\mathbf{e} = (1, 0, 0, 0)$ is neutral, $\mathbf{e} = (0, 1, 0, 0)$ is angry, etc. The couple identity (and hence, the individual motion style) $\mathbf{c}$ is encoded similarly. The total kernel function in the interaction model is thus

$$k_X\big([\mathbf{i}, \mathbf{c}, \mathbf{e}], [\mathbf{i}', \mathbf{c}', \mathbf{e}']\big) = (\mathbf{c}^T\mathbf{c}')(\mathbf{e}^T\mathbf{e}')\,\beta_1 \exp\!\left(-\frac{\beta_2}{2}\,\|\mathbf{i} - \mathbf{i}'\|^2\right) + \beta_3^{-1}\,\delta_{\mathbf{i},\mathbf{i}'}, \tag{6}$$

where the factors $(\mathbf{c}^T\mathbf{c}')$ and $(\mathbf{e}^T\mathbf{e}')$ correlate training data points from the same couple and the same emotional style, respectively. For these style variables we used a linear kernel because we would also like to interpolate between the styles (see the supplementary video for an interpolation example). The resulting negative log-probability with style variables therefore has the form

$$L = -\ln\big(p(X_1, X_2 \mid I, C, E, \boldsymbol{\beta})\,p(I)\,p(\boldsymbol{\beta})\big), \tag{7}$$

and can be optimized w.r.t. the style variables, too.
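The multifactor kernel of eq. (6) is an element-wise product of a couple factor, an emotion factor and an RBF term over the interaction latents, so that points whose couple or emotion differ receive zero signal covariance. A sketch with our own (hypothetical) names:

```python
import numpy as np

def onehot(index, K):
    """1-of-K style encoding, e.g. onehot(0, 4) = (1, 0, 0, 0) for 'neutral'."""
    v = np.zeros(K)
    v[index] = 1.0
    return v

def style_kernel(I, Ip, C, Cp, E, Ep, beta):
    """Eq. (6): k_X([i,c,e],[i',c',e']) =
       (c^T c')(e^T e') * b1 * exp(-(b2/2) ||i - i'||^2) + (1/b3) * delta."""
    b1, b2, b3 = beta
    sq = np.sum(I**2, 1)[:, None] + np.sum(Ip**2, 1)[None, :] - 2.0 * I @ Ip.T
    rbf = b1 * np.exp(-0.5 * b2 * sq)
    couple = C @ Cp.T          # linear kernel on couple identity: 1 iff same couple
    emotion = E @ Ep.T         # linear kernel on emotion: 1 iff same emotional style
    K = couple * emotion * rbf                 # element-wise product = logical AND
    if I is Ip:
        K += np.eye(len(I)) / b3
    return K

# two training points: same couple, different emotions, so the RBF term is gated to zero
I = np.random.randn(2, 6)
C = np.stack([onehot(0, 3), onehot(0, 3)])     # couple identity
E = np.stack([onehot(0, 4), onehot(1, 4)])     # neutral vs. angry
print(style_kernel(I, I, C, C, E, E, beta=(1.0, 1.0, 100.0)))
```

Because the style vectors enter through a linear kernel, fractional vectors such as e = (0.5, 0.5, 0, 0) yield intermediate covariances, which is what enables the style interpolation mentioned above.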
4.3 Dynamic Layer

In the top layer we learn the temporal evolution of $I$, i.e. the dynamics. To this end, we use a second-order GPDM [Wang et al. 2007b], which allows us to model e.g. dynamical dependencies on velocity and acceleration of $I$. Furthermore, a GPDM can capture the nonlinearities of the data without overfitting, and it can learn complex dynamics from small training sets. The dynamic mapping on the latent coordinates $I$ is conceptually similar to the GP-LVM, with the difference that the interaction variables from previous time steps $\mathbf{i}_{n-1}, \mathbf{i}_{n-2}$ are mapped onto the current $\mathbf{i}_n$,

$$\mathbf{i}_n = h(\mathbf{i}_{n-1}, \mathbf{i}_{n-2}) + \boldsymbol{\xi}, \quad h(\mathbf{i}_{n-1}, \mathbf{i}_{n-2}) \sim \mathcal{GP}\big(0, k_I([\mathbf{i}_{n-1}, \mathbf{i}_{n-2}], [\mathbf{i}_{\nu-1}, \mathbf{i}_{\nu-2}])\big), \tag{8}$$

where $\boldsymbol{\xi}$ is isotropic Gaussian noise. This kind of mapping results in a Markov chain and leads to a nonlinear generalization of a hidden Markov model (HMM) [Li et al. 2000],

$$p(I \mid \boldsymbol{\alpha}) = p(\mathbf{i}_1, \mathbf{i}_2) \prod_{n=3}^{N} p(\mathbf{i}_n \mid \mathbf{i}_{n-1}, \mathbf{i}_{n-2}, \boldsymbol{\alpha}), \tag{9}$$

where $\boldsymbol{\alpha}$ are the parameters of the kernel function and $p(\mathbf{i}_1, \mathbf{i}_2)$ is an isotropic Gaussian prior with zero mean and unit variance. Since this is a second-order GPDM, the kernel function depends on the last two latent positions. We use a linear+RBF kernel

$$k_I\big([\mathbf{i}_{n-1}, \mathbf{i}_{n-2}], [\mathbf{i}_{\nu-1}, \mathbf{i}_{\nu-2}]\big) = \alpha_1 \exp\!\left(-\frac{\alpha_2}{2}\|\mathbf{i}_{n-1} - \mathbf{i}_{\nu-1}\|^2 - \frac{\alpha_3}{2}\|\mathbf{i}_{n-2} - \mathbf{i}_{\nu-2}\|^2\right) + \alpha_4\,\mathbf{i}_{n-1}^T\mathbf{i}_{\nu-1} + \alpha_5\,\mathbf{i}_{n-2}^T\mathbf{i}_{\nu-2} + \alpha_6^{-1}\,\delta_{n,\nu}.$$

Similar to the regular RBF kernel in eq. (2), $\alpha_1$ is the output scale and $\alpha_2$, $\alpha_3$ are the inverse widths of the RBF terms. The parameters $\alpha_4$, $\alpha_5$ represent the output scales of the linear terms, and $\alpha_6$ is the precision of the distribution of the noise $\boldsymbol{\xi}$. The RBF terms of the kernel allow us to model nonlinear dynamics in the GPDM; the linear terms enable interpolation. Since individual and emotional style modulate the dynamics, we also extend this kernel function with factors depending on these styles,

$$k_I\big([\mathbf{i}_{n-1}, \mathbf{i}_{n-2}, \mathbf{c}_{n-1}, \mathbf{c}_{n-2}, \mathbf{e}_{n-1}, \mathbf{e}_{n-2}], [\mathbf{i}_{\nu-1}, \mathbf{i}_{\nu-2}, \mathbf{c}_{\nu-1}, \mathbf{c}_{\nu-2}, \mathbf{e}_{\nu-1}, \mathbf{e}_{\nu-2}]\big) = (\mathbf{c}_{n-1}^T\mathbf{c}_{\nu-1}\,\mathbf{c}_{n-2}^T\mathbf{c}_{\nu-2})(\mathbf{e}_{n-1}^T\mathbf{e}_{\nu-1}\,\mathbf{e}_{n-2}^T\mathbf{e}_{\nu-2})\left[\alpha_1 \exp\!\left(-\frac{\alpha_2}{2}\|\mathbf{i}_{n-1} - \mathbf{i}_{\nu-1}\|^2 - \frac{\alpha_3}{2}\|\mathbf{i}_{n-2} - \mathbf{i}_{\nu-2}\|^2\right) + \alpha_4\,\mathbf{i}_{n-1}^T\mathbf{i}_{\nu-1} + \alpha_5\,\mathbf{i}_{n-2}^T\mathbf{i}_{\nu-2}\right] + \alpha_6^{-1}\,\delta_{n,\nu} \tag{10}$$

to suppress correlation between points which differ in content and style in the kernel matrix of the GPDM. Let the training outputs be $I_{\mathrm{out}} = [\mathbf{i}_3, \dots, \mathbf{i}_N]^T$. From eq. (9) we obtain the negative log-probability of the dynamic layer,

$$L = -\ln\big(p(I \mid C, E, \boldsymbol{\alpha})\,p(\boldsymbol{\alpha})\big) = \frac{q(N-2)}{2}\ln 2\pi + \frac{q}{2}\ln|K_I| + \frac{1}{2}\,\mathrm{tr}\!\left(K_I^{-1} I_{\mathrm{out}} I_{\mathrm{out}}^T\right) + \frac{1}{2}\big(\mathbf{i}_1^T\mathbf{i}_1 + \mathbf{i}_2^T\mathbf{i}_2\big) + \sum_j \ln\alpha_j, \tag{11}$$

where $K_I$ is the $(N-2) \times (N-2)$ kernel matrix constructed from the training inputs $I_{\mathrm{in}} = [[\mathbf{i}_2, \dots, \mathbf{i}_{N-1}]^T, [\mathbf{i}_1, \dots, \mathbf{i}_{N-2}]^T]$ and the style variables $C$, $E$.

Figure 3: Handshake trajectories in the latent space of the interaction layer. The separation between emotional styles is clearly visible.

The whole model can be learned by adding the negative log-probabilities from the different layers (eqns. (5), (7), (11)), dropping the isotropic Gaussian priors of the bottom and interaction layers because of the top-down influence from the next higher layer,

$$L = -\ln\big\{p(Y_1 \mid X_1, \boldsymbol{\gamma})\,p(Y_2 \mid X_2, \boldsymbol{\gamma})\,p(X_1, X_2 \mid I, C, E, \boldsymbol{\beta})\,p(I \mid C, E, \boldsymbol{\alpha})\,p(\boldsymbol{\alpha})\,p(\boldsymbol{\beta})\,p(\boldsymbol{\gamma})\big\}, \tag{12}$$

and minimizing it w.r.t. the latent variables, the style variables and the kernel parameters. So far we have described the learning for one motion sequence. Learning multiple sequences proceeds along the same lines, but at the beginning of each sequence the first two points have no predecessors and hence have the isotropic Gaussian prior from eqn. (9). In other words, the Markov chain is restarted at the beginning of each sequence.
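The following sketch shows one possible implementation of the style-modulated second-order dynamics kernel of eq. (10). All names are ours; since couple and emotion are constant within a trial, a single one-hot vector per training pair stands in for the (n-1, n-2) product factors, and the grouping chosen here (the style factor gating both the RBF and the linear terms, with the noise term outside) is one reading of eq. (10).

```python
import numpy as np

def sqdist(A, B):
    """Pairwise squared Euclidean distances between the rows of A and B."""
    return np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T

def gpdm_style_kernel(In1, In2, Cn, En, Iv1, Iv2, Cv, Ev, alpha):
    """Second-order linear+RBF dynamics kernel of eq. (10), gated by style factors.
    In1/In2 hold the latent states at times n-1/n-2 (one row per training pair);
    Cn, En are the couple/emotion one-hot vectors of those pairs."""
    a1, a2, a3, a4, a5, a6 = alpha
    rbf = a1 * np.exp(-0.5 * a2 * sqdist(In1, Iv1) - 0.5 * a3 * sqdist(In2, Iv2))
    linear = a4 * In1 @ Iv1.T + a5 * In2 @ Iv2.T
    style = (Cn @ Cv.T) * (En @ Ev.T)   # 1 iff same couple AND same emotional style
    K = style * (rbf + linear)          # assumption: style gates all signal terms
    if In1 is Iv1:                      # noise precision a6 only between identical indices
        K += np.eye(len(In1)) / a6
    return K

# toy usage: 5 training transitions of a 6-d interaction latent, one couple, emotion "angry"
rng = np.random.default_rng(2)
In1, In2 = rng.standard_normal((5, 6)), rng.standard_normal((5, 6))
C = np.tile(np.eye(2)[0], (5, 1))
E = np.tile(np.eye(4)[1], (5, 1))
K_I = gpdm_style_kernel(In1, In2, C, E, In1, In2, C, E, alpha=(1, 1, 1, 0.1, 0.1, 100))
print(K_I.shape)    # (5, 5) kernel matrix, i.e. (N-2) x (N-2) for one sequence
```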
4.4 Motion Generation

Our model is fully generative. We generate new interaction sequences in the latent space of the interaction layer by running mean prediction on the GPDM. Given the training pairs $I_{\mathrm{in}}$, $I_{\mathrm{out}}$, the training style variables $C$, $E$, the target style variables $\mathbf{c}^*$, $\mathbf{e}^*$ and the kernel function in eq. (10), the predictive distribution for generating new target outputs $I^*_{\mathrm{out}}$ can be derived. With the (second-order) Markov property (eq. (9)) of the GPDM, a new target output point can be generated given the previous steps and style values,

$$\mathbf{i}^*_n \sim \mathcal{N}\big(\mu_I([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*]),\; \sigma^2_I([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*])\,\mathbf{1}\big), \tag{13}$$

where

$$\mu_I([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*]) = \mathbf{k}_I([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*])^T K_I^{-1} I_{\mathrm{out}}, \tag{14}$$

$$\sigma^2_I([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*]) = k_I\big([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*], [\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*]\big) - \mathbf{k}_I([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*])^T K_I^{-1}\,\mathbf{k}_I([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*]),$$

and $\mathbf{1}$ is the identity matrix. The function $\mathbf{k}_I([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*])$ results in a vector, constructed from the test style values, the previous test inputs, all the training inputs and their associated style values,

$$\mathbf{k}_I([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*]) = k_I\big([\mathbf{i}^*_{n-1}, \mathbf{i}^*_{n-2}, \mathbf{c}^*, \mathbf{e}^*], [I_{\mathrm{in}}, C, E]\big). \tag{15}$$

For trajectory generation, we set the latent position at each time step to be the expected point given the previous steps,

$$\tilde{\mathbf{i}}_n = \mu_I([\tilde{\mathbf{i}}_{n-1}, \tilde{\mathbf{i}}_{n-2}, \tilde{\mathbf{c}}, \tilde{\mathbf{e}}]). \tag{16}$$

Similarly, new poses can be generated by going top-down through the hierarchy of the model to produce new target outputs in each layer, given the generated outputs of the next upper layer, $\tilde{X}_{\{1;2\}} = \mu_{X_{\{1;2\}}}(\tilde{I})$, $\tilde{Y}_{\{1;2\}} = \mu_{Y_{\{1;2\}}}(\tilde{X}_{\{1;2\}})$. We applied this method to generate online four types of emotional handshakes. We processed 12 motion capture trajectories, with three repetitions for each emotion. On an Intel Quad-Core 2.7 GHz computer, this took about 20 hours. The generated movements look quite natural (see the supplementary video), as was further corroborated by our psychophysical analysis below.
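Eqs. (13)-(16) amount to an autoregressive GP mean prediction in the interaction latent space. The sketch below is a schematic of that rollout under our own naming: kernel_vec stands in for eq. (15) (evaluating eq. (10) between the current test pair and all training pairs), and K_inv_Iout for a precomputed K_I^{-1} I_out.

```python
import numpy as np

def rollout(i1, i2, c_star, e_star, steps, kernel_vec, K_inv_Iout):
    """Eq. (16): set each new latent position to the GP mean of eq. (14),
    given the previous two positions and the target style (c*, e*)."""
    traj = [np.asarray(i1), np.asarray(i2)]
    for _ in range(steps):
        k_vec = kernel_vec(traj[-1], traj[-2], c_star, e_star)   # eq. (15)
        traj.append(k_vec @ K_inv_Iout)                          # eq. (14): mu_I = k_I^T K_I^{-1} I_out
    return np.stack(traj)

# toy usage with a stand-in kernel vector against N stored training outputs I_out;
# in the full model the trajectory would then be decoded top-down,
# X_{1,2} = mu_X(I_tilde) and Y_{1,2} = mu_Y(X_{1,2}), by the same kind of mean prediction.
rng = np.random.default_rng(3)
N, q = 20, 6
I_out = rng.standard_normal((N, q))
K_inv_Iout = np.linalg.solve(np.eye(N) * 2.0, I_out)             # placeholder for K_I^{-1} I_out
kernel_vec = lambda i1, i2, c, e: np.exp(-0.5 * np.sum((I_out - i1)**2, axis=1))
print(rollout(np.zeros(q), np.zeros(q), None, None, steps=10,
              kernel_vec=kernel_vec, K_inv_Iout=K_inv_Iout).shape)   # (12, 6)
```

Because only kernel evaluations and one matrix-vector product are needed per frame, this mean-prediction step is what makes online synthesis feasible.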
5 Results

5.1 Psychophysical validation

To test whether the accuracy of the developed probabilistic model is sufficient for the generation of emotionally expressive computer animations that are perceived as natural movements, we conducted a Turing test in which human observers had to judge the naturalness and emotional content of both natural and generated movements in a psychophysical study. Nine participants (3 female, mean age: 3 years, 4 months) took part in this experiment. All were naïve with respect to the purpose of the study. Each participant was tested individually, and the experimenter left the testing room when testing started to avoid that participants would feel observed while entering their responses. Stimuli were rendered video clips showing two gray avatars shaking hands on a light gray background. Rendering was done in a pipeline using commercial software from Autodesk: with Motion Builder we applied the natural and generated motion data to the avatars, which were then rendered with 3d Studio Max. The length of the stimuli varied between 2 and 4 seconds. See Figure 1 and the supplementary video for an illustration of the appearance of the avatars. The appearance of the avatars was kept as simple as possible (e.g. omitting clothing, hair, and facial expressions) to keep the participants focused on the bodily movements. The complete stimulus set consisted of the animated movements of three original handshakes and one generated handshake per emotion (neutral, happy, angry, and sad); it thus consisted of 16 different video clips, each repeated three times in randomized order. The emotions were chosen from the set of basic emotions described by Ekman [Ekman and Friesen 1971] that have previously been shown to be conveyed by full-body movements [Roether et al. 2009; Taubert et al. 2011]. Presentation of the stimuli and recording of the subjects' responses were done using Matlab and the Psychophysics Toolbox [Brainard 1997]. Participants performed two tasks: an emotion classification task and a naturalness rating task. In the emotion classification task we tested whether the generated handshake movements still conveyed the intended emotion. In each trial, participants observed the video clip twice before they answered the question "Which emotion did the avatars express?" by pressing one of four keys on a standard keyboard, marked with letters corresponding to the tested emotions. Stimuli were repeated three times in random order. The discrimination performance of the participants was determined using a contingency-table analysis (χ² test).

Table 1: Classification results. Emotion classification of synthesized (top) and natural (bottom) handshakes. Intended affect is shown in columns, percentages of subjects' (N = 9) responses in rows. Bold entries on the diagonal mark rates of correct classification. class. rate: overall mean correct classification rate ± standard deviation across subjects (synthesized: 70.97% ± 17.6; natural: 74.5% ± 15.4). This standard deviation, a measure for the agreement between subjects, is similar for synthesized and natural handshakes.
In the second part of the experiment we investigated whether the computer animations of synthesized movements were perceived as less natural than animations created from original motion capture data. For this purpose we conducted a rating task in which participants rated the naturalness of the displayed video clips. Participants were instructed that the stimuli they would see throughout the experiment could show either recorded movements of real humans interacting naturally or artificial, computer-generated movements. They rated the stimuli on a Likert scale ranging from 1 (very artificial) to 6 (very natural). The same stimulus set as in the emotion classification task was used for the naturalness rating task. Again, each stimulus was repeated three times and the presentation order was randomized. Since the numbers of natural (4 emotions × 3 prototypes × 3 repetitions) and synthesized (4 emotions × 1 generated movement × 3 repetitions) stimuli were not counterbalanced, we used non-parametric statistical testing to investigate paired differences between these stimulus classes. Again, participants observed each stimulus twice before they responded by keypress.

5.2 Results

The classification results are shown in Table 1. Participants classified the expressed emotions of the handshake movements with high accuracy. Overall, participants classified the original natural handshakes correctly in 74.5% of the trials. The synthesized movements were on average classified correctly in 70.97% of the trials. These results were confirmed by a contingency-table analysis testing the null hypothesis that the variables intended emotion and perceived emotion were independent. We found highly significant violations of this null hypothesis (generated movements: χ² = 501.2, d.f. = 9, p < 0.001; original/natural movements: d.f. = 9, p < 0.001). Comparisons of the percentages of correct classification of synthesized and natural animations for the four tested emotions revealed no differences between animation classes for any of the emotions (paired Wilcoxon rank-sum test, all p > 0.1). Thus, the animations created from generated handshake movements still conveyed the intended emotions correctly. Furthermore, the standard deviation across subjects, a measure of between-subjects agreement, is similar for synthesized (17.6%) and natural (15.4%) handshakes.
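For reference, the reported analyses map onto standard routines. The sketch below uses SciPy with made-up response counts and ratings (the paper's per-cell data are not reproduced here): a chi-square contingency test for the classification matrices and a rank-sum comparison in the spirit of the pairwise naturalness tests.

```python
import numpy as np
from scipy.stats import chi2_contingency, ranksums

# hypothetical 4x4 contingency table (intended emotion x perceived emotion);
# the counts below are invented for illustration only
observed = np.array([[20,  2,  3,  2],
                     [ 1, 22,  2,  2],
                     [ 3,  1, 19,  3],
                     [ 3,  2,  3, 20]])
chi2, p, dof, _ = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, d.f. = {dof}, p = {p:.2g}")   # d.f. = 9 for a 4x4 table

# naturalness ratings (1 = very artificial ... 6 = very natural) for one emotion,
# synthesized vs. original stimuli; again, invented example values
synthesized = np.array([4, 5, 4, 3, 5, 4, 4, 5, 4])
original    = np.array([5, 4, 5, 4, 5, 5, 4, 5, 4])
stat, p = ranksums(synthesized, original)                # Wilcoxon rank-sum comparison
print(f"rank-sum statistic = {stat:.2f}, p = {p:.3f}")
```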
Figure 4: Mean ratings of naturalness for original (colored) and synthesized (colored, dotted areas) handshake movements. On average, the naturalness ratings are comparable for synthesized and original movements. Only the neutral synthesized handshake movement was rated as less natural than its original counterpart. For all other emotions participants were unable to distinguish original from synthesized movements. Error bars show standard errors, asterisks indicate significant pairwise differences (** p < 0.01, Wilcoxon rank-sum test).

The mean results of the naturalness ratings are shown in Figure 4. For happy, angry, and sad handshakes, participants were unable to distinguish between natural and synthesized movements. The mean naturalness ratings for those emotional movements did not differ from each other, as tested with a non-parametric paired difference test (Wilcoxon rank-sum tests, all p < 0.16). Only the neutral synthesized handshake movement was rated as less natural than its original counterpart (p < 0.01). This surprising result for the neutral stimuli can likely be explained by the nature of the motion capture prototypes for the neutral handshakes that were used to generate the synthesized pattern. In one of the three original movements, one actor made a communicative gesture after the handshake: he raised his right arm instead of taking it down to the starting position (see the supplementary video for an illustration). This gesture affected the naturalness ratings in two ways. On the one hand, it made this movement unique, which led to the participants' perception that this movement is very natural (median rating: 5), because they would not assume a computer to generate such individual gestures. On the other hand, this gesture also altered the generated movement, which interpolates between the training examples. As a result, in the synthesized movement the right arm of the character shows a slight raising movement, which might have looked unnatural. Future experiments will use training movements with improved segmentation in order to avoid such problems.

6 Conclusion

We have devised a new online-capable method for the accurate simulation of interactive human movements. The method is based on a hierarchical probabilistic model that combines layers containing Gaussian process latent variable models with a dynamics in the latent space that is implemented by a Gaussian Process Dynamical Model. In addition, the proposed method is able to model emotional style through appropriate additional dimensions in the latent space. The results of our psychophysical experiments confirm the suitability of the derived probabilistic model for the generation of emotionally expressive movements. The animations of the synthesized movements conveyed the correct information about style changes. Furthermore, when the motion capture examples used to train the system were appropriately segmented, the resulting online-generated patterns were perceived as almost as natural as the original motion capture data.
This highlights the importance of a robust automatic segmentation process, for which we will employ our own previously developed Bayesian segmentation approach [Endres et al. 2011]. Future work will extend the modular architecture of our model to build algorithms for continuous interpolation of emotional styles (an early example of this is contained in the supplementary video). In addition, we will embed the learned model in an online animation pipeline in order to study interactions between real and virtual humans.

Acknowledgments

We thank E. M. J. Huis in 't Veld for help with the movement recordings and all our participants for taking part in the psychophysical study. This work was supported by the EU projects FP7-ICT SEARISE, FP TP3 TANGO, and FP7-ICT AMARSi, the DFG (GI 305/4-1, and GZ: KA 158/15-1), and the German Federal Ministry of Education and Research (BMBF; FKZ: 01GQ100).

References

ATKINSON, A. P., DITTRICH, W. H., GEMMELL, A. J., AND YOUNG, A. W. 2004. Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception 33, 6 (June).

BISHOP, C. M., AND TIPPING, M. E. 1998. A hierarchical latent variable model for data visualization. IEEE Transactions on Pattern Analysis and Machine Intelligence 20, 3.

BISHOP, C. M. 1999. Latent variable models. In Jordan, M. I. (Ed.), Learning in Graphical Models. MIT Press.

BISHOP, C. M. 2007. Pattern Recognition and Machine Learning. Springer.

BRAINARD, D. 1997. The Psychophysics Toolbox. Spatial Vision 10, 433-436.

BRAND, M., AND HERTZMANN, A. 2000. Style machines. In SIGGRAPH 2000.

BRAND, M., AND HERTZMANN, A. 2000. Style machines. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, SIGGRAPH '00.

BRUDERLIN, A., AND WILLIAMS, L. 1995. Motion signal processing. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, ACM, New York, NY, USA, SIGGRAPH '95.

CHAI, J., AND HODGINS, J. K. 2005. Performance animation from low-dimensional control signals. ACM Transactions on Graphics 24, 3.

EKMAN, P., AND FRIESEN, W. V. 1971. Constants across cultures in the face and emotion. Journal of Personality and Social Psychology 17.
ENDRES, D., CHRISTENSEN, A., OMLOR, L., AND GIESE, M. A. 2011. Emulating human observers with Bayesian binning: segmentation of action streams. ACM Transactions on Applied Perception (TAP) 8, 3.

FANG, A. C., AND POLLARD, N. S. 2003. Efficient synthesis of physically valid human motion. ACM Transactions on Graphics 22, 3.

GROCHOW, K., MARTIN, S. L., HERTZMANN, A., AND POPOVIC, Z. 2004. Style-based inverse kinematics. ACM Transactions on Graphics 23, 3.

HAN, S., JIANG, Y., HUMPHREYS, G. W., ZHOU, T., AND CAI, P. 2005. Distinct neural substrates for the perception of real and virtual visual worlds. NeuroImage 24.

HSU, E., PULLI, K., AND POPOVIC, J. 2005. Style translation for human motion. ACM Transactions on Graphics 24, 3.

IKEMOTO, L., ARIKAN, O., AND FORSYTH, D. A. 2009. Generalizing motion edits with Gaussian processes. ACM Transactions on Graphics 28, 1.

LAU, M., BAR-JOSEPH, Z., AND KUFFNER, J. 2009. Modeling spatial and temporal variation in motion data. ACM Transactions on Graphics 28, 5.

LAWRENCE, N. D., AND MOORE, A. J. 2007. Hierarchical Gaussian process latent variable models. In Proceedings of the International Conference on Machine Learning, Omnipress.

LAWRENCE, N. D. 2004. Gaussian process latent variable models for visualisation of high dimensional data. In Advances in Neural Information Processing Systems, MIT Press.

LAWRENCE, N. D. 2005. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research 6.

LI, X., PARIZEAU, M., AND PLAMONDON, R. 2000. Training hidden Markov models with multiple observations - a combinatorial method. IEEE Trans. Pattern Anal. Mach. Intell. 22, 4.

LI, Y., WANG, T., AND SHUM, H.-Y. 2002. Motion texture: a two-level statistical model for character motion synthesis. In SIGGRAPH 2002.

MCDONNELL, R., JÖRG, S., MCHUGH, J., NEWELL, F., AND O'SULLIVAN, C. 2008. Evaluating the emotional content of human motions on real and virtual characters. In Proceedings of the 5th Symposium on Applied Perception in Graphics and Visualization, ACM, New York, NY, USA, APGV '08.

MCDONNELL, R., JÖRG, S., MCHUGH, J., NEWELL, F. N., AND O'SULLIVAN, C. 2009. Investigating the role of body shape on the perception of emotion. ACM Trans. Appl. Percept. 6, 3 (Sept.), 14:1-14:11.

MUKAI, T., AND KURIYAMA, S. 2005. Geostatistical motion interpolation. ACM Transactions on Graphics 24, 3.

PERANI, D., FAZIO, F., BORGHESE, N., TETTAMANTI, M., FERRARI, S., DECETY, J., AND GILARDI, M. 2001. Different brain correlates for watching real and virtual hand actions. NeuroImage 14.

POPOVIC, Z., AND WITKIN, A. P. 1999. Physically based motion transformation. In SIGGRAPH 99, 11-20.

RASMUSSEN, C. E., AND WILLIAMS, C. K. I. 2008. Gaussian processes for machine learning. Journal of the American Statistical Association 103.

ROETHER, C., OMLOR, L., CHRISTENSEN, A., AND GIESE, M. A. 2009. Critical features for the perception of emotion from gait. Journal of Vision 9, 6.

ROSE, C., BODENHEIMER, B., AND COHEN, M. F. 1998. Verbs and adverbs: multidimensional motion interpolation using radial basis functions. IEEE Computer Graphics and Applications 18.

SAFONOVA, A., HODGINS, J. K., AND POLLARD, N. S. 2004. Synthesizing physically realistic human motion in low-dimensional, behavior-specific spaces. ACM Transactions on Graphics 23, 3.

SULEJMANPASIC, A., AND POPOVIC, J. 2005. Adaptation of performed ballistic motion. ACM Transactions on Graphics 24, 1.

TAUBERT, N., ENDRES, D., CHRISTENSEN, A., AND GIESE, M. A. 2011. Shaking hands in latent space: modeling emotional interactions with Gaussian process latent variable models. In KI 2011: Advances in Artificial Intelligence, LNAI, S. Edelkamp and J. Bach, Eds. Springer.

UNUMA, M., ANJYO, K., AND TAKEUCHI, R. 1995. Fourier principles for emotion-based human figure animation. In Proceedings of Computer Graphics, SIGGRAPH 95.
WANG, Y., LIU, Z.-Q., AND ZHOU, L.-Z. 2006. Learning style-directed dynamics of human motion for automatic motion synthesis. In Proc. IEEE International Conference on Systems, Man, and Cybernetics.

WANG, J. M., FLEET, D. J., AND HERTZMANN, A. 2007. Multifactor Gaussian process models for style-content separation. In ICML.

WANG, J. M., FLEET, D. J., AND HERTZMANN, A. 2007. Gaussian process dynamical models for human motion. IEEE Trans. Pattern Anal. Machine Intell.

WANG, J. M., FLEET, D. J., AND HERTZMANN, A. 2008. Gaussian process dynamical models for human motion. IEEE Transactions on Pattern Analysis and Machine Intelligence 30, 2.

WILEY, D., AND HAHN, J. 1997. Interpolation synthesis for articulated figure motion. In Virtual Reality Annual International Symposium, IEEE 1997.

WITKIN, A. P., AND KASS, M. 1988. Spacetime constraints. In SIGGRAPH 88.

YE, Y., AND LIU, C. K. 2008. Animating responsive characters with dynamic constraints in near-unactuated coordinates. ACM Transactions on Graphics 27, 5.

YE, Y., AND LIU, C. K. 2010. Synthesis of responsive motion using a dynamic model. Computer Graphics Forum 29, 2.
