Evaluation Of Mapped Motion

Though the resulting animation looks promising, we would like to verify that the mapped face motion is valid for the target subjects. Theoretically, the best way to evaluate the mapped motion would be to have the target subject speak exactly the same sentence in the same manner as the source subject, measure the resulting face motion, and compare it with the mapped motion. In practice, however, it is almost impossible to speak in exactly the same way as another person. We therefore evaluate our results by comparing the time-dependent behavior of the first two major principal components, noting that these components account for more than 90% of the total variance and that their deformation effects are far more pronounced than those of the remaining components.
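As a concrete illustration, the share of variance captured by the first two principal components can be checked directly from motion data. The sketch below uses synthetic data in place of real capture (a frames × coordinates matrix of face shapes) and computes per-component explained variance via an SVD of the centered data:

```python
import numpy as np

# Synthetic stand-in for captured face-motion data: each row is one frame,
# each column one vertex coordinate. Column scales fall off so that a few
# directions dominate, loosely mimicking real face-motion statistics.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 30)) * np.linspace(5.0, 0.1, 30)

centered = frames - frames.mean(axis=0)
# Squared singular values of the centered data are proportional to the
# variance along each principal component.
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(f"first two PCs explain {explained[:2].sum():.1%} of the variance")
```

On the real captured data the text reports this ratio exceeding 90%; the synthetic columns here are only meant to show the computation.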

The American male source subject and the Japanese male target subject shown in Figure 17.18 were chosen for this evaluation. First, we selected four English sentences spoken by the American subject for transfer to the Japanese subject, and eight Japanese sentences spoken by the Japanese subject for comparison. We then extracted the linear-combination values for both speakers and used them to reconstruct talking-face animations with their individual face models.
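The reconstruction step can be sketched as follows. This is a minimal illustration, not the authors' implementation: `mean_face`, `components`, and the toy dimensions are assumptions, and a real face model would have thousands of vertices.

```python
import numpy as np

# Toy face model: a flattened (x, y, z) offset per vertex.
num_vertices = 4                                     # real models have thousands
rng = np.random.default_rng(1)
mean_face = rng.normal(size=3 * num_vertices)        # neutral shape (assumed)
components = rng.normal(size=(2, 3 * num_vertices))  # first two PCs (assumed)

def reconstruct(weights):
    """Mean shape plus the weighted sum of principal components."""
    return mean_face + np.asarray(weights) @ components

# One frame of animation from its linear-combination values.
frame = reconstruct([100.0, -50.0])
print(frame.shape)   # one 3D offset per vertex, flattened
```

Playing back a sequence of such per-frame weight pairs with each subject's own mean shape and components yields the individual talking-face animations described above.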

The continuous lines in Figure 17.19(a) show how the first and second principal components change during vocalization of all the sentences. The gray lines in the left-hand plot represent the Japanese subject's trajectory, and the black lines the American subject's. The figure shows that the American subject's linear-combination values cannot be applied directly to the target subject, because the distributions and directions of the two sets of trajectories differ markedly.

The distribution of a trajectory denotes the range over which the face changes during speech with respect to the two major principal components. If one trajectory falls outside the main area of the other, mapping will produce unusual postures on the target face. The trajectory direction, in turn, denotes how the face deforms along the first two components; different directions indicate different face motion during speech. Figure 17.19(a) therefore tells us that using the American subject's linear-combination values with the target subject's face model would result in unusual face postures, and that the target face would move differently from the original subject's.

Figure 17.19 The trajectory of the first two PCs during speech for a source subject (black): (a) before and (b) after applying facial motion mapping; and the trajectory of the target subject (gray).

In Figure 17.19(b) the Japanese subject's trajectory is again shown in gray, while the motion originating from the American subject (in black) now results from facial motion mapping. Compared with the left-hand plot, the area of the black trajectory is now almost entirely contained within the area of the gray trajectory, and its direction matches one of the direction groups of the gray lines. This implies that the face motion transferred by facial motion mapping lies within the range of the target subject's original face motion.
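The two criteria used above, range containment and direction agreement, can be quantified with a simple check. This is a hedged sketch on synthetic 2D trajectories (per-frame values of the first two PCs): a bounding box stands in for the "major area" of a trajectory, and the dominant direction is taken from an SVD of the centered points.

```python
import numpy as np

def main_direction(traj):
    """Unit vector of the dominant direction of a centered 2D trajectory."""
    centered = traj - traj.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

def fraction_inside(source, target):
    """Fraction of source points inside the target trajectory's bounding box."""
    lo, hi = target.min(axis=0), target.max(axis=0)
    inside = np.all((source >= lo) & (source <= hi), axis=1)
    return inside.mean()

# Synthetic trajectories: the target covers a wide range; the mapped
# motion is narrower, mimicking the situation in Figure 17.19(b).
rng = np.random.default_rng(2)
target = rng.normal(size=(500, 2)) * [40.0, 15.0]
mapped = rng.normal(size=(200, 2)) * [20.0, 8.0]

coverage = fraction_inside(mapped, target)
cos_sim = np.clip(abs(main_direction(mapped) @ main_direction(target)), 0.0, 1.0)
angle = np.degrees(np.arccos(cos_sim))
print(f"{coverage:.0%} of mapped points fall inside the target range")
print(f"angle between dominant directions: {angle:.1f} degrees")
```

A high coverage fraction and a small angle correspond to the qualitative observation made from Figure 17.19(b); a convex hull would give a tighter containment test than the bounding box used here.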
