{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,12,10]],"date-time":"2024-12-10T05:22:14Z","timestamp":1733808134564,"version":"3.30.1"},"reference-count":20,"publisher":"Wiley","issue":"2","license":[{"start":{"date-parts":[[2002,8,13]],"date-time":"2002-08-13T00:00:00Z","timestamp":1029196800000},"content-version":"vor","delay-in-days":104,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["J. Visual. Comput. Animat."],"published-print":{"date-parts":[[2002,5]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Realistic face animation is especially hard as we are all experts in the perception and interpretation of face dynamics. One approach is to simulate facial anatomy. Alternatively, animation can be based on first observing the visible 3D dynamics, extracting the basic modes, and putting these together according to the required performance. This is the strategy followed by the paper, which focuses on speech. The approach follows a kind of bootstrap procedure. First, 3D shape statistics are learned from a talking face with a relatively small number of markers. A 3D reconstruction is produced at temporal intervals of 1\/25 seconds. A topological mask of the lower half of the face is fitted to the motion. Principal component analysis (PCA) of the mask shapes reduces the dimension of the mask shape space. The result is twofold. On the one hand, the face can be animated; in our case it can be made to speak new sentences. On the other hand, face dynamics can be tracked in 3D without markers for performance capture. 
Copyright \u00a9 2002 John Wiley & Sons, Ltd.<\/jats:p>","DOI":"10.1002\/vis.283","type":"journal-article","created":{"date-parts":[[2002,8,27]],"date-time":"2002-08-27T01:28:17Z","timestamp":1030411697000},"page":"97-106","source":"Crossref","is-referenced-by-count":15,"title":["Realistic face animation for speech"],"prefix":"10.1002","volume":"13","author":[{"given":"Gregor A.","family":"Kalberer","sequence":"first","affiliation":[]},{"given":"Luc","family":"Van Gool","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2002,8,13]]},"reference":[{"key":"e_1_2_1_2_2","doi-asserted-by":"publisher","DOI":"10.1023\/A:1008166717597"},{"key":"e_1_2_1_3_2","first-page":"35","volume-title":"SIGGRAPH'92 Conference Proceedings","author":"Beier Th","year":"1992"},{"key":"e_1_2_1_4_2","unstructured":"BreglerCh.OmohundroS.Nonlinear image interpolation using manifold learning. InAdvances in Neural Information Processing Systems 1995;7."},{"key":"e_1_2_1_5_2","doi-asserted-by":"crossref","unstructured":"BreglerC CovellM SlaneyM.Video rewrite: driving visual speech with audio. InSIGGRAPH'971997;353\u2013360.","DOI":"10.1145\/258734.258880"},{"key":"e_1_2_1_6_2","doi-asserted-by":"crossref","unstructured":"ChenD StateA.Interactive shape metamorphosis. InSymposium on Interactive 3D Graphics: SIGGRAPH'95 Conference Proceedings1995;43\u201344.","DOI":"10.1145\/199404.199411"},{"key":"e_1_2_1_7_2","doi-asserted-by":"crossref","unstructured":"BrandM.Voice puppetry. InAnimation (SIGGRAPH'99)1999.","DOI":"10.1145\/311535.311537"},{"key":"e_1_2_1_8_2","doi-asserted-by":"crossref","unstructured":"PighinF HeckerJ LischinskiD SzeliskiR SalesinDH.Synthesizing realistic facial expressions from photographs. InProceedings of SIGGRAPH'981998;75\u201384.","DOI":"10.1145\/280814.280825"},{"key":"e_1_2_1_9_2","unstructured":"TaoH HuangThS.Explanation\u2010based facial motion tracking using a piecewise B\u00e9zier volume deformation model. 
InProceedings of CVPR1999."},{"key":"e_1_2_1_10_2","doi-asserted-by":"crossref","unstructured":"ReveretL BaillyG BadinP.MOTHER: a new generation of talking heads providing a flexible articulatory control for videorealistic speech animation. InProceedings of ICSLP'2000 2000.","DOI":"10.21437\/ICSLP.2000-379"},{"key":"e_1_2_1_11_2","unstructured":"KingS ParentR OlsafskyL.An anatomically\u2010based 3D parameter lip model to support facial animation and synchronized speech. InProceedings of Deform Workshop2000;1\u201319."},{"key":"e_1_2_1_12_2","unstructured":"WatersK FrisbeeJ.A coordinated muscle model for speech animation.Graphics Interface'95 Canadian Human\u2010Computer Communications Society Ontario Canada May1995;163\u2013170."},{"key":"e_1_2_1_13_2","first-page":"123","volume-title":"Hearing by Eye","author":"Munhall KG","year":"1998"},{"key":"e_1_2_1_14_2","unstructured":"ProesmanM.Eyetronics Spring2001.http:\/\/www.eyetronics.com."},{"key":"e_1_2_1_15_2","doi-asserted-by":"publisher","DOI":"10.1093\/comjnl\/7.4.308"},{"key":"e_1_2_1_16_2","doi-asserted-by":"crossref","unstructured":"BlanzV VetterT.A morphable model for the synthesis of 3D faces. InProceedings of SIGGRAPH1999;187\u2013194.","DOI":"10.1145\/311535.311556"},{"key":"e_1_2_1_17_2","unstructured":"ScottKC KagelsDS WatsonSH RomH WrightJR LeeM HusseyKJ.Synthesis of speaker facial movement to match selected speech sequences. InProceedings of the Fifth Australian Conference on Speech Science and Technology Vol. 2 1994;620\u2013625."},{"key":"e_1_2_1_18_2","doi-asserted-by":"publisher","DOI":"10.1044\/jshr.2803.381"},{"key":"e_1_2_1_19_2","doi-asserted-by":"publisher","DOI":"10.1121\/1.389537"},{"volume-title":"Perceiving Talking Faces","year":"1998","author":"Massaro DW","key":"e_1_2_1_20_2"},{"key":"e_1_2_1_21_2","unstructured":"TraberC.SVOX: The Implementation of a Text\u2010to\u2010Speech System. PhD thesis Computer Engineering and Networks Laboratory ETH no. 
11064 1995."}],"container-title":["The Journal of Visualization and Computer Animation"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/api.wiley.com\/onlinelibrary\/tdm\/v1\/articles\/10.1002%2Fvis.283","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1002\/vis.283","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,12,9]],"date-time":"2024-12-09T18:39:53Z","timestamp":1733769593000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/vis.283"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2002,5]]},"references-count":20,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2002,5]]}},"alternative-id":["10.1002\/vis.283"],"URL":"https:\/\/doi.org\/10.1002\/vis.283","archive":["Portico"],"relation":{},"ISSN":["1049-8907","1099-1778"],"issn-type":[{"type":"print","value":"1049-8907"},{"type":"electronic","value":"1099-1778"}],"subject":[],"published":{"date-parts":[[2002,5]]}}}