ACM Transactions on Multimedia Computing, Communications, and Applications, Volume 3
Volume 3, Number 1, February 2007
- Changsheng Xu, Namunu Chinthaka Maddage, Xi Shao, Qi Tian: Content-adaptive digital music watermarking based on music structure analysis. 1
- Pradeep K. Atrey, Mohan S. Kankanhalli, B. John Oommen: Goal-oriented optimal subset selection of correlated multimedia streams. 2
- Ba Tu Truong, Svetha Venkatesh: Video abstraction: A systematic review and classification. 3
- Rachel Heck, Michael N. Wallick, Michael Gleicher: Virtual videography. 4
- Wei-Qi Yan, Mohan S. Kankanhalli: Multimedia simplification for optimized MMS synthesis. 5
- Datong Chen, Jie Yang, Robert G. Malkin, Howard D. Wactlar: Detecting social interactions of the elderly in a nursing home environment. 6
Volume 3, Number 2, May 2007
- Tiecheng Liu, John R. Kender: Computational approaches to temporal sampling of video sequences. 7
- Simon Moncrieff, Svetha Venkatesh, Geoff A. W. West: Online audio background determination for complex audio environments. 8
- Chika Oshima, Kazushi Nishimoto, Norihiro Hagita: A piano duo support system for parents to lead children to practice musical performances. 9
- Xiaofei He, Deng Cai, Ji-Rong Wen, Wei-Ying Ma, Hong-Jiang Zhang: Clustering and searching WWW images using link and page layout analysis. 10
- Byunghee Jung, Junehwa Song, Yoon-Joon Lee: A narrative-based abstraction framework for story-oriented video. 11
- Ron Shacham, Henning Schulzrinne, Srisakul Thakolsri, Wolfgang Kellerer: Ubiquitous device personalization and use: The next generation of IP multimedia communications. 12
Volume 3, Number 3, August 2007
- Herng-Yow Chen, Sheng-Wei Li: Exploring many-to-one speech-to-text correlation for web-based language learning. 13
- Surong Wang, Manoranjan Dash, Liang-Tien Chia, Min Xu: Efficient sampling of training set in large and noisy multimedia data. 14
- Suiping Zhou, Wentong Cai, Stephen John Turner, Bu-Sung Lee, Junhu Wei: Critical causal order of events in distributed virtual environments. 15
- Chuanjun Li, S. Q. Zheng, B. Prabhakaran: Segmentation and recognition of motion streams by similarity search. 16
- David E. Ott, Ketan Mayer-Patel: An open architecture for transport-level protocol coordination in distributed multimedia applications. 17
- Ziad Sakr, Nicolas D. Georganas: Robust content-based MPEG-4 XMT scene structure authentication and multimedia content location. 18
Volume 3, Number 4, December 2007
- Gheorghita Ghinea, Chabane Djeraba, Stephen R. Gulliver, Kara Pernice Coyne: Introduction to special issue on eye-tracking applications in multimedia systems. 1:1-1:4
- Carlo Colombo, Dario Comanducci, Alberto Del Bimbo: Robust tracking and remapping of eye appearance with passive computer vision. 2:1-2:20
- Jun Wang, Lijun Yin, Jason Moore: Using geometric properties of topographic manifold to detect and track eyes for human-computer interaction. 3:1-3:20
- Dimitris Agrafiotis, Sam J. C. Davies, Cedric Nishan Canagarajah, David R. Bull: Towards efficient context-specific video coding based on gaze-tracking analysis. 4:1-4:15
- Thierry Urruty, Stanislas Lew, Nacim Ihaddadene, Dan A. Simovici: Detecting eye fixations by projection clustering. 5:1-5:20
- Andrew T. Duchowski, Arzu Çöltekin: Foveated gaze-contingent displays for peripheral LOD management, 3D visualization, and stereo imaging. 6:1-6:18
- Lester C. Loschky, Gary S. Wolverton: How late can you update gaze-contingent multiresolutional displays without detection? 7:1-7:10
- Norman Murray, David J. Roberts, Anthony Steed, Paul M. Sharkey, Paul Dickerson, John Rae: An assessment of eye-gaze potential within immersive virtual environments. 8:1-8:17
- Dorothy Rachovides, James Walkerdine, Peter Phillips: The conductor interaction method. 9:1-9:23