Research Track and Special Sessions
- Submission deadline: October 31, 2018
- Acceptance notification: December 31, 2018
- Camera-ready deadline: January 31, 2019
- Online submission: https://mmsys19.hotcrp.com/
- Submission format: 6-12 pages, using ACM style format (double-blind)
- Reproducibility: Obtain an ACM reproducibility badge by making datasets and code available (Authors will be contacted to make their artifacts available after paper acceptance)
- Call for papers (pdf)
- Ali C. Begen (Ozyegin University and Networked Media, Turkey)
- Laura Toni (University College London, UK)
- Webpage: http://www.mmsys2019.org/participation/research-track/
- Email: firstname.lastname@example.org
Call for Submissions
The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research on specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating systems, real-time systems, databases, mobile computing, distributed systems, computer vision, and middleware communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to investigate the intersections and the interplay of the various approaches and solutions developed across these domains to deal with multimedia data types.
MMSys is a venue for researchers who explore:
- Complete multimedia systems that provide a new kind of multimedia experience, or systems whose overall performance improves the state of the art through new research results in more than one component, or
- Enhancements to one or more system components that provide a documented improvement over the state of the art for handling continuous media or time-dependent services.
Such individual system components include:
- Operating systems
- Distributed architectures and protocols
- Domain languages, development tools and abstraction layers
- New architectures or computing resources for multimedia
- New or improved I/O architectures or I/O devices, innovative uses, and algorithms for their operation
- Representation of continuous or time-dependent media
- Metrics and measurement tools to assess performance
This touches on many hot topics, including but not limited to: content preparation and delivery systems, HDR, games, virtual/augmented/mixed reality, 3D video, immersive systems, plenoptics, 360° video, volumetric video delivery, multimedia IoT, multi- and many-core, GPGPUs, mobile multimedia and 5G, wearable multimedia, P2P, cloud-based multimedia, cyber-physical systems, multi-sensory experiences, smart cities, and QoE.
Machine Learning and Statistical Modeling for Video Streaming
Chairs: Wei Wei and Chaitu Ekanadham, Netflix (USA)
The operating conditions for streaming are increasingly diverse. Traditional methods are typically tuned to a limited cross-section of these conditions and are far from optimal when conditions fall outside this regime. Statistical modeling and machine learning present an opportunity to overcome these challenges and fundamentally change video streaming by extracting meaningful structure from data on how content, networks, and devices operate together to produce the end-user experience. This special session focuses on the use of statistical modeling and machine learning in the field of Internet video streaming to provide high-quality viewing experiences over a broad range of content types, network conditions, and device limitations.
Example topics of interest:
- Network quality characterization and prediction
- Encoding optimization
- Characterization and prediction of user behavior
- Content delivery network (CDN) optimization
- Viewer preference for and sensitivity to QoE
Real-Time Video at the Edge
Chairs: Jason Quinlan and Cormac Sreenan, University College Cork (Ireland)
Wireless edge computing, also known as edge or fog computing, is a new paradigm that introduces virtualized compute, analytics, and storage resources at the network edge, and has been proposed as a service model for novel video applications. A crucial challenge is that many of these services will require time-bounded analytics and stringent quality-of-service, if not quality-of-experience, constraints, which will ultimately lead to the need for user-centric service provisioning mechanisms such as network slicing and softwarization within the next iteration of current cellular systems, known as 5G. One example of the benefits offered to service providers and online digital media services is the use of virtualized compute, provided by distributed fog computing, for temporary caching, inline transcoding, and content replication across a network that is physically located closer to the user. This special session is intended to illustrate the role of cloud computing at the wireless edge and how the next generation of wireless networks will use these architectures and protocols to cope with the increasing demand from video applications with strict delay-sensitive, high-throughput requirements.
Example topics of interest:
- Video optimized software-defined networking and protocols
- Cloud-based video traffic engineering and control-plane architectures
- Video routing over cloud/fog computing environments and geo-local video streaming
- Video applications for fog/edge computing
- Quality of experience/service/route in edge computing
- Edge/fog computing and data-driven intelligence
- Machine learning techniques for edge video streaming
- Edge video caching and storage
- Edge mobility modeling, security and privacy for video streaming
Advanced Transport Protocols for Video
Chair: Miroslav Ponec, Akamai Technologies (Czech Republic)
The impressive and ever-growing demand for scale and quality of video distribution on the Internet requires advances in transport protocols, which in turn drive the evolution of multimedia systems. New transport protocols, such as QUIC, provide opportunities for innovation in video streaming at Internet scale while also bringing new challenges for media systems to cope with. This special session is primarily concerned with solving the problems that arise when carrying video content over new and advanced transport protocols. These include challenges with deployment, adoption, compatibility, reliability, resource efficiency, performance, measurement and modeling, scalability, security, fairness, operating system support, etc.
Example topics of interest:
- Tuning and enhancing transport protocols for video distribution
- OS changes for efficient use of UDP-based transport protocols for video streaming
- Load balancing video distribution over QUIC
- P2P and multicast technologies for OTT video distribution
- Impact of multipath on streaming quality
- WebRTC transport for interactive and low-latency video
- Use of FEC at the transport layer for video distribution
- Security and privacy concerns with new transport protocols
- Impact of new transport protocols on end-to-end video workflows and deployments
- Protocol fairness/aggressiveness and its impact on the network and QoE
Volumetric Media: from Capture to Consumption
Chairs: Francesca De Simone, CWI (the Netherlands), Gwendal Simon, IMT Atlantique (France), Vishy Swaminathan, Adobe Research (USA)
Recent advances in 3D capturing technologies enable the generation of dynamic and static volumetric visual signals from real-world scenes and objects, opening the way to a huge number of applications using these data, from robotics to immersive communications. Volumetric signals are typically represented as polygon meshes or point clouds and can be visualized from any viewpoint, providing six degrees of freedom (6DoF) viewing capabilities. They represent a key enabling technology for Augmented and Virtual Reality (AR and VR) applications, which are receiving a lot of attention from the main players in technological innovation, in both academic and industrial communities. Research challenges linked to volumetric visual data are numerous and include acquisition, processing, compression for storage, delivery, and user experience evaluation. This special session aims at fostering discussions among the MMSys community about the latest advances in technologies involving volumetric signals.
Example topics of interest:
- Volumetric signal acquisition (e.g., light-field acquisition for hyper-realistic volumetric data, point cloud and mesh datasets)
- Volumetric signal rendering (e.g., point cloud rendering, mesh generation, surface reconstruction, surface approximation)
- Volumetric data compression (e.g., point cloud compression, geometry compression) and streaming
- Delivery of volumetric data (e.g., algorithms for adaptive streaming, novel architecture for delivery networks, integration into 5G networks)
- Subjective and objective quality assessment of volumetric signals
- User interaction with volumetric data (e.g., 6DoF viewing, user visual attention studies)
- AR and VR applications involving volumetric data
Papers should be 6 to 12 pages long (in PDF format), prepared in the ACM style, and written in English. MMSys papers enable authors to present entire multimedia systems or research work that builds on considerable amounts of earlier work in a self-contained manner. MMSys papers are published in the ACM Digital Library. The papers are double-blind reviewed.
All submissions will be peer-reviewed by at least three TPC members. All papers will be evaluated for their scientific quality. Authors will have a chance to submit their rebuttals before online discussions among the TPC members.
ACM SIGMM has a tradition of publishing open datasets (MMSys) and open source projects (ACM Multimedia). MMSys 2019 will continue to support scientific reproducibility by implementing the ACM reproducibility badge system. The authors of all accepted papers will be contacted by the Reproducibility Chair and invited to make their datasets and code available, and thus obtain an ACM badge (visible in the ACM DL). The additional material will be published as appendices, with no effect on the final page count for papers.