|Lecturer||M.Sc. Trevor Ballard (University of Central Florida)|
|Title||ABR Frontiers: Network-assisted Streaming and Live 360-degree Video|
|Date||Monday 29/10/2018, 14.00 – 15.00|
|Location||S3|20, Rundeturmstr. 10, Darmstadt|
New technologies for adaptive bitrate (ABR) streaming allow us to move beyond the traditional confines of VoD DASH streaming. This talk will cover two such technologies and our contributions to them: network-assisted streaming and 360-degree video livestreaming via hardware encoding.
In the first, network-assistance information is provided to the client during a streaming session, offering direct access to the fine-grained internal status of the network. However, traditional ABR algorithms cannot utilize this information, and most learning-based approaches are too slow for this time-critical environment. To address these challenges, we have developed a contextual bandit algorithm that induces sparsity on the priors of the context, which here represents the network-assistance information, effectively keeping only those network statistics shown to most impact the user's quality of experience (QoE). We further modify our approach to make decisions extremely quickly without incurring a significant QoE loss, and demonstrate its efficacy in a Named Data Networking (NDN) testbed.
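To make the contextual-bandit framing concrete, the following is a minimal, purely illustrative sketch: each candidate bitrate is a bandit arm, the context is a vector of network statistics, and small learned weights are zeroed out to mimic sparsity over the context features. The class name, the epsilon-greedy strategy, the reward model, and the thresholding rule are all assumptions for illustration, not the algorithm presented in the talk.

```python
import random

# One bandit arm per candidate bitrate (Mbps); values are illustrative.
BITRATES = [1.0, 2.5, 5.0]

class SparseBandit:
    """Epsilon-greedy contextual bandit with a linear reward model per arm.

    Weights whose magnitude stays below `sparsity` are zeroed after each
    update, a crude stand-in for sparsity-inducing priors on the context.
    """

    def __init__(self, n_features, epsilon=0.1, lr=0.05, sparsity=0.01):
        self.epsilon = epsilon
        self.lr = lr
        self.sparsity = sparsity
        self.w = [[0.0] * n_features for _ in BITRATES]

    def predict(self, arm, context):
        # Linear estimate of QoE for this arm given the network context.
        return sum(wi * xi for wi, xi in zip(self.w[arm], context))

    def choose(self, context):
        if random.random() < self.epsilon:          # explore
            return random.randrange(len(BITRATES))
        scores = [self.predict(a, context) for a in range(len(BITRATES))]
        return scores.index(max(scores))            # exploit

    def update(self, arm, context, reward):
        # One SGD step on squared error, then zero near-inactive weights.
        err = reward - self.predict(arm, context)
        w = self.w[arm]
        for i, xi in enumerate(context):
            w[i] += self.lr * err * xi
            if abs(w[i]) < self.sparsity:
                w[i] = 0.0
```

In a simulated session, the client would observe the network-assistance context, call `choose`, stream the chosen bitrate, measure a QoE reward, and feed it back via `update`; features whose weights never grow past the threshold are effectively discarded.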
In the second half of the talk, we present an approach to adaptive tile-based 360-degree video livestreaming via NVENC, a GPU HEVC encoder. A significant body of work has recently appeared on tile-based adaptive streaming, in which less-important parts of a 4K or 360-degree video are requested at a lower bitrate, but there have been few attempts to do this while livestreaming. This is because the few HEVC encoders with tiling support are software encoders, which operate far too slowly. Conversely, hardware encoders, which are now common on higher-end consumer GPUs, are extremely fast but do not support tiling. We circumvent this issue by rearranging the source image to place the image and slice boundaries where our tile boundaries would otherwise be. Furthermore, each image is encoded twice: once at a high bitrate, and once at a lower one. By merging these two encodings and making certain changes to the resulting HEVC bitstream, we obtain a single video whose tiles are each at either a high or a low bitrate.
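The per-tile selection step can be illustrated with a toy sketch: tiles of an equirectangular frame that overlap the viewer's predicted viewport are drawn from the high-bitrate encoding, and all others from the low-bitrate one. The grid size, field-of-view model, and function names are illustrative assumptions; the actual NVENC encoding and HEVC bitstream merging are not shown here.

```python
# Assumed tile grid over a 360x180 degree equirectangular frame.
GRID_W, GRID_H = 8, 4

def tiles_in_viewport(yaw_deg, pitch_deg, fov_w=100, fov_h=80):
    """Return the set of (col, row) tiles a viewport overlaps.

    Yaw is in [-180, 180), pitch in [-90, 90); a tile counts as visible
    if its centre falls within half the field of view plus half a tile.
    """
    selected = set()
    tile_w, tile_h = 360 / GRID_W, 180 / GRID_H
    for col in range(GRID_W):
        for row in range(GRID_H):
            cx = -180 + (col + 0.5) * tile_w   # tile centre, yaw axis
            cy = -90 + (row + 0.5) * tile_h    # tile centre, pitch axis
            dyaw = (cx - yaw_deg + 180) % 360 - 180  # wrap around 360
            if (abs(dyaw) <= fov_w / 2 + tile_w / 2
                    and abs(cy - pitch_deg) <= fov_h / 2 + tile_h / 2):
                selected.add((col, row))
    return selected

def assemble(viewport_tiles):
    """Map each tile to its source encoding: 'hi' in view, 'lo' elsewhere."""
    return {
        (c, r): 'hi' if (c, r) in viewport_tiles else 'lo'
        for c in range(GRID_W) for r in range(GRID_H)
    }
```

A merger along these lines would then splice the chosen tiles' coded slice data into a single conformant bitstream each segment; the sketch only captures the quality-assignment decision, not the bitstream surgery.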
|Trevor Ballard is a guest researcher in MAKI C3 and the AOC group within KOM. He is primarily interested in multimedia research, and has worked on projects involving adaptive bitrate streaming, collaborative workspaces, and IoT devices. He is from the University of Central Florida, where he worked with Prof. Kien A. Hua in the Data Systems Group, and was previously a research assistant at the University of Hong Kong under Prof. Chuan Wu.|