Custom Video Rendering (iOS)

Overview

This tutorial walks through the steps required to make minor modifications to the video renderer used by an OTPublisher object. You can also use the same techniques to modify the video renderer used by an OTSubscriber object (though this example only illustrates a custom renderer for a publisher).

Setting up your project

The code for this section is in the Basic Video Renderer project of the opentok-ios-sdk-samples repo. If you haven't already, clone the repo into a local directory from the command line:

git clone https://github.com/opentok/opentok-ios-sdk-samples.git

Change directory to the Basic Video Renderer project:

cd opentok-ios-sdk-samples/Basic-Video-Renderer

Then install the OpenTok dependency:

pod install

Then open the generated workspace in Xcode to follow along.

Exploring the code

In this example, the app uses a custom video renderer to display a black-and-white version of the OTPublisher object's video.

In the main ViewController, after the OTPublisher object is initialized, its videoRender property is set to an instance of OTKBasicVideoRender:

_publisher = [[OTPublisher alloc] initWithDelegate:self settings:settings];
_renderer = [[OTKBasicVideoRender alloc] init];
_publisher.videoRender = _renderer;
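
The same technique works for a subscriber. Here is a minimal sketch (not part of this sample, which only customizes the publisher), assuming an OTStream delivered to the session's delegate:

// Sketch only: attaching the same custom renderer to a subscriber.
// Assumes `stream` comes from the session delegate's stream-created callback.
OTSubscriber *subscriber = [[OTSubscriber alloc] initWithStream:stream
                                                        delegate:self];
subscriber.videoRender = [[OTKBasicVideoRender alloc] init];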

OTKBasicVideoRender is a custom class that implements the OTVideoRender protocol (defined in the OpenTok iOS SDK). This protocol lets you define a custom video renderer to be used by an OpenTok publisher or subscriber.
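
For reference, the renderer class's declaration looks roughly like the following. This is a sketch based on the property and method discussed in this tutorial; the exact attributes in the sample's header may differ:

#import <UIKit/UIKit.h>
#import <OpenTok/OpenTok.h>

@interface OTKBasicVideoRender : NSObject <OTVideoRender>

// The view that displays the rendered video; add it to your view hierarchy.
@property (nonatomic, readonly) UIView *renderView;

// Required by the OTVideoRender protocol; called for each video frame.
- (void)renderVideoFrame:(OTVideoFrame *)frame;

@end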

The [OTKBasicVideoRender init] method sets the _renderView property to a UIView object. This is the UIView into which the publisher (or subscriber) renders its video. In this sample, that view is an instance of the custom OTKCustomRenderView class, which extends UIView:

- (id)init
{
    self = [super init];
    if (self) {
        _renderView = [[OTKCustomRenderView alloc] initWithFrame:CGRectZero];
    }
    return self;
}

The OTKCustomRenderView class includes methods (discussed later) that convert a video frame to a black-and-white representation.
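
The pixel conversion runs off the main thread on a serial dispatch queue (self.renderQueue in the code below). A sketch of how OTKCustomRenderView might create that queue in its initializer follows; the queue label and any other setup shown here are assumptions, not the sample's exact code:

- (instancetype)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        // Serial queue so frames are processed one at a time, off the main thread.
        _renderQueue = dispatch_queue_create("com.example.render-queue",
                                             DISPATCH_QUEUE_SERIAL);
    }
    return self;
}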

The [OTVideoRender renderVideoFrame:] method is called when the publisher (or subscriber) renders a video frame to the video renderer. The frame is an OTVideoFrame object (defined by the OpenTok iOS SDK). The OTKBasicVideoRender implementation of this method simply passes the frame along to the [OTKCustomRenderView renderVideoFrame:] method of its render view:

- (void)renderVideoFrame:(OTVideoFrame*) frame
{
    [(OTKCustomRenderView*)self.renderView renderVideoFrame:frame];
}

The [OTKCustomRenderView renderVideoFrame:] method iterates through the pixels in the frame's Y (luminance) plane, copies each luminance value into the red, green, and blue channels of an RGB buffer (which yields a black-and-white image), creates a CGImageRef from that buffer to serve as the view's image, and then calls [self setNeedsDisplay] on the main queue so the view redraws:

- (void)renderVideoFrame:(OTVideoFrame *)frame
{
    __block OTVideoFrame *frameToRender = frame;
    dispatch_sync(self.renderQueue, ^{
        if (_img != NULL) {
            CGImageRelease(_img);
            _img = NULL;
        }

        size_t bufferSize = frameToRender.format.imageHeight
          * frameToRender.format.imageWidth * 3;
        uint8_t *buffer = malloc(bufferSize);

        uint8_t *yplane = [frameToRender.planes pointerAtIndex:0];

        for (int i = 0; i < frameToRender.format.imageHeight; i++) {
            for (int j = 0; j < frameToRender.format.imageWidth; j++) {
                int starting = (i * frameToRender.format.imageWidth * 3) + (j * 3);
                uint8_t yvalue = yplane[(i * frameToRender.format.imageWidth) + j];
                // If in a RGB image we copy the same Y value for R, G and B
                // we will obtain a Black & White image
                buffer[starting] = yvalue;
                buffer[starting+1] = yvalue;
                buffer[starting+2] = yvalue;
            }
        }

        CGDataProviderRef imgProvider = CGDataProviderCreateWithData(NULL,
                                                                     buffer,
                                                                     bufferSize,
                                                                     release_frame);

        _img = CGImageCreate(frameToRender.format.imageWidth,
                             frameToRender.format.imageHeight,
                             8,
                             24,
                             3 * frameToRender.format.imageWidth,
                             CGColorSpaceCreateDeviceRGB(),
                             kCGBitmapByteOrder32Big | kCGImageAlphaNone,
                             imgProvider,
                             NULL,
                             false,
                             kCGRenderingIntentDefault);


        CGDataProviderRelease(imgProvider);
        dispatch_async(dispatch_get_main_queue(), ^{
            [self setNeedsDisplay];
        });
    });
}
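
The release_frame function passed to CGDataProviderCreateWithData() is the data provider's release callback; its job is to free the malloc'd pixel buffer once Core Graphics is done with it. A minimal sketch:

// Sketch: release callback for the CGDataProvider created above.
static void release_frame(void *info, const void *data, size_t size)
{
    // Free the RGB buffer allocated in renderVideoFrame:.
    free((void *)data);
}

Finally, when setNeedsDisplay triggers a redraw, the view draws the CGImageRef into its bounds. A simplified sketch of what such a drawRect: implementation can look like (the sample's actual drawing code may handle scaling and orientation differently):

- (void)drawRect:(CGRect)rect
{
    [super drawRect:rect];
    if (_img == NULL) {
        return;
    }
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Core Graphics' coordinate system is flipped relative to UIKit,
    // so flip vertically before drawing the image into the view's bounds.
    CGContextTranslateCTM(context, 0, self.bounds.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, self.bounds, _img);
}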

Congratulations! You've finished the Custom Video Rendering Tutorial for iOS.
You can continue to play with and adjust the code you've developed here, or check out the Next Steps below.