
Pushing HTML5 Video Content over ColdFusion WebSockets

04.18.2012

I’ve been playing with the WebSocket feature introduced in ColdFusion 10 for some time now. I had already tried pushing images over a ColdFusion WebSocket channel, and that worked just fine. This time I wanted to put WebSockets to the test by pushing large amounts of data at regular intervals. I thought I might be able to push video data over WebSockets, but it turned out there is no direct way to stream video to many clients. Then I came across the drawImage function, which can draw an image or a video frame onto an HTML5 Canvas. Once a frame is drawn on the Canvas, its base64-encoded data can be obtained by calling the toDataURL function on the Canvas object. This data can then be published over a ColdFusion WebSocket to all subscribers, who can use it to draw the image (video frame) on their own Canvas.
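In a nutshell, capturing a single frame boils down to something like this (a minimal sketch, assuming a video and a canvas element already exist on the page; the element ids are just placeholders):

//grab references to the video and the canvas 
var video = document.getElementById('videoElement'); 
var canvas = document.getElementById('canvasElement'); 
var context = canvas.getContext('2d'); 

//draw the frame that is currently playing onto the canvas 
context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight); 

//toDataURL returns a data URI, e.g. "data:image/png;base64,..." 
var frameData = canvas.toDataURL("image/png"); 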

A demo video is embedded in the original post.

Please note, I’m not transferring the audio track present in the video, and I’m still trying to figure out how that can be achieved.

Here’s the Publisher code:

<!DOCTYPE html> 
<html>
    <body> 
    <cfwebsocket name="socket" onmessage="messageHandler"> 
    <video id="videoElement" controls muted> 
        <source src="windowsill.webm" type="video/webm"> </video>
    <br> 
    <canvas id="canvasElement" style="border: solid 1px;"> 
    </canvas> 
    
    <script type="text/javascript"> 
      var context,canvasElement,videoElement, previous, current; 

      //message handler for CF WebSocket 
      messageHandler = function(msg){ 
      } 

      //function to call once the DOM content has been loaded 
      document.addEventListener('DOMContentLoaded', function(){ 
        videoElement = document.getElementById('videoElement'); 
        canvasElement = document.getElementById('canvasElement'); 
        context = canvasElement.getContext('2d'); 
      }); 
        
      //function to call once the video's metadata is available 
      document.getElementById('videoElement').addEventListener('loadedmetadata', function(){ 
         
          //set the canvas width and height to the video's width and height 
          canvasElement.width = videoElement.videoWidth; 
          canvasElement.height = videoElement.videoHeight; 
          
          //event listener when the video is played 
          videoElement.addEventListener('play', function(){
            //call the draw function 
            draw(this, videoElement.videoWidth, videoElement.videoHeight); 

          }); 
        }); 

        //function to draw the video frame on the canvas and publish it at 20fps 
        function draw(video, width, height){ 
          
          //if the video has been paused or ended return false 
          if (video.paused || video.ended) return false; 
          
          //draw the current video frame onto a canvas 
          context.drawImage(video, 0, 0, width, height); 
          
          //get base64 encoded data from Canvas 
          current = canvasElement.toDataURL("image/png"); 

          //publish only if the current frame differs from the previous one 
          if (previous != current) {

            //transfer the base64-encoded image over the WebSocket 
            socket.publish("myChannel", current); 

          } 

          previous = current; 

          //draw the video frame on the canvas at 20fps by calling the draw function every 50ms 
          setTimeout(draw, 50, video, width, height); 
        } 
      </script> 
    </body>
</html>

As you can see from the above code, the draw function is called once you start playing the video. It draws the current video frame on the Canvas using drawImage and then uses toDataURL to get the base64-encoded image data. This data is published over the ColdFusion WebSocket channel (‘myChannel’). The draw function then schedules itself again after 50ms, so the current video frame is drawn on the canvas and transferred over the WebSocket at roughly 20fps.
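Note that socket.publish and socket.subscribe will only work if the channel is declared in the application. A minimal Application.cfc sketch along these lines should do (the application name here is just a placeholder, and no custom channel listener CFC is assumed):

component 
{ 
    //hypothetical application name; any name will do 
    this.name = "videoOverWebSockets"; 

    //declare the WebSocket channel used by the publisher and subscriber pages 
    this.wschannels = [ {name = "myChannel"} ]; 
} 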

The client/subscriber, on receiving the data, draws the image (video frame) on a canvas. Here’s the subscriber code:

<!DOCTYPE HTML> 
<html> 
  <body> 
      <cfwebsocket name="socket" onmessage="messageHandler" onopen="openHandler"> 
      <canvas id="canvasElement" style="border: solid 1px;" width="426" height="240"> 
      </canvas> 
  <script type="text/javascript"> 
    var canvas, context, count = 0, flag = false; 
    var newImage = new Image(); 
    document.addEventListener('DOMContentLoaded', function(){ 
      canvas = document.getElementById('canvasElement'); 
      context = canvas.getContext('2d'); 
    }); 

    function openHandler(){ 
      
      //subscribe to the CF WebSocket channel 
      socket.subscribe("myChannel", {}, dataHandler); 
    } 
    
    function messageHandler(msg){ 
    } 

    //function that receives the data from the WebSocket channel 
    function dataHandler(msg){ 
      if (msg.type == 'data') { 
        
        //function to call when the image is loaded with base64 data 
        newImage.onload = function(){ 
          
          //draw the image on the canvas 
          context.drawImage(newImage, 0, 0); 
          
          //set the flags when the above function is complete 
          flag = true; count = 1; 

        } 

        //draw only if this is the first frame or the previous frame has finished drawing 
        if (count == 0 || flag == true) { 
          flag = false; 

          //assign base64 data to the source of the image 
          newImage.src = msg.data; 

        } 
      } 
    } 
  </script> 
  </body> 
</html>

On the client side, once the data is received over the WebSocket it is assigned to the source of an Image object. I do this because the drawImage function takes either an Image or a Video element as its first argument and doesn't accept base64 data directly. Once the Image has loaded, it is drawn on the canvas. This process continues until the video ends or the user pauses it.
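One thing to keep in mind is the size of the payload: a PNG data URI for every frame, published every 50ms, can get fairly large. If image quality isn't critical, toDataURL also accepts an image type and a quality argument, which should shrink each frame considerably. For example, in the publisher's draw function (the 0.7 quality value is just an illustration):

//JPEG with a quality hint usually produces a much smaller data URI than PNG 
current = canvasElement.toDataURL("image/jpeg", 0.7); 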

Published at DZone with permission of Sagar H Ganatra, author and DZone MVB.