Sep 23, 2010

Getting XBMC to work with Beyond TV's chapters.xml files

XBMC doesn't appear to read Beyond TV's commercial-skip files (named file.chapters.xml), even though it has support for them.

I found that recent Beyond TV software writes these files out as Unicode (UTF-16), which probably confuses XBMC. So I wrote this quick Python script to convert all the chapter files given on the command line to UTF-8, writing the results to /tmp. Save it as "convert_chapters.py" and run it with "python convert_chapters.py *.chapters.xml".
#!/usr/bin/python
# Convert all the files given on command line from utf-16 to utf-8 in /tmp

import os.path, sys, codecs

for filename in sys.argv[1:]:
    f = codecs.open(filename, encoding="utf-16")
    data = f.read()
    f.close()
    newfilename = "/tmp/%s" % os.path.basename(filename)
    f = codecs.open(newfilename, "w", encoding="utf-8")
    f.write(data)
    f.close()
The newly converted files work under XBMC now. They do not appear as chapters (chapter skip still does not use this information) but rather as a sort of on-the-fly video editing information. When you reach the commercials while watching a recording, XBMC automatically skips to the end of the commercial block, so you don't even have to reach for the remote.

Note: on Windows there is no "/tmp", so replace that with an appropriate location. Alternatively, you can pass filename instead of newfilename to the second codecs.open() call and have it overwrite the chapter files in place - but make a backup first in case things don't go as planned!
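
If you're on Linux and would rather skip the Python script, the same in-place conversion (with backups) can be done from the shell with iconv. This is just a sketch, assuming iconv is available on your system:
for f in *.chapters.xml; do
    cp "$f" "$f.bak"                          # keep a backup of the original
    iconv -f UTF-16 -t UTF-8 "$f.bak" > "$f"  # rewrite the original as UTF-8
done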

Sep 15, 2010

Creating time-lapse videos under Linux with EXIF stamps

Time-lapse videos show an event tens to thousands of times faster than it really happened. They're usually created with a digital camera by shooting a sequence of photos at fixed intervals (say, every minute) - hence the time that lapses between each "frame". To help with this task, some digital cameras come with an intervalometer function: you set the start time, interval and number of pictures, and leave the camera to do the rest.

Now we have a bunch of photos and need to make a movie out of them. On Linux, the easiest way is to use mencoder from the MPlayer package, or ffmpeg. Side note: to get the fully-capable mplayer package on Fedora you need to enable the RPM Fusion repositories (both free and non-free).
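
For the record, on Fedora that comes down to something like the following (after enabling the RPM Fusion release repositories from rpmfusion.org; package names may vary):
yum install mplayer mencoder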

The following command creates a high-quality 720p H264 AVI movie, suitable for uploading to YouTube, from all the jpg photos in the current folder and using a frame rate of 3 photos per second (fps=3).
mencoder "mf://frames/*.jpg" -mf fps=3 -ovc x264 -nosound \
          -x264encopts crf=16:me=umh:ssim:ref=1:b_adapt=2 \
          -vf scale=-11:720,hqdn3d,pp=al \
          -o video-720p.avi
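
If you prefer ffmpeg, a roughly equivalent invocation is sketched below. It is only an approximation of the mencoder settings above and assumes a reasonably recent ffmpeg built with libx264 (option names have changed between versions), so adjust as needed.
ffmpeg -framerate 3 -pattern_type glob -i "*.jpg" \
       -c:v libx264 -crf 16 -vf scale=-2:720,hqdn3d \
       -pix_fmt yuv420p video-720p.mp4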

Now I want to tag the video with the time, to get an idea of how fast it went, and add more data from the EXIF headers. We need two more tools to do this: convert and exiftool, for which you need to install the ImageMagick (or GraphicsMagick) and perl-Image-ExifTool packages.
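
On Fedora, for example, that would be along the lines of:
yum install ImageMagick perl-Image-ExifTool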
[Image: captioned time-lapse movie frame]
A movie with frames captioned like the one above can be made with the following shell script. It is very customizable: you can add or remove text, change fonts and colors, etc. Save this as timelapse.sh:

#!/bin/bash
# Convert the pictures given on the command line into a time-lapse movie
# Author: Laurentiu Badea, Sep 15, 2010

FPS=3      # frame rate, in frames per second
VIDEO_HEIGHT=720   # make a 720p movie. Other options: 1080, 480, 240...

start=$1
if [ ! -f "$start" ]; then echo "Usage: $0 *.jpg"; exit 1; fi
[ -d "frames" ] || mkdir "frames" || exit 1

HEIGHT=`identify -format "%[height]" "$start"`
PTS=$[$HEIGHT*50/1000]   # calculate font size proportional to image size

for f in "$@"; do
  echo "$f"

  # format image timestamp. See "man strftime" for all format options.
  TEXT1=`exiftool -d "%l:%M%P" -p '$CreateDate' "$f"`

  # prepare EXIF data. See "exiftool -j yourimage.jpg" for more tags.
  TEXT2=`exiftool -p '$Model $FocalLength f/$Aperture ${ShutterSpeed}s ISO$EXIF:ISO' "$f"`

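  # overlay the timestamp at the bottom left, the camera/exposure info just
  # below it, and a copyright notice at the bottom right, then save the
  # frame as an uncompressed TIF under frames/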
  convert "$f" \
    -gravity SouthWest \
    -font "Courier-Bold" \
    -fill "black" \
    -pointsize $PTS \
    -annotate +$PTS+$[3*$PTS/2] "$TEXT1" \
    -font "Times-Roman" \
    -fill "rgba(90%,90%,90%,0.4)" \
    -pointsize $[$PTS*2/3] \
    -annotate +$PTS+$[$PTS/2] "$TEXT2" \
    -gravity SouthEast \
    -annotate +$PTS+$[3*$PTS/2] "©2010" \
    -compress none \
    frames/${f/.*/.tif}
done

[ "$VIDEO_HEIGHT" -gt "$HEIGHT" ] && VIDEO_HEIGHT="$HEIGHT"
set -x
mencoder "mf://frames/*.tif" -mf fps=$FPS -ovc x264 -nosound \
         -x264encopts crf=16:me=umh:ssim:ref=1:b_adapt=2 \
         -vf scale=-11:$VIDEO_HEIGHT,hqdn3d,pp=al \
         -o video-${VIDEO_HEIGHT}p.avi
set +x
#rm -rf frames    # leave this commented out to try the mencoder line manually
I save the frames as TIF to avoid the extra quality loss they would suffer if saved as JPEG again. That takes a lot of space, so you may want to use jpg instead. Also, they are saved at full resolution by convert, and I let mencoder do the scaling. This again wastes some space but avoids odd widths that can make the video unplayable.

You'll notice that sometimes the colors are specified as rgba(90%,90%,90%,0.4). That makes a semi-transparent color. The first three numbers are R, G, B (0%-100%) and the last one is the opacity, or alpha channel, from 0.0 (fully transparent) to 1.0 (completely opaque).

You can add more text; just remember that convert reads its options sequentially, like a program (that is why I laid it out that way), and each -annotate command uses the most recent settings that appeared before it.
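
As a stand-alone illustration (input.jpg and output.jpg are just placeholder names), the following draws the same style of text at two different opacities; note how each -annotate picks up whatever -fill and -pointsize were set just before it:
convert input.jpg -gravity SouthWest -font "Times-Roman" -pointsize 36 \
  -fill "rgba(90%,90%,90%,0.9)" -annotate +20+60 "nearly opaque" \
  -fill "rgba(90%,90%,90%,0.2)" -annotate +20+20 "barely visible" \
  output.jpg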

Don't make the text too obtrusive. It's hard to resist, but you don't want viewers to be distracted or annoyed by information they don't care about. Transparency helps the text blend in and almost disappear unless you look for it specifically. Another option is to add a black border below the image to write on. A better option still is to leave the video stream untouched and put the data in a separate stream, for example as subtitles. That may be the subject of a future post.