In general, MJPEG does not support audio. Further, there is no single standard for MJPEG. If you are hearing audio with your MJPEG, it is probably coming from an AVI or QuickTime wrapper. The only common use for MJPEG these days is security cameras, which usually do not provide audio and usually are not wrapped in an AVI or QuickTime container.
I really like the idea of the "dynamic bitmap". Otherwise, I would suggest providing a way to process the HTTP response chunk by chunk and letting people write their own MJPEG handlers. The latter option would push more of the work onto channel developers, but it is probably the best option for supporting the widest range of MJPEG streams. The pseudocode would look something like:
contents = createChunkByteArray()
for each chunk from continuousChunkStream
    if isMJPEGBoundary(chunk) then
        bitmap = decodeJpegByteArray(contents)
        displayBitmap(bitmap)
        contents.empty()
    else
        contents.push(chunk)
    end if
end for
Note that the sample above assumes a decodeJpegByteArray() function. One could instead write the bytes out to a temp file and read them back in, but that is unlikely to be efficient. So there would need to be further additions to the API to fully support MJPEG.
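To make the chunk-by-chunk idea concrete, here is a minimal Python sketch of one possible handler. Since there is no set MJPEG standard, this version ignores multipart boundary strings (which vary between cameras) and instead scans for the JPEG start-of-image (FF D8) and end-of-image (FF D9) markers to split the byte stream into frames. The `extract_jpeg_frames` function and the fake stream below are illustrative, not an actual API:

```python
def extract_jpeg_frames(chunks):
    """Accumulate arbitrary byte chunks and yield complete JPEG frames.

    Frames are detected by the JPEG SOI (FF D8) and EOI (FF D9) markers.
    This works for typical streams because FF bytes inside entropy-coded
    JPEG data are byte-stuffed as FF 00, but it is a sketch, not a
    fully robust MJPEG demuxer.
    """
    SOI, EOI = b"\xff\xd8", b"\xff\xd9"
    buf = bytearray()
    for chunk in chunks:
        buf.extend(chunk)
        while True:
            start = buf.find(SOI)
            if start < 0:
                buf.clear()          # no frame start yet; discard boundary noise
                break
            end = buf.find(EOI, start + 2)
            if end < 0:
                del buf[:start]      # partial frame; wait for more data
                break
            yield bytes(buf[start:end + 2])
            del buf[:end + 2]

# Two fake "JPEG" frames split unevenly across chunks, with boundary noise
stream = [b"--bound\r\n\xff\xd8abc",
          b"def\xff\xd9--bound\r\n\xff\xd8",
          b"ghi\xff\xd9"]
frames = list(extract_jpeg_frames(stream))
```

Each yielded frame would then go to the platform's JPEG decoder and be drawn to the screen, taking the place of decodeJpegByteArray() and displayBitmap() in the pseudocode above.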