A fellow developer has written me an email asking why ObjectiveFlickr uses a temp file when preparing upload data. In the README file, I said such practice "allows ObjectiveFlickr to operate in memory-constrained settings," especially on iPhone.
The first question that comes to mind is, of course: if we already have the image data (which is what a UIImage object represents), why bother writing it back to a file and reading it back when uploading, which costs extra power?
A few factors eventually led to the present design choice.
UIImagePickerController actually gives you a UIImage object, not NSData (which you have to obtain using UIImageJPEGRepresentation), so there is already some overhead. I haven't investigated this fully, but I have reason to believe that UIImageJPEGRepresentation makes its own copy of the image data, because it involves compression and EXIF stripping, even if what's stored in flash is itself a JPEG image.
There is the overhead of assembling the multipart HTTP POST data. A head and a tail must be added around the image's binary data. In theory, we could have used a "virtualized" NSInputStream subclass: first feed it the head, then an NSInputStream for the image data, then the tail. I wouldn't say this if I hadn't tried, but NSInputStream subclassing has an unsolved issue on Apple's part that makes that design totally unusable.
We could do the head and tail appending on the fly in memory, but that can be quite heavy, considering you'd be copying a 2 MB data block a few times (from NSData into another NSMutableData for the combined POST data). For some apps that could mean multiple didReceiveMemoryWarning calls over the course of the process, which actually slows the app down more than the overhead of writing to flash (in which case didReceiveMemoryWarning arrives much less often).
The image data is indeed released almost immediately after being written to the temp file, since it's an autoreleased NSData object. We could even set up an autorelease pool before getting the image from the image picker controller, so that the autoreleased UIImage is released immediately after we obtain the NSData (though we then have to retain and release the NSData manually, of course).
The problem is less acute on the desktop, although if the image is large (say 8 MB), using a temp file still lets us free that block of memory for the duration of the upload...
But I agree that power consumption can be an issue, although I suspect Wi-Fi/3G is the bigger hog. This is a topic for further profiling and measurement, to give us a better idea of whether we're optimizing prematurely. Avoiding or reducing didReceiveMemoryWarning, though, is my design goal on iPhone; on the desktop the purpose is to free up a big chunk of memory.