To achieve it, I needed a fake FTP server that can receive the user ID and password information. When the camera detects motion, it tries to access the fake FTP server. Then I can start VLC video recording until the motion stops. That's it. It sounds simple and easy, but it took me a long time to get a working program for it.
It creates a thread-safe message queue that is used for inter-thread communication.
A thread for VLC recording is created and stays ready in the background. Whenever a new message is enqueued to the message queue, it wakes up and starts recording the video.
A fake FTP server keeps listening for new connections. Whenever there is a valid new connection, it puts the data on the message queue so that the VLC thread can use it.
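The three pieces above boil down to a classic producer/consumer setup. Here is a minimal sketch of the thread-safe queue; the names MotionEvent, MessageQueue, and the fields are my own placeholders, not the actual code:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

// Hypothetical payload the fake FTP server hands to the recording thread.
struct MotionEvent {
    std::string user, password, ip;
};

// A minimal thread-safe message queue for inter-thread communication.
class MessageQueue {
public:
    void push(MotionEvent ev) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(ev));
        }
        cond_.notify_one();     // wake the sleeping VLC recording thread
    }

    MotionEvent pop() {         // blocks until a message is available
        std::unique_lock<std::mutex> lock(mutex_);
        cond_.wait(lock, [this] { return !queue_.empty(); });
        MotionEvent ev = std::move(queue_.front());
        queue_.pop();
        return ev;
    }

    bool empty() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return queue_.empty();
    }

private:
    mutable std::mutex mutex_;
    std::condition_variable cond_;
    std::queue<MotionEvent> queue_;
};
```

The recording thread simply blocks in pop(); the FTP accept loop calls push() from its own thread, and notify_one() does the waking.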
It is very similar to my previous program. One of the differences is that it builds up an output file name from the current time. I used three functions: time(), localtime_r() and strftime(). localtime_r() is a platform-dependent, thread-safe version of localtime(). The output file name extension is fixed at ".avi"; I am not sure whether other extensions are allowed for H.264 in VLC.
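The combination of the three functions looks roughly like this; the "motion-" prefix and the exact timestamp format are my assumptions for illustration, not the program's real format:

```cpp
#include <ctime>
#include <string>

// Build an output file name such as "motion-20240131-153045.avi" from the
// given time. localtime_r() is the POSIX thread-safe variant of localtime():
// it writes into a caller-supplied std::tm instead of a shared static buffer.
std::string makeOutputFileName(std::time_t now) {
    std::tm tm_buf;
    localtime_r(&now, &tm_buf);
    char buf[64];
    std::strftime(buf, sizeof(buf), "motion-%Y%m%d-%H%M%S.avi", &tm_buf);
    return buf;
}
```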
Three input values are given to the VLC library through my wrapper classes, MyVLC and MyMediaPlayer. Although they are already explained in another article, I made a small improvement, which I will explain in a moment.
The important part of the logic is that it has to keep calling sleep_for() until the message queue becomes empty, or until a request comes from a different IP. But since I have only one camera, this program is designed for only one. In fact, even if I had two Foscam cameras, I would have a dedicated Raspberry Pi for each. This logic was very difficult to program in a shell script. First of all, there wasn't an easy way for me to build a safe message queue between two processes. I believe a C++ program ends up much simpler than shell scripts in many cases.
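My reading of that loop, as a rough sketch: once recording has started, the thread naps, drains whatever messages arrived meanwhile, and only stops once a nap passes with the queue still empty. The function and the Queue/Player shapes here are hypothetical stand-ins, and the five-second nap is an assumed interval:

```cpp
#include <chrono>
#include <thread>

// Sketch: keep recording while motion messages keep arriving; stop once the
// queue stays empty across a whole nap (i.e. the camera went quiet).
template <typename Queue, typename Player>
void recordWhileMotion(Queue& queue, Player& player,
                       std::chrono::milliseconds nap = std::chrono::seconds(5)) {
    player.startRecording();
    do {
        while (!queue.empty())
            queue.pop();                       // drain events that arrived meanwhile
        std::this_thread::sleep_for(nap);
    } while (!queue.empty());                  // new motion during the nap? keep going
    player.stopRecording();
}
```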
I rewrote my previous fake FTP server because I found a problem with the previous approach. When a message was sent to the client, the client didn't receive it. I am sure the message needed to be flushed somehow, but I couldn't find a way to do it. However, tcp::iostream provides a way to flush in the traditional way. And I like that the stream operators << and >> look much simpler and easier to read. They won't always work, because they don't let me control how many bytes I expect to receive, but I guess that is fine for casual use.
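To illustrate what I mean about the operators: Boost.Asio's tcp::iostream is a std::iostream, so sending a reply is just `stream << "220 Ready\r\n" << std::flush;` and parsing the login is plain extraction. In this sketch a std::stringstream stands in for the socket stream (same interface), and parseLogin/Login are hypothetical names:

```cpp
#include <istream>
#include <sstream>
#include <string>

struct Login {
    std::string user, password;
};

// Extraction skips whitespace (including "\r\n"), so reading an FTP login
// exchange is just "word word" pairs -- no manual byte counting.
Login parseLogin(std::istream& in) {
    std::string command, argument;
    Login login;
    while (in >> command >> argument) {
        if (command == "USER")
            login.user = argument;
        else if (command == "PASS")
            login.password = argument;
    }
    return login;
}
```

The same trade-off from above applies: this is convenient, but it gives up byte-level control over how much is read.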
The reason why MyMediaPlayerInterface is inherited as virtual is that it is an interface. Let me give you an example of a case where it has to be inherited as virtual. Let's say there are two interfaces, A and B, and two implementations, C and D. The interface A inherits from the interface B. The implementation C implements the interface A. The implementation D implements the interface B. Now it turns out that the implementation D has to inherit from the implementation C. D is screwed, because it will end up inheriting the interface B twice: once through C and A, and once directly. This is a common problem with multiple inheritance. Although we can treat a pure virtual class as an interface, it is still multiple inheritance in the C++ world. As far as I know, there is no way to predetermine which interface will be inherited more than once. Interfaces are supposed to have no implementation, so inheriting them repeatedly should be safe, but there is no way to express that in C++. So whenever an interface is inherited, it has to be inherited as virtual.
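Here is that A/B/C/D situation in code. Without the `virtual` keyword, D would contain two separate B subobjects and any call through a B reference would be ambiguous; with virtual inheritance there is exactly one B:

```cpp
// Interface B, at the root.
struct B {
    virtual ~B() {}
    virtual int id() const = 0;
};

// Interface A extends interface B -- inherited as virtual, per the rule.
struct A : virtual B {
    virtual int extra() const = 0;
};

// Implementation C implements interface A.
struct C : virtual A {
    int id() const override { return 1; }
    int extra() const override { return 2; }
};

// Implementation D implemented interface B, and later had to reuse C too.
// Thanks to virtual inheritance, the B reached through C/A and the B
// inherited directly collapse into a single subobject.
struct D : C, virtual B {
    int id() const override { return 3; }
};
```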
As I discussed in my previous article, I think it is a better idea to utilize stack memory than heap memory. A new template class in C++0x, std::aligned_storage, provides a way to do it without the complex alignment calculations. It didn't work out nicely for me on the first try; it took me an hour to figure out that I had to use "::type" after aligned_storage. The "aligned_storage" class itself is only 1 byte in size. The member typedef, "::type", is what we actually want to use. The allocated stack memory is then used with placement new, and the resulting pointer is handed to a unique_ptr in order to make sure the destructor is called properly. Note that since the memory is allocated on the stack, we must not "delete" it. We simply have to skip the deallocation step, yet the destructor must still be called. That's what the class PlacementDelete does.
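The whole pattern fits in a few lines. PlacementDelete is shown the way I understand it from the description; Recorder and demo() are stand-ins I made up to keep the sketch self-contained:

```cpp
#include <memory>
#include <new>
#include <type_traits>

// Runs the destructor but skips deallocation: the storage is on the stack,
// so "delete" must never be called on it.
struct PlacementDelete {
    template <typename T>
    void operator()(T* p) const { p->~T(); }
};

// Stand-in for the real class; counts live instances so we can observe
// that the destructor really runs.
struct Recorder {
    static int alive;
    Recorder()  { ++alive; }
    ~Recorder() { --alive; }
};
int Recorder::alive = 0;

void demo() {
    // Note the "::type": aligned_storage itself is a 1-byte tag class;
    // the nested typedef is the properly sized and aligned buffer.
    std::aligned_storage<sizeof(Recorder), alignof(Recorder)>::type buffer;

    // Construct in the stack buffer with placement new, then hand the
    // pointer to unique_ptr so the destructor is guaranteed to run.
    std::unique_ptr<Recorder, PlacementDelete> rec(new (&buffer) Recorder);
    // ... use rec like a normal pointer; at scope exit ~Recorder() runs,
    // but no delete is attempted on the stack address.
}
```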
Now I am thinking of putting my Foscam IP camera back in service with the dedicated Raspberry Pi for video recording. But... it will fill up the SD memory card pretty quickly. I am not sure what I should do about that.