We have a device with an analog camera, and a capture card that displays and digitizes its output. All of this is currently done with DirectX. Replacing the hardware is not an option at the moment, but we need to write the code in such a way that we can view this video stream in real time regardless of any future hardware or operating-system changes.
Along these lines, we chose Qt to implement the graphical interface for viewing this camera feed. However, if we move to Linux or another embedded platform in the future and change other equipment (including the physical device the camera/video sampler lives on), we will also have to change the software that displays the camera feed, and that will be a pain because it has to be integrated into our graphical interface.
I suggested moving to a more abstract model, where the data is transmitted over a socket to the graphical interface, and the video is parsed out of the socket stream and displayed in real time.
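To make the idea concrete, here is a minimal sketch of what the producer side could look like in Qt, assuming the capture layer can hand over raw RGB frames. The framing convention (a size prefix followed by width, height, and the pixel payload) is something I made up for illustration, not an established protocol:

    #include <QTcpSocket>
    #include <QDataStream>
    #include <QByteArray>
    #include <QIODevice>

    // Hypothetical producer: wrap one raw RGB888 frame in a small header and
    // write it to an already-connected socket. The receiver uses the size
    // prefix to know when a complete frame has arrived.
    void sendFrame(QTcpSocket &socket, quint32 width, quint32 height,
                   const QByteArray &rgbPixels)
    {
        QByteArray block;
        QDataStream out(&block, QIODevice::WriteOnly);
        out.setVersion(QDataStream::Qt_4_6);

        out << quint32(0)        // placeholder for the block size
            << width << height
            << rgbPixels;        // QDataStream length-prefixes the byte array

        out.device()->seek(0);   // backpatch the real size (excluding the prefix itself)
        out << quint32(block.size() - sizeof(quint32));

        socket.write(block);
    }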
First of all, is this a good idea or a bad idea?
Secondly, how would you implement such a thing? How do video feeds typically provide usable information? How can I push that data through a socket? Once I'm on the receiving side parsing the stream, how do I know what to do with the output (how do I get it rendered)? The only thing I can think of is to write each sample to a file and then display the file's contents every time a new sample arrives. That seems like an inefficient solution to me, if it works at all.
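For what it's worth, I suspect the file round-trip shouldn't be necessary: the receiver could parse each frame straight out of the socket buffer and paint it. Here is a rough sketch under the same invented framing as above; FrameViewer is a hypothetical widget name, and real code would also need to handle resolution changes and malformed input:

    #include <QLabel>
    #include <QTcpSocket>
    #include <QDataStream>
    #include <QImage>
    #include <QPixmap>

    // Hypothetical consumer: accumulate bytes until a full frame is buffered,
    // then wrap the raw pixels in a QImage and paint it -- no file involved.
    class FrameViewer : public QLabel
    {
        Q_OBJECT
    public:
        explicit FrameViewer(QTcpSocket *socket, QWidget *parent = 0)
            : QLabel(parent), m_socket(socket), m_blockSize(0)
        {
            connect(m_socket, SIGNAL(readyRead()), this, SLOT(onReadyRead()));
        }

    private slots:
        void onReadyRead()
        {
            QDataStream in(m_socket);
            in.setVersion(QDataStream::Qt_4_6);

            for (;;) {
                if (m_blockSize == 0) {
                    if (m_socket->bytesAvailable() < qint64(sizeof(quint32)))
                        return;              // size prefix not yet buffered
                    in >> m_blockSize;
                }
                if (m_socket->bytesAvailable() < m_blockSize)
                    return;                  // wait for the complete frame

                quint32 w = 0, h = 0;
                QByteArray pixels;
                in >> w >> h >> pixels;
                m_blockSize = 0;

                // Interpret the raw bytes as an RGB888 image and display it.
                QImage frame(reinterpret_cast<const uchar *>(pixels.constData()),
                             w, h, QImage::Format_RGB888);
                setPixmap(QPixmap::fromImage(frame));
            }
        }

    private:
        QTcpSocket *m_socket;
        quint32 m_blockSize;
    };

QPixmap::fromImage makes its own copy of the pixel data, so the QByteArray going out of scope at the end of each iteration is fine. Is something along these lines reasonable, or is raw-frame framing the wrong level of abstraction entirely?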
How would you recommend approaching this? Are there cross-platform libraries for this sort of thing?
Thanks.
edit: I'm open to suggestions for something other than the approach described above.
Tags: qt, sockets, ip-camera