I think the most important question is how you draw your widgets and how you receive keyboard and mouse events.
As I see it, there are two approaches to drawing:
- Create an OpenGL context and draw your widgets with OpenGL, like GLUI does.
- Use the native drawing infrastructure, like GDI+ on Windows or Xlib on X11.
Either way, you will need to implement certain things per platform. With OpenGL, you need to handle context creation (WGL, GLX, ...) on each platform, while with native drawing the per-platform work is much larger. Since each platform's drawing infrastructure is unique, you will probably want to write a drawing abstraction and then implement your widgets on top of that abstraction.
As for event handling, I think you will also need to write your own abstraction, because event processing is likewise unique to each platform.
Finally, you will also need an abstraction layer for creating the main window, in which you draw your widgets and from which you receive events.
If you go the OpenGL route, you can start with GLUT, which already handles window creation and event processing for you.
Keep in mind that I have never implemented anything like this myself. Still, I would probably try the OpenGL approach, because I believe it takes less effort to reach the goal.
Michael pfeuti