Designing for the HDMI port on Linux

How can an application take exclusive control of the HDMI output, so that the OS never automatically configures it as part of the normal desktop display?

For example: use a standard DVI/VGA monitor as the main display, but send MPlayer's video output to the HDMI port through its device file.

This is a tough question to research on Google: almost every result is about getting audio to work over HDMI.

EDIT:

A comment below mentions using separate Xorg servers. While that is a useful idea, it does not answer the questions I actually asked, namely:

1) How can I make Linux refuse to put a console on this display, whether it comes up before the other display or is the only screen attached (with logins happening only over SSH)?

2) What if there is no X at all? I want to push graphics straight to the adapter. Can I do that from code, using standard functions, without talking directly to the drivers (perhaps with something dated such as SVGALib, or some other non-X graphics layer)?

EDIT:

I have looked at SVGALib (dated) and SDL. The latter works both inside and outside X and even provides access to OpenGL. I found version 1.3 through a forum link somewhere, though both the website and the FTP only seem to go up to 1.2. SDL would be a great fit in general, but it has the following two specific drawbacks:

1) The generic create-device call accepts a device index but then completely ignores it:

(src/video/bwindow/SDL_bvideo.cc) BE_CreateDevice(int devindex) 

The individual drivers seem to have the same shortcoming. For example, DirectFB (which, as I understand it, provides graphics under the console):

 (src/video/directfb/SDL_DirectFB_video.c) DirectFB_CreateDevice(int devindex) 

Neither function body does anything with the device index ... no doubt because the common interface they implement has no support for it.

2) Whichever adapter ends up being used, SDL seems to stitch all of its displays together automatically. The example "testsprite2.c" (supplied with the library) accepts a "--display" parameter, which is processed in "common.c" (shared functionality for all the examples). All it does with the --display option is compute that screen's X/Y coordinates within one large combined canvas:

    if (SDL_strcasecmp(argv[index], "--display") == 0) {
        ++index;
        if (!argv[index]) {
            return -1;
        }
        state->display = SDL_atoi(argv[index]);
        if (SDL_WINDOWPOS_ISUNDEFINED(state->window_x)) {
            state->window_x = SDL_WINDOWPOS_UNDEFINED_DISPLAY(state->display);
            state->window_y = SDL_WINDOWPOS_UNDEFINED_DISPLAY(state->display);
        }
        if (SDL_WINDOWPOS_ISCENTERED(state->window_x)) {
            state->window_x = SDL_WINDOWPOS_CENTERED_DISPLAY(state->display);
            state->window_y = SDL_WINDOWPOS_CENTERED_DISPLAY(state->display);
        }
        return 2;
    }

So there is no way to isolate one display from another when they sit on the same adapter. SDL is not going to work.
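To make that concrete, here is a minimal sketch of what "choosing a display" amounts to with this API (written against the SDL 1.3 / 2.0 window functions, so treat the exact calls as an assumption to check against your SDL build): the window is simply placed at that display's coordinates on the combined canvas, and nothing ever binds to a particular adapter or output.

    #include "SDL.h"
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        /* Which logical display to target; 0 by default (hypothetical test program) */
        int display = (argc > 1) ? SDL_atoi(argv[1]) : 0;

        if (SDL_Init(SDL_INIT_VIDEO) != 0) {
            fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
            return 1;
        }

        /* "Selecting" the display just picks coordinates on the one big canvas
           that SDL builds from every attached output. */
        SDL_Window *win = SDL_CreateWindow("display test",
                                           SDL_WINDOWPOS_CENTERED_DISPLAY(display),
                                           SDL_WINDOWPOS_CENTERED_DISPLAY(display),
                                           640, 480, 0);
        if (!win) {
            fprintf(stderr, "SDL_CreateWindow failed: %s\n", SDL_GetError());
            SDL_Quit();
            return 1;
        }

        SDL_Delay(3000);      /* keep the window up for a moment */
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }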

Unless a comparable workaround exists for SDL, or it turns out to be trivial to pass a specific device (devindex) through to the right place (which is probably not the case, and probably the reason it has been left unimplemented), it seems the best option for exclusive, fully dedicated use of a screen is to write my own window manager on a separate Xorg instance bound to the second device.

1 answer

You can write directly to the framebuffer device /dev/fb* (assuming your console is using it, which is the usual default). To keep the console from appearing on it, simply disable all of your virtual terminals (you can still log in remotely). If you have more than one adapter, you should get more than one framebuffer device, but you will need to confirm this.
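As a quick way to confirm what is behind each node, the following sketch (my own addition, not from the linked article; it assumes the usual /dev/fb0, /dev/fb1, ... naming) probes the first few framebuffer devices and prints the driver identification string and mode reported by each:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <linux/fb.h>
    #include <sys/ioctl.h>

    int main(void)
    {
        char path[32];
        int i;

        /* Probe /dev/fb0 .. /dev/fb7 and report what is behind each node. */
        for (i = 0; i < 8; i++) {
            struct fb_fix_screeninfo finfo;
            struct fb_var_screeninfo vinfo;
            int fd;

            snprintf(path, sizeof(path), "/dev/fb%d", i);
            fd = open(path, O_RDONLY);
            if (fd == -1)
                continue;   /* no such device, skip it */

            if (ioctl(fd, FBIOGET_FSCREENINFO, &finfo) == 0 &&
                ioctl(fd, FBIOGET_VSCREENINFO, &vinfo) == 0) {
                printf("%s: driver \"%s\", %dx%d @ %d bpp\n",
                       path, finfo.id, vinfo.xres, vinfo.yres,
                       vinfo.bits_per_pixel);
            }
            close(fd);
        }
        return 0;
    }

An HDMI output driven by a second adapter should then appear as its own /dev/fbN node, which is what you would open instead of /dev/fb0 in the example below.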

A C example that draws a rectangle via the framebuffer can be found here:

Color pixels on screen via Linux FrameBuffer

    #include <stdlib.h>
    #include <unistd.h>
    #include <stdio.h>
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <sys/mman.h>
    #include <sys/ioctl.h>

    int main()
    {
        int fbfd = 0;
        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;
        long int screensize = 0;
        char *fbp = 0;
        int x = 0, y = 0;
        long int location = 0;

        // Open the file for reading and writing
        fbfd = open("/dev/fb0", O_RDWR);
        if (fbfd == -1) {
            perror("Error: cannot open framebuffer device");
            exit(1);
        }
        printf("The framebuffer device was opened successfully.\n");

        // Get fixed screen information
        if (ioctl(fbfd, FBIOGET_FSCREENINFO, &finfo) == -1) {
            perror("Error reading fixed information");
            exit(2);
        }

        // Get variable screen information
        if (ioctl(fbfd, FBIOGET_VSCREENINFO, &vinfo) == -1) {
            perror("Error reading variable information");
            exit(3);
        }

        printf("%dx%d, %dbpp\n", vinfo.xres, vinfo.yres, vinfo.bits_per_pixel);

        // Figure out the size of the screen in bytes
        screensize = vinfo.xres * vinfo.yres * vinfo.bits_per_pixel / 8;

        // Map the device to memory
        // (compare against MAP_FAILED; casting the pointer to int breaks on 64-bit)
        fbp = (char *)mmap(0, screensize, PROT_READ | PROT_WRITE, MAP_SHARED, fbfd, 0);
        if (fbp == MAP_FAILED) {
            perror("Error: failed to map framebuffer device to memory");
            exit(4);
        }
        printf("The framebuffer device was mapped to memory successfully.\n");

        x = 100; y = 100;       // Where we are going to put the pixel

        // Figure out where in memory to put the pixel
        for (y = 100; y < 300; y++)
            for (x = 100; x < 300; x++) {

                location = (x + vinfo.xoffset) * (vinfo.bits_per_pixel / 8) +
                           (y + vinfo.yoffset) * finfo.line_length;

                if (vinfo.bits_per_pixel == 32) {
                    *(fbp + location) = 100;                      // Some blue
                    *(fbp + location + 1) = 15 + (x - 100) / 2;   // A little green
                    *(fbp + location + 2) = 200 - (y - 100) / 5;  // A lot of red
                    *(fbp + location + 3) = 0;                    // No transparency
                    //location += 4;
                } else {    // assume 16bpp
                    int b = 10;
                    int g = (x - 100) / 6;        // A little green
                    int r = 31 - (y - 100) / 16;  // A lot of red
                    unsigned short int t = r << 11 | g << 5 | b;
                    *((unsigned short int *)(fbp + location)) = t;
                }
            }

        munmap(fbp, screensize);
        close(fbfd);
        return 0;
    }

As long as you have the build tools and the kernel headers for your system available, it should compile. For extra fun, run it from an SSH session and watch it draw on a physical screen that you are not even logged into.

It should be noted that a wide range of tools can produce graphics outside of X11, but most of them do not access the framebuffer directly. Instead, they work through an extra layer of abstraction called DirectFB. DirectFB lets the same applications run both inside and outside X11 ... including MPlayer, GStreamer, any application built on SDL (which can drive DirectFB), as well as a lightweight, popular pseudo-X11 wrapper called XDirectFB (which, as I understand it, runs X11 applications without the weight of a regular window manager).
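For reference, a minimal DirectFB program follows the pattern below (a sketch based on the standard DirectFB tutorial layout; the exact flags and the -ldirectfb link step are assumptions to check against your installed DirectFB version):

    #include <directfb.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        IDirectFB *dfb = NULL;
        IDirectFBSurface *primary = NULL;
        DFBSurfaceDescription dsc;
        int width = 0, height = 0;

        /* DirectFB parses and strips its own command-line options here. */
        DirectFBInit(&argc, &argv);
        DirectFBCreate(&dfb);
        dfb->SetCooperativeLevel(dfb, DFSCL_FULLSCREEN);

        /* Ask for the primary (full-screen) surface with a back buffer. */
        dsc.flags = DSDESC_CAPS;
        dsc.caps  = DSCAPS_PRIMARY | DSCAPS_FLIPPING;
        dfb->CreateSurface(dfb, &dsc, &primary);
        primary->GetSize(primary, &width, &height);

        /* Clear to black, draw a colored rectangle, then show the result. */
        primary->SetColor(primary, 0x00, 0x00, 0x00, 0xff);
        primary->FillRectangle(primary, 0, 0, width, height);
        primary->SetColor(primary, 0xc8, 0x0f, 0x64, 0xff);
        primary->FillRectangle(primary, width / 4, height / 4, width / 2, height / 2);
        primary->Flip(primary, NULL, 0);

        sleep(5);

        primary->Release(primary);
        dfb->Release(dfb);
        return 0;
    }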

