Linux kernel: an example of hooking a system call

I am trying to write some simple test code as a demonstration of hooking the system call table.

"sys_call_table" is no longer exported to 2.6, so I just grab the address from the System.map file and I see that it is correct (looking at the memory at the specified address, I see pointers to system calls).

However, when I try to modify this table, the kernel gives an "Oops" with "unable to handle kernel paging request at virtual address c061e4f4", and the machine reboots.

This is CentOS 5.4 running the 2.6.18-164.10.1.el5 kernel. Is there some sort of protection in place, or do I have a mistake? I know it comes with SELinux, and I tried putting it into permissive mode, but it didn't make any difference.

Here is my code:

    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/moduleparam.h>
    #include <linux/unistd.h>

    void **sys_call_table;

    asmlinkage int (*original_call) (const char *, int, int);

    asmlinkage int our_sys_open(const char *file, int flags, int mode)
    {
        printk("A file was opened\n");
        return original_call(file, flags, mode);
    }

    int init_module()
    {
        // sys_call_table address in System.map
        sys_call_table = (void *) 0xc061e4e0;
        original_call = sys_call_table[__NR_open];

        // Hook: Crashes here
        sys_call_table[__NR_open] = our_sys_open;
        return 0;
    }

    void cleanup_module()
    {
        // Restore the original call
        sys_call_table[__NR_open] = original_call;
    }
c linux-kernel hook

5 answers

I finally found the answer.

http://www.linuxforums.org/forum/linux-kernel/133982-cannot-modify-sys_call_table.html

It turns out the kernel was changed at some point so that the system call table is read-only.

One user on that thread wrote:

Even if it's late, others might also be interested in the solution: in the file entry.S you will find:

    .section .rodata,"a"
    #include "syscall_table_32.S"

sys_call_table is read-only. You must compile a new kernel if you want to "hack" around with sys_call_table...

The link also has an example of changing the memory to be writable.

nasekomoe:

Hi everybody. Thanks for the replies. I solved the problem of modifying page access a while ago. I implemented two functions that do it for my higher-level code:
    #include <asm/cacheflush.h>

    #ifdef KERN_2_6_24
    #include <asm/semaphore.h>

    int set_page_rw(long unsigned int _addr)
    {
        struct page *pg;
        pgprot_t prot;

        pg = virt_to_page(_addr);
        prot.pgprot = VM_READ | VM_WRITE;
        return change_page_attr(pg, 1, prot);
    }

    int set_page_ro(long unsigned int _addr)
    {
        struct page *pg;
        pgprot_t prot;

        pg = virt_to_page(_addr);
        prot.pgprot = VM_READ;
        return change_page_attr(pg, 1, prot);
    }
    #else
    #include <linux/semaphore.h>

    int set_page_rw(long unsigned int _addr)
    {
        return set_memory_rw(_addr, 1);
    }

    int set_page_ro(long unsigned int _addr)
    {
        return set_memory_ro(_addr, 1);
    }
    #endif // KERN_2_6_24

Here's the version of the source code that works for me.

    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/moduleparam.h>
    #include <linux/unistd.h>
    #include <asm/semaphore.h>
    #include <asm/cacheflush.h>

    void **sys_call_table;

    asmlinkage int (*original_call) (const char *, int, int);

    asmlinkage int our_sys_open(const char *file, int flags, int mode)
    {
        printk("A file was opened\n");
        return original_call(file, flags, mode);
    }

    int set_page_rw(long unsigned int _addr)
    {
        struct page *pg;
        pgprot_t prot;

        pg = virt_to_page(_addr);
        prot.pgprot = VM_READ | VM_WRITE;
        return change_page_attr(pg, 1, prot);
    }

    int init_module()
    {
        // sys_call_table address in System.map
        sys_call_table = (void *) 0xc061e4e0;
        original_call = sys_call_table[__NR_open];

        // Make the table's page writable before hooking
        set_page_rw((unsigned long) sys_call_table);
        sys_call_table[__NR_open] = our_sys_open;
        return 0;
    }

    void cleanup_module()
    {
        // Restore the original call
        sys_call_table[__NR_open] = original_call;
    }
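One caveat worth adding here (my note, not part of the original post): the cleanup path above restores the original handler but leaves the page writable. A more symmetric exit, assuming the set_page_ro() helper from the previous snippet is also compiled in, might look like this:

    void cleanup_module()
    {
        // Restore the original call, then make the page read-only again
        sys_call_table[__NR_open] = original_call;
        set_page_ro((unsigned long) sys_call_table);
    }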

Thanks Stephen, your research here was helpful to me. I had a few problems, though, as I was trying this on the 2.6.32 kernel, getting WARNING: at arch/x86/mm/pageattr.c:877 change_page_attr_set_clr+0x343/0x530() (Not tainted) followed by a kernel OOPS about not being able to write to the memory address.

The comment above the line reads:

 // People should not be passing in unaligned addresses 

The following modified code works:

    int set_page_rw(long unsigned int _addr)
    {
        return set_memory_rw(PAGE_ALIGN(_addr) - PAGE_SIZE, 1);
    }

    int set_page_ro(long unsigned int _addr)
    {
        return set_memory_ro(PAGE_ALIGN(_addr) - PAGE_SIZE, 1);
    }
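One observation of my own, not from the original answer: PAGE_ALIGN() rounds up, so PAGE_ALIGN(_addr) - PAGE_SIZE lands on the previous page whenever _addr is already page-aligned. Masking with PAGE_MASK rounds down correctly in both cases, so a safer variant of these helpers might be:

    int set_page_rw(long unsigned int _addr)
    {
        /* _addr & PAGE_MASK is the base of the page containing _addr */
        return set_memory_rw(_addr & PAGE_MASK, 1);
    }

    int set_page_ro(long unsigned int _addr)
    {
        return set_memory_ro(_addr & PAGE_MASK, 1);
    }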

Note that this still does not set the page as read/write in some situations. The static_protections() function, which is called inside set_memory_rw(), removes the _PAGE_RW flag if:

  • The address is in the BIOS area
  • The address is inside .rodata
  • CONFIG_DEBUG_RODATA is set and the kernel is set to read-only

I found this out after debugging why I still got "unable to handle kernel paging request" when trying to modify the address of kernel functions. I was eventually able to solve that problem by looking up the page table entry for the address myself and manually setting it to writable. Thankfully, the lookup_address() function is exported in versions 2.6.26+. Here is the code I wrote to do that:

    void set_addr_rw(unsigned long addr)
    {
        unsigned int level;
        pte_t *pte = lookup_address(addr, &level);

        if (pte->pte & ~_PAGE_RW)
            pte->pte |= _PAGE_RW;
    }

    void set_addr_ro(unsigned long addr)
    {
        unsigned int level;
        pte_t *pte = lookup_address(addr, &level);

        pte->pte = pte->pte & ~_PAGE_RW;
    }
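As a usage sketch (my illustration, reusing the hook from the question rather than anything from this answer): make the page holding the table entry writable, swap the pointer, then re-protect it:

    set_addr_rw((unsigned long) &sys_call_table[__NR_open]);
    sys_call_table[__NR_open] = our_sys_open;
    set_addr_ro((unsigned long) &sys_call_table[__NR_open]);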

Finally, while Mark's answer is technically correct, it will cause problems when run inside Xen. If you want to disable write protection, use the read/write cr0 functions. I macro them like this:

    #define GPF_DISABLE write_cr0(read_cr0() & (~0x10000))
    #define GPF_ENABLE  write_cr0(read_cr0() | 0x10000)
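A usage sketch of my own (not from the answer): since CR0 is a per-CPU register, it seems prudent to disable preemption so the task cannot migrate to another CPU between clearing and restoring the bit:

    preempt_disable();              /* stay on this CPU while WP is cleared */
    GPF_DISABLE;
    sys_call_table[__NR_open] = our_sys_open;
    GPF_ENABLE;
    preempt_enable();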

Hope this helps someone else who stumbles upon this question.


Note that instead of using change_page_attr, the following will also work:

    static void disable_page_protection(void)
    {
        unsigned long value;

        asm volatile("mov %%cr0, %0" : "=r" (value));
        if (value & 0x00010000) {
            value &= ~0x00010000;
            asm volatile("mov %0, %%cr0" : : "r" (value));
        }
    }

    static void enable_page_protection(void)
    {
        unsigned long value;

        asm volatile("mov %%cr0, %0" : "=r" (value));
        if (!(value & 0x00010000)) {
            value |= 0x00010000;
            asm volatile("mov %0, %%cr0" : : "r" (value));
        }
    }
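For context (my addition, not the answerer's): 0x00010000 is the WP (write-protect) bit of CR0, which the kernel headers name X86_CR0_WP in asm/processor-flags.h. Assuming the read_cr0()/write_cr0() helpers are available (their header location varies across kernel versions), an equivalent sketch is:

    #include <asm/processor-flags.h>    /* X86_CR0_WP */

    static void disable_page_protection(void)
    {
        /* clear the write-protect bit */
        write_cr0(read_cr0() & ~X86_CR0_WP);
    }

    static void enable_page_protection(void)
    {
        /* set the write-protect bit */
        write_cr0(read_cr0() | X86_CR0_WP);
    }

Note that on recent kernels (around 5.3 and later) write_cr0() pins the WP bit, so this trick no longer works there.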

If you are dealing with kernel 3.4 and later (it may also work with earlier kernels, I have not tested it), I would recommend a saner way of obtaining the system call table location.

For example:

    #include <linux/module.h>
    #include <linux/kallsyms.h>

    static unsigned long **p_sys_call_table;

    /* Acquire system call table address */
    p_sys_call_table = (void *) kallsyms_lookup_name("sys_call_table");

That's it. No hardcoded addresses; it works fine with every kernel I have tested.
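To make that concrete, here is a minimal init sketch of my own around that lookup (hypothetical module code, not from the answer; note also that kallsyms_lookup_name() itself stopped being exported around kernel 5.7):

    static int __init hook_init(void)
    {
        /* p_sys_call_table as declared in the snippet above */
        p_sys_call_table = (void *) kallsyms_lookup_name("sys_call_table");
        if (!p_sys_call_table) {
            printk(KERN_ERR "hook: sys_call_table not found\n");
            return -ENOENT;
        }

        printk(KERN_INFO "hook: sys_call_table at %p\n", p_sys_call_table);
        return 0;
    }
    module_init(hook_init);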

Similarly, you can use a non-exported kernel function from your module:

    static int (*ref_access_remote_vm)(struct mm_struct *mm, unsigned long addr,
                                       void *buf, int len, int write);

    ref_access_remote_vm = (void *) kallsyms_lookup_name("access_remote_vm");
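Once the pointer is filled in, it is called just like the real function. A short sketch of mine, where task, addr, buf, and len are hypothetical variables your module would supply:

    int nread = 0;

    if (ref_access_remote_vm)   /* the lookup can fail, so check first */
        nread = ref_access_remote_vm(task->mm, addr, buf, len, 0);  /* write = 0: read */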

Enjoy!


Using the following:

    address = (void *) kallsyms_lookup_name("linux_banner");
    printk(KERN_INFO "Address of linux_banner is [0x%lx]\n", (unsigned long) address);

    address = (void *) kallsyms_lookup_name("sys_call_table");
    printk(KERN_INFO "Address of sys_call_table is [0x%lx]\n", (unsigned long) address);

    address = (void *) kallsyms_lookup_name("vdso_image_64");
    printk(KERN_INFO "Address of vdso_image_64 is [0x%lx]\n", (unsigned long) address);

prints:

    Address of linux_banner is [0xffffffff95c00120]
    Address of sys_call_table is [0xffffffff95c00220]
    Address of vdso_image_64 is [0xffffffff95c013c0]

whereas their respective addresses (according to the System.map file) are:

    ffffffff81e00120 R linux_banner
    ffffffff81e00220 R sys_call_table
    ffffffff81e013c0 R vdso_image_64

Any idea why there is a constant offset in the addresses?



