Strange Python memory usage with Scapy

I wrote a script that registers MAC addresses captured with Scapy into MySQL through SQLAlchemy. I initially used sqlite3 directly, but soon realized something better was needed, so last weekend I rewrote all the database interactions to use SQLAlchemy. Everything works fine: data is written and can be read back. I thought sessionmaker() would be very useful for managing all the database sessions for me.

I am seeing a strange phenomenon in memory consumption. The script runs, collecting data and writing everything to the DB, but every 2-4 seconds memory consumption grows by about a megabyte. At this point we're talking about very few records, under 100 rows.

Script Sequence:

  • The script starts.
  • SQLAlchemy reads the mac_addr column into maclist[].
  • Scapy captures a packet > is new_mac in maclist[]?

If true: just write the timestamp into the timestamp column where mac = new_mac, then go back to step 2.

If false: write the new MAC to the DB, clear maclist[], and run step 2 again.
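The steps above can be sketched in Python. This is only an illustration of the described flow, not the actual script; the dict db stands in for the MySQL table, and all names here are invented:

```python
# Illustrative sketch of the per-packet logic described above.
# "db" is an in-memory stand-in for the real MySQL table.

def handle_packet(new_mac, timestamp, maclist, db):
    """Record one sniffed MAC; return the (possibly rebuilt) maclist."""
    if new_mac in maclist:
        # Known client: only update its timestamp column.
        db[new_mac] = timestamp
    else:
        # New client: insert it, then re-read the whole MAC column
        # (this simulates "clear maclist[] and repopulate from the DB").
        db[new_mac] = timestamp
        maclist = list(db)
    return maclist
```

Note that in this scheme the full MAC list is re-read from the database after every new client, which is cheap at 100 rows but a candidate for the observed growth if the query results are retained somewhere.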

After 1h30m, memory usage is at 1027 MB (RES) and 1198 MB (VIRT), with only 124 rows in one MySQL table.

Q: Could this be caused by maclist[], which is cleared and re-populated from the database each time?

Q: What happens when it reaches maximum system memory?

Any ideas or advice would be greatly appreciated.

memory_profiler output for the segment in question, where observedclients is populated from the mac_addr database column:

Line #    Mem usage    Increment   Line Contents
================================================
   123 1025.434 MiB    0.000 MiB   @profile
   124                             def sniffmgmt(p):
   125                              global __mac_reel
   126                              global _blacklist
   127 1025.434 MiB    0.000 MiB    stamgmtstypes = (0, 2, 4)
   128 1025.434 MiB    0.000 MiB    tmplist = []
   129 1025.434 MiB    0.000 MiB    matching = []
   130 1025.434 MiB    0.000 MiB    observedclients = []
   131 1025.434 MiB    0.000 MiB    tmplist = populate_observed_list()
   132 1025.477 MiB    0.043 MiB    for i in tmplist:
   133 1025.477 MiB    0.000 MiB          observedclients.append(i[0])
   134 1025.477 MiB    0.000 MiB    _mac_address = str(p.addr2)
   135 1025.477 MiB    0.000 MiB    if p.haslayer(Dot11):
   136 1025.477 MiB    0.000 MiB        if p.type == 0 and p.subtype in stamgmtstypes:
   137 1024.309 MiB   -1.168 MiB            _timestamp = atimer()
   138 1024.309 MiB    0.000 MiB            if p.info == "":
   139 1021.520 MiB   -2.789 MiB                        _SSID = "hidden"
   140                                          else:
   141 1024.309 MiB    2.789 MiB                        _SSID = p.info
   142                                      
   143 1024.309 MiB    0.000 MiB            if p.addr2 not in observedclients:
   144 1018.184 MiB   -6.125 MiB                    db_add(_mac_address, _timestamp, _SSID)
   145 1018.184 MiB    0.000 MiB                    greetings()
   146                                      else:
   147 1024.309 MiB    6.125 MiB                add_time(_mac_address, _timestamp)
   148 1024.309 MiB    0.000 MiB                observedclients = [] #clear the list
   149 1024.309 MiB    0.000 MiB                observedclients = populate_observed_list() #repopulate the list
   150 1024.309 MiB    0.000 MiB                greetings()

You will see that observedclients is a list.


A: I see you are using Scapy. By default, sniff() keeps every captured packet in memory, which grows without bound in a long-running capture. Make sure you call:

sniff(iface=interface, prn=sniffmgmt, store=0)

instead of:

sniff(iface=interface, prn=sniffmgmt, store=1)

Comment: Could you also post the code for: 1) add_time (what does it do?) 2) db_add (does it open a DB session or connection on every call, and is it closed?) 3) populate_observed_list (does it query the database and build a new list each time?) All three are called for every sniffed packet.

A: It's hard to say without seeing the rest of the code, but since the script combines SQLAlchemy and Scapy, any of those helper functions could be holding on to references (sessions, query results, or packets) that never get released.

As for the second question: when a Python process can no longer allocate memory, it raises MemoryError.
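Since the question uses sessionmaker(), one common pattern worth checking is whether each helper opens a session it never closes. Below is a minimal sketch of short-lived, context-managed sessions; the Client model, table name, and helper names are invented for illustration, and an in-memory SQLite database stands in for MySQL:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker, declarative_base

Base = declarative_base()

class Client(Base):
    # Hypothetical model; the real schema is not shown in the question.
    __tablename__ = "clients"
    id = Column(Integer, primary_key=True)
    mac_addr = Column(String(17))

engine = create_engine("sqlite://")   # in-memory stand-in for MySQL
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

def add_mac(mac):
    with Session() as session:        # session is closed on exit,
        session.add(Client(mac_addr=mac))
        session.commit()              # releasing its identity map

def all_macs():
    with Session() as session:
        return [row.mac_addr for row in session.query(Client)]
```

A session that stays open keeps every loaded object in its identity map, so helpers that create sessions without closing them are a plausible source of steady growth.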


Thanks for the help, everyone. I think I managed to solve the growing memory consumption.

A: Code logic plays a very important role in memory consumption, as I found out. Looking at the memory_profiler output in my original question, I moved lines 131-133 into the IF statement on line 136. Memory no longer seems to grow as often. Now I need to refine populate_observed_list() so it doesn't waste so much memory.
