Bulk insert

In a Python script, I need to run a query on one data source and insert each row from that query into a table on another data source. I would normally do this with a single INSERT/SELECT statement using a T-SQL linked server join, but I don't have a linked server connection to this particular data source.

I am having trouble finding a simple pyodbc example of this. Here is how I would do it, but I assume that executing an insert statement inside a loop is pretty slow.

    result = ds1Cursor.execute(selectSql)
    for row in result:
        insertSql = "insert into TableName (Col1, Col2, Col3) values (?, ?, ?)"
        ds2Cursor.execute(insertSql, row[0], row[1], row[2])
        ds2Cursor.commit()

Is there a better way to insert records with pyodbc? Or is this a reasonably efficient way to do it anyway? I am using SQL Server 2012 and the latest versions of pyodbc and Python.

3 answers

The best way to handle this is to use the pyodbc executemany function.

    ds1Cursor.execute(selectSql)
    result = ds1Cursor.fetchall()
    ds2Cursor.executemany('INSERT INTO [TableName] (Col1, Col2, Col3) VALUES (?, ?, ?)', result)
    ds2Cursor.commit()
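If you are on a recent pyodbc (4.0.19 or later) with Microsoft's ODBC driver, you can also turn on the cursor's fast_executemany flag so the whole parameter batch is sent to the driver in one round trip instead of row by row. A minimal sketch, assuming a DSN named MYDSN and the same result list as above:

    import pyodbc

    conn = pyodbc.connect("DSN=MYDSN")  # placeholder connection string
    cursor = conn.cursor()
    cursor.fast_executemany = True  # batch parameters in a single round trip
    cursor.executemany(
        "INSERT INTO [TableName] (Col1, Col2, Col3) VALUES (?, ?, ?)",
        result,  # rows fetched from the first data source
    )
    conn.commit()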

Here's a function that can do a bulk insert into a SQL Server database.

    import contextlib
    import pyodbc

    def bulk_insert(table_name, file_path):
        # Note: FORMAT = 'CSV' requires SQL Server 2017 or later.
        string = "BULK INSERT {} FROM '{}' WITH (FORMAT = 'CSV');"
        with contextlib.closing(pyodbc.connect("MYCONN")) as conn:
            with contextlib.closing(conn.cursor()) as cursor:
                cursor.execute(string.format(table_name, file_path))
                conn.commit()

It definitely works.
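A quick usage sketch with hypothetical names; keep in mind that BULK INSERT runs on the server, so the file path must be readable by the SQL Server service account, not just by the Python client:

    # Hypothetical table and path; the path is resolved on the SQL Server host.
    bulk_insert("dbo.MyTable", r"C:\data\rows.csv")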

UPDATE: As noted in the comments, and as I see written regularly, pyodbc is better supported than pypyodbc.


Use the 'turbodbc' library; it was the only way I could get fast bulk loading.
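For the curious, a minimal sketch of turbodbc's column-wise fast path, executemanycolumns, which transfers whole NumPy arrays per column instead of Python rows; the DSN and table names here are placeholders:

    import numpy as np
    import turbodbc

    connection = turbodbc.connect(dsn="MYDSN")  # placeholder DSN
    cursor = connection.cursor()
    # Each NumPy array is one column; turbodbc sends them to the driver in bulk.
    cursor.executemanycolumns(
        "INSERT INTO [TableName] (Col1, Col2) VALUES (?, ?)",
        [np.array([1, 2, 3], dtype="int64"),
         np.array([10.5, 20.5, 30.5], dtype="float64")],
    )
    connection.commit()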

