Use your own pandas code next to the Django model that maps to the same SQL table
I am not aware of any explicit support for writing a pandas DataFrame to a Django model. However, in a Django application you can still use your own code to read from or write to the database, in addition to going through the ORM (i.e., through your Django models).
And given that you most likely already have data in the database previously written by pandas' to_sql, you can keep using the same database and the same pandas code, and simply create a Django model that can access that table.

E.g., if your pandas code was writing to the SQL table mytable, just create this model:
```python
from django.db.models import Model

class MyModel(Model):
    class Meta:
        db_table = 'mytable'
```
Now you can use this model from Django simultaneously with your existing pandas code (even within one Django application).
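If you also want to access the pandas-written columns through the ORM, you can declare matching fields on the model. This is just a sketch: field_1 is a hypothetical column name, and the optional managed = False tells Django's migrations not to create or alter a table it does not own:

```python
from django.db import models

class MyModel(models.Model):
    # hypothetical column previously written by pandas' to_sql
    field_1 = models.TextField()

    class Meta:
        db_table = 'mytable'
        managed = False  # table is created and owned by pandas, not by migrations
```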
Django Database Settings
To use the same database credentials in the pandas SQL functions, just read the fields from the Django settings, for example:
```python
from django.conf import settings
from sqlalchemy import create_engine

user = settings.DATABASES['default']['USER']
password = settings.DATABASES['default']['PASSWORD']
database_name = settings.DATABASES['default']['NAME']
# host = settings.DATABASES['default']['HOST']
# port = settings.DATABASES['default']['PORT']

database_url = 'postgresql://{user}:{password}@localhost:5432/{database_name}'.format(
    user=user,
    password=password,
    database_name=database_name,
)

engine = create_engine(database_url, echo=False)
```
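With that engine in hand, the pandas side works as usual. Here is a minimal sketch, where df and the field_1 column are hypothetical:

```python
import pandas as pd

# write a DataFrame to the same table the Django model maps to
df = pd.DataFrame({'field_1': ['a', 'b', 'c']})
df.to_sql('mytable', engine, if_exists='append', index=False)

# and read it back in one bulk query
df = pd.read_sql('SELECT * FROM mytable', engine)
```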
The alternative is not recommended, as it is inefficient
I really don't see a reason to read the data row by row, then instantiate a model and save it, which is very slow. You could get away with batch insert operations, but why bother, since pandas' to_sql already does this for us. And reading Django querysets into a pandas DataFrame is simply inefficient as well, when pandas can do it faster for us, too.
```python
# Doing it like this is slow
for index, row in df.iterrows():
    model = MyModel()
    model.field_1 = row['field_1']
    model.save()
```
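The read direction has the same trade-off: pulling rows through the ORM materializes every row as a Python object first, while pandas can fetch the table in one bulk query. A sketch, reusing the engine from the settings section above:

```python
import pandas as pd

# Slow: every row passes through the ORM as a Python object
df = pd.DataFrame.from_records(MyModel.objects.all().values())

# Fast: pandas reads the table in a single bulk query
df = pd.read_sql('SELECT * FROM mytable', engine)
```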