ipython-sql introduces the %sql and %%sql magics for IPython and Jupyter notebooks.


You can easily install ipython-sql in your Jupyter kernel with:

$ pipenv install ipython-sql

First steps

  1. First, ipython-sql is activated in your notebook with

    In [1]: %load_ext sql
  2. The SQLAlchemy URL is used to connect to the database:

    In [2]: %sql postgresql://
  3. Then you can create a table, for example:

    In [3]: %%sql postgresql://
       ....: CREATE TABLE accounts (login TEXT, name TEXT, email TEXT);
       ....: INSERT INTO accounts VALUES ('veit', 'Veit Schiele', 'veit@example.org');
  4. You can query the contents of the accounts table with

    In [4]: result = %sql select * from accounts
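The steps above run ordinary SQL through the magics. As a minimal sketch, the same workflow can be reproduced outside ipython-sql with Python's built-in sqlite3 module (substituted here for PostgreSQL purely so the example is self-contained):

```python
import sqlite3

# An in-memory SQLite database stands in for the PostgreSQL connection above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Equivalent of the %%sql cell: create the table and insert one row.
cur.execute("CREATE TABLE accounts (login TEXT, name TEXT, email TEXT)")
cur.execute(
    "INSERT INTO accounts VALUES (?, ?, ?)",
    ("veit", "Veit Schiele", "veit@example.org"),
)

# Equivalent of: result = %sql select * from accounts
result = cur.execute("SELECT * FROM accounts").fetchall()
print(result)  # [('veit', 'Veit Schiele', 'veit@example.org')]
```

The magics hide the connection and cursor handling, but the SQL itself is unchanged.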


Query results are loaded as a list, so very large result sets can occupy a lot of memory. Usually there is no automatic limit, but with the autolimit option you can cap the number of rows returned.


displaylimit only limits the number of results displayed, not the amount of memory required.
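The difference can be sketched in plain Python with sqlite3 (a standalone illustration, not ipython-sql's internals): a fetch limit bounds how many rows ever reach memory, while a display limit only truncates what is shown.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE numbers (n INTEGER)")
cur.executemany("INSERT INTO numbers VALUES (?)", [(i,) for i in range(1000)])

# autolimit-style: stop fetching after N rows, so memory stays bounded.
limited = cur.execute("SELECT n FROM numbers").fetchmany(10)

# displaylimit-style: all 1000 rows are in memory; only the view is truncated.
all_rows = cur.execute("SELECT n FROM numbers").fetchall()
shown = all_rows[:10]

print(len(limited), len(all_rows), len(shown))  # 10 1000 10
```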

With %config SqlMagic you can display the current configuration:

In [4]: %config SqlMagic
SqlMagic options
--------------
SqlMagic.autocommit=<Bool>
    Current: True
    Set autocommit mode
SqlMagic.autolimit=<Int>
    Current: 0
    Automatically limit the size of the returned result sets
SqlMagic.autopandas=<Bool>
    Current: False
    Return Pandas DataFrames instead of regular result sets


If autopandas is set to True, displaylimit is not applied. In this case, the max_rows option of pandas can be used instead, as described in the pandas documentation.
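For example, pandas' own display limit can be adjusted like this (a standard pandas option, shown with an arbitrary value):

```python
import pandas as pd

# Show at most 20 rows when a DataFrame is displayed;
# the data held in memory is unaffected.
pd.set_option("display.max_rows", 20)
print(pd.get_option("display.max_rows"))  # 20
```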


If pandas is installed, the DataFrame method can be used:

In [5]: result = %sql SELECT * FROM accounts

In [6]: dataframe = result.DataFrame()

In [7]: %sql --persist dataframe

In [8]: %sql SELECT * FROM dataframe;

--persist

Argument with the name of a DataFrame object; creates a table of that name in the database from it.


--append

Argument with the name of a DataFrame object; adds its rows to an existing table of that name.
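What persisting and appending amount to can be approximated with pandas' to_sql method, here against an in-memory SQLite database (an assumption for the sake of a runnable sketch; ipython-sql persists through the active connection):

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
dataframe = pd.DataFrame(
    {"login": ["veit"], "name": ["Veit Schiele"], "email": ["veit@example.org"]}
)

# Roughly what %sql --persist dataframe does: create a table from the DataFrame.
dataframe.to_sql("dataframe", conn, index=False)

# Roughly what --append does: add the same rows to the existing table.
dataframe.to_sql("dataframe", conn, index=False, if_exists="append")

rows = conn.execute("SELECT COUNT(*) FROM dataframe").fetchone()[0]
print(rows)  # 2
```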

PostgreSQL features

Meta-commands from psql can also be used in ipython-sql:

-l, --connections

lists all active connections

-x, --close SESSION-NAME

close named connection

-c, --creator CREATOR-FUNCTION

specifies the creator function for a new connection

-s, --section SECTION-NAME

specifies section of dsn_file to be used in a connection

-p, --persist

creates a table in the database from a named DataFrame

--append

similar to --persist, but the contents are appended to the table

-a, --connection_arguments "{CONNECTION-ARGUMENTS}"

specifies a dict of connection arguments to be passed to the SQL driver

-f, --file PATH

executes SQL from the file under this path

Since ipython-sql processes --options such as --persist, and at the same time accepts -- as a SQL comment, the parser has to make some assumptions: for example, --persist in the first line is processed as an argument and not as a comment.