

The pandas I/O API is a set of top level reader functions accessed like pandas.read_csv() that generally return a pandas object. The corresponding writer functions are object methods that are accessed like DataFrame.to_csv(). Below is a table containing available readers and writers.


Format Type   Data Description        Reader            Writer
text          CSV                     read_csv          to_csv
text          Fixed-Width Text File   read_fwf
text          JSON                    read_json         to_json
text          HTML                    read_html         to_html
text          LaTeX                                     Styler.to_latex
text          XML                     read_xml          to_xml
text          Local clipboard         read_clipboard    to_clipboard
binary        MS Excel                read_excel        to_excel
binary        OpenDocument            read_excel
binary        HDF5 Format             read_hdf          to_hdf
binary        Feather Format          read_feather      to_feather
binary        Parquet Format          read_parquet      to_parquet
binary        ORC Format              read_orc          to_orc
binary        Stata                   read_stata        to_stata
binary        SAS                     read_sas
binary        SPSS                    read_spss
binary        Python Pickle Format    read_pickle       to_pickle
SQL           SQL                     read_sql          to_sql
SQL           Google BigQuery         read_gbq          to_gbq

Here is an informal performance comparison for some of these IO methods.

Note

For examples that use the StringIO class, make sure you import it with from io import StringIO for Python 3.

CSV & text files#

The workhorse function for reading text files (a.k.a. flat files) is read_csv(). See the cookbook for some advanced strategies.

Parsing options#

read_csv() accepts the following common arguments:

Basic#

filepath_or_buffer : various

Either a path to a file (a str, pathlib.Path, or py._path.local.LocalPath), URL (including http, ftp, and S3 locations), or any object with a read() method (such as an open file or StringIO).
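For illustration only (the path and URL below are hypothetical), the three kinds of input look like this:

import pandas as pd
from io import StringIO

# 1) a path on disk (hypothetical file name)
# df = pd.read_csv("data/measurements.csv")

# 2) a URL (hypothetical address)
# df = pd.read_csv("https://example.com/measurements.csv")

# 3) any object with a read() method, such as an in-memory buffer
df = pd.read_csv(StringIO("a,b\n1,2\n3,4"))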

sep : str, defaults to ',' for read_csv(), \t for read_table()

Delimiter to use. If sep is None, the C engine cannot automatically detect the separator, but the Python parsing engine can, meaning the latter will be used and automatically detect the separator by Python’s builtin sniffer tool, csv.Sniffer. In addition, separators longer than 1 character and different from '\s+' will be interpreted as regular expressions and will also force the use of the Python parsing engine. Note that regex delimiters are prone to ignoring quoted data. Regex example: '\\r\\t'.
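As a small illustrative sketch (not part of the original text), a multi-character separator is interpreted as a regular expression and requires the Python engine:

import pandas as pd
from io import StringIO

data = "a; b; c\n1; 2; 3\n4; 5; 6"
# A separator longer than one character is treated as a regex,
# so the Python parsing engine is used.
df = pd.read_csv(StringIO(data), sep=r";\s*", engine="python")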

delimiter : str, default None

Alternative argument name for sep.

delim_whitespace : boolean, default False

Specifies whether or not whitespace (e.g. ' ' or '\t') will be used as the delimiter. Equivalent to setting sep='\s+'. If this option is set to True, nothing should be passed in for the delimiter parameter.

Column and index locations and names#

header : int or list of ints, default 'infer'

Row number(s) to use as the column names, and the start of the data. Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0 and column names are inferred from the first line of the file, if column names are passed explicitly then the behavior is identical to header=None. Explicitly pass header=0 to be able to replace existing names.

The header can be a list of ints that specify row locations for a MultiIndex on the columns e.g. [0,1,3]. Intervening rows that are not specified will be skipped (e.g. 2 in this example is skipped). Note that this parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file.

names : array-like, default None

List of column names to use. If file contains no header row, then you should explicitly pass header=None. Duplicates in this list are not allowed.

index_col : int, str, sequence of int / str, or False, optional, default None

Column(s) to use as the row labels of the DataFrame, either given as string name or column index. If a sequence of int / str is given, a MultiIndex is used.

Note

index_col=False can be used to force pandas to not use the first column as the index, e.g. when you have a malformed file with delimiters at the end of each line.

The default value of None instructs pandas to guess. If the number of fields in the column header row is equal to the number of fields in the body of the data file, then a default index is used. If it is larger, then the first columns are used as index so that the remaining number of fields in the body are equal to the number of fields in the header.

The first row after the header is used to determine the number of columns, which will go into the index. If the subsequent rows contain fewer columns than the first row, they are filled with NaN.

This can be avoided through usecols. This ensures that the columns are taken as is and the trailing data are ignored.

usecols : list-like or callable, default None

Return a subset of the columns. If list-like, all elements must either be positional (i.e. integer indices into the document columns) or strings that correspond to column names provided either by the user in names or inferred from the document header row(s). If names are given, the document header row(s) are not taken into account. For example, a valid list-like usecols parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].

Element order is ignored, so usecols=[0, 1] is the same as [1, 0]. To instantiate a DataFrame from data with element order preserved use pd.read_csv(data, usecols=['foo', 'bar'])[['foo', 'bar']] for columns in ['foo', 'bar'] order or pd.read_csv(data, usecols=['foo', 'bar'])[['bar', 'foo']] for ['bar', 'foo'] order.

If callable, the callable function will be evaluated against the column names, returning names where the callable function evaluates to True:

In [1]: import pandas as pd In [2]: from io import StringIO In [3]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3" In [4]: pd.read_csv(StringIO(data)) Out[4]: col1 col2 col3 0 a b 1 1 a b 2 2 c d 3 In [5]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["COL1", "COL3"]) Out[5]: col1 col3 0 a 1 1 a 2 2 c 3

Using this parameter results in much faster parsing time and lower memory usage when using the c engine. The Python engine loads the data first before deciding which columns to drop.

squeeze : boolean, default False

If the parsed data only contains one column then return a Series.

Deprecated since version 1.4.0: Append .squeeze("columns") to the call to read_csv to squeeze the data.

prefix : str, default None

Prefix to add to column numbers when no header, e.g. ‘X’ for X0, X1, …

Deprecated since version 1.4.0: Use a list comprehension on the DataFrame’s columns after calling read_csv.

In [6]: data = "col1,col2,col3\na,b,1" In [7]: df = pd.read_csv(StringIO(data)) In [8]: df.columns = [f"pre_{col}" for col in df.columns] In [9]: df Out[9]: pre_col1 pre_col2 pre_col3 0 a b 1

mangle_dupe_cols : boolean, default True

Duplicate columns will be specified as ‘X’, ‘X.1’…’X.N’, rather than ‘X’…’X’. Passing in False will cause data to be overwritten if there are duplicate names in the columns.

Deprecated since version 1.5.0: The argument was never implemented, and a new argument where the renaming pattern can be specified will be added instead.

General parsing configuration#

dtype : Type name or dict of column -> type, default None

Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32, 'c': 'Int64'} Use str or object together with suitable na_values settings to preserve and not interpret dtype. If converters are specified, they will be applied INSTEAD of dtype conversion.

New in version 1.5.0: Support for defaultdict was added. Specify a defaultdict as input where the default determines the dtype of the columns which are not explicitly listed.

engine : {'c', 'python', 'pyarrow'}

Parser engine to use. The C and pyarrow engines are faster, while the python engine is currently more feature-complete. Multithreading is currently only supported by the pyarrow engine.

New in version 1.4.0: The “pyarrow” engine was added as an experimental engine, and some features are unsupported, or may not work correctly, with this engine.

converters : dict, default None

Dict of functions for converting values in certain columns. Keys can either be integers or column labels.

true_values : list, default None

Values to consider as True.

false_values : list, default None

Values to consider as False.

skipinitialspace : boolean, default False

Skip spaces after delimiter.

skiprows : list-like or integer, default None

Line numbers to skip (0-indexed) or number of lines to skip (int) at the start of the file.

If callable, the callable function will be evaluated against the row indices, returning True if the row should be skipped and False otherwise:

In [10]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3" In [11]: pd.read_csv(StringIO(data)) Out[11]: col1 col2 col3 0 a b 1 1 a b 2 2 c d 3 In [12]: pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0) Out[12]: col1 col2 col3 0 a b 2

skipfooter : int, default 0

Number of lines at bottom of file to skip (unsupported with engine=’c’).

nrows : int, default None

Number of rows of file to read. Useful for reading pieces of large files.
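A quick hedged sketch (the file name is hypothetical) showing nrows used to preview a large file:

import pandas as pd

# Read only the first 5 data rows of a large file.
preview = pd.read_csv("big_file.csv", nrows=5)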

low_memory : boolean, default True

Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with the dtype parameter. Note that the entire file is read into a single DataFrame regardless, use the chunksize or iterator parameter to return the data in chunks. (Only valid with C parser)

memory_map : boolean, default False

If a filepath is provided for filepath_or_buffer, map the file object directly onto memory and access the data directly from there. Using this option can improve performance because there is no longer any I/O overhead.
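A minimal sketch, assuming a real on-disk path (the file name is hypothetical), since memory mapping only applies when a path is given:

import pandas as pd

# Memory-map the file instead of reading it through buffered I/O.
df = pd.read_csv("big_file.csv", memory_map=True)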

NA and missing data handling#

na_values : scalar, str, list-like, or dict, default None

Additional strings to recognize as NA/NaN. If dict passed, specific per-column NA values. See na values const below for a list of the values interpreted as NaN by default.

keep_default_na : boolean, default True

Whether or not to include the default NaN values when parsing the data. Depending on whether na_values is passed in, the behavior is as follows:

  • If keep_default_na is True, and na_values are specified, na_values is appended to the default NaN values used for parsing.

  • If keep_default_na is True, and na_values are not specified, only the default NaN values are used for parsing.

  • If keep_default_na is False, and na_values are specified, only the NaN values specified na_values are used for parsing.

  • If keep_default_na is False, and na_values are not specified, no strings will be parsed as NaN.

Note that if na_filter is passed in as False, the keep_default_na and na_values parameters will be ignored.

na_filter : boolean, default True

Detect missing value markers (empty strings and the value of na_values). In data without any NAs, passing na_filter=False can improve the performance of reading a large file.

verbose : boolean, default False

Indicate number of NA values placed in non-numeric columns.

skip_blank_lines : boolean, default True

If True, skip over blank lines rather than interpreting as NaN values.

Datetime handling#

parse_dates : boolean or list of ints or names or list of lists or dict, default False
  • If True -> try parsing the index.

  • If [1, 2, 3] -> try parsing columns 1, 2, 3 each as a separate date column.

  • If [[1, 3]] -> combine columns 1 and 3 and parse as a single date column.

  • If {'foo': [1, 3]} -> parse columns 1, 3 as date and call result ‘foo’.

Note

A fast-path exists for iso8601-formatted dates.

infer_datetime_format : boolean, default False

If True and parse_dates is enabled for a column, attempt to infer the datetime format to speed up the processing.

keep_date_col : boolean, default False

If True and parse_dates specifies combining multiple columns then keep the original columns.

date_parser : function, default None

Function to use for converting a sequence of string columns to an array of datetime instances. The default uses dateutil.parser.parser to do the conversion. pandas will try to call date_parser in three different ways, advancing to the next if an exception occurs: 1) Pass one or more arrays (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the string values from the columns defined by parse_dates into a single array and pass that; and 3) call date_parser once for each row using one or more strings (corresponding to the columns defined by parse_dates) as arguments.

dayfirst : boolean, default False

DD/MM format dates, international and European format.

cache_dates : boolean, default True

If True, use a cache of unique, converted dates to apply the datetime conversion. May produce significant speed-up when parsing duplicate date strings, especially ones with timezone offsets.

New in version 0.25.0.

Iteration#

iterator : boolean, default False

Return TextFileReader object for iteration or getting chunks with get_chunk().

chunksize : int, default None

Return TextFileReader object for iteration. See iterating and chunking below.

Quoting, compression, and file format#

compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', 'zstd', None, dict}, default 'infer'

For on-the-fly decompression of on-disk data. If ‘infer’, then use gzip, bz2, zip, xz, or zstandard if filepath_or_buffer is path-like ending in ‘.gz’, ‘.bz2’, ‘.zip’, ‘.xz’, ‘.zst’, respectively, and no decompression otherwise. If using ‘zip’, the ZIP file must contain only one data file to be read in. Set to None for no decompression. Can also be a dict with key 'method' set to one of {'zip', 'gzip', 'bz2', 'zstd'} and other key-value pairs are forwarded to zipfile.ZipFile, gzip.GzipFile, bz2.BZ2File, or zstandard.ZstdDecompressor. As an example, the following could be passed for faster compression and to create a reproducible gzip archive: compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}.

Changed in version 1.1.0: dict option extended to support gzip and bz2.

Changed in version 1.2.0: Previous versions forwarded dict entries for ‘gzip’ to gzip.open.
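As a hedged sketch of the round trip (the file name is made up for the example), the '.gz' suffix lets the default compression='infer' pick gzip on both ends, and the dict form spells it out explicitly:

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

# Inferred from the ".gz" extension when writing and reading.
df.to_csv("example.csv.gz", index=False)
roundtrip = pd.read_csv("example.csv.gz")

# The same read, with the compression spelled out as a dict.
roundtrip = pd.read_csv("example.csv.gz", compression={"method": "gzip"})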

thousands : str, default None

Thousands separator.

decimal : str, default '.'

Character to recognize as decimal point. E.g. use ',' for European data.
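A small sketch (the data is made up) combining decimal with sep and thousands for European-style numbers:

import pandas as pd
from io import StringIO

data = "name;price\nwidget;1.234,56\ngadget;7,89"
# ',' is the decimal point and '.' is the thousands separator here.
df = pd.read_csv(StringIO(data), sep=";", decimal=",", thousands=".")
# df["price"] is float64: 1234.56 and 7.89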

float_precision : string, default None

Specifies which converter the C engine should use for floating-point values. The options are None for the ordinary converter, high for the high-precision converter, and round_trip for the round-trip converter.

lineterminator : str (length 1), default None

Character to break file into lines. Only valid with C parser.

quotechar : str (length 1)

The character used to denote the start and end of a quoted item. Quoted items can include the delimiter and it will be ignored.

quoting : int or csv.QUOTE_* instance, default 0

Control field quoting behavior per csv.QUOTE_* constants. Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).

doublequote : boolean, default True

When quotechar is specified and quoting is not QUOTE_NONE, indicate whether or not to interpret two consecutive quotechar elements inside a field as a single quotechar element.

escapechar : str (length 1), default None

One-character string used to escape delimiter when quoting is QUOTE_NONE.

comment : str, default None

Indicates remainder of line should not be parsed. If found at the beginning of a line, the line will be ignored altogether. This parameter must be a single character. Like empty lines (as long as skip_blank_lines=True), fully commented lines are ignored by the parameter header but not by skiprows. For example, if comment='#', parsing ‘#empty\na,b,c\n1,2,3’ with header=0 will result in ‘a,b,c’ being treated as the header.

encoding : str, default None

Encoding to use for UTF when reading/writing (e.g. 'utf-8'). List of Python standard encodings.

dialect : str or csv.Dialect instance, default None

If provided, this parameter will override values (default or not) for the following parameters: delimiter, doublequote, escapechar, skipinitialspace, quotechar, and quoting. If it is necessary to override values, a ParserWarning will be issued. See csv.Dialect documentation for more details.

Error handling#

error_bad_lines : boolean, optional, default None

Lines with too many fields (e.g. a csv line with too many commas) will by default cause an exception to be raised, and no DataFrame will be returned. If False, then these “bad lines” will be dropped from the DataFrame that is returned. See bad lines below.

Deprecated since version 1.3.0: Use the on_bad_lines parameter instead to specify behavior upon encountering a bad line.

warn_bad_lines : boolean, optional, default None

If error_bad_lines is False, and warn_bad_lines is True, a warning for each “bad line” will be output.

Deprecated since version 1.3.0: Use the on_bad_lines parameter instead to specify behavior upon encountering a bad line.

on_bad_lines : (‘error’, ‘warn’, ‘skip’), default ‘error’

Specifies what to do upon encountering a bad line (a line with too many fields). Allowed values are:

  • ‘error’, raise a ParserError when a bad line is encountered.

  • ‘warn’, print a warning when a bad line is encountered and skip that line.

  • ‘skip’, skip bad lines without raising or warning when they are encountered.

New in version 1.3.0.

Specifying column data types#

You can indicate the data type for the whole DataFrame or individual columns:

In [13]: import numpy as np In [14]: data = "a,b,c,d\n1,2,3,4\n5,6,7,8\n9,10,11" In [15]: print(data) a,b,c,d 1,2,3,4 5,6,7,8 9,10,11 In [16]: df = pd.read_csv(StringIO(data), dtype=object) In [17]: df Out[17]: a b c d 0 1 2 3 4 1 5 6 7 8 2 9 10 11 NaN In [18]: df["a"][0] Out[18]: '1' In [19]: df = pd.read_csv(StringIO(data), dtype={"b": object, "c": np.float64, "d": "Int64"}) In [20]: df.dtypes Out[20]: a int64 b object c float64 d Int64 dtype: object

Fortunately, pandas offers more than one way to ensure that your column(s) contain only one dtype. If you’re unfamiliar with these concepts, you can see here to learn more about dtypes, and here to learn more about object conversion in pandas.

For instance, you can use the converters argument of read_csv():

In [21]: data = "col_1\n1\n2\n'A'\n4.22" In [22]: df = pd.read_csv(StringIO(data), converters={"col_1": str}) In [23]: df Out[23]: col_1 0 1 1 2 2 'A' 3 4.22 In [24]: df["col_1"].apply(type).value_counts() Out[24]: 4 Name: col_1, dtype: int64

Or you can use the to_numeric() function to coerce the dtypes after reading in the data,

In [25]: df2 = pd.read_csv(StringIO(data)) In [26]: df2["col_1"] = pd.to_numeric(df2["col_1"], errors="coerce") In [27]: df2 Out[27]: col_1 0 1.00 1 2.00 2 NaN 3 4.22 In [28]: df2["col_1"].apply(type).value_counts() Out[28]: 4 Name: col_1, dtype: int64

which will convert all valid parsing to floats, leaving the invalid parsing as NaN.

Ultimately, how you deal with reading in columns containing mixed dtypes depends on your specific needs. In the case above, if you wanted to NaN out the data anomalies, then to_numeric() is probably your best option. However, if you wanted for all the data to be coerced, no matter the type, then using the converters argument of read_csv() would certainly be worth trying.

Note

In some cases, reading in abnormal data with columns containing mixed dtypes will result in an inconsistent dataset. If you rely on pandas to infer the dtypes of your columns, the parsing engine will go and infer the dtypes for different chunks of the data, rather than the whole dataset at once. Consequently, you can end up with column(s) with mixed dtypes. For example,

In [29]: col_1 = list(range(500000)) + ["a", "b"] + list(range(500000)) In [30]: df = pd.DataFrame({"col_1": col_1}) In [31]: df.to_csv("foo.csv") In [32]: mixed_df = pd.read_csv("foo.csv") In [33]: mixed_df["col_1"].apply(type).value_counts() Out[33]: 737858 262144 Name: col_1, dtype: int64 In [34]: mixed_df["col_1"].dtype Out[34]: dtype('O')

will result with mixed_df containing an int dtype for certain chunks of the column, and str for others due to the mixed dtypes from the data that was read in. It is important to note that the overall column will be marked with a dtype of object, which is used for columns with mixed dtypes.

Specifying categorical dtype#

Categorical columns can be parsed directly by specifying dtype='category' or dtype=CategoricalDtype(categories, ordered).

In [35]: data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3" In [36]: pd.read_csv(StringIO(data)) Out[36]: col1 col2 col3 0 a b 1 1 a b 2 2 c d 3 In [37]: pd.read_csv(StringIO(data)).dtypes Out[37]: col1 object col2 object col3 int64 dtype: object In [38]: pd.read_csv(StringIO(data), dtype="category").dtypes Out[38]: col1 category col2 category col3 category dtype: object

Individual columns can be parsed as a Categorical using a dict specification:

In [39]: pd.read_csv(StringIO(data), dtype={"col1": "category"}).dtypes Out[39]: col1 category col2 object col3 int64 dtype: object

Specifying dtype='category' will result in an unordered Categorical whose categories are the unique values observed in the data. For more control on the categories and order, create a CategoricalDtype ahead of time, and pass that for that column’s dtype.

In [40]: from pandas.api.types import CategoricalDtype In [41]: dtype = CategoricalDtype(["d", "c", "b", "a"], ordered=True) In [42]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).dtypes Out[42]: col1 category col2 object col3 int64 dtype: object

When using dtype=CategoricalDtype, “unexpected” values outside of dtype.categories are treated as missing values.

In [43]: dtype = CategoricalDtype(["a", "b", "d"]) # No 'c' In [44]: pd.read_csv(StringIO(data), dtype={"col1": dtype}).col1 Out[44]: 0 a 1 a 2 NaN Name: col1, dtype: category Categories (3, object): ['a', 'b', 'd']

This matches the behavior of Categorical.set_categories().

Note

With dtype='category', the resulting categories will always be parsed as strings (object dtype). If the categories are numeric they can be converted using the to_numeric() function, or as appropriate, another converter such as to_datetime().

When dtype is a CategoricalDtype with homogeneous categories ( all numeric, all datetimes, etc.), the conversion is done automatically.

In [45]: df = pd.read_csv(StringIO(data), dtype="category") In [46]: df.dtypes Out[46]: col1 category col2 category col3 category dtype: object In [47]: df["col3"] Out[47]: 0 1 1 2 2 3 Name: col3, dtype: category Categories (3, object): ['1', '2', '3'] In [48]: new_categories = pd.to_numeric(df["col3"].cat.categories) In [49]: df["col3"] = df["col3"].cat.rename_categories(new_categories) In [50]: df["col3"] Out[50]: 0 1 1 2 2 3 Name: col3, dtype: category Categories (3, int64): [1, 2, 3]

Naming and using columns#

Handling column names#

A file may or may not have a header row. pandas assumes the first row should be used as the column names:

In [51]: data = "a,b,c\n1,2,3\n4,5,6\n7,8,9" In [52]: print(data) a,b,c 1,2,3 4,5,6 7,8,9 In [53]: pd.read_csv(StringIO(data)) Out[53]: a b c 0 1 2 3 1 4 5 6 2 7 8 9

By specifying the names argument in conjunction with header you can indicate other names to use and whether or not to throw away the header row (if any):

In [54]: print(data) a,b,c 1,2,3 4,5,6 7,8,9 In [55]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=0) Out[55]: foo bar baz 0 1 2 3 1 4 5 6 2 7 8 9 In [56]: pd.read_csv(StringIO(data), names=["foo", "bar", "baz"], header=None) Out[56]: foo bar baz 0 a b c 1 1 2 3 2 4 5 6 3 7 8 9

If the header is in a row other than the first, pass the row number to header. This will skip the preceding rows:

In [57]: data = "skip this skip it\na,b,c\n1,2,3\n4,5,6\n7,8,9" In [58]: pd.read_csv(StringIO(data), header=1) Out[58]: a b c 0 1 2 3 1 4 5 6 2 7 8 9

Note

Default behavior is to infer the column names: if no names are passed the behavior is identical to header=0 and column names are inferred from the first non-blank line of the file, if column names are passed explicitly then the behavior is identical to header=None.

Duplicate names parsing#

Deprecated since version 1.5.0: mangle_dupe_cols was never implemented, and a new argument where the renaming pattern can be specified will be added instead.

If the file or header contains duplicate names, pandas will by default distinguish between them so as to prevent overwriting data:

In [59]: data = "a,b,a\n0,1,2\n3,4,5" In [60]: pd.read_csv(StringIO(data)) Out[60]: a b a.1 0 0 1 2 1 3 4 5

There is no more duplicate data because mangle_dupe_cols=True by default, which modifies a series of duplicate columns ‘X’, …, ‘X’ to become ‘X’, ‘X.1’, …, ‘X.N’.

Filtering columns (usecols)#

The usecols argument allows you to select any subset of the columns in a file, either using the column names, position numbers or a callable:

In [61]: data = "a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz" In [62]: pd.read_csv(StringIO(data)) Out[62]: a b c d 0 1 2 3 foo 1 4 5 6 bar 2 7 8 9 baz In [63]: pd.read_csv(StringIO(data), usecols=["b", "d"]) Out[63]: b d 0 2 foo 1 5 bar 2 8 baz In [64]: pd.read_csv(StringIO(data), usecols=[0, 2, 3]) Out[64]: a c d 0 1 3 foo 1 4 6 bar 2 7 9 baz In [65]: pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ["A", "C"]) Out[65]: a c 0 1 3 1 4 6 2 7 9

The usecols argument can also be used to specify which columns not to use in the final result:

In [66]: pd.read_csv(StringIO(data), usecols=lambda x: x not in ["a", "c"]) Out[66]: b d 0 2 foo 1 5 bar 2 8 baz

In this case, the callable is specifying that we exclude the “a” and “c” columns from the output.

Comments and empty lines#

Ignoring line comments and empty lines#

If the comment parameter is specified, then completely commented lines will be ignored. By default, completely blank lines will be ignored as well.

In [67]: data = "\na,b,c\n \n# commented line\n1,2,3\n\n4,5,6" In [68]: print(data) a,b,c # commented line 1,2,3 4,5,6 In [69]: pd.read_csv(StringIO(data), comment="#") Out[69]: a b c 0 1 2 3 1 4 5 6

If skip_blank_lines=False, then read_csv will not ignore blank lines:

In [70]: data = "a,b,c\n\n1,2,3\n\n\n4,5,6" In [71]: pd.read_csv(StringIO(data), skip_blank_lines=False) Out[71]: a b c 0 NaN NaN NaN 1 1.0 2.0 3.0 2 NaN NaN NaN 3 NaN NaN NaN 4 4.0 5.0 6.0

Warning

The presence of ignored lines might create ambiguities involving line numbers; the parameter header uses row numbers (ignoring commented/empty lines), while skiprows uses line numbers (including commented/empty lines):

In [72]: data = "#comment\na,b,c\nA,B,C\n1,2,3" In [73]: pd.read_csv(StringIO(data), comment="#", header=1) Out[73]: A B C 0 1 2 3 In [74]: data = "A,B,C\n#comment\na,b,c\n1,2,3" In [75]: pd.read_csv(StringIO(data), comment="#", skiprows=2) Out[75]: a b c 0 1 2 3

If both header and skiprows are specified, header will be relative to the end of skiprows. For example:

In [76]: data = ( ....: "# empty\n" ....: "# second empty line\n" ....: "# third emptyline\n" ....: "X,Y,Z\n" ....: "1,2,3\n" ....: "A,B,C\n" ....: "1,2.,4.\n" ....: "5.,NaN,10.0\n" ....: ) ....: In [77]: print(data) # empty # second empty line # third emptyline X,Y,Z 1,2,3 A,B,C 1,2.,4. 5.,NaN,10.0 In [78]: pd.read_csv(StringIO(data), comment="#", skiprows=4, header=1) Out[78]: A B C 0 1.0 2.0 4.0 1 5.0 NaN 10.0

Dealing with Unicode data#

The encoding argument should be used for encoded unicode data, which will result in byte strings being decoded to unicode in the result:

In [84]: from io import BytesIO In [85]: data = b"word,length\n" b"Tr\xc3\xa4umen,7\n" b"Gr\xc3\xbc\xc3\x9fe,5" In [86]: data = data.decode("utf8").encode("latin-1") In [87]: df = pd.read_csv(BytesIO(data), encoding="latin-1") In [88]: df Out[88]: word length 0 Träumen 7 1 Grüße 5 In [89]: df["word"][1] Out[89]: 'Grüße'

Some formats which encode all characters as multiple bytes, like UTF-16, won’t parse correctly at all without specifying the encoding. Full list of Python standard encodings.

Index columns and trailing delimiters#

If a file has one more column of data than the number of column names, the first column will be used as the DataFrame’s row names:

In [90]: data = "a,b,c\n4,apple,bat,5.7\n8,orange,cow,10" In [91]: pd.read_csv(StringIO(data)) Out[91]: a b c 4 apple bat 5.7 8 orange cow 10.0

In [92]: data = "index,a,b,c\n4,apple,bat,5.7\n8,orange,cow,10" In [93]: pd.read_csv(StringIO(data), index_col=0) Out[93]: a b c index 4 apple bat 5.7 8 orange cow 10.0

Ordinarily, you can achieve this behavior using the index_col option.

There are some exception cases when a file has been prepared with delimiters at the end of each data line, confusing the parser. To explicitly disable the index column inference and discard the last column, pass index_col=False:

In [94]: data = "a,b,c\n4,apple,bat,\n8,orange,cow," In [95]: print(data) a,b,c 4,apple,bat, 8,orange,cow, In [96]: pd.read_csv(StringIO(data)) Out[96]: a b c 4 apple bat NaN 8 orange cow NaN In [97]: pd.read_csv(StringIO(data), index_col=False) Out[97]: a b c 0 4 apple bat 1 8 orange cow

If a subset of data is being parsed using the usecols option, the index_col specification is based on that subset, not the original data.

In [98]: data = "a,b,c\n4,apple,bat,\n8,orange,cow," In [99]: print(data) a,b,c 4,apple,bat, 8,orange,cow, In [100]: pd.read_csv(StringIO(data), usecols=["b", "c"]) Out[100]: b c 4 bat NaN 8 cow NaN In [101]: pd.read_csv(StringIO(data), usecols=["b", "c"], index_col=0) Out[101]: b c 4 bat NaN 8 cow NaN

Date Handling#

Specifying date columns#

To better facilitate working with datetime data, read_csv() uses the keyword arguments parse_dates and date_parser to allow users to specify a variety of columns and date/time formats to turn the input text data into datetime objects.

The simplest case is to just pass in parse_dates=True:

In [102]: with open("foo.csv", mode="w") as f: .....: f.write("date,A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5") .....: # Use a column as an index, and parse it as dates. In [103]: df = pd.read_csv("foo.csv", index_col=0, parse_dates=True) In [104]: df Out[104]: A B C date 2009-01-01 a 1 2 2009-01-02 b 3 4 2009-01-03 c 4 5 # These are Python datetime objects In [105]: df.index Out[105]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', name='date', freq=None)

It is often the case that we may want to store date and time data separately, or store various date fields separately. The parse_dates keyword can be used to specify a combination of columns to parse the dates and/or times from.

You can specify a list of column lists to parse_dates, the resulting date columns will be prepended to the output (so as to not affect the existing column order) and the new column names will be the concatenation of the component column names:

In [106]: data = ( .....: "KORD,19990127, 19:00:00, 18:56:00, 0.8100\n" .....: "KORD,19990127, 20:00:00, 19:56:00, 0.0100\n" .....: "KORD,19990127, 21:00:00, 20:56:00, -0.5900\n" .....: "KORD,19990127, 21:00:00, 21:18:00, -0.9900\n" .....: "KORD,19990127, 22:00:00, 21:56:00, -0.5900\n" .....: "KORD,19990127, 23:00:00, 22:56:00, -0.5900" .....: ) .....: In [107]: with open("tmp.csv", "w") as fh: .....: fh.write(data) .....: In [108]: df = pd.read_csv("tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]]) In [109]: df Out[109]: 1_2 1_3 0 4 0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81 1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01 2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59 3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99 4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59 5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59

By default the parser removes the component date columns, but you can choose to retain them via the keep_date_col keyword:

In [110]: df = pd.read_csv( .....: "tmp.csv", header=None, parse_dates=[[1, 2], [1, 3]], keep_date_col=True .....: ) .....: In [111]: df Out[111]: 1_2 1_3 0 ... 2 3 4 0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD ... 19:00:00 18:56:00 0.81 1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD ... 20:00:00 19:56:00 0.01 2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD ... 21:00:00 20:56:00 -0.59 3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD ... 21:00:00 21:18:00 -0.99 4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD ... 22:00:00 21:56:00 -0.59 5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD ... 23:00:00 22:56:00 -0.59 [6 rows x 7 columns]

Note that if you wish to combine multiple columns into a single date column, a nested list must be used. In other words, parse_dates=[1, 2] indicates that the second and third columns should each be parsed as separate date columns while parse_dates=[[1, 2]] means the two columns should be parsed into a single column.

You can also use a dict to specify custom name columns:

In [112]: date_spec = {"nominal": [1, 2], "actual": [1, 3]} In [113]: df = pd.read_csv("tmp.csv", header=None, parse_dates=date_spec) In [114]: df Out[114]: nominal actual 0 4 0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81 1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01 2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59 3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99 4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59 5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59

It is important to remember that if multiple text columns are to be parsed into a single date column, then a new column is prepended to the data. The index_col specification is based off of this new set of columns rather than the original data columns:

In [115]: date_spec = {"nominal": [1, 2], "actual": [1, 3]} In [116]: df = pd.read_csv( .....: "tmp.csv", header=None, parse_dates=date_spec, index_col=0 .....: ) # index is the nominal column .....: In [117]: df Out[117]: actual 0 4 nominal 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59

Note

If a column or index contains an unparsable date, the entire column or index will be returned unaltered as an object data type. For non-standard datetime parsing, use to_datetime() after pd.read_csv.
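A minimal sketch of that approach (the column name and format string are illustrative):

import pandas as pd
from io import StringIO

data = "when,value\n27 Jan 1999,0.81\n28 Jan 1999,0.01"
df = pd.read_csv(StringIO(data))  # "when" is read as object dtype
# Convert afterwards with an explicit, non-standard format.
df["when"] = pd.to_datetime(df["when"], format="%d %b %Y")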

Note

read_csv has a fast_path for parsing datetime strings in iso8601 format, e.g “2000-01-01T00:01:02+00:00” and similar variations. If you can arrange for your data to store datetimes in this format, load times will be significantly faster, ~20x has been observed.

Date parsing functions#

Finally, the parser allows you to specify a custom date_parser function to take full advantage of the flexibility of the date parsing API:

In [118]: df = pd.read_csv( .....: "tmp.csv", header=None, parse_dates=date_spec, date_parser=pd.to_datetime .....: ) .....: In [119]: df Out[119]: nominal actual 0 4 0 1999-01-27 19:00:00 1999-01-27 18:56:00 KORD 0.81 1 1999-01-27 20:00:00 1999-01-27 19:56:00 KORD 0.01 2 1999-01-27 21:00:00 1999-01-27 20:56:00 KORD -0.59 3 1999-01-27 21:00:00 1999-01-27 21:18:00 KORD -0.99 4 1999-01-27 22:00:00 1999-01-27 21:56:00 KORD -0.59 5 1999-01-27 23:00:00 1999-01-27 22:56:00 KORD -0.59

pandas will try to call the date_parser function in three different ways. If an exception is raised, the next one is tried:

  1. date_parser is first called with one or more arrays as arguments, as defined using parse_dates (e.g., date_parser(['2013', '2013'], ['1', '2'])).

  2. If #1 fails, date_parser is called with all the columns concatenated row-wise into a single array (e.g., date_parser(['2013 1', '2013 2'])).

  3. If #2 fails, date_parser is called once for every row with one or more string arguments from the columns indicated with parse_dates (e.g., date_parser('2013', '1') for the first row, date_parser('2013', '2') for the second, etc.).

Note that performance-wise, you should try these methods of parsing dates in order:

  1. Try to infer the format using infer_datetime_format=True (see section below).

  2. If you know the format, use pd.to_datetime(): date_parser=lambda x: pd.to_datetime(x, format=...).

  3. If you have a really non-standard format, use a custom date_parser function. For optimal performance, this should be vectorized, i.e., it should accept arrays as arguments.

Parsing a CSV with mixed timezones#

pandas cannot natively represent a column or index with mixed timezones. If your CSV file contains columns with a mixture of timezones, the default result will be an object-dtype column with strings, even with parse_dates.

In [120]: content = """\ .....: a .....: 2000-01-01T00:00:00+05:00 .....: 2000-01-01T00:00:00+06:00""" .....: In [121]: df = pd.read_csv(StringIO(content), parse_dates=["a"]) In [122]: df["a"] Out[122]: 0 2000-01-01 00:00:00+05:00 1 2000-01-01 00:00:00+06:00 Name: a, dtype: object

To parse the mixed-timezone values as a datetime column, pass a partially-applied to_datetime() with utc=True as the date_parser.

In [123]: df = pd.read_csv( .....: StringIO(content), .....: parse_dates=["a"], .....: date_parser=lambda col: pd.to_datetime(col, utc=True), .....: ) .....: In [124]: df["a"] Out[124]: 0 1999-12-31 19:00:00+00:00 1 1999-12-31 18:00:00+00:00 Name: a, dtype: datetime64[ns, UTC]

Inferring datetime format#

If you have parse_dates enabled for some or all of your columns, and your datetime strings are all formatted the same way, you may get a large speed up by setting infer_datetime_format=True. If set, pandas will attempt to guess the format of your datetime strings, and then use a faster means of parsing the strings. 5-10x parsing speeds have been observed. pandas will fallback to the usual parsing if either the format cannot be guessed or the format that was guessed cannot properly parse the entire column of strings. So in general, infer_datetime_format should not have any negative consequences if enabled.

Here are some examples of datetime strings that can be guessed (All representing December 30th, 2011 at 00:00:00):

  • “20111230”

  • “2011/12/30”

  • “20111230 00:00:00”

  • “12/30/2011 00:00:00”

  • “30/Dec/2011 00:00:00”

  • “30/December/2011 00:00:00”

Note that infer_datetime_format is sensitive to dayfirst. With dayfirst=True, it will guess “01/12/2011” to be December 1st. With dayfirst=False (default) it will guess “01/12/2011” to be January 12th.

# Try to infer the format for the index column In [125]: df = pd.read_csv( .....: "foo.csv", .....: index_col=0, .....: parse_dates=True, .....: infer_datetime_format=True, .....: ) .....: In [126]: df Out[126]: A B C date 2009-01-01 a 1 2 2009-01-02 b 3 4 2009-01-03 c 4 5

International date formats#

While US date formats tend to be MM/DD/YYYY, many international formats use DD/MM/YYYY instead. For convenience, a dayfirst keyword is provided:

In [127]: data = "date,value,cat\n1/6/2000,5,a\n2/6/2000,10,b\n3/6/2000,15,c" In [128]: print(data) date,value,cat 1/6/2000,5,a 2/6/2000,10,b 3/6/2000,15,c In [129]: with open("tmp.csv", "w") as fh: .....: fh.write(data) .....: In [130]: pd.read_csv("tmp.csv", parse_dates=[0]) Out[130]: date value cat 0 2000-01-06 5 a 1 2000-02-06 10 b 2 2000-03-06 15 c In [131]: pd.read_csv("tmp.csv", dayfirst=True, parse_dates=[0]) Out[131]: date value cat 0 2000-06-01 5 a 1 2000-06-02 10 b 2 2000-06-03 15 c

Writing CSVs to binary file objects#

New in version 1.2.0.

df.to_csv(..., mode="wb") allows writing a CSV to a file object opened in binary mode. In most cases, it is not necessary to specify mode, as pandas will auto-detect whether the file object is opened in text or binary mode.

In [132]: import io In [133]: data = pd.DataFrame([0, 1, 2]) In [134]: buffer = io.BytesIO() In [135]: data.to_csv(buffer, encoding="utf-8", compression="gzip")

Specifying method for floating-point conversion#

The parameter float_precision can be specified in order to use a specific floating-point converter during parsing with the C engine. The options are the ordinary converter, the high-precision converter, and the round-trip converter (which is guaranteed to round-trip values after writing to a file). For example:

In [136]: val = "0.3066101993807095471566981359501369297504425048828125" In [137]: data = "a,b,c\n1,2,{0}".format(val) In [138]: abs( .....: pd.read_csv( .....: StringIO(data), .....: engine="c", .....: float_precision=None, .....: )["c"][0] - float(val) .....: ) .....: Out[138]: 5.551115123125783e-17 In [139]: abs( .....: pd.read_csv( .....: StringIO(data), .....: engine="c", .....: float_precision="high", .....: )["c"][0] - float(val) .....: ) .....: Out[139]: 5.551115123125783e-17 In [140]: abs( .....: pd.read_csv(StringIO(data), engine="c", float_precision="round_trip")["c"][0] .....: - float(val) .....: ) .....: Out[140]: 0.0

Thousand separators#

For large numbers that have been written with a thousands separator, you can set the thousands keyword to a string of length 1 so that integers will be parsed correctly:

By default, numbers with a thousands separator will be parsed as strings:

In [141]: data = ( .....: "ID|level|category\n" .....: "Patient1|123,000|x\n" .....: "Patient2|23,000|y\n" .....: "Patient3|1,234,018|z" .....: ) .....: In [142]: with open("tmp.csv", "w") as fh: .....: fh.write(data) .....: In [143]: df = pd.read_csv("tmp.csv", sep="|") In [144]: df Out[144]: ID level category 0 Patient1 123,000 x 1 Patient2 23,000 y 2 Patient3 1,234,018 z In [145]: df.level.dtype Out[145]: dtype('O')

The thousands keyword allows integers to be parsed correctly:

In [146]: df = pd.read_csv("tmp.csv", sep="|", thousands=",") In [147]: df Out[147]: ID level category 0 Patient1 123000 x 1 Patient2 23000 y 2 Patient3 1234018 z In [148]: df.level.dtype Out[148]: dtype('int64')

NA values#

To control which values are parsed as missing values (which are signified by NaN), specify a string in na_values. If you specify a list of strings, then all values in it are considered to be missing values. If you specify a number (a float, like 5.0 or an integer like 5), the corresponding equivalent values will also imply a missing value (in this case effectively [5.0, 5] are recognized as NaN).

To completely override the default values that are recognized as missing, specify keep_default_na=False.

The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A', 'n/a', 'NA', '<NA>', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].

Let us consider some examples:

pd.read_csv("path_to_file.csv", na_values=[5])

In the example above 5 and 5.0 will be recognized as NaN, in addition to the defaults. A string will first be interpreted as a numerical 5, then as a NaN.

pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=[""])

Above, only an empty field will be recognized as NaN.

pd.read_csv("path_to_file.csv", keep_default_na=False, na_values=["NA", "0"])

Above, both NA and 0 as strings are NaN.

pd.read_csv("path_to_file.csv", na_values=["Nope"])

The default values, in addition to the string "Nope", are recognized as NaN.

Infinity#

inf like values will be parsed as np.inf (positive infinity), and -inf as -np.inf (negative infinity). These will ignore the case of the value, meaning Inf will also be parsed as np.inf.
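A tiny hedged example of this behavior:

import pandas as pd
from io import StringIO

data = "a\ninf\n-Inf\n1.5"
df = pd.read_csv(StringIO(data))
# df["a"] is float64 and contains inf, -inf and 1.5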

Returning Series#

Using the squeeze keyword, the parser will return output with a single column as a Series:

Deprecated since version 1.4.0: Users should append .squeeze("columns") to the DataFrame returned by read_csv instead.

In [149]: data = "level\nPatient1,123000\nPatient2,23000\nPatient3,1234018" In [150]: with open("tmp.csv", "w") as fh: .....: fh.write(data) .....: In [151]: print(open("tmp.csv").read()) level Patient1,123000 Patient2,23000 Patient3,1234018 In [152]: output = pd.read_csv("tmp.csv", squeeze=True) In [153]: output Out[153]: Patient1 123000 Patient2 23000 Patient3 1234018 Name: level, dtype: int64 In [154]: type(output) Out[154]: pandas.core.series.Series

Boolean values#

The common values True, False, TRUE, and FALSE are all recognized as boolean. Occasionally you might want to recognize other values as being boolean. To do this, use the true_values and false_values options as follows:

In [155]: data = "a,b,c\n1,Yes,2\n3,No,4" In [156]: print(data) a,b,c 1,Yes,2 3,No,4 In [157]: pd.read_csv(StringIO(data)) Out[157]: a b c 0 1 Yes 2 1 3 No 4 In [158]: pd.read_csv(StringIO(data), true_values=["Yes"], false_values=["No"]) Out[158]: a b c 0 1 True 2 1 3 False 4

Handling “bad” lines#

Some files may have malformed lines with too few fields or too many. Lines with too few fields will have NA values filled in the trailing fields. Lines with too many fields will raise an error by default:

In [159]: data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10" In [160]: pd.read_csv(StringIO(data)) --------------------------------------------------------------------------- ParserError Traceback (most recent call last) Cell In [160], line 1 ----> 1 pd.read_csv(StringIO(data)) File ~/work/pandas/pandas/pandas/util/_decorators.py:211, in deprecate_kwarg.._deprecate_kwarg..wrapper(*args, **kwargs) 209 else: 210 kwargs[new_arg_name] = new_arg_value --> 211 return func(*args, **kwargs) File ~/work/pandas/pandas/pandas/util/_decorators.py:317, in deprecate_nonkeyword_arguments..decorate..wrapper(*args, **kwargs) 311 if len(args) > num_allow_args: 312 warnings.warn( 313 msg.format(arguments=arguments), 314 FutureWarning, 315 stacklevel=find_stack_level(inspect.currentframe()), 316 ) --> 317 return func(*args, **kwargs) File ~/work/pandas/pandas/pandas/io/parsers/readers.py:950, in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, encoding_errors, dialect, error_bad_lines, warn_bad_lines, on_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options) 935 kwds_defaults = _refine_defaults_read( 936 dialect, 937 delimiter, (...) 946 defaults={"delimiter": ","}, 947 ) 948 kwds.update(kwds_defaults) --> 950 return _read(filepath_or_buffer, kwds) File ~/work/pandas/pandas/pandas/io/parsers/readers.py:611, in _read(filepath_or_buffer, kwds) 608 return parser 610 with parser: --> 611 return parser.read(nrows) File ~/work/pandas/pandas/pandas/io/parsers/readers.py:1772, in TextFileReader.read(self, nrows) 1765 nrows = validate_integer("nrows", nrows) 1766 try: 1767 # error: "ParserBase" has no attribute "read" 1768 ( 1769 index, 1770 columns, 1771 col_dict, -> 1772 ) = self._engine.read( # type: ignore[attr-defined] 1773 nrows 1774 ) 1775 except Exception: 1776 self.close() File ~/work/pandas/pandas/pandas/io/parsers/c_parser_wrapper.py:243, in CParserWrapper.read(self, nrows) 241 try: 242 if self.low_memory: --> 243 chunks = self._reader.read_low_memory(nrows) 244 # destructive to chunks 245 data = _concatenate_chunks(chunks) File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:808, in pandas._libs.parsers.TextReader.read_low_memory() File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:866, in pandas._libs.parsers.TextReader._read_rows() File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:852, in pandas._libs.parsers.TextReader._tokenize_rows() File ~/work/pandas/pandas/pandas/_libs/parsers.pyx:1973, in pandas._libs.parsers.raise_parser_error() ParserError: Error tokenizing data. C error: Expected 3 fields in line 3, saw 4

You can elect to skip bad lines:

In [29]: pd.read_csv(StringIO(data), on_bad_lines="warn") Skipping line 3: expected 3 fields, saw 4 Out[29]: a b c 0 1 2 3 1 8 9 10

Or pass a callable function to handle the bad line if engine="python". The bad line will be a list of strings that was split by the sep:

In [29]: external_list = [] In [30]: def bad_lines_func(line): ...: external_list.append(line) ...: return line[-3:] In [31]: pd.read_csv(StringIO(data), on_bad_lines=bad_lines_func, engine="python") Out[31]: a b c 0 1 2 3 1 5 6 7 2 8 9 10 In [32]: external_list Out[32]: [4, 5, 6, 7]

New in version 1.4.0.

You can also use the usecols parameter to eliminate extraneous column data that appear in some lines but not others:

In [33]: pd.read_csv(StringIO(data), usecols=[0, 1, 2]) Out[33]: a b c 0 1 2 3 1 4 5 6 2 8 9 10

In case you want to keep all data including the lines with too many fields, you can specify a sufficient number of names. This ensures that lines with not enough fields are filled with NaN.

In [34]: pd.read_csv(StringIO(data), names=['a', 'b', 'c', 'd']) Out[34]: a b c d 0 1 2 3 NaN 1 4 5 6 7 2 8 9 10 NaN

Dialect#

The dialect keyword gives greater flexibility in specifying the file format. By default it uses the Excel dialect but you can specify either the dialect name or a csv.Dialect instance.

Suppose you had data with unenclosed quotes:

In [161]: data = "label1,label2,label3\n" 'index1,"a,c,e\n' "index2,b,d,f" In [162]: print(data) label1,label2,label3 index1,"a,c,e index2,b,d,f

By default, read_csv uses the Excel dialect and treats the double quote as the quote character, which causes it to fail when it finds a newline before it finds the closing double quote.

We can get around this using dialect:

In [163]: import csv In [164]: dia = csv.excel() In [165]: dia.quoting = csv.QUOTE_NONE In [166]: pd.read_csv(StringIO(data), dialect=dia) Out[166]: label1 label2 label3 index1 "a c e index2 b d f

All of the dialect options can be specified separately by keyword arguments:

In [167]: data = "a,b,c~1,2,3~4,5,6" In [168]: pd.read_csv(StringIO(data), lineterminator="~") Out[168]: a b c 0 1 2 3 1 4 5 6

Another common dialect option is skipinitialspace, to skip any whitespace after a delimiter:

In [169]: data = "a, b, c\n1, 2, 3\n4, 5, 6" In [170]: print(data) a, b, c 1, 2, 3 4, 5, 6 In [171]: pd.read_csv(StringIO(data), skipinitialspace=True) Out[171]: a b c 0 1 2 3 1 4 5 6

The parsers make every attempt to “do the right thing” and not be fragile. Type inference is a pretty big deal. If a column can be coerced to integer dtype without altering the contents, the parser will do so. Any non-numeric columns will come through as object dtype as with the rest of pandas objects.

Quoting and Escape Characters#

Quotes (and other escape characters) in embedded fields can be handled in any number of ways. One way is to use backslashes; to properly parse this data, you should pass the escapechar option:

In [172]: data = 'a,b\n"hello, \\"Bob\\", nice to see you",5' In [173]: print(data) a,b "hello, \"Bob\", nice to see you",5 In [174]: pd.read_csv(StringIO(data), escapechar="\\") Out[174]: a b 0 hello, "Bob", nice to see you 5

Files with fixed width columns#

While read_csv() reads delimited data, the read_fwf() function works with data files that have known and fixed column widths. The function parameters to read_fwf are largely the same as read_csv with two extra parameters, and a different usage of the delimiter parameter:

  • colspecs: A list of pairs (tuples) giving the extents of the fixed-width fields of each line as half-open intervals (i.e., [from, to[ ). String value ‘infer’ can be used to instruct the parser to try detecting the column specifications from the first 100 rows of the data. Default behavior, if not specified, is to infer.

  • widths: A list of field widths which can be used instead of ‘colspecs’ if the intervals are contiguous.

  • delimiter: Characters to consider as filler characters in the fixed-width file. Can be used to specify the filler character of the fields if it is not spaces (e.g., ‘~’).

Consider a typical fixed-width data file:

In [175]: data1 = ( .....: "id8141 360.242940 149.910199 11950.7\n" .....: "id1594 444.953632 166.985655 11788.4\n" .....: "id1849 364.136849 183.628767 11806.2\n" .....: "id1230 413.836124 184.375703 11916.8\n" .....: "id1948 502.953953 173.237159 12468.3" .....: ) .....: In [176]: with open("bar.csv", "w") as f: .....: f.write(data1) .....:

In order to parse this file into a DataFrame, we simply need to supply the column specifications to the read_fwf function along with the file name:

# Column specifications are a list of half-intervals In [177]: colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)] In [178]: df = pd.read_fwf("bar.csv", colspecs=colspecs, header=None, index_col=0) In [179]: df Out[179]: 1 2 3 0 id8141 360.242940 149.910199 11950.7 id1594 444.953632 166.985655 11788.4 id1849 364.136849 183.628767 11806.2 id1230 413.836124 184.375703 11916.8 id1948 502.953953 173.237159 12468.3

Note how the parser automatically picks column names X.<N> when the header=None argument is specified. Alternatively, you can supply just the column widths for contiguous columns:

# Widths are a list of integers In [180]: widths = [6, 14, 13, 10] In [181]: df = pd.read_fwf("bar.csv", widths=widths, header=None) In [182]: df Out[182]: 0 1 2 3 0 id8141 360.242940 149.910199 11950.7 1 id1594 444.953632 166.985655 11788.4 2 id1849 364.136849 183.628767 11806.2 3 id1230 413.836124 184.375703 11916.8 4 id1948 502.953953 173.237159 12468.3

The parser will take care of extra white spaces around the columns so it’s ok to have extra separation between the columns in the file.

By default, read_fwf will try to infer the file’s colspecs by using the first 100 rows of the file. It can do it only in cases when the columns are aligned and correctly separated by the provided delimiter (default delimiter is whitespace).

In [183]: df = pd.read_fwf("bar.csv", header=None, index_col=0) In [184]: df Out[184]: 1 2 3 0 id8141 360.242940 149.910199 11950.7 id1594 444.953632 166.985655 11788.4 id1849 364.136849 183.628767 11806.2 id1230 413.836124 184.375703 11916.8 id1948 502.953953 173.237159 12468.3

read_fwf supports the dtype parameter for specifying the types of parsed columns to be different from the inferred type.

In [185]: pd.read_fwf("bar.csv", header=None, index_col=0).dtypes Out[185]: 1 float64 2 float64 3 float64 dtype: object In [186]: pd.read_fwf("bar.csv", header=None, dtype={2: "object"}).dtypes Out[186]: 0 object 1 float64 2 object 3 float64 dtype: object

Indexes#

Files with an “implicit” index column#

Consider a file with one less entry in the header than the number of data columns:

In [187]: data = "A,B,C\n20090101,a,1,2\n20090102,b,3,4\n20090103,c,4,5" In [188]: print(data) A,B,C 20090101,a,1,2 20090102,b,3,4 20090103,c,4,5 In [189]: with open("foo.csv", "w") as f: .....: f.write(data) .....:

In this special case, read_csv assumes that the first column is to be used as the index of the DataFrame:

In [190]: pd.read_csv("foo.csv") Out[190]: A B C 20090101 a 1 2 20090102 b 3 4 20090103 c 4 5

Note that the dates weren’t automatically parsed. In that case you would need to do as before:

In [191]: df = pd.read_csv("foo.csv", parse_dates=True) In [192]: df.index Out[192]: DatetimeIndex(['2009-01-01', '2009-01-02', '2009-01-03'], dtype='datetime64[ns]', freq=None)

Reading an index with a MultiIndex#

Suppose you have data indexed by two columns:

In [193]: data = 'year,indiv,zit,xit\n1977,"A",1.2,.6\n1977,"B",1.5,.5' In [194]: print(data) year,indiv,zit,xit 1977,"A",1.2,.6 1977,"B",1.5,.5 In [195]: with open("mindex_ex.csv", mode="w") as f: .....: f.write(data) .....:

The index_col argument to read_csv can take a list of column numbers to turn multiple columns into a MultiIndex for the index of the returned object:

In [196]: df = pd.read_csv("mindex_ex.csv", index_col=[0, 1]) In [197]: df Out[197]: zit xit year indiv 1977 A 1.2 0.6 B 1.5 0.5 In [198]: df.loc[1977] Out[198]: zit xit indiv A 1.2 0.6 B 1.5 0.5

Reading columns with a MultiIndex#

By specifying a list of row locations for the header argument, you can read in a MultiIndex for the columns. Specifying non-consecutive rows will skip the intervening rows.

In [199]: from pandas._testing import makeCustomDataframe as mkdf In [200]: df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4) In [201]: df.to_csv("mi.csv") In [202]: print(open("mi.csv").read()) C0,,C_l0_g0,C_l0_g1,C_l0_g2 C1,,C_l1_g0,C_l1_g1,C_l1_g2 C2,,C_l2_g0,C_l2_g1,C_l2_g2 C3,,C_l3_g0,C_l3_g1,C_l3_g2 R0,R1,,, R_l0_g0,R_l1_g0,R0C0,R0C1,R0C2 R_l0_g1,R_l1_g1,R1C0,R1C1,R1C2 R_l0_g2,R_l1_g2,R2C0,R2C1,R2C2 R_l0_g3,R_l1_g3,R3C0,R3C1,R3C2 R_l0_g4,R_l1_g4,R4C0,R4C1,R4C2 In [203]: pd.read_csv("mi.csv", header=[0, 1, 2, 3], index_col=[0, 1]) Out[203]: C0 C_l0_g0 C_l0_g1 C_l0_g2 C1 C_l1_g0 C_l1_g1 C_l1_g2 C2 C_l2_g0 C_l2_g1 C_l2_g2 C3 C_l3_g0 C_l3_g1 C_l3_g2 R0 R1 R_l0_g0 R_l1_g0 R0C0 R0C1 R0C2 R_l0_g1 R_l1_g1 R1C0 R1C1 R1C2 R_l0_g2 R_l1_g2 R2C0 R2C1 R2C2 R_l0_g3 R_l1_g3 R3C0 R3C1 R3C2 R_l0_g4 R_l1_g4 R4C0 R4C1 R4C2

read_csv is also able to interpret a more common format of multi-columns indices.

In [204]: data = ",a,a,a,b,c,c\n,q,r,s,t,u,v\none,1,2,3,4,5,6\ntwo,7,8,9,10,11,12" In [205]: print(data) ,a,a,a,b,c,c ,q,r,s,t,u,v one,1,2,3,4,5,6 two,7,8,9,10,11,12 In [206]: with open("mi2.csv", "w") as fh: .....: fh.write(data) .....: In [207]: pd.read_csv("mi2.csv", header=[0, 1], index_col=0) Out[207]: a b c q r s t u v one 1 2 3 4 5 6 two 7 8 9 10 11 12

Note

If an index_col is not specified (e.g. you don’t have an index, or wrote it with df.to_csv(..., index=False)), then any names on the columns index will be lost.

Automatically “sniffing” the delimiter#

read_csv is capable of inferring delimited (not necessarily comma-separated) files, as pandas uses the csv.Sniffer class of the csv module. For this, you have to specify sep=None.

In [208]: df = pd.DataFrame(np.random.randn(10, 4)) In [209]: df.to_csv("tmp.csv", sep="|") In [210]: df.to_csv("tmp2.csv", sep=":") In [211]: pd.read_csv("tmp2.csv", sep=None, engine="python") Out[211]: Unnamed: 0 0 1 2 3 0 0 0.469112 -0.282863 -1.509059 -1.135632 1 1 1.212112 -0.173215 0.119209 -1.044236 2 2 -0.861849 -2.104569 -0.494929 1.071804 3 3 0.721555 -0.706771 -1.039575 0.271860 4 4 -0.424972 0.567020 0.276232 -1.087401 5 5 -0.673690 0.113648 -1.478427 0.524988 6 6 0.404705 0.577046 -1.715002 -1.039268 7 7 -0.370647 -1.157892 -1.344312 0.844885 8 8 1.075770 -0.109050 1.643563 -1.469388 9 9 0.357021 -0.674600 -1.776904 -0.968914

Reading multiple files to create a single DataFrame#

It’s best to use concat() to combine multiple files. See the cookbook for an example.
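A minimal sketch, assuming the files share the same columns and live under a hypothetical data/ directory:

import glob

import pandas as pd

# read every matching CSV and stack the pieces into a single DataFrame
files = sorted(glob.glob("data/part-*.csv"))
df = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)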

Iterating through files chunk by chunk#

Suppose you wish to iterate through a (potentially very large) file lazily rather than reading the entire file into memory, such as the following:

In [212]: df = pd.DataFrame(np.random.randn(10, 4)) In [213]: df.to_csv("tmp.csv", sep="|") In [214]: table = pd.read_csv("tmp.csv", sep="|") In [215]: table Out[215]: Unnamed: 0 0 1 2 3 0 0 -1.294524 0.413738 0.276662 -0.472035 1 1 -0.013960 -0.362543 -0.006154 -0.923061 2 2 0.895717 0.805244 -1.206412 2.565646 3 3 1.431256 1.340309 -1.170299 -0.226169 4 4 0.410835 0.813850 0.132003 -0.827317 5 5 -0.076467 -1.187678 1.130127 -1.436737 6 6 -1.413681 1.607920 1.024180 0.569605 7 7 0.875906 -2.211372 0.974466 -2.006747 8 8 -0.410001 -0.078638 0.545952 -1.219217 9 9 -1.226825 0.769804 -1.281247 -0.727707

By specifying a chunksize to read_csv, the return value will be an iterable object of type TextFileReader:

In [216]: with pd.read_csv("tmp.csv", sep="|", chunksize=4) as reader: .....: reader .....: for chunk in reader: .....: print(chunk) .....: Unnamed: 0 0 1 2 3 0 0 -1.294524 0.413738 0.276662 -0.472035 1 1 -0.013960 -0.362543 -0.006154 -0.923061 2 2 0.895717 0.805244 -1.206412 2.565646 3 3 1.431256 1.340309 -1.170299 -0.226169 Unnamed: 0 0 1 2 3 4 4 0.410835 0.813850 0.132003 -0.827317 5 5 -0.076467 -1.187678 1.130127 -1.436737 6 6 -1.413681 1.607920 1.024180 0.569605 7 7 0.875906 -2.211372 0.974466 -2.006747 Unnamed: 0 0 1 2 3 8 8 -0.410001 -0.078638 0.545952 -1.219217 9 9 -1.226825 0.769804 -1.281247 -0.727707

Changed in version 1.2: read_csv/json/sas return a context-manager when iterating through a file.

Specifying iterator=True will also return the TextFileReader object:

In [217]: with pd.read_csv("tmp.csv", sep="|", iterator=True) as reader: .....: reader.get_chunk(5) .....:
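Either form lets you process a large file piece by piece. As a sketch, a running total computed over the chunks of the file written above:

total = 0.0
with pd.read_csv("tmp.csv", sep="|", chunksize=4, index_col=0) as reader:
    for chunk in reader:
        # accumulate a statistic without ever loading the whole file
        total += chunk["0"].sum()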

Specifying the parser engine#

pandas currently supports three engines: the C engine, the Python engine, and an experimental pyarrow engine (which requires the pyarrow package). In general, the pyarrow engine is fastest on larger workloads and is equivalent in speed to the C engine on most other workloads. The Python engine tends to be slower than the pyarrow and C engines on most workloads. However, the pyarrow engine is much less robust than the C engine, which in turn lacks a few features compared to the Python engine.

Where possible, pandas uses the C parser (specified as engine='c'), but it may fall back to Python if C-unsupported options are specified.

Currently, options unsupported by the C and pyarrow engines include:

  • sep other than a single character (e.g. regex separators)

  • skipfooter

  • sep=None with delim_whitespace=False

Specifying any of the above options will produce a ParserWarning unless the python engine is selected explicitly using engine='python'.
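For example, a regex separator requires the Python engine; selecting it explicitly avoids the ParserWarning (a sketch with a hypothetical file delimited by runs of semicolons):

# regex separators are only supported by the Python engine
df = pd.read_csv("data.csv", sep=r";+", engine="python")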

Options that are unsupported by the pyarrow engine which are not covered by the list above include:

  • float_precision

  • chunksize

  • comment

  • nrows

  • thousands

  • memory_map

  • dialect

  • warn_bad_lines

  • error_bad_lines

  • on_bad_lines

  • delim_whitespace

  • quoting

  • lineterminator

  • converters

  • decimal

  • iterator

  • dayfirst

  • infer_datetime_format

  • verbose

  • skipinitialspace

  • low_memory

Specifying these options with engine='pyarrow' will raise a ValueError.
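Conversely, opting into the pyarrow engine is just a matter of passing engine='pyarrow' (a sketch with a hypothetical file; the pyarrow package must be installed):

df = pd.read_csv("data.csv", engine="pyarrow")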

Reading/writing remote files#

You can pass in a URL to read or write remote files to many of pandas’ IO functions - the following example shows reading a CSV file:

df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")

New in version 1.3.0.

A custom header can be sent alongside HTTP(s) requests by passing a dictionary of header key value mappings to the storage_options keyword argument as shown below:

headers = {"User-Agent": "pandas"} df = pd.read_csv( "https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t", storage_options=headers )

All URLs which are not local files or HTTP(s) are handled by fsspec, if installed, and its various filesystem implementations (including Amazon S3, Google Cloud, SSH, FTP, webHDFS…). Some of these implementations will require additional packages to be installed, for example S3 URLs require the s3fs library:

df = pd.read_json("s3://pandas-test/adatafile.json")

When dealing with remote storage systems, you might need extra configuration with environment variables or config files in special locations. For example, to access data in your S3 bucket, you will need to define credentials in one of the several ways listed in the S3Fs documentation. The same is true for several of the storage backends, and you should follow the links at fsimpl1 for implementations built into fsspec and fsimpl2 for those not included in the main fsspec distribution.

You can also pass parameters directly to the backend driver. For example, if you do not have S3 credentials, you can still access public data by specifying an anonymous connection, such as

New in version 1.2.0.

pd.read_csv( "s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/SaKe2013" "-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv", storage_options={"anon": True}, )

fsspec also allows complex URLs, for accessing data in compressed archives, local caching of files, and more. To locally cache the above example, you would modify the call to

pd.read_csv( "simplecache::s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/" "SaKe2013-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv", storage_options={"s3": {"anon": True}}, )

where we specify that the “anon” parameter is meant for the “s3” part of the implementation, not to the caching implementation. Note that this caches to a temporary directory for the duration of the session only, but you can also specify a permanent store.
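As a sketch of a permanent local cache, fsspec’s filecache protocol accepts a cache_storage directory (the path below is hypothetical; see the fsspec documentation for the full set of caching options):

pd.read_csv(
    "filecache::s3://ncei-wcsd-archive/data/processed/SH1305/18kHz/"
    "SaKe2013-D20130523-T080854_to_SaKe2013-D20130523-T085643.csv",
    storage_options={
        "s3": {"anon": True},
        # hypothetical directory that persists between sessions
        "filecache": {"cache_storage": "/tmp/pandas-cache"},
    },
)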

Writing out data#

Writing to CSV format#

The Series and DataFrame objects have an instance method to_csv which allows storing the contents of the object as a comma-separated-values file. The function takes a number of arguments. Only the first is required.

  • path_or_buf: A string path to the file to write or a file object. If a file object it must be opened with newline=''

  • sep : Field delimiter for the output file (default “,”)

  • na_rep: A string representation of a missing value (default ‘’)

  • float_format: Format string for floating point numbers

  • columns: Columns to write (default None)

  • header: Whether to write out the column names (default True)

  • index: whether to write row (index) names (default True)

  • index_label: Column label(s) for index column(s) if desired. If None (default), and header and index are True, then the index names are used. (A sequence should be given if the DataFrame uses MultiIndex).

  • mode : Python write mode, default ‘w’

  • encoding: a string representing the encoding to use if the contents are non-ASCII, for Python versions prior to 3

  • lineterminator: Character sequence denoting line end (default os.linesep)

  • quoting: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set a float_format then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric

  • quotechar: Character used to quote fields (default ‘”’)

  • doublequote: Control quoting of quotechar in fields (default True)

  • escapechar: Character used to escape sep and quotechar when appropriate (default None)

  • chunksize: Number of rows to write at a time

  • date_format: Format string for datetime objects
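As a short sketch combining several of these options (the output path and values are illustrative only):

df = pd.DataFrame({"A": [1.5, 2.0, None], "B": ["x", "y", "z"]})
df.to_csv(
    "out.csv",            # hypothetical output path
    sep=";",              # field delimiter
    na_rep="NA",          # representation of missing values
    float_format="%.2f",  # two decimal places for floats
    index=False,          # do not write the row index
)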

Writing a formatted string#

The DataFrame object has an instance method to_string which allows control over the string representation of the object. All arguments are optional:

  • buf default None, for example a StringIO object

  • columns default None, which columns to write

  • col_space default None, minimum width of each column.

  • na_rep default NaN, representation of NA value

  • formatters default None, a dictionary (by column) of functions each of which takes a single argument and returns a formatted string

  • float_format default None, a function which takes a single (float) argument and returns a formatted string; to be applied to floats in the DataFrame.

  • sparsify default True, set to False for a DataFrame with a hierarchical index to print every MultiIndex key at each row.

  • index_names default True, will print the names of the indices

  • index default True, will print the index (ie, row labels)

  • header default True, will print the column labels

  • justify default left, will print column headers left- or right-justified

The Series object also has a to_string method, but with only the buf, na_rep, float_format arguments. There is also a length argument which, if set to True, will additionally output the length of the Series.
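A small sketch using a few of these arguments:

df = pd.DataFrame({"A": [1.23456, 2.34567], "B": ["x", "y"]})
print(
    df.to_string(
        float_format="{:.2f}".format,  # round floats to two decimals
        index=False,                   # omit the row labels
    )
)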

JSON#

Read and write JSON format files and strings.

Writing JSON#

A Series or DataFrame can be converted to a valid JSON string. Use to_json with optional parameters:

  • path_or_buf : the pathname or buffer to write the output. This can be None, in which case a JSON string is returned.

  • orient :

    Series:
    • default is index

    • allowed values are {split, records, index}

    DataFrame:
    • default is columns

    • allowed values are {split, records, index, columns, values, table}

    The format of the JSON string

    split

    dict like {index -> [index], columns -> [columns], data -> [values]}

    records

    list like [{column -> value}, … , {column -> value}]

    index

    dict like {index -> {column -> value}}

    columns

    dict like {column -> {index -> value}}

    values

    just the values array

    table

    adhering to the JSON Table Schema

  • date_format : string, type of date conversion, ‘epoch’ for timestamp, ‘iso’ for ISO8601.

  • double_precision : The number of decimal places to use when encoding floating point values, default 10.

  • force_ascii : force encoded string to be ASCII, default True.

  • date_unit : The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’ or ‘ns’ for seconds, milliseconds, microseconds and nanoseconds respectively. Default ‘ms’.

  • default_handler : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object.

  • lines : If records orient, then will write each record per line as json.

Note

NaN’s, NaT’s and None will be converted to null and datetime objects will be converted based on the date_format and date_unit parameters.

In [218]: dfj = pd.DataFrame(np.random.randn(5, 2), columns=list("AB")) In [219]: json = dfj.to_json() In [220]: json Out[220]: '{"A":{"0":-0.1213062281,"1":0.6957746499,"2":0.9597255933,"3":-0.6199759194,"4":-0.7323393705},"B":{"0":-0.0978826728,"1":0.3417343559,"2":-1.1103361029,"3":0.1497483186,"4":0.6877383895}}'

Orient options#

There are a number of different options for the format of the resulting JSON file / string. Consider the following DataFrame and Series:

In [221]: dfjo = pd.DataFrame( .....: dict(A=range(1, 4), B=range(4, 7), C=range(7, 10)), .....: columns=list("ABC"), .....: index=list("xyz"), .....: ) .....: In [222]: dfjo Out[222]: A B C x 1 4 7 y 2 5 8 z 3 6 9 In [223]: sjo = pd.Series(dict(x=15, y=16, z=17), name="D") In [224]: sjo Out[224]: x 15 y 16 z 17 Name: D, dtype: int64

Column oriented (the default for DataFrame) serializes the data as nested JSON objects with column labels acting as the primary index:

In [225]: dfjo.to_json(orient="columns") Out[225]: '{"A":{"x":1,"y":2,"z":3},"B":{"x":4,"y":5,"z":6},"C":{"x":7,"y":8,"z":9}}' # Not available for Series

Index oriented (the default for Series) similar to column oriented but the index labels are now primary:

In [226]: dfjo.to_json(orient="index") Out[226]: '{"x":{"A":1,"B":4,"C":7},"y":{"A":2,"B":5,"C":8},"z":{"A":3,"B":6,"C":9}}' In [227]: sjo.to_json(orient="index") Out[227]: '{"x":15,"y":16,"z":17}'

Record oriented serializes the data to a JSON array of column -> value records, index labels are not included. This is useful for passing DataFrame data to plotting libraries, for example the JavaScript library d3.js:

In [228]: dfjo.to_json(orient="records") Out[228]: '[{"A":1,"B":4,"C":7},{"A":2,"B":5,"C":8},{"A":3,"B":6,"C":9}]' In [229]: sjo.to_json(orient="records") Out[229]: '[15,16,17]'

Value oriented is a bare-bones option which serializes to nested JSON arrays of values only, column and index labels are not included:

In [230]: dfjo.to_json(orient="values") Out[230]: '[[1,4,7],[2,5,8],[3,6,9]]' # Not available for Series

Split oriented serializes to a JSON object containing separate entries for values, index and columns. Name is also included for Series:

In [231]: dfjo.to_json(orient="split") Out[231]: '{"columns":["A","B","C"],"index":["x","y","z"],"data":[[1,4,7],[2,5,8],[3,6,9]]}' In [232]: sjo.to_json(orient="split") Out[232]: '{"name":"D","index":["x","y","z"],"data":[15,16,17]}'

Table oriented serializes to the JSON Table Schema, allowing for the preservation of metadata including but not limited to dtypes and index names.

Note

Any orient option that encodes to a JSON object will not preserve the ordering of index and column labels during round-trip serialization. If you wish to preserve label ordering use the split option as it uses ordered containers.

Date handling#

Writing in ISO date format:

In [233]: dfd = pd.DataFrame(np.random.randn(5, 2), columns=list("AB")) In [234]: dfd["date"] = pd.Timestamp("20130101") In [235]: dfd = dfd.sort_index(axis=1, ascending=False) In [236]: json = dfd.to_json(date_format="iso") In [237]: json Out[237]: '{"date":{"0":"2013-01-01T00:00:00.000","1":"2013-01-01T00:00:00.000","2":"2013-01-01T00:00:00.000","3":"2013-01-01T00:00:00.000","4":"2013-01-01T00:00:00.000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'

Writing in ISO date format, with microseconds:

In [238]: json = dfd.to_json(date_format="iso", date_unit="us") In [239]: json Out[239]: '{"date":{"0":"2013-01-01T00:00:00.000000","1":"2013-01-01T00:00:00.000000","2":"2013-01-01T00:00:00.000000","3":"2013-01-01T00:00:00.000000","4":"2013-01-01T00:00:00.000000"},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'

Epoch timestamps, in seconds:

In [240]: json = dfd.to_json(date_format="epoch", date_unit="s") In [241]: json Out[241]: '{"date":{"0":1356998400,"1":1356998400,"2":1356998400,"3":1356998400,"4":1356998400},"B":{"0":0.403309524,"1":0.3016244523,"2":-1.3698493577,"3":1.4626960492,"4":-0.8265909164},"A":{"0":0.1764443426,"1":-0.1549507744,"2":-2.1798606054,"3":-0.9542078401,"4":-1.7431609117}}'

Writing to a file, with a date index and a date column:

In [242]: dfj2 = dfj.copy() In [243]: dfj2["date"] = pd.Timestamp("20130101") In [244]: dfj2["ints"] = list(range(5)) In [245]: dfj2["bools"] = True In [246]: dfj2.index = pd.date_range("20130101", periods=5) In [247]: dfj2.to_json("test.json") In [248]: with open("test.json") as fh: .....: print(fh.read()) .....: {"A":{"1356998400000":-0.1213062281,"1357084800000":0.6957746499,"1357171200000":0.9597255933,"1357257600000":-0.6199759194,"1357344000000":-0.7323393705},"B":{"1356998400000":-0.0978826728,"1357084800000":0.3417343559,"1357171200000":-1.1103361029,"1357257600000":0.1497483186,"1357344000000":0.6877383895},"date":{"1356998400000":1356998400000,"1357084800000":1356998400000,"1357171200000":1356998400000,"1357257600000":1356998400000,"1357344000000":1356998400000},"ints":{"1356998400000":0,"1357084800000":1,"1357171200000":2,"1357257600000":3,"1357344000000":4},"bools":{"1356998400000":true,"1357084800000":true,"1357171200000":true,"1357257600000":true,"1357344000000":true}}

Fallback behavior#

If the JSON serializer cannot handle the container contents directly it will fall back in the following manner:

  • if the dtype is unsupported (e.g. np.complex_) then the default_handler, if provided, will be called for each value, otherwise an exception is raised.

  • if an object is unsupported it will attempt the following:

    • check if the object has defined a toDict method and call it. A toDict method should return a dict which will then be JSON serialized.

    • invoke the default_handler if one was provided.

    • convert the object to a dict by traversing its contents. However this will often fail with an OverflowError or give unexpected results.

In general the best approach for unsupported objects or dtypes is to provide a default_handler. For example:

>>> DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json()  # raises
RuntimeError: Unhandled numpy dtype 15

can be dealt with by specifying a simple default_handler:

In [249]: pd.DataFrame([1.0, 2.0, complex(1.0, 2.0)]).to_json(default_handler=str) Out[249]: '{"0":{"0":"(1+0j)","1":"(2+0j)","2":"(1+2j)"}}'

Reading JSON#

Reading a JSON string to a pandas object can take a number of parameters. The parser will try to parse a DataFrame if typ is not supplied or is None. To explicitly force Series parsing, pass typ=series.

  • filepath_or_buffer : a VALID JSON string or file handle / StringIO. The string could be a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/table.json

  • typ : type of object to recover (series or frame), default ‘frame’

  • orient :

    Series :
    • default is index

    • allowed values are {split, records, index}

    DataFrame
    • default is columns

    • allowed values are {split, records, index, columns, values, table}

    The format of the JSON string

    split

    dict like {index -> [index], columns -> [columns], data -> [values]}

    records

    list like [{column -> value}, … , {column -> value}]

    index

    dict like {index -> {column -> value}}

    columns

    dict like {column -> {index -> value}}

    values

    just the values array

    table

    adhering to the JSON Table Schema

  • dtype : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don’t infer dtypes at all, default is True, apply only to the data.

  • convert_axes : boolean, try to convert the axes to the proper dtypes, default is True

  • convert_dates : a list of columns to parse for dates; If True, then try to parse date-like columns, default is True.

  • keep_default_dates : boolean, default True. If parsing dates, then parse the default date-like columns.

  • numpy : direct decoding to NumPy arrays. default is False; Supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering MUST be the same for each term if numpy=True.

  • precise_float : boolean, default False. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (False) is to use fast but less precise builtin functionality.

  • date_unit : string, the timestamp unit to detect if converting dates. Default None. By default the timestamp precision will be detected, if this is not desired then pass one of ‘s’, ‘ms’, ‘us’ or ‘ns’ to force timestamp precision to seconds, milliseconds, microseconds or nanoseconds respectively.

  • lines : reads file as one json object per line.

  • encoding : The encoding to use to decode py3 bytes.

  • chunksize : when used in combination with lines=True, return a JsonReader which reads in chunksize lines per iteration.

The parser will raise one of ValueError/TypeError/AssertionError if the JSON is not parseable.

If a non-default orient was used when encoding to JSON be sure to pass the same option here so that decoding produces sensible results, see Orient Options for an overview.
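As a sketch, a round trip with the split orient:

df = pd.DataFrame({"A": [1, 2]}, index=["x", "y"])
s = df.to_json(orient="split")
# pass the same orient to read_json so the layout is interpreted correctly
round_tripped = pd.read_json(s, orient="split")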

Data conversion#

The default of convert_axes=True, dtype=True, and convert_dates=True will try to parse the axes, and all of the data into appropriate types, including dates. If you need to override specific dtypes, pass a dict to dtype. convert_axes should only be set to False if you need to preserve string-like numbers (e.g. ‘1’, ‘2’) in an axis.

Note

Large integer values may be converted to dates if convert_dates=True and the data and / or column labels appear ‘date-like’. The exact threshold depends on the date_unit specified. ‘date-like’ means that the column label meets one of the following criteria:

  • it ends with '_at'

  • it ends with '_time'

  • it begins with 'timestamp'

  • it is 'modified'

  • it is 'date'

Warning

When reading JSON data, automatic coercing into dtypes has some quirks:

  • an index can be reconstructed in a different order from serialization, that is, the returned order is not guaranteed to be the same as before serialization

  • a column that was float data will be converted to integer if it can be done safely, e.g. a column of 1.

  • bool columns will be converted to integer on reconstruction

Thus there are times where you may want to specify specific dtypes via the dtype keyword argument.

Reading from a JSON string:

In [250]: pd.read_json(json) Out[250]: date B A 0 2013-01-01 0.403310 0.176444 1 2013-01-01 0.301624 -0.154951 2 2013-01-01 -1.369849 -2.179861 3 2013-01-01 1.462696 -0.954208 4 2013-01-01 -0.826591 -1.743161

Reading from a file:

In [251]: pd.read_json("test.json") Out[251]: A B date ints bools 2013-01-01 -0.121306 -0.097883 2013-01-01 0 True 2013-01-02 0.695775 0.341734 2013-01-01 1 True 2013-01-03 0.959726 -1.110336 2013-01-01 2 True 2013-01-04 -0.619976 0.149748 2013-01-01 3 True 2013-01-05 -0.732339 0.687738 2013-01-01 4 True

Don’t convert any data (but still convert axes and dates):

In [252]: pd.read_json("test.json", dtype=object).dtypes Out[252]: A object B object date object ints object bools object dtype: object

Specify dtypes for conversion:

In [253]: pd.read_json("test.json", dtype={"A": "float32", "bools": "int8"}).dtypes Out[253]: A float32 B float64 date datetime64[ns] ints int64 bools int8 dtype: object

Preserve string indices:

In [254]: si = pd.DataFrame( .....: np.zeros((4, 4)), columns=list(range(4)), index=[str(i) for i in range(4)] .....: ) .....: In [255]: si Out[255]: 0 1 2 3 0 0.0 0.0 0.0 0.0 1 0.0 0.0 0.0 0.0 2 0.0 0.0 0.0 0.0 3 0.0 0.0 0.0 0.0 In [256]: si.index Out[256]: Index(['0', '1', '2', '3'], dtype='object') In [257]: si.columns Out[257]: Int64Index([0, 1, 2, 3], dtype='int64') In [258]: json = si.to_json() In [259]: sij = pd.read_json(json, convert_axes=False) In [260]: sij Out[260]: 0 1 2 3 0 0 0 0 0 1 0 0 0 0 2 0 0 0 0 3 0 0 0 0 In [261]: sij.index Out[261]: Index(['0', '1', '2', '3'], dtype='object') In [262]: sij.columns Out[262]: Index(['0', '1', '2', '3'], dtype='object')

Dates written in nanoseconds need to be read back in nanoseconds:

In [263]: json = dfj2.to_json(date_unit="ns") # Try to parse timestamps as milliseconds -> Won't Work In [264]: dfju = pd.read_json(json, date_unit="ms") In [265]: dfju Out[265]: A B date ints bools 1356998400000000000 -0.121306 -0.097883 1356998400000000000 0 True 1357084800000000000 0.695775 0.341734 1356998400000000000 1 True 1357171200000000000 0.959726 -1.110336 1356998400000000000 2 True 1357257600000000000 -0.619976 0.149748 1356998400000000000 3 True 1357344000000000000 -0.732339 0.687738 1356998400000000000 4 True # Let pandas detect the correct precision In [266]: dfju = pd.read_json(json) In [267]: dfju Out[267]: A B date ints bools 2013-01-01 -0.121306 -0.097883 2013-01-01 0 True 2013-01-02 0.695775 0.341734 2013-01-01 1 True 2013-01-03 0.959726 -1.110336 2013-01-01 2 True 2013-01-04 -0.619976 0.149748 2013-01-01 3 True 2013-01-05 -0.732339 0.687738 2013-01-01 4 True # Or specify that all timestamps are in nanoseconds In [268]: dfju = pd.read_json(json, date_unit="ns") In [269]: dfju Out[269]: A B date ints bools 2013-01-01 -0.121306 -0.097883 2013-01-01 0 True 2013-01-02 0.695775 0.341734 2013-01-01 1 True 2013-01-03 0.959726 -1.110336 2013-01-01 2 True 2013-01-04 -0.619976 0.149748 2013-01-01 3 True 2013-01-05 -0.732339 0.687738 2013-01-01 4 True

The Numpy parameter#

Note

This param has been deprecated as of version 1.0.0 and will raise a FutureWarning.

This supports numeric data only. Index and column labels may be non-numeric, e.g. strings, dates etc.

If numpy=True is passed to read_json an attempt will be made to sniff an appropriate dtype during deserialization and to subsequently decode directly to NumPy arrays, bypassing the need for intermediate Python objects.

This can provide speedups if you are deserializing a large amount of numeric data:

In [270]: randfloats = np.random.uniform(-100, 1000, 10000) In [271]: randfloats.shape = (1000, 10) In [272]: dffloats = pd.DataFrame(randfloats, columns=list("ABCDEFGHIJ")) In [273]: jsonfloats = dffloats.to_json()

In [274]: %timeit pd.read_json(jsonfloats) 11.6 ms +- 92.8 us per loop (mean +- std. dev. of 7 runs, 100 loops each)

In [275]: %timeit pd.read_json(jsonfloats, numpy=True) 8.75 ms +- 107 us per loop (mean +- std. dev. of 7 runs, 100 loops each)

The speedup is less noticeable for smaller datasets:

In [276]: jsonfloats = dffloats.head(100).to_json()

In [277]: %timeit pd.read_json(jsonfloats) 7.27 ms +- 105 us per loop (mean +- std. dev. of 7 runs, 100 loops each)

In [278]: %timeit pd.read_json(jsonfloats, numpy=True) 6.69 ms +- 117 us per loop (mean +- std. dev. of 7 runs, 100 loops each)

Warning

Direct NumPy decoding makes a number of assumptions and may fail or produce unexpected output if these assumptions are not satisfied:

  • data is numeric.

  • data is uniform. The dtype is sniffed from the first value decoded. A ValueError may be raised, or incorrect output may be produced if this condition is not satisfied.

  • labels are ordered. Labels are only read from the first container, it is assumed that each subsequent row / column has been encoded in the same order. This should be satisfied if the data was encoded using to_json but may not be the case if the JSON is from another source.

Normalization#

pandas provides a utility function to take a dict or list of dicts and normalize this semi-structured data into a flat table.

In [279]: data = [ .....: {"id": 1, "name": {"first": "Coleen", "last": "Volk"}}, .....: {"name": {"given": "Mark", "family": "Regner"}}, .....: {"id": 2, "name": "Faye Raker"}, .....: ] .....: In [280]: pd.json_normalize(data) Out[280]: id name.first name.last name.given name.family name 0 1.0 Coleen Volk NaN NaN NaN 1 NaN NaN NaN Mark Regner NaN 2 2.0 NaN NaN NaN NaN Faye Raker

In [281]: data = [ .....: { .....: "state": "Florida", .....: "shortname": "FL", .....: "info": {"governor": "Rick Scott"}, .....: "county": [ .....: {"name": "Dade", "population": 12345}, .....: {"name": "Broward", "population": 40000}, .....: {"name": "Palm Beach", "population": 60000}, .....: ], .....: }, .....: { .....: "state": "Ohio", .....: "shortname": "OH", .....: "info": {"governor": "John Kasich"}, .....: "county": [ .....: {"name": "Summit", "population": 1234}, .....: {"name": "Cuyahoga", "population": 1337}, .....: ], .....: }, .....: ] .....: In [282]: pd.json_normalize(data, "county", ["state", "shortname", ["info", "governor"]]) Out[282]: name population state shortname info.governor 0 Dade 12345 Florida FL Rick Scott 1 Broward 40000 Florida FL Rick Scott 2 Palm Beach 60000 Florida FL Rick Scott 3 Summit 1234 Ohio OH John Kasich 4 Cuyahoga 1337 Ohio OH John Kasich

The max_level parameter provides more control over which level to end normalization. With max_level=1 the following snippet normalizes only to the first nesting level of the provided dict.

In [283]: data = [ .....: { .....: "CreatedBy": {"Name": "User001"}, .....: "Lookup": { .....: "TextField": "Some text", .....: "UserField": {"Id": "ID001", "Name": "Name001"}, .....: }, .....: "Image": {"a": "b"}, .....: } .....: ] .....: In [284]: pd.json_normalize(data, max_level=1) Out[284]: CreatedBy.Name Lookup.TextField Lookup.UserField Image.a 0 User001 Some text {'Id': 'ID001', 'Name': 'Name001'} b

Line delimited json#

pandas is able to read and write line-delimited json files that are common in data processing pipelines using Hadoop or Spark.

For line-delimited json files, pandas can also return an iterator which reads in chunksize lines at a time. This can be useful for large files or to read from a stream.

In [285]: jsonl = """ .....: {"a": 1, "b": 2} .....: {"a": 3, "b": 4} .....: """ .....: In [286]: df = pd.read_json(jsonl, lines=True) In [287]: df Out[287]: a b 0 1 2 1 3 4 In [288]: df.to_json(orient="records", lines=True) Out[288]: '{"a":1,"b":2}\n{"a":3,"b":4}\n' # reader is an iterator that returns ``chunksize`` lines each iteration In [289]: with pd.read_json(StringIO(jsonl), lines=True, chunksize=1) as reader: .....: reader .....: for chunk in reader: .....: print(chunk) .....: Empty DataFrame Columns: [] Index: [] a b 0 1 2 a b 1 3 4

Table schema#

Table Schema is a spec for describing tabular datasets as a JSON object. The JSON includes information on the field names, types, and other attributes. You can use orient='table' to build a JSON string with two fields, schema and data.

In [290]: df = pd.DataFrame( .....: { .....: "A": [1, 2, 3], .....: "B": ["a", "b", "c"], .....: "C": pd.date_range("2016-01-01", freq="d", periods=3), .....: }, .....: index=pd.Index(range(3), name="idx"), .....: ) .....: In [291]: df Out[291]: A B C idx 0 1 a 2016-01-01 1 2 b 2016-01-02 2 3 c 2016-01-03 In [292]: df.to_json(orient="table", date_format="iso") Out[292]: '{"schema":{"fields":[{"name":"idx","type":"integer"},{"name":"A","type":"integer"},{"name":"B","type":"string"},{"name":"C","type":"datetime"}],"primaryKey":["idx"],"pandas_version":"1.4.0"},"data":[{"idx":0,"A":1,"B":"a","C":"2016-01-01T00:00:00.000"},{"idx":1,"A":2,"B":"b","C":"2016-01-02T00:00:00.000"},{"idx":2,"A":3,"B":"c","C":"2016-01-03T00:00:00.000"}]}'

The schema field contains the fields key, which itself contains a list of column name to type pairs, including the Index or MultiIndex (see below for a list of types). The schema field also contains a primaryKey field if the (Multi)index is unique.

The second field, data, contains the serialized data with the records orient. The index is included, and any datetimes are ISO 8601 formatted, as required by the Table Schema spec.

The full list of types supported are described in the Table Schema spec. This table shows the mapping from pandas types:

pandas type        Table Schema type
int64              integer
float64            number
bool               boolean
datetime64[ns]     datetime
timedelta64[ns]    duration
categorical        any
object             str

A few notes on the generated table schema:

  • The schema object contains a pandas_version field. This contains the version of pandas’ dialect of the schema, and will be incremented with each revision.

  • All dates are converted to UTC when serializing; even timezone-naive values are treated as UTC with an offset of 0.

    In [293]: from pandas.io.json import build_table_schema In [294]: s = pd.Series(pd.date_range("2016", periods=4)) In [295]: build_table_schema(s) Out[295]: {'fields': [{'name': 'index', 'type': 'integer'}, {'name': 'values', 'type': 'datetime'}], 'primaryKey': ['index'], 'pandas_version': '1.4.0'}

  • datetimes with a timezone (before serializing) include an additional field tz with the time zone name (e.g. 'US/Central').

    In [296]: s_tz = pd.Series(pd.date_range("2016", periods=12, tz="US/Central")) In [297]: build_table_schema(s_tz) Out[297]: {'fields': [{'name': 'index', 'type': 'integer'}, {'name': 'values', 'type': 'datetime', 'tz': 'US/Central'}], 'primaryKey': ['index'], 'pandas_version': '1.4.0'}

  • Periods are converted to timestamps before serialization, and so have the same behavior of being converted to UTC. In addition, periods will contain an additional field freq with the period’s frequency, e.g. 'A-DEC'.

    In [298]: s_per = pd.Series(1, index=pd.period_range("2016", freq="A-DEC", periods=4)) In [299]: build_table_schema(s_per) Out[299]: {'fields': [{'name': 'index', 'type': 'datetime', 'freq': 'A-DEC'}, {'name': 'values', 'type': 'integer'}], 'primaryKey': ['index'], 'pandas_version': '1.4.0'}

  • Categoricals use the any type and an enum constraint listing the set of possible values. Additionally, an ordered field is included:

    In [300]: s_cat = pd.Series(pd.Categorical(["a", "b", "a"])) In [301]: build_table_schema(s_cat) Out[301]: {'fields': [{'name': 'index', 'type': 'integer'}, {'name': 'values', 'type': 'any', 'constraints': {'enum': ['a', 'b']}, 'ordered': False}], 'primaryKey': ['index'], 'pandas_version': '1.4.0'}

  • A primaryKey field, containing an array of labels, is included if the index is unique:

    In [302]: s_dupe = pd.Series([1, 2], index=[1, 1]) In [303]: build_table_schema(s_dupe) Out[303]: {'fields': [{'name': 'index', 'type': 'integer'}, {'name': 'values', 'type': 'integer'}], 'pandas_version': '1.4.0'}

  • The primaryKey behavior is the same with MultiIndexes, but in this case the primaryKey is an array:

    In [304]: s_multi = pd.Series(1, index=pd.MultiIndex.from_product([("a", "b"), (0, 1)])) In [305]: build_table_schema(s_multi) Out[305]: {'fields': [{'name': 'level_0', 'type': 'string'}, {'name': 'level_1', 'type': 'integer'}, {'name': 'values', 'type': 'integer'}], 'primaryKey': FrozenList(['level_0', 'level_1']), 'pandas_version': '1.4.0'}

  • The default naming roughly follows these rules:

    • For Series, the object.name is used. If that’s None, then the name is values

    • For DataFrames, the stringified version of the column name is used

    • For Index (not MultiIndex), index.name is used, with a fallback to index if that is None.

    • For MultiIndex, mi.names is used. If any level has no name, then level_ is used.

read_json also accepts orient='table' as an argument. This allows for the preservation of metadata such as dtypes and index names in a round-trippable manner.

In [306]: df = pd.DataFrame( .....: { .....: "foo": [1, 2, 3, 4], .....: "bar": ["a", "b", "c", "d"], .....: "baz": pd.date_range("2018-01-01", freq="d", periods=4), .....: "qux": pd.Categorical(["a", "b", "c", "c"]), .....: }, .....: index=pd.Index(range(4), name="idx"), .....: ) .....: In [307]: df Out[307]: foo bar baz qux idx 0 1 a 2018-01-01 a 1 2 b 2018-01-02 b 2 3 c 2018-01-03 c 3 4 d 2018-01-04 c In [308]: df.dtypes Out[308]: foo int64 bar object baz datetime64[ns] qux category dtype: object In [309]: df.to_json("test.json", orient="table") In [310]: new_df = pd.read_json("test.json", orient="table") In [311]: new_df Out[311]: foo bar baz qux idx 0 1 a 2018-01-01 a 1 2 b 2018-01-02 b 2 3 c 2018-01-03 c 3 4 d 2018-01-04 c In [312]: new_df.dtypes Out[312]: foo int64 bar object baz datetime64[ns] qux category dtype: object

Please note that the literal string ‘index’ as the name of an Index is not round-trippable, nor are any names beginning with 'level_' within a MultiIndex. These are used by default in DataFrame.to_json() to indicate missing values, and the subsequent read cannot distinguish the intent.

In [313]: df.index.name = "index" In [314]: df.to_json("test.json", orient="table") In [315]: new_df = pd.read_json("test.json", orient="table") In [316]: print(new_df.index.name) None

When using orient='table' along with user-defined ExtensionArray, the generated schema will contain an additional extDtype key in the respective fields element. This extra key is not standard but does enable JSON roundtrips for extension types (e.g. read_json(df.to_json(orient="table"), orient="table")).

The extDtype key carries the name of the extension, if you have properly registered the ExtensionDtype, pandas will use said name to perform a lookup into the registry and re-convert the serialized data into your custom dtype.
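For example, with the built-in nullable Int64 extension dtype (a sketch; the same mechanism applies to any properly registered third-party ExtensionDtype):

df = pd.DataFrame({"a": pd.array([1, 2, None], dtype="Int64")})
s = df.to_json(orient="table")
# the schema carries extDtype, so the Int64 dtype is restored on read
new_df = pd.read_json(s, orient="table")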

HTML#

Reading HTML content#

Warning

We highly encourage you to read the HTML Table Parsing gotchas below regarding the issues surrounding the BeautifulSoup4/html5lib/lxml parsers.

The top-level read_html() function can accept an HTML string/file/URL and will parse HTML tables into a list of pandas DataFrames. Let’s look at a few examples.

Note

read_html returns a list of DataFrame objects, even if there is only a single table contained in the HTML content.

Read a URL with no options:

In [320]: "https://www.fdic.gov/resources/resolutions/bank-failures/failed-bank-list" In [321]: pd.read_html(url) Out[321]: [ Bank NameBank CityCity StateSt ... Acquiring InstitutionAI Closing DateClosing FundFund 0 Almena State Bank Almena KS ... Equity Bank October 23, 2020 10538 1 First City Bank of Florida Fort Walton Beach FL ... United Fidelity Bank, fsb October 16, 2020 10537 2 The First State Bank Barboursville WV ... MVB Bank, Inc. April 3, 2020 10536 3 Ericson State Bank Ericson NE ... Farmers and Merchants Bank February 14, 2020 10535 4 City National Bank of New Jersey Newark NJ ... Industrial Bank November 1, 2019 10534 .. ... ... ... ... ... ... ... 558 Superior Bank, FSB Hinsdale IL ... Superior Federal, FSB July 27, 2001 6004 559 Malta National Bank Malta OH ... North Valley Bank May 3, 2001 4648 560 First Alliance Bank & Trust Co. Manchester NH ... Southern New Hampshire Bank & Trust February 2, 2001 4647 561 National State Bank of Metropolis Metropolis IL ... Banterra Bank of Marion December 14, 2000 4646 562 Bank of Honolulu Honolulu HI ... Bank of the Orient October 13, 2000 4645 [563 rows x 7 columns]]

Note

The data from the above URL changes every Monday so the resulting data above may be slightly different.

Read in the content of the file from the above URL and pass it to read_html as a string:

In [317]: html_str = """
   .....:          <table>
   .....:              <tr>
   .....:                  <th>A</th>
   .....:                  <th>B</th>
   .....:                  <th>C</th>
   .....:              </tr>
   .....:              <tr>
   .....:                  <td>a</td>
   .....:                  <td>b</td>
   .....:                  <td>c</td>
   .....:              </tr>
   .....:          </table>
   .....:      """
   .....: 

In [318]: with open("tmp.html", "w") as f:
   .....:     f.write(html_str)
   .....: 

In [319]: df = pd.read_html("tmp.html")

In [320]: df[0]
Out[320]: 
   A  B  C
0  a  b  c

You can even pass in an instance of StringIO if you so desire:

In [321]: dfs = pd.read_html(StringIO(html_str)) In [322]: dfs[0] Out[322]: A B C 0 a b c

Note

The following examples are not run by the IPython evaluator due to the fact that having so many network-accessing functions slows down the documentation build. If you spot an error or an example that doesn’t run, please do not hesitate to report it over on pandas GitHub issues page.

Read a URL and match a table that contains specific text:

match = "Metcalf Bank" df_list = pd.read_html(url, match=match)

Specify a header row (by default <th> or <td> elements located within a <thead> are used to form the column index):

dfs = pd.read_html(url, header=0)

Specify an index column:

dfs = pd.read_html(url, index_col=0)

Specify a number of rows to skip:

dfs = pd.read_html(url, skiprows=0)

Specify a number of rows to skip using a list (range works as well):

dfs = pd.read_html(url, skiprows=range(2))

Specify an HTML attribute:

dfs1 = pd.read_html(url, attrs={"id": "table"}) dfs2 = pd.read_html(url, attrs={"class": "sortable"}) print(np.array_equal(dfs1[0], dfs2[0])) # Should be True

Specify values that should be converted to NaN:

dfs = pd.read_html(url, na_values=["No Acquirer"])

Specify whether to keep the default set of NaN values:

dfs = pd.read_html(url, keep_default_na=False)

Specify converters for columns. This is useful for numerical text data that has leading zeros. By default columns that are numerical are cast to numeric types and the leading zeros are lost. To avoid this, we can convert these columns to strings.

url_mcc = "https://en.wikipedia.org/wiki/Mobile_country_code" dfs = pd.read_html( url_mcc, match="Telekom Albania", header=0, converters={"MNC": str}, )

Use some combination of the above:

dfs = pd.read_html(url, match="Metcalf Bank", index_col=0)

Read in pandas to_html output (with some loss of floating point precision):

df = pd.DataFrame(np.random.randn(2, 2)) s = df.to_html(float_format="{0:.40g}".format) dfin = pd.read_html(s, index_col=0)

The lxml backend will raise an error on a failed parse if that is the only parser you provide. If you only have a single parser you can provide just a string, but it is considered good practice to pass a list with one string if, for example, the function expects a sequence of strings. You may use:

dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml"])

Or you could pass flavor='lxml' without a list:

dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor="lxml")

However, if you have bs4 and html5lib installed and pass None or ['lxml', 'bs4'] then the parse will most likely succeed. Note that as soon as a parse succeeds, the function will return.

dfs = pd.read_html(url, "Metcalf Bank", index_col=0, flavor=["lxml", "bs4"])

Links can be extracted from cells along with the text using extract_links="all".

In [323]: html_table = """
   .....: <table>
   .....:   <tr>
   .....:     <th>GitHub</th>
   .....:   </tr>
   .....:   <tr>
   .....:     <td><a href="https://github.com/pandas-dev/pandas">pandas</a></td>
   .....:   </tr>
   .....: </table>
   .....: """
   .....: 

In [324]: df = pd.read_html(
   .....:     html_table,
   .....:     extract_links="all"
   .....: )[0]
   .....: 

In [325]: df
Out[325]: 
                                   (GitHub, None)
0  (pandas, https://github.com/pandas-dev/pandas)

In [326]: df[("GitHub", None)]
Out[326]: 
0    (pandas, https://github.com/pandas-dev/pandas)
Name: (GitHub, None), dtype: object

In [327]: df[("GitHub", None)].str[1]
Out[327]: 
0    https://github.com/pandas-dev/pandas
Name: (GitHub, None), dtype: object

New in version 1.5.0.

Writing to HTML files#

DataFrame objects have an instance method to_html which renders the contents of the DataFrame as an HTML table. The function arguments are as in the method to_string described above.

Note

Not all of the possible options for DataFrame.to_html are shown here for brevity’s sake. See to_html() for the full set of options.

Note

In an HTML-rendering supported environment like a Jupyter Notebook, display(HTML(...)) will render the raw HTML into the environment.

In [328]: from IPython.display import display, HTML In [329]: df = pd.DataFrame(np.random.randn(2, 2)) In [330]: df Out[330]: 0 1 0 0.070319 1.773907 1 0.253908 0.414581 In [331]: html = df.to_html() In [332]: print(html) # raw html

0 1
0 0.070319 1.773907
1 0.253908 0.414581
In [333]: display(HTML(html))

The columns argument will limit the columns shown:

In [334]: html = df.to_html(columns=[0]) In [335]: print(html)

0
0 0.070319
1 0.253908
In [336]: display(HTML(html))

float_format takes a Python callable to control the precision of floating point values:

In [337]: html = df.to_html(float_format="{0:.10f}".format) In [338]: print(html)

0 1
0 0.0703192665 1.7739074228
1 0.2539083433 0.4145805920
In [339]: display(HTML(html))

bold_rows will make the row labels bold by default, but you can turn that off:

In [340]: html = df.to_html(bold_rows=False) In [341]: print(html)

0 1
0 0.070319 1.773907
1 0.253908 0.414581
In [342]: display(HTML(html))

The classes argument provides the ability to give the resulting HTML table CSS classes. Note that these classes are appended to the existing 'dataframe' class.

In [343]: print(df.to_html(classes=["awesome_table_class", "even_more_awesome_class"]))

0 1
0 0.070319 1.773907
1 0.253908 0.414581

The render_links argument provides the ability to add hyperlinks to cells that contain URLs.

In [344]: url_df = pd.DataFrame( .....: { .....: "name": ["Python", "pandas"], .....: "url": ["https://www.python.org/", "https://pandas.pydata.org"], .....: } .....: ) .....: In [345]: html = url_df.to_html(render_links=True) In [346]: print(html)

name url
0 Python https://www.python.org/
1 pandas https://pandas.pydata.org
In [347]: display(HTML(html))

Finally, the escape argument allows you to control whether the “<”, “>” and “&” characters are escaped in the resulting HTML (by default it is True). So to get the HTML without escaped characters pass escape=False.

In [348]: df = pd.DataFrame({"a": list("&<>"), "b": np.random.randn(3)})

Escaped:

In [349]: html = df.to_html() In [350]: print(html)

a b
0 & 0.842321
1 < 0.211337
2 > -1.055427
In [351]: display(HTML(html))

Not escaped:

In [352]: html = df.to_html(escape=False) In [353]: print(html)

a b
0 & 0.842321
1 < 0.211337
2 > -1.055427
In [354]: display(HTML(html))

Note

Some browsers may not show a difference in the rendering of the previous two HTML tables.

HTML Table Parsing Gotchas#

There are some versioning issues surrounding the libraries that are used to parse HTML tables in the top-level pandas io function read_html.

Issues with lxml

  • Benefits

    • lxml is very fast.

    • lxml requires Cython to install correctly.

  • Drawbacks

    • lxml does not make any guarantees about the results of its parse unless it is given strictly valid markup.

    • In light of the above, we have chosen to allow you, the user, to use the lxml backend, but this backend will use html5lib if lxml fails to parse

    • It is therefore highly recommended that you install both BeautifulSoup4 and html5lib, so that you will still get a valid result (provided everything else is valid) even if lxml fails.

Issues with BeautifulSoup4 using lxml as a backend

  • The above issues hold here as well since BeautifulSoup4 is essentially just a wrapper around a parser backend.

Issues with BeautifulSoup4 using html5lib as a backend

  • Benefits

    • html5lib is far more lenient than lxml and consequently deals with real-life markup in a much saner way rather than just, e.g., dropping an element without notifying you.

    • html5lib generates valid HTML5 markup from invalid markup automatically. This is extremely important for parsing HTML tables, since it guarantees a valid document. However, that does NOT mean that it is “correct”, since the process of fixing markup does not have a single definition.

    • html5lib is pure Python and requires no additional build steps beyond its own installation.

  • Drawbacks

    • The biggest drawback to using html5lib is that it is slow as molasses. However consider the fact that many tables on the web are not big enough for the parsing algorithm runtime to matter. It is more likely that the bottleneck will be in the process of reading the raw text from the URL over the web, i.e., IO (input-output). For very large tables, this might not be true.

LaTeX#

New in version 1.3.0.

Currently there are no methods to read from LaTeX, only output methods.

Writing to LaTeX files#

Note

DataFrame and Styler objects currently have a to_latex method. We recommend using the Styler.to_latex() method over DataFrame.to_latex() due to the former’s greater flexibility with conditional styling, and the latter’s possible future deprecation.

Review the documentation for Styler.to_latex, which gives examples of conditional styling and explains the operation of its keyword arguments.

For simple application the following pattern is sufficient.

In [355]: df = pd.DataFrame([[1, 2], [3, 4]], index=["a", "b"], columns=["c", "d"]) In [356]: print(df.style.to_latex()) \begin{tabular}{lrr} & c & d \\ a & 1 & 2 \\ b & 3 & 4 \\ \end{tabular}

To format values before output, chain the Styler.format method.

In [357]: print(df.style.format("€ {}").to_latex()) \begin{tabular}{lrr} & c & d \\ a & € 1 & € 2 \\ b & € 3 & € 4 \\ \end{tabular}

XML#

Reading XML#

New in version 1.3.0.

The top-level read_xml() function can accept an XML string/file/URL and will parse nodes and attributes into a pandas DataFrame.

Note

Since there is no standard XML structure where design types can vary in many ways, read_xml works best with flatter, shallow versions. If an XML document is deeply nested, use the stylesheet feature to transform XML into a flatter version.

Let’s look at a few examples.

Read an XML string:

In [358]: xml = """<?xml version='1.0' encoding='utf-8'?>
   .....: <bookstore>
   .....:   <book category="cooking">
   .....:     <title lang="en">Everyday Italian</title>
   .....:     <author>Giada De Laurentiis</author>
   .....:     <year>2005</year>
   .....:     <price>30.00</price>
   .....:   </book>
   .....:   <book category="children">
   .....:     <title lang="en">Harry Potter</title>
   .....:     <author>J K. Rowling</author>
   .....:     <year>2005</year>
   .....:     <price>29.99</price>
   .....:   </book>
   .....:   <book category="web">
   .....:     <title lang="en">Learning XML</title>
   .....:     <author>Erik T. Ray</author>
   .....:     <year>2003</year>
   .....:     <price>39.95</price>
   .....:   </book>
   .....: </bookstore>"""
   .....: 

In [359]: df = pd.read_xml(xml)

In [360]: df
Out[360]: 
   category             title               author  year  price
0   cooking  Everyday Italian  Giada De Laurentiis  2005  30.00
1  children      Harry Potter         J K. Rowling  2005  29.99
2       web      Learning XML          Erik T. Ray  2003  39.95

Read a URL with no options:

In [361]: df = pd.read_xml("https://www.w3schools.com/xml/books.xml") In [362]: df Out[362]: category title author year price cover 0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 None 1 children Harry Potter J K. Rowling 2005 29.99 None 2 web XQuery Kick Start Vaidyanathan Nagarajan 2003 49.99 None 3 web Learning XML Erik T. Ray 2003 39.95 paperback

Read in the content of the “books.xml” file and pass it to read_xml as a string:

In [363]: file_path = "books.xml" In [364]: with open(file_path, "w") as f: .....: f.write(xml) .....: In [365]: with open(file_path, "r") as f: .....: df = pd.read_xml(f.read()) .....: In [366]: df Out[366]: category title author year price 0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 1 children Harry Potter J K. Rowling 2005 29.99 2 web Learning XML Erik T. Ray 2003 39.95

Read in the content of the “books.xml” as instance of StringIO or BytesIO and pass it to read_xml:

In [367]: with open(file_path, "r") as f: .....: sio = StringIO(f.read()) .....: In [368]: df = pd.read_xml(sio) In [369]: df Out[369]: category title author year price 0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 1 children Harry Potter J K. Rowling 2005 29.99 2 web Learning XML Erik T. Ray 2003 39.95

In [370]: with open(file_path, "rb") as f: .....: bio = BytesIO(f.read()) .....: In [371]: df = pd.read_xml(bio) In [372]: df Out[372]: category title author year price 0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 1 children Harry Potter J K. Rowling 2005 29.99 2 web Learning XML Erik T. Ray 2003 39.95

Even read XML from AWS S3 buckets, such as the NIH NCBI PMC Article Datasets providing Biomedical and Life Science Journals:

In [373]: df = pd.read_xml( .....: "s3://pmc-oa-opendata/oa_comm/xml/all/PMC1236943.xml", .....: xpath=".//journal-meta", .....: ) .....: In [374]: df Out[374]: journal-id journal-title issn publisher 0 Cardiovasc Ultrasound Cardiovascular Ultrasound 1476-7120 NaN

With lxml as the default parser, you access the full-featured XML library that extends Python’s ElementTree API. One powerful tool is the ability to query nodes selectively or conditionally with more expressive XPath:

In [375]: df = pd.read_xml(file_path, xpath="//book[year=2005]") In [376]: df Out[376]: category title author year price 0 cooking Everyday Italian Giada De Laurentiis 2005 30.00 1 children Harry Potter J K. Rowling 2005 29.99

Specify only elements or only attributes to parse:

In [377]: df = pd.read_xml(file_path, elems_only=True) In [378]: df Out[378]: title author year price 0 Everyday Italian Giada De Laurentiis 2005 30.00 1 Harry Potter J K. Rowling 2005 29.99 2 Learning XML Erik T. Ray 2003 39.95

In [379]: df = pd.read_xml(file_path, attrs_only=True) In [380]: df Out[380]: category 0 cooking 1 children 2 web

XML documents can have namespaces with prefixes and default namespaces without prefixes, both of which are denoted with a special attribute xmlns. In order to parse by node under a namespace context, xpath must reference a prefix.

For example, the XML below contains a namespace with the prefix doc and the URI https://example.com. In order to parse doc:row nodes, namespaces must be used.

In [381]: xml = """<?xml version='1.0' encoding='utf-8'?>
   .....: <doc:data xmlns:doc="https://example.com">
   .....:   <doc:row>
   .....:     <doc:shape>square</doc:shape>
   .....:     <doc:degrees>360</doc:degrees>
   .....:     <doc:sides>4.0</doc:sides>
   .....:   </doc:row>
   .....:   <doc:row>
   .....:     <doc:shape>circle</doc:shape>
   .....:     <doc:degrees>360</doc:degrees>
   .....:     <doc:sides/>
   .....:   </doc:row>
   .....:   <doc:row>
   .....:     <doc:shape>triangle</doc:shape>
   .....:     <doc:degrees>180</doc:degrees>
   .....:     <doc:sides>3.0</doc:sides>
   .....:   </doc:row>
   .....: </doc:data>"""
   .....: 

In [382]: df = pd.read_xml(xml,
   .....:                  xpath="//doc:row",
   .....:                  namespaces={"doc": "https://example.com"})
   .....: 

In [383]: df
Out[383]: 
      shape  degrees  sides
0    square      360    4.0
1    circle      360    NaN
2  triangle      180    3.0

Similarly, an XML document can have a default namespace without prefix. Failing to assign a temporary prefix will return no nodes and raise a ValueError. But assigning any temporary name to correct URI allows parsing by nodes.

In [384]: xml = """<?xml version='1.0' encoding='utf-8'?>
   .....: <data xmlns="https://example.com">
   .....:   <row>
   .....:     <shape>square</shape>
   .....:     <degrees>360</degrees>
   .....:     <sides>4.0</sides>
   .....:   </row>
   .....:   <row>
   .....:     <shape>circle</shape>
   .....:     <degrees>360</degrees>
   .....:     <sides/>
   .....:   </row>
   .....:   <row>
   .....:     <shape>triangle</shape>
   .....:     <degrees>180</degrees>
   .....:     <sides>3.0</sides>
   .....:   </row>
   .....: </data>"""
   .....: 

In [385]: df = pd.read_xml(xml,
   .....:                  xpath="//pandas:row",
   .....:                  namespaces={"pandas": "https://example.com"})
   .....: 

In [386]: df
Out[386]: 
      shape  degrees  sides
0    square      360    4.0
1    circle      360    NaN
2  triangle      180    3.0

However, if XPath does not reference node names such as default, /*, then namespaces is not required.

With lxml as parser, you can flatten nested XML documents with an XSLT script which also can be string/file/URL types. As background, XSLT is a special-purpose language written in a special XML file that can transform original XML documents into other XML, HTML, even text (CSV, JSON, etc.) using an XSLT processor.

For example, consider this somewhat nested structure of Chicago “L” Rides where station and rides elements encapsulate data in their own sections. With below XSLT, lxml can transform original nested document into a flatter output (as shown below for demonstration) for easier parse into DataFrame:

In [387]: xml = """ .....: .....: .....: .....: 2020-09-01T00:00:00 .....: .....: 864.2 .....: 534 .....: 417.2 .....: .....: .....: .....: .....: 2020-09-01T00:00:00 .....: .....: 2707.4 .....: 1909.8 .....: 1438.6 .....: .....: .....: .....: .....: 2020-09-01T00:00:00 .....: .....: 2949.6 .....: 1657 .....: 1453.8 .....: .....: .....: """ .....: In [388]: xsl = """ .....: .....: .....: .....: .....: .....: .....: .....: .....: .....: .....: .....: .....: .....: .....: """ .....: In [389]: output = """ .....: .....: .....: 40850 .....: Library .....: 2020-09-01T00:00:00 .....: 864.2 .....: 534 .....: 417.2 .....: .....: .....: 41700 .....: Washington/Wabash .....: 2020-09-01T00:00:00 .....: 2707.4 .....: 1909.8 .....: 1438.6 .....: .....: .....: 40380 .....: Clark/Lake .....: 2020-09-01T00:00:00 .....: 2949.6 .....: 1657 .....: 1453.8 .....: .....: """ .....: In [390]: df = pd.read_xml(xml, stylesheet=xsl) In [391]: df Out[391]: station_id station_name ... avg_saturday_rides avg_sunday_holiday_rides 0 40850 Library ... 534.0 417.2 1 41700 Washington/Wabash ... 1909.8 1438.6 2 40380 Clark/Lake ... 1657.0 1453.8 [3 rows x 6 columns]

For very large XML files that can range from hundreds of megabytes to gigabytes, pandas.read_xml() supports parsing such sizeable files using lxml’s iterparse and etree’s iterparse, which are memory-efficient methods to iterate through an XML tree and extract specific elements and attributes without holding the entire tree in memory.

To use this feature, you must pass a physical XML file path into read_xml and use the iterparse argument. Files should not be compressed or point to online sources but be stored on local disk. Also, iterparse should be a dictionary where the key is the repeating node in the document (which becomes the rows) and the value is a list of any element or attribute that is a descendant (i.e. child, grandchild) of the repeating node. Since XPath is not used in this method, descendants do not need to share the same relationship with one another. Below is an example of reading in Wikipedia’s very large (12 GB+) latest article data dump.

In [1]: df = pd.read_xml( ... "/path/to/downloaded/enwikisource-latest-pages-articles.xml", ... iterparse = {"page": ["title", "ns", "id"]} ... ) ... df Out[2]: title ns id 0 Gettysburg Address 0 21450 1 Main Page 0 42950 2 Declaration by United Nations 0 8435 3 Constitution of the United States of America 0 8435 4 Declaration of Independence (Israel) 0 17858 ... ... ... ... 3578760 Page:Black cat 1897 07 v2 n10.pdf/17 104 219649 3578761 Page:Black cat 1897 07 v2 n10.pdf/43 104 219649 3578762 Page:Black cat 1897 07 v2 n10.pdf/44 104 219649 3578763 The History of Tom Jones, a Foundling/Book IX 0 12084291 3578764 Page:Shakespeare of Stratford (1926) Yale.djvu/91 104 21450 [3578765 rows x 3 columns]

Writing XML#

New in version 1.3.0.

DataFrame objects have an instance method to_xml which renders the contents of the DataFrame as an XML document.

Note

This method does not support special properties of XML, including DTD, CData, XSD schemas, processing instructions, comments, and others. Only namespaces at the root level are supported. However, stylesheet allows design changes after the initial output.

Let’s look at a few examples.

Write an XML without options:

In [392]: geom_df = pd.DataFrame( .....: { .....: "shape": ["square", "circle", "triangle"], .....: "degrees": [360, 360, 180], .....: "sides": [4, np.nan, 3], .....: } .....: ) .....: In [393]: print(geom_df.to_xml()) 0 square 360 4.0 1 circle 360 2 triangle 180 3.0

Write an XML with new root and row name:

In [394]: print(geom_df.to_xml(root_name="geometry", row_name="objects")) 0 square 360 4.0 1 circle 360 2 triangle 180 3.0

Write an attribute-centric XML:

In [395]: print(geom_df.to_xml(attr_cols=geom_df.columns.tolist()))

Write a mix of elements and attributes:

In [396]: print( .....: geom_df.to_xml( .....: index=False, .....: attr_cols=['shape'], .....: elem_cols=['degrees', 'sides']) .....: ) .....: 360 4.0 360 180 3.0

Any DataFrames with hierarchical columns will be flattened for XML element names with levels delimited by underscores:

In [397]: ext_geom_df = pd.DataFrame( .....: { .....: "type": ["polygon", "other", "polygon"], .....: "shape": ["square", "circle", "triangle"], .....: "degrees": [360, 360, 180], .....: "sides": [4, np.nan, 3], .....: } .....: ) .....: In [398]: pvt_df = ext_geom_df.pivot_table(index='shape', .....: columns='type', .....: values=['degrees', 'sides'], .....: aggfunc='sum') .....: In [399]: pvt_df Out[399]: degrees sides type other polygon other polygon shape circle 360.0 NaN 0.0 NaN square NaN 360.0 NaN 4.0 triangle NaN 180.0 NaN 3.0 In [400]: print(pvt_df.to_xml()) circle 360.0 0.0 square 360.0 4.0 triangle 180.0 3.0

Write an XML with default namespace:

In [401]: print(geom_df.to_xml(namespaces={"": "https://example.com"})) 0 square 360 4.0 1 circle 360 2 triangle 180 3.0

Write an XML with namespace prefix:

In [402]: print( .....: geom_df.to_xml(namespaces={"doc": "https://example.com"}, .....: prefix="doc") .....: ) .....: 0 square 360 4.0 1 circle 360 2 triangle 180 3.0

Write an XML without declaration or pretty print:

In [403]: print( .....: geom_df.to_xml(xml_declaration=False, .....: pretty_print=False) .....: ) .....: 0square3604.01circle3602triangle1803.0

Write an XML and transform with stylesheet:

In [404]: xsl = """ .....: .....: .....: .....: .....: .....: .....: .....: .....: .....: .....: polygon .....: .....: .....: .....: .....: .....: .....: .....: """ .....: In [405]: print(geom_df.to_xml(stylesheet=xsl)) square 360 4.0 circle 360 triangle 180 3.0

XML Final Notes#

  • All XML documents adhere to W3C specifications. Both the etree and lxml parsers will fail to parse any markup document that is not well-formed or does not follow XML syntax rules. Do be aware that HTML is not an XML document unless it follows XHTML specs. However, other popular markup types including KML, XAML, RSS, MusicML, and MathML are compliant XML schemas.

  • For the above reason, if your application builds XML prior to pandas operations, use appropriate DOM libraries like etree and lxml to build the necessary document rather than string concatenation or regex adjustments. Always remember that XML is a special text file with markup rules.

  • With very large XML files (several hundred MBs to GBs), XPath and XSLT can become memory-intensive operations. Be sure to have enough available RAM for reading and writing to large XML files (roughly about 5 times the size of text).

  • Because XSLT is a programming language, use it with caution since such scripts can pose a security risk in your environment and can run large or infinite recursive operations. Always test scripts on small fragments before full run.

  • The etree parser supports all functionality of both read_xml and to_xml except for complex XPath and any XSLT. Though limited in features, etree is still a reliable and capable parser and tree builder. Its performance may trail lxml to a certain degree for larger files, but the difference is relatively unnoticeable on small to medium-sized files.

Excel files#

The read_excel() method can read Excel 2007+ (.xlsx) files using the openpyxl Python module. Excel 2003 (.xls) files can be read using xlrd. Binary Excel (.xlsb) files can be read using pyxlsb. The to_excel() instance method is used for saving a DataFrame to Excel. Generally the semantics are similar to working with csv data. See the cookbook for some advanced strategies.

Warning

The xlwt package for writing old-style .xls excel files is no longer maintained. The xlrd package is now only for reading old-style .xls files.

Before pandas 1.3.0, the default argument engine=None to read_excel() would result in using the xlrd engine in many cases, including new Excel 2007+ (.xlsx) files. pandas will now default to using the openpyxl engine.

It is strongly encouraged to install openpyxl to read Excel 2007+ (.xlsx) files. Please do not report issues when using xlrd to read .xlsx files. This is no longer supported; switch to using openpyxl instead.

Attempting to use the xlwt engine will raise a FutureWarning unless the option io.excel.xls.writer is set to "xlwt". While this option is now deprecated and will also raise a FutureWarning, it can be globally set and the warning suppressed. Users are recommended to write .xlsx files using the openpyxl engine instead.

Reading Excel files#

In the most basic use-case, read_excel takes a path to an Excel file, and the sheet_name indicating which sheet to parse.

# Returns a DataFrame pd.read_excel("path_to_file.xls", sheet_name="Sheet1")

ExcelFile class#

To facilitate working with multiple sheets from the same file, the ExcelFile class can be used to wrap the file and can be passed into read_excel. There will be a performance benefit when reading multiple sheets, as the file is read into memory only once.

xlsx = pd.ExcelFile("path_to_file.xls") df = pd.read_excel(xlsx, "Sheet1")

The ExcelFile class can also be used as a context manager.

with pd.ExcelFile("path_to_file.xls") as xls: df1 = pd.read_excel(xls, "Sheet1") df2 = pd.read_excel(xls, "Sheet2")

The sheet_names property will generate a list of the sheet names in the file.
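
For example, a minimal sketch (using a hypothetical file name) that lists the available sheets before deciding what to parse:

with pd.ExcelFile("path_to_file.xls") as xls:
    # sheet_names is populated when the file is opened, without parsing any sheet
    print(xls.sheet_names)  # e.g. ['Sheet1', 'Sheet2']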

The primary use-case for an ExcelFile is parsing multiple sheets with different parameters:

data = {} # For when Sheet1's format differs from Sheet2 with pd.ExcelFile("path_to_file.xls") as xls: data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"]) data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=1)

Note that if the same parsing parameters are used for all sheets, a list of sheet names can simply be passed to read_excel with no loss in performance.

# using the ExcelFile class data = {} with pd.ExcelFile("path_to_file.xls") as xls: data["Sheet1"] = pd.read_excel(xls, "Sheet1", index_col=None, na_values=["NA"]) data["Sheet2"] = pd.read_excel(xls, "Sheet2", index_col=None, na_values=["NA"]) # equivalent using the read_excel function data = pd.read_excel( "path_to_file.xls", ["Sheet1", "Sheet2"], index_col=None, na_values=["NA"] )

ExcelFile can also be called with an xlrd.book.Book object as a parameter. This allows the user to control how the excel file is read. For example, sheets can be loaded on demand by calling xlrd.open_workbook() with on_demand=True.

import xlrd xlrd_book = xlrd.open_workbook("path_to_file.xls", on_demand=True) with pd.ExcelFile(xlrd_book) as xls: df1 = pd.read_excel(xls, "Sheet1") df2 = pd.read_excel(xls, "Sheet2")

Specifying sheets#

Note

The second argument is sheet_name, not to be confused with ExcelFile.sheet_names.

Note

An ExcelFile’s attribute sheet_names provides access to a list of sheets.

  • The argument sheet_name allows specifying the sheet or sheets to read.

  • The default value for sheet_name is 0, indicating to read the first sheet.

  • Pass a string to refer to the name of a particular sheet in the workbook.

  • Pass an integer to refer to the index of a sheet. Indices follow Python convention, beginning at 0.

  • Pass a list of either strings or integers, to return a dictionary of specified sheets.

  • Pass a None to return a dictionary of all available sheets.

# Returns a DataFrame pd.read_excel("path_to_file.xls", "Sheet1", index_col=None, na_values=["NA"])

Using the sheet index:

# Returns a DataFrame pd.read_excel("path_to_file.xls", 0, index_col=None, na_values=["NA"])

Using all default values:

# Returns a DataFrame pd.read_excel("path_to_file.xls")

Using None to get all sheets:

# Returns a dictionary of DataFrames pd.read_excel("path_to_file.xls", sheet_name=None)

Using a list to get multiple sheets:

# Returns the 1st and 4th sheet, as a dictionary of DataFrames. pd.read_excel("path_to_file.xls", sheet_name=["Sheet1", 3])

read_excel can read more than one sheet, by setting sheet_name to either a list of sheet names, a list of sheet positions, or None to read all sheets. Sheets can be specified by sheet index or sheet name, using an integer or string, respectively.

Reading a MultiIndex#

read_excel can read a MultiIndex index, by passing a list of columns to index_col and a MultiIndex column by passing a list of rows to header. If either the index or columns have serialized level names those will be read in as well by specifying the rows/columns that make up the levels.

For example, to read in a MultiIndex index without names:

In [406]: df = pd.DataFrame( .....: {"a": [1, 2, 3, 4], "b": [5, 6, 7, 8]}, .....: index=pd.MultiIndex.from_product([["a", "b"], ["c", "d"]]), .....: ) .....: In [407]: df.to_excel("path_to_file.xlsx") In [408]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1]) In [409]: df Out[409]: a b a c 1 5 d 2 6 b c 3 7 d 4 8

If the index has level names, they will be parsed as well, using the same parameters.

In [410]: df.index = df.index.set_names(["lvl1", "lvl2"]) In [411]: df.to_excel("path_to_file.xlsx") In [412]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1]) In [413]: df Out[413]: a b lvl1 lvl2 a c 1 5 d 2 6 b c 3 7 d 4 8

If the source file has both MultiIndex index and columns, lists specifying each should be passed to index_col and header:

In [414]: df.columns = pd.MultiIndex.from_product([["a"], ["b", "d"]], names=["c1", "c2"]) In [415]: df.to_excel("path_to_file.xlsx") In [416]: df = pd.read_excel("path_to_file.xlsx", index_col=[0, 1], header=[0, 1]) In [417]: df Out[417]: c1 a c2 b d lvl1 lvl2 a c 1 5 d 2 6 b c 3 7 d 4 8

Missing values in columns specified in index_col will be forward filled to allow roundtripping with to_excel for merged_cells=True. To avoid forward filling the missing values, use set_index after reading the data instead of index_col.
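
To illustrate, a minimal sketch (assuming a file whose first two columns are named lvl1 and lvl2) that reads without index_col and builds the index afterwards, so missing cells stay as NaN rather than being forward filled:

df = pd.read_excel("path_to_file.xlsx")  # no index_col, so nothing is forward filled
df = df.set_index(["lvl1", "lvl2"])      # build the MultiIndex explicitly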

Parsing specific columns#

It is often the case that users will insert columns to do temporary computations in Excel and you may not want to read in those columns. read_excel takes a usecols keyword to allow you to specify a subset of columns to parse.

Changed in version 1.0.0.

Passing in an integer for usecols will no longer work. Please pass in a list of ints from 0 to usecols inclusive instead.

You can specify a comma-delimited set of Excel columns and ranges as a string:

pd.read_excel("path_to_file.xls", "Sheet1", usecols="A,C:E")

If usecols is a list of integers, then it is assumed to be the file column indices to be parsed.

pd.read_excel("path_to_file.xls", "Sheet1", usecols=[0, 2, 3])

Element order is ignored, so usecols=[0, 1] is the same as [1, 0].

If usecols is a list of strings, it is assumed that each string corresponds to a column name provided either by the user in names or inferred from the document header row(s). Those strings define which columns will be parsed:

pd.read_excel("path_to_file.xls", "Sheet1", usecols=["foo", "bar"])

Element order is ignored, so usecols=['baz', 'joe'] is the same as ['joe', 'baz'].

If usecols is callable, the callable function will be evaluated against the column names, returning names where the callable function evaluates to True.

pd.read_excel("path_to_file.xls", "Sheet1", usecols=lambda x: x.isalpha())

Parsing dates#

Datetime-like values are normally automatically converted to the appropriate dtype when reading the excel file. But if you have a column of strings that look like dates (but are not actually formatted as dates in excel), you can use the parse_dates keyword to parse those strings to datetimes:

pd.read_excel("path_to_file.xls", "Sheet1", parse_dates=["date_strings"])

Cell converters#

It is possible to transform the contents of Excel cells via the converters option. For instance, to convert a column to boolean:

pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyBools": bool})

This option handles missing values and treats exceptions in the converters as missing data. Transformations are applied cell by cell rather than to the column as a whole, so the array dtype is not guaranteed. For instance, a column of integers with missing values cannot be transformed to an array with integer dtype, because NaN is strictly a float. You can manually mask missing data to recover integer dtype:

def cfun(x): return int(x) if x else -1 pd.read_excel("path_to_file.xls", "Sheet1", converters={"MyInts": cfun})

Dtype specifications#

As an alternative to converters, the type for an entire column can be specified using the dtype keyword, which takes a dictionary mapping column names to types. To interpret data with no type inference, use the type str or object.

pd.read_excel("path_to_file.xls", dtype={"MyInts": "int64", "MyText": str})

Writing Excel files#

Writing Excel files to disk#

To write a DataFrame object to a sheet of an Excel file, you can use the to_excel instance method. The arguments are largely the same as to_csv described above, the first argument being the name of the excel file, and the optional second argument the name of the sheet to which the DataFrame should be written. For example:

df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")

Files with a .xls extension will be written using xlwt and those with a .xlsx extension will be written using xlsxwriter (if available) or openpyxl.

The DataFrame will be written in a way that tries to mimic the REPL output. The index_label will be placed in the second row instead of the first. You can place it in the first row by setting the merge_cells option in to_excel() to False:

df.to_excel("path_to_file.xlsx", index_label="label", merge_cells=False)

In order to write separate DataFrames to separate sheets in a single Excel file, one can pass an ExcelWriter.

with pd.ExcelWriter("path_to_file.xlsx") as writer: df1.to_excel(writer, sheet_name="Sheet1") df2.to_excel(writer, sheet_name="Sheet2")

Writing Excel files to memory#

pandas supports writing Excel files to buffer-like objects such as StringIO or BytesIO using ExcelWriter.

from io import BytesIO bio = BytesIO() # By setting the 'engine' in the ExcelWriter constructor. writer = pd.ExcelWriter(bio, engine="xlsxwriter") df.to_excel(writer, sheet_name="Sheet1") # Save the workbook writer.save() # Seek to the beginning and read to copy the workbook to a variable in memory bio.seek(0) workbook = bio.read()

Note

engine is optional but recommended. Setting the engine determines the version of workbook produced. Setting engine='xlwt' will produce an Excel 2003-format workbook (xls). Using either 'openpyxl' or 'xlsxwriter' will produce an Excel 2007-format workbook (xlsx). If omitted, an Excel 2007-formatted workbook is produced.

Excel writer engines#

Deprecated since version 1.2.0: As the xlwt package is no longer maintained, the xlwt engine will be removed from a future version of pandas. This is the only engine in pandas that supports writing to .xls files.

pandas chooses an Excel writer via two methods:

  1. the engine keyword argument

  2. the filename extension (via the default specified in config options)

By default, pandas uses XlsxWriter for .xlsx, openpyxl for .xlsm, and xlwt for .xls files. If you have multiple engines installed, you can set the default engine by setting the config options io.excel.xlsx.writer and io.excel.xls.writer. pandas will fall back on openpyxl for .xlsx files if XlsxWriter is not available.

To specify which writer you want to use, you can pass an engine keyword argument to to_excel and to ExcelWriter. The built-in engines are:

  • openpyxl: version 2.4 or higher is required

  • xlsxwriter

  • xlwt

# By setting the 'engine' in the DataFrame 'to_excel()' methods. df.to_excel("path_to_file.xlsx", sheet_name="Sheet1", engine="xlsxwriter") # By setting the 'engine' in the ExcelWriter constructor. writer = pd.ExcelWriter("path_to_file.xlsx", engine="xlsxwriter") # Or via pandas configuration. from pandas import options # noqa: E402 options.io.excel.xlsx.writer = "xlsxwriter" df.to_excel("path_to_file.xlsx", sheet_name="Sheet1")

Style and formatting#

The look and feel of Excel worksheets created from pandas can be modified using the following parameters on the DataFrame’s to_excel method.

  • float_format : Format string for floating point numbers (default None).

  • freeze_panes : A tuple of two integers representing the bottommost row and rightmost column to freeze. Each of these parameters is one-based, so (1, 1) will freeze the first row and first column (default None).
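
For example, a minimal sketch (using a hypothetical df) that combines both parameters:

df.to_excel(
    "path_to_file.xlsx",
    sheet_name="Sheet1",
    float_format="%.2f",  # render floats with two decimal places
    freeze_panes=(1, 1),  # keep the first row and first column visible when scrolling
)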

Using the Xlsxwriter engine provides many options for controlling the format of an Excel worksheet created with the to_excel method. Excellent examples can be found in the Xlsxwriter documentation here: https://xlsxwriter.readthedocs.io/working_with_pandas.html

OpenDocument Spreadsheets#

New in version 0.25.

The read_excel() method can also read OpenDocument spreadsheets using the odfpy module. The semantics and features for reading OpenDocument spreadsheets match what can be done for Excel files using engine='odf'.

# Returns a DataFrame pd.read_excel("path_to_file.ods", engine="odf")

Note

Currently pandas only supports reading OpenDocument spreadsheets. Writing is not implemented.

Binary Excel (.xlsb) files#

New in version 1.0.0.

The read_excel() method can also read binary Excel files using the pyxlsb module. The semantics and features for reading binary Excel files mostly match what can be done for Excel files using engine='pyxlsb'. pyxlsb does not recognize datetime types in files and will return floats instead.

# Returns a DataFrame pd.read_excel("path_to_file.xlsb", engine="pyxlsb")

Note

Currently pandas only supports reading binary Excel files. Writing is not implemented.

Clipboard#

A handy way to grab data is to use the read_clipboard() method, which takes the contents of the clipboard buffer and passes them to the read_csv method. For instance, you can copy the following text to the clipboard (CTRL-C on many operating systems):

  A B C
x 1 4 p
y 2 5 q
z 3 6 r

And then import the data directly to a DataFrame by calling:

>>> clipdf = pd.read_clipboard() >>> clipdf A B C x 1 4 p y 2 5 q z 3 6 r

The to_clipboard method can be used to write the contents of a DataFrame to the clipboard. Following which you can paste the clipboard contents into other applications (CTRL-V on many operating systems). Here we illustrate writing a DataFrame into clipboard and reading it back.

>>> df = pd.DataFrame( ... {"A": [1, 2, 3], "B": [4, 5, 6], "C": ["p", "q", "r"]}, index=["x", "y", "z"] ... ) >>> df A B C x 1 4 p y 2 5 q z 3 6 r >>> df.to_clipboard() >>> pd.read_clipboard() A B C x 1 4 p y 2 5 q z 3 6 r

We can see that we got the same content back, which we had earlier written to the clipboard.

Note

You may need to install xclip or xsel (with PyQt5, PyQt4 or qtpy) on Linux to use these methods.

Pickling#

All pandas objects are equipped with to_pickle methods which use Python’s pickle module to save data structures to disk using the pickle format.

In [418]: df Out[418]: c1 a c2 b d lvl1 lvl2 a c 1 5 d 2 6 b c 3 7 d 4 8 In [419]: df.to_pickle("foo.pkl")

The read_pickle function in the pandas namespace can be used to load any pickled pandas object (or any other pickled object) from file:

In [420]: pd.read_pickle("foo.pkl") Out[420]: c1 a c2 b d lvl1 lvl2 a c 1 5 d 2 6 b c 3 7 d 4 8

Warning

read_pickle() is only guaranteed backwards compatible back to pandas version 0.20.3

Compressed pickle files#

read_pickle(), DataFrame.to_pickle() and Series.to_pickle() can read and write compressed pickle files. The compression types of gzip, bz2, xz, zstd are supported for reading and writing. The zip file format only supports reading and must contain only one data file to be read.

The compression type can be an explicit parameter or be inferred from the file extension. If ‘infer’, then use gzip, bz2, zip, xz, zstd if filename ends in '.gz', '.bz2', '.zip', '.xz', or '.zst', respectively.

The compression parameter can also be a dict in order to pass options to the compression protocol. It must have a 'method' key set to the name of the compression protocol, which must be one of {'zip', 'gzip', 'bz2', 'xz', 'zstd'}. All other key-value pairs are passed to the underlying compression library.

In [421]: df = pd.DataFrame( .....: { .....: "A": np.random.randn(1000), .....: "B": "foo", .....: "C": pd.date_range("20130101", periods=1000, freq="s"), .....: } .....: ) .....: In [422]: df Out[422]: A B C 0 -0.828876 foo 2013-01-01 00:00:00 1 -0.110383 foo 2013-01-01 00:00:01 2 2.357598 foo 2013-01-01 00:00:02 3 -1.620073 foo 2013-01-01 00:00:03 4 0.440903 foo 2013-01-01 00:00:04 .. ... ... ... 995 -1.177365 foo 2013-01-01 00:16:35 996 1.236988 foo 2013-01-01 00:16:36 997 0.743946 foo 2013-01-01 00:16:37 998 -0.533097 foo 2013-01-01 00:16:38 999 -0.140850 foo 2013-01-01 00:16:39 [1000 rows x 3 columns]

Using an explicit compression type:

In [423]: df.to_pickle("data.pkl.compress", compression="gzip") In [424]: rt = pd.read_pickle("data.pkl.compress", compression="gzip") In [425]: rt Out[425]: A B C 0 -0.828876 foo 2013-01-01 00:00:00 1 -0.110383 foo 2013-01-01 00:00:01 2 2.357598 foo 2013-01-01 00:00:02 3 -1.620073 foo 2013-01-01 00:00:03 4 0.440903 foo 2013-01-01 00:00:04 .. ... ... ... 995 -1.177365 foo 2013-01-01 00:16:35 996 1.236988 foo 2013-01-01 00:16:36 997 0.743946 foo 2013-01-01 00:16:37 998 -0.533097 foo 2013-01-01 00:16:38 999 -0.140850 foo 2013-01-01 00:16:39 [1000 rows x 3 columns]

Inferring compression type from the extension:

In [426]: df.to_pickle("data.pkl.xz", compression="infer") In [427]: rt = pd.read_pickle("data.pkl.xz", compression="infer") In [428]: rt Out[428]: A B C 0 -0.828876 foo 2013-01-01 00:00:00 1 -0.110383 foo 2013-01-01 00:00:01 2 2.357598 foo 2013-01-01 00:00:02 3 -1.620073 foo 2013-01-01 00:00:03 4 0.440903 foo 2013-01-01 00:00:04 .. ... ... ... 995 -1.177365 foo 2013-01-01 00:16:35 996 1.236988 foo 2013-01-01 00:16:36 997 0.743946 foo 2013-01-01 00:16:37 998 -0.533097 foo 2013-01-01 00:16:38 999 -0.140850 foo 2013-01-01 00:16:39 [1000 rows x 3 columns]

The default is to ‘infer’:

In [429]: df.to_pickle("data.pkl.gz") In [430]: rt = pd.read_pickle("data.pkl.gz") In [431]: rt Out[431]: A B C 0 -0.828876 foo 2013-01-01 00:00:00 1 -0.110383 foo 2013-01-01 00:00:01 2 2.357598 foo 2013-01-01 00:00:02 3 -1.620073 foo 2013-01-01 00:00:03 4 0.440903 foo 2013-01-01 00:00:04 .. ... ... ... 995 -1.177365 foo 2013-01-01 00:16:35 996 1.236988 foo 2013-01-01 00:16:36 997 0.743946 foo 2013-01-01 00:16:37 998 -0.533097 foo 2013-01-01 00:16:38 999 -0.140850 foo 2013-01-01 00:16:39 [1000 rows x 3 columns] In [432]: df["A"].to_pickle("s1.pkl.bz2") In [433]: rt = pd.read_pickle("s1.pkl.bz2") In [434]: rt Out[434]: 0 -0.828876 1 -0.110383 2 2.357598 3 -1.620073 4 0.440903 ... 995 -1.177365 996 1.236988 997 0.743946 998 -0.533097 999 -0.140850 Name: A, Length: 1000, dtype: float64

Passing options to the compression protocol in order to speed up compression:

In [435]: df.to_pickle("data.pkl.gz", compression={"method": "gzip", "compresslevel": 1})

msgpack#

pandas support for msgpack has been removed in version 1.0.0. It is recommended to use pickle instead.

Alternatively, you can also use the Arrow IPC serialization format for on-the-wire transmission of pandas objects. For documentation on pyarrow, see here.
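
As a rough illustration only (this uses pyarrow, not pandas, and assumes pyarrow is installed), a DataFrame df can be round-tripped through the Arrow IPC stream format:

import pyarrow as pa

table = pa.Table.from_pandas(df)  # convert the DataFrame to an Arrow table
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)  # serialize to the IPC stream format
buf = sink.getvalue()  # bytes suitable for on-the-wire transmission
with pa.ipc.open_stream(buf) as reader:
    roundtripped = reader.read_all().to_pandas()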

HDF5 (PyTables)#

HDFStore is a dict-like object which reads and writes pandas objects using the high-performance HDF5 format via the excellent PyTables library. See the cookbook for some advanced strategies.

Warning

pandas uses PyTables for reading and writing HDF5 files, which allows serializing object-dtype data with pickle. Loading pickled data received from untrusted sources can be unsafe.

See: https://docs.python.org/3/library/pickle.html for more.

In [436]: store = pd.HDFStore("store.h5") In [437]: print(store) File path: store.h5

Objects can be written to the file just like adding key-value pairs to a dict:

In [438]: index = pd.date_range("1/1/2000", periods=8) In [439]: s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"]) In [440]: df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"]) # store.put('s', s) is an equivalent method In [441]: store["s"] = s In [442]: store["df"] = df In [443]: store Out[443]: File path: store.h5

In a current or later Python session, you can retrieve stored objects:

# store.get('df') is an equivalent method In [444]: store["df"] Out[444]: A B C 2000-01-01 -0.398501 -0.677311 -0.874991 2000-01-02 -1.167564 -0.593353 0.146262 2000-01-03 -0.131959 0.089012 0.667450 2000-01-04 0.169405 -1.358046 -0.105563 2000-01-05 0.492195 0.076693 0.213685 2000-01-06 -0.285283 -1.210529 -1.408386 2000-01-07 0.941577 -0.342447 0.222031 2000-01-08 0.052607 2.093214 1.064908 # dotted (attribute) access provides get as well In [445]: store.df Out[445]: A B C 2000-01-01 -0.398501 -0.677311 -0.874991 2000-01-02 -1.167564 -0.593353 0.146262 2000-01-03 -0.131959 0.089012 0.667450 2000-01-04 0.169405 -1.358046 -0.105563 2000-01-05 0.492195 0.076693 0.213685 2000-01-06 -0.285283 -1.210529 -1.408386 2000-01-07 0.941577 -0.342447 0.222031 2000-01-08 0.052607 2.093214 1.064908

Deletion of the object specified by the key:

# store.remove('df') is an equivalent method In [446]: del store["df"] In [447]: store Out[447]: File path: store.h5

Closing a Store and using a context manager:

In [448]: store.close() In [449]: store Out[449]: File path: store.h5 In [450]: store.is_open Out[450]: False # Working with, and automatically closing the store using a context manager In [451]: with pd.HDFStore("store.h5") as store: .....: store.keys() .....:

Read/write API#

HDFStore supports a top-level API using read_hdf for reading and to_hdf for writing, similar to how read_csv and to_csv work.

In [452]: df_tl = pd.DataFrame({"A": list(range(5)), "B": list(range(5))}) In [453]: df_tl.to_hdf("store_tl.h5", "table", append=True) In [454]: pd.read_hdf("store_tl.h5", "table", where=["index>2"]) Out[454]: A B 3 3 3 4 4 4

HDFStore will by default not drop rows that are all missing. This behavior can be changed by setting dropna=True.

In [455]: df_with_missing = pd.DataFrame( .....: { .....: "col1": [0, np.nan, 2], .....: "col2": [1, np.nan, np.nan], .....: } .....: ) .....: In [456]: df_with_missing Out[456]: col1 col2 0 0.0 1.0 1 NaN NaN 2 2.0 NaN In [457]: df_with_missing.to_hdf("file.h5", "df_with_missing", format="table", mode="w") In [458]: pd.read_hdf("file.h5", "df_with_missing") Out[458]: col1 col2 0 0.0 1.0 1 NaN NaN 2 2.0 NaN In [459]: df_with_missing.to_hdf( .....: "file.h5", "df_with_missing", format="table", mode="w", dropna=True .....: ) .....: In [460]: pd.read_hdf("file.h5", "df_with_missing") Out[460]: col1 col2 0 0.0 1.0 2 2.0 NaN

Fixed format#

The examples above show storing using put, which writes the HDF5 to PyTables in a fixed array format, called the fixed format. These types of stores are not appendable once written (though you can simply remove them and rewrite). Nor are they queryable; they must be retrieved in their entirety. They also do not support dataframes with non-unique column names. The fixed format stores offer very fast writing and slightly faster reading than table stores. This format is specified by default when using put or to_hdf or by format='fixed' or format='f'.
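
For example, a minimal sketch (using the df from above and a hypothetical file name) that explicitly requests the fixed format and reads the object back in its entirety:

df.to_hdf("store_fixed.h5", "df", format="fixed")  # not appendable or queryable
pd.read_hdf("store_fixed.h5", "df")                # must be retrieved whole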

Warning

A fixed format will raise a TypeError if you try to retrieve using a where:

>>> pd.DataFrame(np.random.randn(10, 2)).to_hdf("test_fixed.h5", "df") >>> pd.read_hdf("test_fixed.h5", "df", where="index>5") TypeError: cannot pass a where specification when reading a fixed format. this store must be selected in its entirety

Table format#

HDFStore supports another PyTables format on disk, the table format. Conceptually a table is shaped very much like a DataFrame, with rows and columns. A table may be appended to in the same or other sessions. In addition, delete and query type operations are supported. This format is specified by format='table' or format='t' to append or put or to_hdf.

This format can also be set as an option, pd.set_option('io.hdf.default_format', 'table'), to enable put/append/to_hdf to store in the table format by default.
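
For example, a minimal sketch of setting this option so that subsequent writes use the table format without passing format explicitly:

pd.set_option("io.hdf.default_format", "table")
df.to_hdf("store_default.h5", "df")  # stored as a queryable, appendable table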

In [461]: store = pd.HDFStore("store.h5") In [462]: df1 = df[0:4] In [463]: df2 = df[4:] # append data (creates a table automatically) In [464]: store.append("df", df1) In [465]: store.append("df", df2) In [466]: store Out[466]: File path: store.h5 # select the entire object In [467]: store.select("df") Out[467]: A B C 2000-01-01 -0.398501 -0.677311 -0.874991 2000-01-02 -1.167564 -0.593353 0.146262 2000-01-03 -0.131959 0.089012 0.667450 2000-01-04 0.169405 -1.358046 -0.105563 2000-01-05 0.492195 0.076693 0.213685 2000-01-06 -0.285283 -1.210529 -1.408386 2000-01-07 0.941577 -0.342447 0.222031 2000-01-08 0.052607 2.093214 1.064908 # the type of stored data In [468]: store.root.df._v_attrs.pandas_type Out[468]: 'frame_table'

Note

You can also create a table by passing format='table' or format='t' to a put operation.

Hierarchical keys#

Keys to a store can be specified as a string. These can be in a hierarchical path-name like format (e.g. foo/bar/bah), which will generate a hierarchy of sub-stores (or Groups in PyTables parlance). Keys can be specified without the leading ‘/’ and are always absolute (e.g. ‘foo’ refers to ‘/foo’). Removal operations can remove everything in the sub-store and below, so be careful.

In [469]: store.put("foo/bar/bah", df) In [470]: store.append("food/orange", df) In [471]: store.append("food/apple", df) In [472]: store Out[472]: File path: store.h5 # a list of keys are returned In [473]: store.keys() Out[473]: ['/df', '/food/apple', '/food/orange', '/foo/bar/bah'] # remove all nodes under this level In [474]: store.remove("food") In [475]: store Out[475]: File path: store.h5

You can walk through the group hierarchy using the walk method which will yield a tuple for each group key along with the relative keys of its contents.

In [476]: for (path, subgroups, subkeys) in store.walk(): .....: for subgroup in subgroups: .....: print("GROUP: {}/{}".format(path, subgroup)) .....: for subkey in subkeys: .....: key = "/".join([path, subkey]) .....: print("KEY: {}".format(key)) .....: print(store.get(key)) .....: GROUP: /foo KEY: /df A B C 2000-01-01 -0.398501 -0.677311 -0.874991 2000-01-02 -1.167564 -0.593353 0.146262 2000-01-03 -0.131959 0.089012 0.667450 2000-01-04 0.169405 -1.358046 -0.105563 2000-01-05 0.492195 0.076693 0.213685 2000-01-06 -0.285283 -1.210529 -1.408386 2000-01-07 0.941577 -0.342447 0.222031 2000-01-08 0.052607 2.093214 1.064908 GROUP: /foo/bar KEY: /foo/bar/bah A B C 2000-01-01 -0.398501 -0.677311 -0.874991 2000-01-02 -1.167564 -0.593353 0.146262 2000-01-03 -0.131959 0.089012 0.667450 2000-01-04 0.169405 -1.358046 -0.105563 2000-01-05 0.492195 0.076693 0.213685 2000-01-06 -0.285283 -1.210529 -1.408386 2000-01-07 0.941577 -0.342447 0.222031 2000-01-08 0.052607 2.093214 1.064908

Warning

Hierarchical keys cannot be retrieved as dotted (attribute) access as described above for items stored under the root node.

In [8]: store.foo.bar.bah AttributeError: 'HDFStore' object has no attribute 'foo' # you can directly access the actual PyTables node but using the root node In [9]: store.root.foo.bar.bah Out[9]: /foo/bar/bah (Group) '' children := ['block0_items' (Array), 'block0_values' (Array), 'axis0' (Array), 'axis1' (Array)]

Instead, use explicit string based keys:

In [477]: store["foo/bar/bah"] Out[477]: A B C 2000-01-01 -0.398501 -0.677311 -0.874991 2000-01-02 -1.167564 -0.593353 0.146262 2000-01-03 -0.131959 0.089012 0.667450 2000-01-04 0.169405 -1.358046 -0.105563 2000-01-05 0.492195 0.076693 0.213685 2000-01-06 -0.285283 -1.210529 -1.408386 2000-01-07 0.941577 -0.342447 0.222031 2000-01-08 0.052607 2.093214 1.064908

Storing types#

Storing mixed types in a table#

Storing mixed-dtype data is supported. Strings are stored with a fixed width using the maximum size of the appended column. Subsequent attempts at appending longer strings will raise a ValueError.

Passing min_itemsize={'values': size} as a parameter to append will set a larger minimum for the string columns. Storing floats, strings, ints, bools, and datetime64 is currently supported. For string columns, passing nan_rep='nan' to append will change the default NaN representation on disk (which converts to/from np.nan); this defaults to nan.

In [478]: df_mixed = pd.DataFrame( .....: { .....: "A": np.random.randn(8), .....: "B": np.random.randn(8), .....: "C": np.array(np.random.randn(8), dtype="float32"), .....: "string": "string", .....: "int": 1, .....: "bool": True, .....: "datetime64": pd.Timestamp("20010102"), .....: }, .....: index=list(range(8)), .....: ) .....: In [479]: df_mixed.loc[df_mixed.index[3:5], ["A", "B", "string", "datetime64"]] = np.nan In [480]: store.append("df_mixed", df_mixed, min_itemsize={"values": 50}) In [481]: df_mixed1 = store.select("df_mixed") In [482]: df_mixed1 Out[482]: A B C string int bool datetime64 0 1.778161 -0.898283 -0.263043 string 1 True 2001-01-02 1 -0.913867 -0.218499 -0.639244 string 1 True 2001-01-02 2 -0.030004 1.408028 -0.866305 string 1 True 2001-01-02 3 NaN NaN -0.225250 NaN 1 True NaT 4 NaN NaN -0.890978 NaN 1 True NaT 5 0.081323 0.520995 -0.553839 string 1 True 2001-01-02 6 -0.268494 0.620028 -2.762875 string 1 True 2001-01-02 7 0.168016 0.159416 -1.244763 string 1 True 2001-01-02 In [483]: df_mixed1.dtypes.value_counts() Out[483]: float64 2 float32 1 object 1 int64 1 bool 1 datetime64[ns] 1 dtype: int64 # we have provided a minimum string column size In [484]: store.root.df_mixed.table Out[484]: /df_mixed/table (Table(8,)) '' description := { "index": Int64Col(shape=(), dflt=0, pos=0), "values_block_0": Float64Col(shape=(2,), dflt=0.0, pos=1), "values_block_1": Float32Col(shape=(1,), dflt=0.0, pos=2), "values_block_2": StringCol(itemsize=50, shape=(1,), dflt=b'', pos=3), "values_block_3": Int64Col(shape=(1,), dflt=0, pos=4), "values_block_4": BoolCol(shape=(1,), dflt=False, pos=5), "values_block_5": Int64Col(shape=(1,), dflt=0, pos=6)} byteorder := 'little' chunkshape := (689,) autoindex := True colindexes := { "index": Index(6, mediumshuffle, zlib(1)).is_csi=False}
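
In the same vein, a minimal sketch (using a hypothetical key and representation string) that also changes the on-disk NaN representation for string columns:

store.append("df_mixed_nan", df_mixed, min_itemsize={"values": 50}, nan_rep="_missing_")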

Storing MultiIndex DataFrames#

Storing MultiIndex DataFrames as tables is very similar to storing/selecting from homogeneous index DataFrames.

In [485]: index = pd.MultiIndex( .....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]], .....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], .....: names=["foo", "bar"], .....: ) .....: In [486]: df_mi = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"]) In [487]: df_mi Out[487]: A B C foo bar foo one -1.280289 0.692545 -0.536722 two 1.005707 0.296917 0.139796 three -1.083889 0.811865 1.648435 bar one -0.164377 -0.402227 1.618922 two -1.424723 -0.023232 0.948196 baz two 0.183573 0.145277 0.308146 three -1.043530 -0.708145 1.430905 qux one -0.850136 0.813949 1.508891 two -1.556154 0.187597 1.176488 three -1.246093 -0.002726 -0.444249 In [488]: store.append("df_mi", df_mi) In [489]: store.select("df_mi") Out[489]: A B C foo bar foo one -1.280289 0.692545 -0.536722 two 1.005707 0.296917 0.139796 three -1.083889 0.811865 1.648435 bar one -0.164377 -0.402227 1.618922 two -1.424723 -0.023232 0.948196 baz two 0.183573 0.145277 0.308146 three -1.043530 -0.708145 1.430905 qux one -0.850136 0.813949 1.508891 two -1.556154 0.187597 1.176488 three -1.246093 -0.002726 -0.444249 # the levels are automatically included as data columns In [490]: store.select("df_mi", "foo=bar") Out[490]: A B C foo bar bar one -0.164377 -0.402227 1.618922 two -1.424723 -0.023232 0.948196

Note

The index keyword is reserved and cannot be used as a level name.

Querying#

Querying a table#

select and delete operations have an optional criterion that can be specified to select/delete only a subset of the data. This allows one to have a very large on-disk table and retrieve only a portion of the data.

A query is specified using the Term class under the hood, as a boolean expression.

  • index and columns are supported indexers of DataFrames.

  • if data_columns are specified, these can be used as additional indexers.

  • level name in a MultiIndex, with default name level_0, level_1, … if not provided.

Valid comparison operators are:

=, ==, !=, >, >=, <, <=

Valid boolean expressions are combined with:

  • | : or

  • & : and

  • ( and ) : for grouping

These rules are similar to how boolean expressions are used in pandas for indexing.

Note

  • = will be automatically expanded to the comparison operator ==

  • ~ is the not operator, but can only be used in very limited circumstances

  • If a list/tuple of expressions is passed they will be combined via &

The following are valid expressions:

  • 'index >= date'

  • "columns = ['A', 'D']"

  • "columns in ['A', 'D']"

  • 'columns = A'

  • 'columns == A'

  • "~(columns = ['A', 'B'])"

  • 'index > df.index[3] & string = "bar"'

  • '(index > df.index[3] & index <= df.index[6]) | string = "bar"'

  • "ts >= Timestamp('2012-02-01')"

  • "major_axis>=20130101"

The indexers are on the left-hand side of the sub-expression:

columns, major_axis, ts

The right-hand side of the sub-expression (after a comparison operator) can be:

  • functions that will be evaluated, e.g. Timestamp('2012-02-01')

  • strings, e.g. "bar"

  • date-like, e.g. 20130101, or "20130101"

  • lists, e.g. "['A', 'B']"

  • variables that are defined in the local names space, e.g. date

Note

Passing a string to a query by interpolating it into the query expression is not recommended. Simply assign the string of interest to a variable and use that variable in an expression. For example, do this

string = "HolyMoly'" store.select("df", "index == string")

instead of this

string = "HolyMoly'" store.select('df', f'index == {string}')

The latter will not work and will raise a SyntaxError. Note that there’s a single quote followed by a double quote in the string variable.

If you must interpolate, use the '%r' format specifier

store.select("df", "index == %r" % string)

which will quote string.

Here are some examples:

In [491]: dfq = pd.DataFrame( .....: np.random.randn(10, 4), .....: columns=list("ABCD"), .....: index=pd.date_range("20130101", periods=10), .....: ) .....: In [492]: store.append("dfq", dfq, format="table", data_columns=True)

Use boolean expressions, with in-line function evaluation.

In [493]: store.select("dfq", "index>pd.Timestamp('20130104') & columns=['A', 'B']") Out[493]: A B 2013-01-05 1.366810 1.073372 2013-01-06 2.119746 -2.628174 2013-01-07 0.337920 -0.634027 2013-01-08 1.053434 1.109090 2013-01-09 -0.772942 -0.269415 2013-01-10 0.048562 -0.285920

Use inline column reference.

In [494]: store.select("dfq", where="A>0 or C>0") Out[494]: A B C D 2013-01-01 0.856838 1.491776 0.001283 0.701816 2013-01-02 -1.097917 0.102588 0.661740 0.443531 2013-01-03 0.559313 -0.459055 -1.222598 -0.455304 2013-01-05 1.366810 1.073372 -0.994957 0.755314 2013-01-06 2.119746 -2.628174 -0.089460 -0.133636 2013-01-07 0.337920 -0.634027 0.421107 0.604303 2013-01-08 1.053434 1.109090 -0.367891 -0.846206 2013-01-10 0.048562 -0.285920 1.334100 0.194462

The columns keyword can be supplied to select a list of columns to be returned, this is equivalent to passing a 'columns=list_of_columns_to_filter':

In [495]: store.select("df", "columns=['A', 'B']") Out[495]: A B 2000-01-01 -0.398501 -0.677311 2000-01-02 -1.167564 -0.593353 2000-01-03 -0.131959 0.089012 2000-01-04 0.169405 -1.358046 2000-01-05 0.492195 0.076693 2000-01-06 -0.285283 -1.210529 2000-01-07 0.941577 -0.342447 2000-01-08 0.052607 2.093214

start and stop parameters can be specified to limit the total search space. These are in terms of the total number of rows in a table.
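
For example, a minimal sketch (reusing the df table from above) that limits the search to the first five rows of the stored table:

store.select("df", "columns=['A', 'B']", start=0, stop=5)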

Note

select will raise a ValueError if the query expression has an unknown variable reference. Usually this means that you are trying to select on a column that is not a data_column.

select will raise a SyntaxError if the query expression is not valid.

Query timedelta64[ns]#

You can store and query using the timedelta64[ns] type. Terms can be specified in the format <float><unit>, where float may be signed (and fractional), and unit can be D, s, ms, us, or ns for the timedelta. Here’s an example:

In [496]: from datetime import timedelta In [497]: dftd = pd.DataFrame( .....: { .....: "A": pd.Timestamp("20130101"), .....: "B": [ .....: pd.Timestamp("20130101") + timedelta(days=i, seconds=10) .....: for i in range(10) .....: ], .....: } .....: ) .....: In [498]: dftd["C"] = dftd["A"] - dftd["B"] In [499]: dftd Out[499]: A B C 0 2013-01-01 2013-01-01 00:00:10 -1 days +23:59:50 1 2013-01-01 2013-01-02 00:00:10 -2 days +23:59:50 2 2013-01-01 2013-01-03 00:00:10 -3 days +23:59:50 3 2013-01-01 2013-01-04 00:00:10 -4 days +23:59:50 4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50 5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50 6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50 7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50 8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50 9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50 In [500]: store.append("dftd", dftd, data_columns=True) In [501]: store.select("dftd", "C<'-3.5D'") Out[501]: A B C 4 2013-01-01 2013-01-05 00:00:10 -5 days +23:59:50 5 2013-01-01 2013-01-06 00:00:10 -6 days +23:59:50 6 2013-01-01 2013-01-07 00:00:10 -7 days +23:59:50 7 2013-01-01 2013-01-08 00:00:10 -8 days +23:59:50 8 2013-01-01 2013-01-09 00:00:10 -9 days +23:59:50 9 2013-01-01 2013-01-10 00:00:10 -10 days +23:59:50

Query MultiIndex#

Selecting from a MultiIndex can be achieved by using the name of the level.

In [502]: df_mi.index.names Out[502]: FrozenList(['foo', 'bar']) In [503]: store.select("df_mi", "foo=baz and bar=two") Out[503]: A B C foo bar baz two 0.183573 0.145277 0.308146

If the MultiIndex level names are None, the levels are automatically made available via the level_n keyword, with n the level of the MultiIndex you want to select from.

In [504]: index = pd.MultiIndex( .....: levels=[["foo", "bar", "baz", "qux"], ["one", "two", "three"]], .....: codes=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]], .....: ) .....: In [505]: df_mi_2 = pd.DataFrame(np.random.randn(10, 3), index=index, columns=["A", "B", "C"]) In [506]: df_mi_2 Out[506]: A B C foo one -0.646538 1.210676 -0.315409 two 1.528366 0.376542 0.174490 three 1.247943 -0.742283 0.710400 bar one 0.434128 -1.246384 1.139595 two 1.388668 -0.413554 -0.666287 baz two 0.010150 -0.163820 -0.115305 three 0.216467 0.633720 0.473945 qux one -0.155446 1.287082 0.320201 two -1.256989 0.874920 0.765944 three 0.025557 -0.729782 -0.127439 In [507]: store.append("df_mi_2", df_mi_2) # the levels are automatically included as data columns with keyword level_n In [508]: store.select("df_mi_2", "level_0=foo and level_1=two") Out[508]: A B C foo two 1.528366 0.376542 0.17449

Indexing#

You can create/modify an index for a table with create_table_index after data is already in the table (after an append/put operation). Creating a table index is highly encouraged. This will speed up your queries a great deal when you use a select with the indexed dimension as the where.

Note

Indexes are automagically created on the indexables and any data columns you specify. This behavior can be turned off by passing index=False to append.

# we have automagically already created an index (in the first section) In [509]: i = store.root.df.table.cols.index.index In [510]: i.optlevel, i.kind Out[510]: (6, 'medium') # change an index by passing new parameters In [511]: store.create_table_index("df", optlevel=9, kind="full") In [512]: i = store.root.df.table.cols.index.index In [513]: i.optlevel, i.kind Out[513]: (9, 'full')

Oftentimes when appending large amounts of data to a store, it is useful to turn off index creation for each append, then recreate at the end.

In [514]: df_1 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB")) In [515]: df_2 = pd.DataFrame(np.random.randn(10, 2), columns=list("AB")) In [516]: st = pd.HDFStore("appends.h5", mode="w") In [517]: st.append("df", df_1, data_columns=["B"], index=False) In [518]: st.append("df", df_2, data_columns=["B"], index=False) In [519]: st.get_storer("df").table Out[519]: /df/table (Table(20,)) '' description := { "index": Int64Col(shape=(), dflt=0, pos=0), "values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1), "B": Float64Col(shape=(), dflt=0.0, pos=2)} byteorder := 'little' chunkshape := (2730,)

Then create the index when finished appending.

In [520]: st.create_table_index("df", columns=["B"], optlevel=9, kind="full") In [521]: st.get_storer("df").table Out[521]: /df/table (Table(20,)) '' description := { "index": Int64Col(shape=(), dflt=0, pos=0), "values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1), "B": Float64Col(shape=(), dflt=0.0, pos=2)} byteorder := 'little' chunkshape := (2730,) autoindex := True colindexes := { "B": Index(9, fullshuffle, zlib(1)).is_csi=True} In [522]: st.close()

See here for how to create a completely-sorted-index (CSI) on an existing store.

Query via data columns#

You can designate (and index) certain columns that you want to be able to perform queries on (other than the indexable columns, which you can always query). For instance, say you want to perform this common operation on-disk and return just the frame that matches this query. You can specify data_columns=True to force all columns to be data_columns.

In [523]: df_dc = df.copy() In [524]: df_dc["string"] = "foo" In [525]: df_dc.loc[df_dc.index[4:6], "string"] = np.nan In [526]: df_dc.loc[df_dc.index[7:9], "string"] = "bar" In [527]: df_dc["string2"] = "cool" In [528]: df_dc.loc[df_dc.index[1:3], ["B", "C"]] = 1.0 In [529]: df_dc Out[529]: A B C string string2 2000-01-01 -0.398501 -0.677311 -0.874991 foo cool 2000-01-02 -1.167564 1.000000 1.000000 foo cool 2000-01-03 -0.131959 1.000000 1.000000 foo cool 2000-01-04 0.169405 -1.358046 -0.105563 foo cool 2000-01-05 0.492195 0.076693 0.213685 NaN cool 2000-01-06 -0.285283 -1.210529 -1.408386 NaN cool 2000-01-07 0.941577 -0.342447 0.222031 foo cool 2000-01-08 0.052607 2.093214 1.064908 bar cool # on-disk operations In [530]: store.append("df_dc", df_dc, data_columns=["B", "C", "string", "string2"]) In [531]: store.select("df_dc", where="B > 0") Out[531]: A B C string string2 2000-01-02 -1.167564 1.000000 1.000000 foo cool 2000-01-03 -0.131959 1.000000 1.000000 foo cool 2000-01-05 0.492195 0.076693 0.213685 NaN cool 2000-01-08 0.052607 2.093214 1.064908 bar cool # getting creative In [532]: store.select("df_dc", "B > 0 & C > 0 & string == foo") Out[532]: A B C string string2 2000-01-02 -1.167564 1.0 1.0 foo cool 2000-01-03 -0.131959 1.0 1.0 foo cool # this is in-memory version of this type of selection In [533]: df_dc[(df_dc.B > 0) & (df_dc.C > 0) & (df_dc.string == "foo")] Out[533]: A B C string string2 2000-01-02 -1.167564 1.0 1.0 foo cool 2000-01-03 -0.131959 1.0 1.0 foo cool # we have automagically created this index and the B/C/string/string2 # columns are stored separately as ``PyTables`` columns In [534]: store.root.df_dc.table Out[534]: /df_dc/table (Table(8,)) '' description := { "index": Int64Col(shape=(), dflt=0, pos=0), "values_block_0": Float64Col(shape=(1,), dflt=0.0, pos=1), "B": Float64Col(shape=(), dflt=0.0, pos=2), "C": Float64Col(shape=(), dflt=0.0, pos=3), "string": StringCol(itemsize=3, shape=(), dflt=b'', pos=4), "string2": StringCol(itemsize=4, shape=(), dflt=b'', pos=5)} byteorder := 'little' chunkshape := (1680,) autoindex := True colindexes := { "index": Index(6, mediumshuffle, zlib(1)).is_csi=False, "B": Index(6, mediumshuffle, zlib(1)).is_csi=False, "C": Index(6, mediumshuffle, zlib(1)).is_csi=False, "string": Index(6, mediumshuffle, zlib(1)).is_csi=False, "string2": Index(6, mediumshuffle, zlib(1)).is_csi=False}

There is some performance degradation by making lots of columns into data columns, so it is up to the user to designate these. In addition, you cannot change data columns (nor indexables) after the first append/put operation (Of course you can simply read in the data and create a new table!).

Iterator#

You can pass iterator=True or chunksize=number_in_a_chunk to select and select_as_multiple to return an iterator on the results. The default is 50,000 rows returned in a chunk.

In [535]: for df in store.select("df", chunksize=3): .....: print(df) .....: A B C 2000-01-01 -0.398501 -0.677311 -0.874991 2000-01-02 -1.167564 -0.593353 0.146262 2000-01-03 -0.131959 0.089012 0.667450 A B C 2000-01-04 0.169405 -1.358046 -0.105563 2000-01-05 0.492195 0.076693 0.213685 2000-01-06 -0.285283 -1.210529 -1.408386 A B C 2000-01-07 0.941577 -0.342447 0.222031 2000-01-08 0.052607 2.093214 1.064908

Note

You can also use the iterator with read_hdf which will open, then automatically close the store when finished iterating.

for df in pd.read_hdf("store.h5", "df", chunksize=3): print(df)

Note that the chunksize keyword applies to the source rows. So if you are doing a query, the chunksize will subdivide the total rows in the table and the query will be applied, returning an iterator over potentially unequal-sized chunks.

Here is a recipe for generating a query and using it to create equal sized return chunks.

In [536]: dfeq = pd.DataFrame({"number": np.arange(1, 11)}) In [537]: dfeq Out[537]: number 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 10 In [538]: store.append("dfeq", dfeq, data_columns=["number"]) In [539]: def chunks(l, n): .....: return [l[i: i + n] for i in range(0, len(l), n)] .....: In [540]: evens = [2, 4, 6, 8, 10] In [541]: coordinates = store.select_as_coordinates("dfeq", "number=evens") In [542]: for c in chunks(coordinates, 2): .....: print(store.select("dfeq", where=c)) .....: number 1 2 3 4 number 5 6 7 8 number 9 10

Advanced queries#

Select a single column#

To retrieve a single indexable or data column, use the method select_column. This will, for example, enable you to get the index very quickly. These return a Series of the result, indexed by the row number. These do not currently accept the where selector.

In [543]: store.select_column("df_dc", "index") Out[543]: 0 2000-01-01 1 2000-01-02 2 2000-01-03 3 2000-01-04 4 2000-01-05 5 2000-01-06 6 2000-01-07 7 2000-01-08 Name: index, dtype: datetime64[ns] In [544]: store.select_column("df_dc", "string") Out[544]: 0 foo 1 foo 2 foo 3 foo 4 NaN 5 NaN 6 foo 7 bar Name: string, dtype: object

Selecting coordinates#

Sometimes you want to get the coordinates (a.k.a. the index locations) of your query. This returns an Int64Index of the resulting locations. These coordinates can also be passed to subsequent where operations.

In [545]: df_coord = pd.DataFrame( .....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000) .....: ) .....: In [546]: store.append("df_coord", df_coord) In [547]: c = store.select_as_coordinates("df_coord", "index > 20020101") In [548]: c Out[548]: Int64Index([732, 733, 734, 735, 736, 737, 738, 739, 740, 741, ... 990, 991, 992, 993, 994, 995, 996, 997, 998, 999], dtype='int64', length=268) In [549]: store.select("df_coord", where=c) Out[549]: 0 1 2002-01-02 0.009035 0.921784 2002-01-03 -1.476563 -1.376375 2002-01-04 1.266731 2.173681 2002-01-05 0.147621 0.616468 2002-01-06 0.008611 2.136001 ... ... ... 2002-09-22 0.781169 -0.791687 2002-09-23 -0.764810 -2.000933 2002-09-24 -0.345662 0.393915 2002-09-25 -0.116661 0.834638 2002-09-26 -1.341780 0.686366 [268 rows x 2 columns]

Selecting using a where mask#

Sometimes your query can involve creating a list of rows to select. Usually this mask would be a resulting index from an indexing operation. This example selects the rows of a DatetimeIndex whose month is 5.

In [550]: df_mask = pd.DataFrame( .....: np.random.randn(1000, 2), index=pd.date_range("20000101", periods=1000) .....: ) .....: In [551]: store.append("df_mask", df_mask) In [552]: c = store.select_column("df_mask", "index") In [553]: where = c[pd.DatetimeIndex(c).month == 5].index In [554]: store.select("df_mask", where=where) Out[554]: 0 1 2000-05-01 -0.386742 -0.977433 2000-05-02 -0.228819 0.471671 2000-05-03 0.337307 1.840494 2000-05-04 0.050249 0.307149 2000-05-05 -0.802947 -0.946730 ... ... ... 2002-05-27 1.605281 1.741415 2002-05-28 -0.804450 -0.715040 2002-05-29 -0.874851 0.037178 2002-05-30 -0.161167 -1.294944 2002-05-31 -0.258463 -0.731969 [93 rows x 2 columns]

Storer object#

If you want to inspect the stored object, retrieve it via get_storer. You could use this programmatically to, say, get the number of rows in an object.

In [555]: store.get_storer("df_dc").nrows Out[555]: 8

Multiple table queries#

The methods append_to_multiple and select_as_multiple can perform appending/selecting from multiple tables at once. The idea is to have one table (call it the selector table) on which you index most/all of the columns and perform your queries. The other table(s) are data tables with an index matching the selector table’s index. You can then perform a very fast query on the selector table, yet get lots of data back. This method is similar to having a very wide table, but enables more efficient queries.

The append_to_multiple method splits a given single DataFrame into multiple tables according to d, a dictionary that maps the table names to a list of ‘columns’ you want in that table. If None is used in place of a list, that table will have the remaining unspecified columns of the given DataFrame. The argument selector defines which table is the selector table (which you can make queries from). The argument dropna will drop rows from the input DataFrame to ensure tables are synchronized. This means that if a row for one of the tables being written to is entirely np.NaN, that row will be dropped from all tables.

If dropna is False, THE USER IS RESPONSIBLE FOR SYNCHRONIZING THE TABLES. Remember that entirely np.nan rows are not written to the HDFStore, so if you choose to call dropna=False, some tables may have more rows than others, and therefore select_as_multiple may not work or it may return unexpected results.

In [556]: df_mt = pd.DataFrame( .....: np.random.randn(8, 6), .....: index=pd.date_range("1/1/2000", periods=8), .....: columns=["A", "B", "C", "D", "E", "F"], .....: ) .....: In [557]: df_mt["foo"] = "bar" In [558]: df_mt.loc[df_mt.index[1], ("A", "B")] = np.nan # you can also create the tables individually In [559]: store.append_to_multiple( .....: {"df1_mt": ["A", "B"], "df2_mt": None}, df_mt, selector="df1_mt" .....: ) .....: In [560]: store Out[560]: File path: store.h5 # individual tables were created In [561]: store.select("df1_mt") Out[561]: A B 2000-01-01 0.079529 -1.459471 2000-01-02 NaN NaN 2000-01-03 -0.423113 2.314361 2000-01-04 0.756744 -0.792372 2000-01-05 -0.184971 0.170852 2000-01-06 0.678830 0.633974 2000-01-07 0.034973 0.974369 2000-01-08 -2.110103 0.243062 In [562]: store.select("df2_mt") Out[562]: C D E F foo 2000-01-01 -0.596306 -0.910022 -1.057072 -0.864360 bar 2000-01-02 0.477849 0.283128 -2.045700 -0.338206 bar 2000-01-03 -0.033100 -0.965461 -0.001079 -0.351689 bar 2000-01-04 -0.513555 -1.484776 -0.796280 -0.182321 bar 2000-01-05 -0.872407 -1.751515 0.934334 0.938818 bar 2000-01-06 -1.398256 1.347142 -0.029520 0.082738 bar 2000-01-07 -0.755544 0.380786 -1.634116 1.293610 bar 2000-01-08 1.453064 0.500558 -0.574475 0.694324 bar # as a multiple In [563]: store.select_as_multiple( .....: ["df1_mt", "df2_mt"], .....: where=["A>0", "B>0"], .....: selector="df1_mt", .....: ) .....: Out[563]: A B C D E F foo 2000-01-06 0.678830 0.633974 -1.398256 1.347142 -0.029520 0.082738 bar 2000-01-07 0.034973 0.974369 -0.755544 0.380786 -1.634116 1.293610 bar

Delete from a table#

You can delete from a table selectively by specifying a where. When deleting rows, it is important to understand that PyTables deletes rows by erasing them and then moving the following data. Thus deleting can potentially be a very expensive operation depending on the orientation of your data. To get optimal performance, it’s worthwhile to have the dimension you are deleting be the first of the indexables.

Data is ordered (on the disk) in terms of the indexables. Here’s a simple use case. You store panel-type data, with dates in the major_axis and ids in the minor_axis. The data is then interleaved like this:

  • date_1
    • id_1

    • id_2

    • ...

    • id_n

  • date_2
    • id_1

    • ...

    • id_n

It should be clear that a delete operation on the major_axis will be fairly quick, as one chunk is removed, then the following data moved. On the other hand a delete operation on the minor_axis will be very expensive. In this case it would almost certainly be faster to rewrite the table using a where that selects all but the missing data.
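
As a minimal sketch, reusing the df_mask table stored above (the date cutoff is arbitrary), a selective delete with a where clause looks like this:

# remove all rows whose index falls on or after 2001-01-01; returns the number of rows removed
store.remove("df_mask", where="index >= '20010101'")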

Warning

Please note that HDF5 DOES NOT RECLAIM SPACE in the h5 files automatically. Thus, repeatedly deleting (or removing nodes) and adding again, WILL TEND TO INCREASE THE FILE SIZE.

To repack and clean the file, use ptrepack.

Notes & caveats#

Compression#

PyTables allows the stored data to be compressed. This applies to all kinds of stores, not just tables. Two parameters are used to control compression: complevel and complib.

  • complevel specifies if and how hard data is to be compressed. complevel=0 and complevel=None disable compression, and 0 < complevel < 10 enables compression (higher values compress harder).

  • complib specifies which compression library to use. If nothing is specified the default library zlib is used. A compression library usually optimizes for either good compression rates or speed and the results will depend on the type of data. Which type of compression to choose depends on your specific needs and data. The list of supported compression libraries:

    • zlib: The default compression library. A classic in terms of compression, achieves good compression rates but is somewhat slow.

    • lzo: Fast compression and decompression.

    • bzip2: Good compression rates.

    • blosc: Fast compression and decompression.

      Support for alternative blosc compressors:

      • blosc:blosclz: The default compressor for blosc.

      • blosc:lz4: A compact, very popular and fast compressor.

      • blosc:lz4hc: A tweaked version of LZ4, produces better compression ratios at the expense of speed.

      • blosc:snappy: A popular compressor used in many places.

      • blosc:zlib: A classic; somewhat slower than the previous ones, but achieving better compression ratios.

      • blosc:zstd: An extremely well balanced codec; it provides the best compression ratios among the others above, and at reasonably fast speed.

    If complib is defined as something other than the listed libraries a ValueError exception is issued.

Note

If the library specified with the complib option is missing on your platform, compression defaults to zlib without further ado.

Enable compression for all objects within the file:

store_compressed = pd.HDFStore( "store_compressed.h5", complevel=9, complib="blosc:blosclz" )

Or on-the-fly compression (this only applies to tables) in stores where compression is not enabled:

store.append("df", df, complib="zlib", complevel=5)

ptrepack#

PyTables offers better write performance when tables are compressed after they are written, as opposed to turning on compression at the very beginning. You can use the supplied PyTables utility ptrepack. In addition, ptrepack can change compression levels after the fact.

ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5

Furthermore ptrepack in.h5 out.h5 will repack the file to allow you to reuse previously deleted space. Alternatively, one can simply remove the file and write again, or use the copy method.

Caveats#

Warning

HDFStore is not threadsafe for writing. The underlying PyTables only supports concurrent reads (via threading or processes). If you need reading and writing at the same time, you need to serialize these operations in a single thread in a single process. You will corrupt your data otherwise. See (GH2397) for more information.

  • If you use locks to manage write access between multiple processes, you may want to use fsync() before releasing write locks. For convenience you can use store.flush(fsync=True) to do this for you.

  • Once a table is created, its columns (DataFrame) are fixed; only exactly the same columns can be appended.

  • Be aware that timezones (e.g., pytz.timezone('US/Eastern')) are not necessarily equal across timezone versions. So if data is localized to a specific timezone in the HDFStore using one version of a timezone library and that data is updated with another version, the data will be converted to UTC since these timezones are not considered equal. Either use the same version of timezone library or use tz_convert with the updated timezone definition.

Warning

PyTables will show a NaturalNameWarning if a column name cannot be used as an attribute selector. Natural identifiers contain only letters, numbers, and underscores, and may not begin with a number. Other identifiers cannot be used in a where clause and are generally a bad idea.

DataTypes#

HDFStore will map an object dtype to the PyTables underlying dtype. This means the following types are known to work:

Type                                                   Represents missing values
floating : float64, float32, float16                   np.nan
integer : int64, int32, int8, uint64, uint32, uint8
boolean
datetime64[ns]                                         NaT
timedelta64[ns]                                        NaT
categorical : see the section below
object : strings                                       np.nan

unicode columns are not supported, and WILL FAIL.

Categorical data#

You can write data that contains category dtypes to a HDFStore. Queries work the same as if it were an object array, but the category dtyped data is stored in a more efficient manner.

In [564]: dfcat = pd.DataFrame( .....: {"A": pd.Series(list("aabbcdba")).astype("category"), "B": np.random.randn(8)} .....: ) .....: In [565]: dfcat Out[565]: A B 0 a -1.608059 1 a 0.851060 2 b -0.736931 3 b 0.003538 4 c -1.422611 5 d 2.060901 6 b 0.993899 7 a -1.371768 In [566]: dfcat.dtypes Out[566]: A category B float64 dtype: object In [567]: cstore = pd.HDFStore("cats.h5", mode="w") In [568]: cstore.append("dfcat", dfcat, format="table", data_columns=["A"]) In [569]: result = cstore.select("dfcat", where="A in ['b', 'c']") In [570]: result Out[570]: A B 2 b -0.736931 3 b 0.003538 4 c -1.422611 6 b 0.993899 In [571]: result.dtypes Out[571]: A category B float64 dtype: object

String columns#

min_itemsize

The underlying implementation of HDFStore uses a fixed column width (itemsize) for string columns. A string column’s itemsize is calculated as the maximum length of the data (for that column) passed to the HDFStore in the first append. If a subsequent append introduces a string longer than the column can hold, an Exception is raised (otherwise you could have a silent truncation of these columns, leading to loss of information). In the future we may relax this and allow a user-specified truncation to occur.

Pass min_itemsize on the first table creation to specify, a priori, the minimum length of a particular string column. min_itemsize can be an integer, or a dict mapping a column name to an integer. You can pass values as a key to allow all indexables or data_columns to have this min_itemsize.

Passing a min_itemsize dict will cause all passed columns to be created as data_columns automatically.

Note

If you are not passing any data_columns, then the min_itemsize will be the maximum of the length of any string passed.

In [572]: dfs = pd.DataFrame({"A": "foo", "B": "bar"}, index=list(range(5))) In [573]: dfs Out[573]: A B 0 foo bar 1 foo bar 2 foo bar 3 foo bar 4 foo bar # A and B have a size of 30 In [574]: store.append("dfs", dfs, min_itemsize=30) In [575]: store.get_storer("dfs").table Out[575]: /dfs/table (Table(5,)) '' description := { "index": Int64Col(shape=(), dflt=0, pos=0), "values_block_0": StringCol(itemsize=30, shape=(2,), dflt=b'', pos=1)} byteorder := 'little' chunkshape := (963,) autoindex := True colindexes := { "index": Index(6, mediumshuffle, zlib(1)).is_csi=False} # A is created as a data_column with a size of 30 # B is size is calculated In [576]: store.append("dfs2", dfs, min_itemsize={"A": 30}) In [577]: store.get_storer("dfs2").table Out[577]: /dfs2/table (Table(5,)) '' description := { "index": Int64Col(shape=(), dflt=0, pos=0), "values_block_0": StringCol(itemsize=3, shape=(1,), dflt=b'', pos=1), "A": StringCol(itemsize=30, shape=(), dflt=b'', pos=2)} byteorder := 'little' chunkshape := (1598,) autoindex := True colindexes := { "index": Index(6, mediumshuffle, zlib(1)).is_csi=False, "A": Index(6, mediumshuffle, zlib(1)).is_csi=False}

nan_rep

String columns will serialize a np.nan (a missing value) with the nan_rep string representation. This defaults to the string value nan. As a consequence, you could inadvertently turn an actual string "nan" value into a missing value.

In [578]: dfss = pd.DataFrame({"A": ["foo", "bar", "nan"]}) In [579]: dfss Out[579]: A 0 foo 1 bar 2 nan In [580]: store.append("dfss", dfss) In [581]: store.select("dfss") Out[581]: A 0 foo 1 bar 2 NaN # here you need to specify a different nan rep In [582]: store.append("dfss2", dfss, nan_rep="_nan_") In [583]: store.select("dfss2") Out[583]: A 0 foo 1 bar 2 nan

External compatibility#

HDFStore writes table format objects in specific formats suitable for producing loss-less round trips to pandas objects. For external compatibility, HDFStore can read native PyTables format tables.

It is possible to write an HDFStore object that can easily be imported into R using the rhdf5 library (Package website). Create a table format store like this:

In [584]: df_for_r = pd.DataFrame( .....: { .....: "first": np.random.rand(100), .....: "second": np.random.rand(100), .....: "class": np.random.randint(0, 2, (100,)), .....: }, .....: index=range(100), .....: ) .....: In [585]: df_for_r.head() Out[585]: first second class 0 0.013480 0.504941 0 1 0.690984 0.898188 1 2 0.510113 0.618748 1 3 0.357698 0.004972 0 4 0.451658 0.012065 1 In [586]: store_export = pd.HDFStore("export.h5") In [587]: store_export.append("df_for_r", df_for_r, data_columns=df_dc.columns) In [588]: store_export Out[588]: File path: export.h5

In R this file can be read into a data.frame object using the rhdf5 library. The following example function reads the column names and data values from the corresponding nodes and assembles them into a data.frame:

# Load values and column names for all datasets from corresponding nodes and
# insert them into one data.frame object.

library(rhdf5)

loadhdf5data <- function(h5File) {

listing <- h5ls(h5File)
# Find all data nodes, values are stored in *_values and corresponding column
# titles in *_items
data_nodes <- grep("_values", listing$name)
name_nodes <- grep("_items", listing$name)
data_paths = paste(listing$group[data_nodes], listing$name[data_nodes], sep = "/")
name_paths = paste(listing$group[name_nodes], listing$name[name_nodes], sep = "/")
columns = list()
for (idx in seq(data_paths)) {
  # NOTE: matrices returned by h5read have to be transposed to obtain
  # required Fortran order!
  data <- data.frame(t(h5read(h5File, data_paths[idx])))
  names <- t(h5read(h5File, name_paths[idx]))
  entry <- data.frame(data)
  colnames(entry) <- names
  columns <- append(columns, entry)
}

data <- data.frame(columns)

return(data)
}

Now you can import the DataFrame into R:

> data = loadhdf5data("transfer.hdf5")
> head(data)
         first    second class
1 0.4170220047 0.3266449     0
2 0.7203244934 0.5270581     0
3 0.0001143748 0.8859421     1
4 0.3023325726 0.3572698     1
5 0.1467558908 0.9085352     1
6 0.0923385948 0.6233601     1

Note

The R function lists the entire HDF5 file’s contents and assembles the data.frame object from all matching nodes, so use this only as a starting point if you have stored multiple DataFrame objects to a single HDF5 file.

Performance#

  • The tables format comes with a writing performance penalty as compared to fixed stores. The benefit is the ability to append/delete and query (potentially very large amounts of data). Write times are generally longer as compared with regular stores. Query times can be quite fast, especially on an indexed axis.

  • You can pass chunksize= to append, specifying the write chunksize (default is 50000). This will significantly lower your memory usage on writing (a short sketch follows this list).

  • You can pass expectedrows= to the first append, to set the TOTAL number of rows that PyTables will expect. This will optimize read/write performance.

  • Duplicate rows can be written to tables, but are filtered out in selection (with the last items being selected; thus a table is unique on major, minor pairs)

  • A PerformanceWarning will be raised if you are attempting to store types that will be pickled by PyTables (rather than stored as endemic types). See Here for more information and some solutions.
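
A minimal sketch of the two write-tuning knobs mentioned above. The frame name df_big and the sizes are illustrative assumptions, not recommendations:

# df_big is assumed to be a large DataFrame; write it in 100,000-row chunks
# and tell PyTables up front roughly how many rows to expect in total
store.append("df_big", df_big, chunksize=100000, expectedrows=len(df_big))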

Feather#

Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data frames efficient, and to make sharing data across data analysis languages easy.

Feather is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas dtypes, including extension dtypes such as categorical and datetime with tz.

Several caveats:

  • The format will NOT write an Index, or MultiIndex for the DataFrame and will raise an error if a non-default one is provided. You can .reset_index() to store the index or .reset_index(drop=True) to ignore it.

  • Duplicate column names and non-string column names are not supported.

  • Actual Python objects in object dtype columns are not supported. These will raise a helpful error message on an attempt at serialization.

See the Full Documentation.

In [589]: df = pd.DataFrame( .....: { .....: "a": list("abc"), .....: "b": list(range(1, 4)), .....: "c": np.arange(3, 6).astype("u1"), .....: "d": np.arange(4.0, 7.0, dtype="float64"), .....: "e": [True, False, True], .....: "f": pd.Categorical(list("abc")), .....: "g": pd.date_range("20130101", periods=3), .....: "h": pd.date_range("20130101", periods=3, tz="US/Eastern"), .....: "i": pd.date_range("20130101", periods=3, freq="ns"), .....: } .....: ) .....: In [590]: df Out[590]: a b c ... g h i 0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000 1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001 2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002 [3 rows x 9 columns] In [591]: df.dtypes Out[591]: a object b int64 c uint8 d float64 e bool f category g datetime64[ns] h datetime64[ns, US/Eastern] i datetime64[ns] dtype: object

Write to a feather file.

In [592]: df.to_feather("example.feather")

Read from a feather file.

In [593]: result = pd.read_feather("example.feather") In [594]: result Out[594]: a b c ... g h i 0 a 1 3 ... 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00.000000000 1 b 2 4 ... 2013-01-02 2013-01-02 00:00:00-05:00 2013-01-01 00:00:00.000000001 2 c 3 5 ... 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-01 00:00:00.000000002 [3 rows x 9 columns] # we preserve dtypes In [595]: result.dtypes Out[595]: a object b int64 c uint8 d float64 e bool f category g datetime64[ns] h datetime64[ns, US/Eastern] i datetime64[ns] dtype: object

Parquet#

Apache Parquet provides a partitioned binary columnar serialization for data frames. It is designed to make reading and writing data frames efficient, and to make sharing data across data analysis languages easy. Parquet can use a variety of compression techniques to shrink the file size as much as possible while still maintaining good read performance.

Parquet is designed to faithfully serialize and de-serialize DataFrames, supporting all of the pandas dtypes, including extension dtypes such as datetime with tz.

Several caveats.

  • Duplicate column names and non-string column names are not supported.

  • The pyarrow engine always writes the index to the output, but fastparquet only writes non-default indexes. This extra column can cause problems for non-pandas consumers that are not expecting it. You can force including or omitting indexes with the index argument, regardless of the underlying engine.

  • Index level names, if specified, must be strings.

  • In the pyarrow engine, categorical dtypes for non-string types can be serialized to parquet, but will de-serialize as their primitive dtype.

  • The pyarrow engine preserves the ordered flag of categorical dtypes with string types. fastparquet does not preserve the ordered flag.

  • Unsupported types include Interval and actual Python object types. These will raise a helpful error message on an attempt at serialization. The Period type is supported with pyarrow >= 0.16.0.

  • The pyarrow engine preserves extension data types such as the nullable integer and string data type (requiring pyarrow >= 0.16.0, and requiring the extension type to implement the needed protocols, see the extension types documentation).

You can specify an engine to direct the serialization. This can be one of pyarrow, fastparquet, or auto. If the engine is NOT specified, then the pd.options.io.parquet.engine option is checked; if this is also auto, then pyarrow is tried first, falling back to fastparquet.
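
As a minimal sketch, the default engine can also be set globally for the session instead of per call (the option name is the one referenced above):

# make pyarrow the default parquet engine for this session
pd.set_option("io.parquet.engine", "pyarrow")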

See the documentation for pyarrow and fastparquet.

Note

These engines are very similar and should read/write nearly identical parquet format files. pyarrow>=8.0.0 supports timedelta data, and fastparquet>=0.1.4 supports timezone aware datetimes. The libraries differ in their underlying dependencies (fastparquet uses numba, while pyarrow uses a C library).

In [596]: df = pd.DataFrame( .....: { .....: "a": list("abc"), .....: "b": list(range(1, 4)), .....: "c": np.arange(3, 6).astype("u1"), .....: "d": np.arange(4.0, 7.0, dtype="float64"), .....: "e": [True, False, True], .....: "f": pd.date_range("20130101", periods=3), .....: "g": pd.date_range("20130101", periods=3, tz="US/Eastern"), .....: "h": pd.Categorical(list("abc")), .....: "i": pd.Categorical(list("abc"), ordered=True), .....: } .....: ) .....: In [597]: df Out[597]: a b c d e f g h i 0 a 1 3 4.0 True 2013-01-01 2013-01-01 00:00:00-05:00 a a 1 b 2 4 5.0 False 2013-01-02 2013-01-02 00:00:00-05:00 b b 2 c 3 5 6.0 True 2013-01-03 2013-01-03 00:00:00-05:00 c c In [598]: df.dtypes Out[598]: a object b int64 c uint8 d float64 e bool f datetime64[ns] g datetime64[ns, US/Eastern] h category i category dtype: object

Write to a parquet file.

In [599]: df.to_parquet("example_pa.parquet", engine="pyarrow") In [600]: df.to_parquet("example_fp.parquet", engine="fastparquet")

Read from a parquet file.

In [601]: result = pd.read_parquet("example_fp.parquet", engine="fastparquet") In [602]: result = pd.read_parquet("example_pa.parquet", engine="pyarrow") In [603]: result.dtypes Out[603]: a object b int64 c uint8 d float64 e bool f datetime64[ns] g datetime64[ns, US/Eastern] h category i category dtype: object

Read only certain columns of a parquet file.

In [604]: result = pd.read_parquet( .....: "example_fp.parquet", .....: engine="fastparquet", .....: columns=["a", "b"], .....: ) .....: In [605]: result = pd.read_parquet( .....: "example_pa.parquet", .....: engine="pyarrow", .....: columns=["a", "b"], .....: ) .....: In [606]: result.dtypes Out[606]: a object b int64 dtype: object

Handling indexes#

Serializing a DataFrame to parquet may include the implicit index as one or more columns in the output file. Thus, this code:

In [607]: df = pd.DataFrame({"a": [1, 2], "b": [3, 4]}) In [608]: df.to_parquet("test.parquet", engine="pyarrow")

creates a parquet file with three columns if you use pyarrow for serialization: a, b, and __index_level_0__. If you’re using fastparquet, the index may or may not be written to the file.

This unexpected extra column causes some databases like Amazon Redshift to reject the file, because that column doesn’t exist in the target table.

If you want to omit a dataframe’s indexes when writing, pass index=False to to_parquet():

In [609]: df.to_parquet("test.parquet", index=False)

This creates a parquet file with just the two expected columns, a and b. If your DataFrame has a custom index, you won’t get it back when you load this file into a DataFrame.

Passing index=True will always write the index, even if that’s not the underlying engine’s default behavior.

Partitioning Parquet files#

Parquet supports partitioning of data based on the values of one or more columns.

In [610]: df = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1]}) In [611]: df.to_parquet(path="test", engine="pyarrow", partition_cols=["a"], compression=None)

The path specifies the parent directory to which data will be saved. The partition_cols are the column names by which the dataset will be partitioned. Columns are partitioned in the order they are given. The partition splits are determined by the unique values in the partition columns. The above example creates a partitioned dataset that may look like:

test
├── a=0
│   ├── 0bac803e32dc42ae83fddfd029cbdebc.parquet
│   └── ...
└── a=1
    ├── e6ab24a4f45147b49b54a662f0c412a3.parquet
    └── ...
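
A minimal sketch, assuming the pyarrow engine is installed: reading the partition directory back reassembles the dataset, with the partition column recovered from the directory names (depending on the pyarrow version it may come back as a categorical dtype).

# read the whole partitioned dataset back; "a" is reconstructed from a=0 / a=1
result = pd.read_parquet("test", engine="pyarrow")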

ORC#

New in version 1.0.0.

Similar to the parquet format, the ORC Format is a binary columnar serialization for data frames. It is designed to make reading data frames efficient. pandas provides both the reader and the writer for the ORC format, read_orc() and to_orc(). This requires the pyarrow library.

Warning

  • It is highly recommended to install pyarrow using conda, due to some issues caused by pyarrow.

  • to_orc() requires pyarrow>=7.0.0.

  • read_orc() and to_orc() are not yet supported on Windows; you can find valid environments on install optional dependencies.

  • For supported dtypes please refer to supported ORC features in Arrow.

  • Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files.

In [612]: df = pd.DataFrame( .....: { .....: "a": list("abc"), .....: "b": list(range(1, 4)), .....: "c": np.arange(4.0, 7.0, dtype="float64"), .....: "d": [True, False, True], .....: "e": pd.date_range("20130101", periods=3), .....: } .....: ) .....: In [613]: df Out[613]: a b c d e 0 a 1 4.0 True 2013-01-01 1 b 2 5.0 False 2013-01-02 2 c 3 6.0 True 2013-01-03 In [614]: df.dtypes Out[614]: a object b int64 c float64 d bool e datetime64[ns] dtype: object

Write to an orc file.

In [615]: df.to_orc("example_pa.orc", engine="pyarrow")

Read from an orc file.

In [616]: result = pd.read_orc("example_pa.orc") In [617]: result.dtypes Out[617]: a object b int64 c float64 d bool e datetime64[ns] dtype: object

Read only certain columns of an orc file.

In [618]: result = pd.read_orc( .....: "example_pa.orc", .....: columns=["a", "b"], .....: ) .....: In [619]: result.dtypes Out[619]: a object b int64 dtype: object

SQL queries#

The pandas.io.sql module provides a collection of query wrappers to both facilitate data retrieval and to reduce dependency on DB-specific API. Database abstraction is provided by SQLAlchemy if installed. In addition you will need a driver library for your database. Examples of such drivers are psycopg2 for PostgreSQL or pymysql for MySQL. For SQLite this is included in Python’s standard library by default. You can find an overview of supported drivers for each SQL dialect in the SQLAlchemy docs.

If SQLAlchemy is not installed, a fallback is only provided for sqlite (and for mysql for backwards compatibility, but this is deprecated and will be removed in a future version). This mode requires a Python database adapter which respects the Python DB-API.

See also some cookbook examples for some advanced strategies.

The key functions are:

read_sql_table(table_name, con[, schema, ...])

Read SQL database table into a DataFrame.

read_sql_query(sql, con[, index_col, ...])

Read SQL query into a DataFrame.

read_sql(sql, con[, index_col, ...])

Read SQL query or database table into a DataFrame.

DataFrame.to_sql(name, con[, schema, ...])

Write records stored in a DataFrame to a SQL database.

Note

The function read_sql() is a convenience wrapper around read_sql_table() and read_sql_query() (and is kept for backward compatibility); it will delegate to the specific function depending on the provided input (database table name or SQL query). Table names do not need to be quoted if they have special characters.

In the following example, we use the SQLite SQL database engine. You can use a temporary SQLite database where data are stored in “memory”.

To connect with SQLAlchemy you use the create_engine() function to create an engine object from a database URI. You only need to create the engine once per database you are connecting to. For more information on create_engine() and the URI formatting, see the examples below and the SQLAlchemy documentation.

In [620]: from sqlalchemy import create_engine # Create your engine. In [621]: engine = create_engine("sqlite:///:memory:")

If you want to manage your own connections you can pass one of those instead. The example below opens a connection to the database using a Python context manager that automatically closes the connection after the block has completed. See the SQLAlchemy docs for an explanation of how the database connection is handled.

with engine.connect() as conn, conn.begin():
    data = pd.read_sql_table("data", conn)

Warning

When you open a connection to a database you are also responsible for closing it. Side effects of leaving a connection open may include locking the database or other breaking behaviour.

Writing DataFrames#

Assuming the following data is in a DataFrame data, we can insert it into the database using to_sql().

id    Date          Col_1    Col_2    Col_3
26    2012-10-18    X         25.7    True
42    2012-10-19    Y        -12.4    False
63    2012-10-20    Z         5.73    True

In [622]: import datetime In [623]: c = ["id", "Date", "Col_1", "Col_2", "Col_3"] In [624]: d = [ .....: (26, datetime.datetime(2010, 10, 18), "X", 27.5, True), .....: (42, datetime.datetime(2010, 10, 19), "Y", -12.5, False), .....: (63, datetime.datetime(2010, 10, 20), "Z", 5.73, True), .....: ] .....: In [625]: data = pd.DataFrame(d, columns=c) In [626]: data Out[626]: id Date Col_1 Col_2 Col_3 0 26 2010-10-18 X 27.50 True 1 42 2010-10-19 Y -12.50 False 2 63 2010-10-20 Z 5.73 True In [627]: data.to_sql("data", engine) Out[627]: 3

With some databases, writing large DataFrames can result in errors due to packet size limitations being exceeded. This can be avoided by setting the chunksize parameter when calling to_sql. For example, the following writes data to the database in batches of 1000 rows at a time:

In [628]: data.to_sql("data_chunked", engine, chunksize=1000) Out[628]: 3

SQL data types#

to_sql() will try to map your data to an appropriate SQL data type based on the dtype of the data. When you have columns of dtype object, pandas will try to infer the data type.

You can always override the default type by specifying the desired SQL type of any of the columns by using the dtype argument. This argument needs a dictionary mapping column names to SQLAlchemy types (or strings for the sqlite3 fallback mode). For example, specifying to use the sqlalchemy String type instead of the default Text type for string columns:

In [629]: from sqlalchemy.types import String In [630]: data.to_sql("data_dtype", engine, dtype={"Col_1": String}) Out[630]: 3

Note

Due to the limited support for timedeltas in the different database flavors, columns with type timedelta64 will be written as integer values (nanoseconds) to the database and a warning will be raised.

Note

Columns of category dtype will be converted to the dense representation as you would get with np.asarray(categorical) (e.g. for string categories this gives an array of strings). Because of this, reading the database table back in does not generate a categorical.
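
A minimal, purely illustrative sketch of restoring the dtype after reading back, assuming a string column named Col_1 as in the data table above:

# the column comes back as object (strings); convert it back explicitly
df = pd.read_sql_table("data", engine)
df["Col_1"] = df["Col_1"].astype("category")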

Datetime data types#

Using SQLAlchemy, to_sql() is capable of writing datetime data that is timezone naive or timezone aware. However, the resulting data stored in the database ultimately depends on the supported data type for datetime data of the database system being used.

The following table lists supported data types for datetime data for some common databases. Other database dialects may have different data types for datetime data.

Database      SQL Datetime Types                        Timezone Support
SQLite        TEXT                                      No
MySQL         TIMESTAMP or DATETIME                     No
PostgreSQL    TIMESTAMP or TIMESTAMP WITH TIME ZONE     Yes

When writing timezone aware data to databases that do not support timezones, the data will be written as timezone naive timestamps that are in local time with respect to the timezone.

read_sql_table() is also capable of reading datetime data that is timezone aware or naive. When reading TIMESTAMP WITH TIME ZONE types, pandas will convert the data to UTC.

Insertion method#

The parameter method controls the SQL insertion clause used. Possible values are:

  • None: Uses standard SQL INSERT clause (one per row).

  • 'multi': Pass multiple values in a single INSERT clause. It uses a special SQL syntax not supported by all backends. This usually provides better performance for analytic databases like Presto and Redshift, but has worse performance for traditional SQL backends if the table contains many columns. For more information check the SQLAlchemy documentation.

  • callable with signature (pd_table, conn, keys, data_iter): This can be used to implement a more performant insertion method based on specific backend dialect features.

Example of a callable using PostgreSQL COPY clause:

# Alternative to_sql() *method* for DBs that support COPY FROM
import csv
from io import StringIO

def psql_insert_copy(table, conn, keys, data_iter):
    """
    Execute SQL statement inserting data

    Parameters
    ----------
    table : pandas.io.sql.SQLTable
    conn : sqlalchemy.engine.Engine or sqlalchemy.engine.Connection
    keys : list of str
        Column names
    data_iter : Iterable that iterates the values to be inserted
    """
    # gets a DBAPI connection that can provide a cursor
    dbapi_conn = conn.connection
    with dbapi_conn.cursor() as cur:
        s_buf = StringIO()
        writer = csv.writer(s_buf)
        writer.writerows(data_iter)
        s_buf.seek(0)

        columns = ', '.join(['"{}"'.format(k) for k in keys])
        if table.schema:
            table_name = '{}.{}'.format(table.schema, table.name)
        else:
            table_name = table.name

        sql = 'COPY {} ({}) FROM STDIN WITH CSV'.format(table_name, columns)
        cur.copy_expert(sql=sql, file=s_buf)

Reading tables#

read_sql_table() will read a database table given the table name and optionally a subset of columns to read.

Note

In order to use read_sql_table(), you must have the SQLAlchemy optional dependency installed.

In [631]: pd.read_sql_table("data", engine) Out[631]: index id Date Col_1 Col_2 Col_3 0 0 26 2010-10-18 X 27.50 True 1 1 42 2010-10-19 Y -12.50 False 2 2 63 2010-10-20 Z 5.73 True

Note

Note that pandas infers column dtypes from query outputs, and not by looking up data types in the physical database schema. For example, assume userid is an integer column in a table. Then, intuitively, select userid ... will return integer-valued series, while select cast(userid as text) ... will return object-valued (str) series. Accordingly, if the query output is empty, then all resulting columns will be returned as object-valued (since they are most general). If you foresee that your query will sometimes generate an empty result, you may want to explicitly typecast afterwards to ensure dtype integrity.
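
A minimal, hypothetical sketch of such a typecast (the filter is arbitrary and chosen so the result may well be empty):

# cast explicitly so an empty result still has the intended dtypes
res = pd.read_sql_query("SELECT id, Col_2 FROM data WHERE id > 1000", engine)
res = res.astype({"id": "int64", "Col_2": "float64"})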

You can also specify the name of the column as the DataFrame index, and specify a subset of columns to be read.

In [632]: pd.read_sql_table("data", engine, index_col="id") Out[632]: index Date Col_1 Col_2 Col_3 id 26 0 2010-10-18 X 27.50 True 42 1 2010-10-19 Y -12.50 False 63 2 2010-10-20 Z 5.73 True In [633]: pd.read_sql_table("data", engine, columns=["Col_1", "Col_2"]) Out[633]: Col_1 Col_2 0 X 27.50 1 Y -12.50 2 Z 5.73

And you can explicitly force columns to be parsed as dates:

In [634]: pd.read_sql_table("data", engine, parse_dates=["Date"]) Out[634]: index id Date Col_1 Col_2 Col_3 0 0 26 2010-10-18 X 27.50 True 1 1 42 2010-10-19 Y -12.50 False 2 2 63 2010-10-20 Z 5.73 True

If needed you can explicitly specify a format string, or a dict of arguments to pass to pandas.to_datetime():

pd.read_sql_table("data", engine, parse_dates={"Date": "%Y-%m-%d"})

pd.read_sql_table(
    "data",
    engine,
    parse_dates={"Date": {"format": "%Y-%m-%d %H:%M:%S"}},
)

You can check if a table exists using has_table().
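
A minimal sketch; has_table lives in the pandas.io.sql module:

from pandas.io import sql

# returns True because the "data" table was written above
sql.has_table("data", engine)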

Schema support#

Reading from and writing to different schemas is supported through the schema keyword in the read_sql_table() and to_sql() functions. Note however that this depends on the database flavor (sqlite does not have schemas). For example:

df.to_sql("table", engine, schema="other_schema")
pd.read_sql_table("table", engine, schema="other_schema")

Querying#

You can query using raw SQL in the read_sql_query() function. In this case you must use the SQL variant appropriate for your database. When using SQLAlchemy, you can also pass SQLAlchemy Expression language constructs, which are database-agnostic.

In [635]: pd.read_sql_query("SELECT * FROM data", engine) Out[635]: index id Date Col_1 Col_2 Col_3 0 0 26 2010-10-18 00:00:00.000000 X 27.50 1 1 1 42 2010-10-19 00:00:00.000000 Y -12.50 0 2 2 63 2010-10-20 00:00:00.000000 Z 5.73 1

Of course, you can specify a more “complex” query.

In [636]: pd.read_sql_query("SELECT id, Col_1, Col_2 FROM data WHERE id = 42;", engine) Out[636]: id Col_1 Col_2 0 42 Y -12.5

The read_sql_query() function supports a chunksize argument. Specifying this will return an iterator through chunks of the query result:

In [637]: df = pd.DataFrame(np.random.randn(20, 3), columns=list("abc")) In [638]: df.to_sql("data_chunks", engine, index=False) Out[638]: 20

In [639]: for chunk in pd.read_sql_query("SELECT * FROM data_chunks", engine, chunksize=5): .....: print(chunk) .....: a b c 0 0.070470 0.901320 0.937577 1 0.295770 1.420548 -0.005283 2 -1.518598 -0.730065 0.226497 3 -2.061465 0.632115 0.853619 4 2.719155 0.139018 0.214557 a b c 0 -1.538924 -0.366973 -0.748801 1 -0.478137 -1.559153 -3.097759 2 -2.320335 -0.221090 0.119763 3 0.608228 1.064810 -0.780506 4 -2.736887 0.143539 1.170191 a b c 0 -1.573076 0.075792 -1.722223 1 -0.774650 0.803627 0.221665 2 0.584637 0.147264 1.057825 3 -0.284136 0.912395 1.552808 4 0.189376 -0.109830 0.539341 a b c 0 0.592591 -0.155407 -1.356475 1 0.833837 1.524249 1.606722 2 -0.029487 -0.051359 1.700152 3 0.921484 -0.926347 0.979818 4 0.182380 -0.186376 0.049820

You can also run a plain query without creating a DataFrame with execute(). This is useful for queries that don’t return values, such as INSERT. This is functionally equivalent to calling execute on the SQLAlchemy engine or db connection object. Again, you must use the SQL syntax variant appropriate for your database.

from pandas.io import sql

sql.execute("SELECT * FROM table_name", engine)
sql.execute(
    "INSERT INTO table_name VALUES(?, ?, ?)", engine, params=[("id", 1, 12.2, True)]
)

Engine connection examples#

To connect with SQLAlchemy you use the create_engine() function to create an engine object from a database URI. You only need to create the engine once per database you are connecting to.

from sqlalchemy import create_engine

engine = create_engine("postgresql://scott:tiger@localhost:5432/mydatabase")

engine = create_engine("mysql+mysqldb://scott:tiger@localhost/foo")

engine = create_engine("oracle://scott:tiger@127.0.0.1:1521/sidname")

engine = create_engine("mssql+pyodbc://mydsn")

# sqlite://<path>
# where <path> is relative:
engine = create_engine("sqlite:///foo.db")

# or absolute, starting with a slash:
engine = create_engine("sqlite:////absolute/path/to/foo.db")

For more information see the examples in the SQLAlchemy documentation.

Advanced SQLAlchemy queries#

You can use SQLAlchemy constructs to describe your query.

Use sqlalchemy.text() to specify query parameters in a backend-neutral way:

In [640]: import sqlalchemy as sa In [641]: pd.read_sql( .....: sa.text("SELECT * FROM data where Col_1=:col1"), engine, params={"col1": "X"} .....: ) .....: Out[641]: index id Date Col_1 Col_2 Col_3 0 0 26 2010-10-18 00:00:00.000000 X 27.5 1

If you have an SQLAlchemy description of your database you can express where conditions using SQLAlchemy expressions

In [642]: metadata = sa.MetaData() In [643]: data_table = sa.Table( .....: "data", .....: metadata, .....: sa.Column("index", sa.Integer), .....: sa.Column("Date", sa.DateTime), .....: sa.Column("Col_1", sa.String), .....: sa.Column("Col_2", sa.Float), .....: sa.Column("Col_3", sa.Boolean), .....: ) .....: In [644]: pd.read_sql(sa.select([data_table]).where(data_table.c.Col_3 is True), engine) Out[644]: Empty DataFrame Columns: [index, Date, Col_1, Col_2, Col_3] Index: []

You can combine SQLAlchemy expressions with parameters passed to read_sql() using sqlalchemy.bindparam()

In [645]: import datetime as dt In [646]: expr = sa.select([data_table]).where(data_table.c.Date > sa.bindparam("date")) In [647]: pd.read_sql(expr, engine, params={"date": dt.datetime(2010, 10, 18)}) Out[647]: index Date Col_1 Col_2 Col_3 0 1 2010-10-19 Y -12.50 False 1 2 2010-10-20 Z 5.73 True

Sqlite fallback#

The use of sqlite is supported without using SQLAlchemy. This mode requires a Python database adapter which respects the Python DB-API.

You can create connections like so:

import sqlite3

con = sqlite3.connect(":memory:")

And then issue the following queries:

data.to_sql("data", con)
pd.read_sql_query("SELECT * FROM data", con)

Google BigQuery#

Warning

Starting in 0.20.0, pandas has split off Google BigQuery support into the separate package pandas-gbq. You can pip install pandas-gbq to get it.

The pandas-gbq package provides functionality to read/write from Google BigQuery.

pandas integrates with this external package. If pandas-gbq is installed, you can use the pandas methods pd.read_gbq and DataFrame.to_gbq, which will call the respective functions from pandas-gbq.

Full documentation can be found here.

Stata format#

Writing to stata format#

The method to_stata() will write a DataFrame into a .dta file. The format version of this file is always 115 (Stata 12).

In [648]: df = pd.DataFrame(np.random.randn(10, 2), columns=list("AB")) In [649]: df.to_stata("stata.dta")

Stata data files have limited data type support; only strings with 244 or fewer characters, int8, int16, int32, float32 and float64 can be stored in .dta files. Additionally, Stata reserves certain values to represent missing data. Exporting a non-missing value that is outside of the permitted range in Stata for a particular data type will retype the variable to the next larger size. For example, int8 values are restricted to lie between -127 and 100 in Stata, and so variables with values above 100 will trigger a conversion to int16. nan values in floating points data types are stored as the basic missing data type (. in Stata).

Note

It is not possible to export missing data values for integer data types.

The Stata writer gracefully handles other data types including int64, bool, uint8, uint16, uint32 by casting to the smallest supported type that can represent the data. For example, data with a type of uint8 will be cast to int8 if all values are less than 100 (the upper bound for non-missing int8 data in Stata), or, if values are outside of this range, the variable is cast to int16.

Warning

Conversion from int64 to float64 may result in a loss of precision if int64 values are larger than 2**53.

Warning

StataWriter and to_stata() only support fixed width strings containing up to 244 characters, a limitation imposed by the version 115 dta file format. Attempting to write Stata dta files with strings longer than 244 characters raises a ValueError.

Reading from Stata format#

The top-level function read_stata will read a dta file and return either a DataFrame or a StataReader that can be used to read the file incrementally.

In [650]: pd.read_stata("stata.dta") Out[650]: index A B 0 0 -1.690072 0.405144 1 1 -1.511309 -1.531396 2 2 0.572698 -1.106845 3 3 -1.185859 0.174564 4 4 0.603797 -1.796129 5 5 -0.791679 1.173795 6 6 -0.277710 1.859988 7 7 -0.258413 1.251808 8 8 1.443262 0.441553 9 9 1.168163 -2.054946

Specifying a chunksize yields a StataReader instance that can be used to read chunksize lines from the file at a time. The StataReader object can be used as an iterator.

In [651]: with pd.read_stata("stata.dta", chunksize=3) as reader: .....: for df in reader: .....: print(df.shape) .....: (3, 3) (3, 3) (3, 3) (1, 3)

For more fine-grained control, use iterator=True and specify chunksize with each call to read().

In [652]: with pd.read_stata("stata.dta", iterator=True) as reader: .....: chunk1 = reader.read(5) .....: chunk2 = reader.read(5) .....:

Currently the index is retrieved as a column.

The parameter convert_categoricals indicates whether value labels should be read and used to create a Categorical variable from them. Value labels can also be retrieved by the function value_labels, which requires read() to be called before use.
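
A minimal sketch of pulling the raw value labels out of a reader (stata.dta written above carries no value labels, so the dict would simply be empty; the calling pattern is what matters):

with pd.read_stata("stata.dta", iterator=True) as reader:
    df = reader.read()            # read() must be called before value_labels()
    labels = reader.value_labels()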

The parameter convert_missing indicates whether missing value representations in Stata should be preserved. If False (the default), missing values are represented as np.nan. If True, missing values are represented using StataMissingValue objects, and columns containing missing values will have object data type.

Note

read_stata() and StataReader support .dta formats 113-115 (Stata 10-12), 117 (Stata 13), and 118 (Stata 14).

Note

Setting preserve_dtypes=False will upcast to the standard pandas data types: int64 for all integer types and float64 for floating point data. By default, the Stata data types are preserved when importing.
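
For example, a minimal sketch reusing the stata.dta file written above:

# all integer columns come back as int64 and floating columns as float64
df = pd.read_stata("stata.dta", preserve_dtypes=False)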

Categorical data#

Categorical data can be exported to Stata data files as value labeled data. The exported data consists of the underlying category codes as integer data values and the categories as value labels. Stata does not have an explicit equivalent to a Categorical and information about whether the variable is ordered is lost when exporting.

Warning

Stata only supports string value labels, and so str is called on the categories when exporting data. Exporting Categorical variables with non-string categories produces a warning, and can result in a loss of information if the str representations of the categories are not unique.

Labeled data can similarly be imported from Stata data files as Categorical variables using the keyword argument convert_categoricals (True by default). The keyword argument order_categoricals (True by default) determines whether imported Categorical variables are ordered.

Note

When importing categorical data, the values of the variables in the Stata data file are not preserved since Categorical variables always use integer data types between -1 and n-1 where n is the number of categories. If the original values in the Stata data file are required, these can be imported by setting convert_categoricals=False, which will import original data (but not the variable labels). The original values can be matched to the imported categorical data since there is a simple mapping between the original Stata data values and the category codes of imported Categorical variables: missing values are assigned code -1, and the smallest original value is assigned 0, the second smallest is assigned 1 and so on until the largest original value is assigned the code n-1.

Note

Stata supports partially labeled series. These series have value labels for some but not all data values. Importing a partially labeled series will produce a Categorical with string categories for the values that are labeled and numeric categories for values with no label.

SAS formats#

The top-level function read_sas() can read (but not write) SAS XPORT (.xpt) and (since v0.18.0) SAS7BDAT (.sas7bdat) format files.

SAS files only contain two value types: ASCII text and floating point values (usually 8 bytes but sometimes truncated). For xport files, there is no automatic type conversion to integers, dates, or categoricals. For SAS7BDAT files, the format codes may allow date variables to be automatically converted to dates. By default the whole file is read and returned as a DataFrame.

Specify a chunksize or use iterator=True to obtain reader objects (XportReader or SAS7BDATReader) for incrementally reading the file. The reader objects also have attributes that contain additional information about the file and its variables.

Read a SAS7BDAT file:

df = pd.read_sas("sas_data.sas7bdat")

Obtain an iterator and read an XPORT file 100,000 lines at a time:

def do_something(chunk):
    pass

with pd.read_sas("sas_xport.xpt", chunksize=100000) as rdr:
    for chunk in rdr:
        do_something(chunk)

The specification for the xport file format is available from the SAS web site.

No official documentation is available for the SAS7BDAT format.

SPSS formats#

New in version 0.25.0.

The top-level function read_spss() can read (but not write) SPSS SAV (.sav) and ZSAV (.zsav) format files.

SPSS files contain column names. By default the whole file is read, categorical columns are converted into pd.Categorical, and a DataFrame with all columns is returned.

Specify the usecols parameter to obtain a subset of columns. Specify convert_categoricals=False to avoid converting categorical columns into pd.Categorical.

Read an SPSS file:

df = pd.read_spss("spss_data.sav")

Extract a subset of columns contained in usecols from an SPSS file and avoid converting categorical columns into pd.Categorical:

df = pd.read_spss(
    "spss_data.sav",
    usecols=["foo", "bar"],
    convert_categoricals=False,
)

More information about the SAV and ZSAV file formats is available here.

Other file formats#

pandas itself only supports IO with a limited set of file formats that map cleanly to its tabular data model. For reading and writing other file formats into and from pandas, we recommend these packages from the broader community.

netCDF#

xarray provides data structures inspired by the pandas DataFrame for working with multi-dimensional datasets, with a focus on the netCDF file format and easy conversion to and from pandas.

Performance considerations#

This is an informal comparison of various IO methods, using pandas 0.24.2. Timings are machine dependent and small differences should be ignored.

In [1]: sz = 1000000 In [2]: df = pd.DataFrame({'A': np.random.randn(sz), 'B': [1] * sz}) In [3]: df.info() RangeIndex: 1000000 entries, 0 to 999999 Data columns (total 2 columns): A 1000000 non-null float64 B 1000000 non-null int64 dtypes: float64(1), int64(1) memory usage: 15.3 MB

The following test functions will be used below to compare the performance of several IO methods:

import numpy as np
import os
import sqlite3

sz = 1000000
np.random.seed(42)
df = pd.DataFrame({"A": np.random.randn(sz), "B": [1] * sz})


def test_sql_write(df):
    if os.path.exists("test.sql"):
        os.remove("test.sql")
    sql_db = sqlite3.connect("test.sql")
    df.to_sql(name="test_table", con=sql_db)
    sql_db.close()

def test_sql_read():
    sql_db = sqlite3.connect("test.sql")
    pd.read_sql_query("select * from test_table", sql_db)
    sql_db.close()

def test_hdf_fixed_write(df):
    df.to_hdf("test_fixed.hdf", "test", mode="w")

def test_hdf_fixed_read():
    pd.read_hdf("test_fixed.hdf", "test")

def test_hdf_fixed_write_compress(df):
    df.to_hdf("test_fixed_compress.hdf", "test", mode="w", complib="blosc")

def test_hdf_fixed_read_compress():
    pd.read_hdf("test_fixed_compress.hdf", "test")

def test_hdf_table_write(df):
    df.to_hdf("test_table.hdf", "test", mode="w", format="table")

def test_hdf_table_read():
    pd.read_hdf("test_table.hdf", "test")

def test_hdf_table_write_compress(df):
    df.to_hdf(
        "test_table_compress.hdf", "test", mode="w", complib="blosc", format="table"
    )

def test_hdf_table_read_compress():
    pd.read_hdf("test_table_compress.hdf", "test")

def test_csv_write(df):
    df.to_csv("test.csv", mode="w")

def test_csv_read():
    pd.read_csv("test.csv", index_col=0)

def test_feather_write(df):
    df.to_feather("test.feather")

def test_feather_read():
    pd.read_feather("test.feather")

def test_pickle_write(df):
    df.to_pickle("test.pkl")

def test_pickle_read():
    pd.read_pickle("test.pkl")

def test_pickle_write_compress(df):
    df.to_pickle("test.pkl.compress", compression="xz")

def test_pickle_read_compress():
    pd.read_pickle("test.pkl.compress", compression="xz")

def test_parquet_write(df):
    df.to_parquet("test.parquet")

def test_parquet_read():
    pd.read_parquet("test.parquet")

When writing, the top three functions in terms of speed are test_feather_write, test_hdf_fixed_write and test_hdf_fixed_write_compress.

In [4]: %timeit test_sql_write(df) 3.29 s ± 43.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [5]: %timeit test_hdf_fixed_write(df) 19.4 ms ± 560 µs per loop (mean ± std. dev. of 7 runs, 1 loop each) In [6]: %timeit test_hdf_fixed_write_compress(df) 19.6 ms ± 308 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [7]: %timeit test_hdf_table_write(df) 449 ms ± 5.61 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [8]: %timeit test_hdf_table_write_compress(df) 448 ms ± 11.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [9]: %timeit test_csv_write(df) 3.66 s ± 26.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [10]: %timeit test_feather_write(df) 9.75 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [11]: %timeit test_pickle_write(df) 30.1 ms ± 229 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [12]: %timeit test_pickle_write_compress(df) 4.29 s ± 15.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [13]: %timeit test_parquet_write(df) 67.6 ms ± 706 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

When reading, the top three functions in terms of speed are test_feather_read, test_pickle_read and test_hdf_fixed_read.

In [14]: %timeit test_sql_read() 1.77 s ± 17.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [15]: %timeit test_hdf_fixed_read() 19.4 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [16]: %timeit test_hdf_fixed_read_compress() 19.5 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [17]: %timeit test_hdf_table_read() 38.6 ms ± 857 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [18]: %timeit test_hdf_table_read_compress() 38.8 ms ± 1.49 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) In [19]: %timeit test_csv_read() 452 ms ± 9.04 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [20]: %timeit test_feather_read() 12.4 ms ± 99.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [21]: %timeit test_pickle_read() 18.4 ms ± 191 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) In [22]: %timeit test_pickle_read_compress() 915 ms ± 7.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [23]: %timeit test_parquet_read() 24.4 ms ± 146 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

The files test.pkl.compress, test.parquet and test.feather took the least space on disk (in bytes).

29519500 Oct 10 06:45 test.csv
16000248 Oct 10 06:45 test.feather
 8281983 Oct 10 06:49 test.parquet
16000857 Oct 10 06:47 test.pkl
 7552144 Oct 10 06:48 test.pkl.compress
34816000 Oct 10 06:42 test.sql
24009288 Oct 10 06:43 test_fixed.hdf
24009288 Oct 10 06:43 test_fixed_compress.hdf
24458940 Oct 10 06:44 test_table.hdf
24458940 Oct 10 06:44 test_table_compress.hdf
