10 minutes to Mars DataFrame

This is a short introduction to Mars DataFrame, which is adapted from 10 minutes to pandas.

Customarily, we import as follows:

In [1]: import mars.tensor as mt

In [2]: import mars.dataframe as md

Object creation

Creating a Series by passing a list of values, letting Mars create a default integer index:

In [3]: s = md.Series([1, 3, 5, mt.nan, 6, 8])

In [4]: s.execute()
Out[4]: 
0    1.0
1    3.0
2    5.0
3    NaN
4    6.0
5    8.0
dtype: float64

Creating a DataFrame by passing a Mars tensor, with a datetime index and labeled columns:

In [5]: dates = md.date_range('20130101', periods=6)

In [6]: dates.execute()
Out[6]: 
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')

In [7]: df = md.DataFrame(mt.random.randn(6, 4), index=dates, columns=list('ABCD'))

In [8]: df.execute()
Out[8]: 
                   A         B         C         D
2013-01-01  0.572140  0.124694 -0.928270 -0.594956
2013-01-02  1.193670 -0.340070  0.612500 -1.947492
2013-01-03  0.333611  1.763351 -0.840694  0.566314
2013-01-04 -0.911812 -0.191421 -2.479027  0.046732
2013-01-05  2.386801 -0.914918  0.489357 -0.486204
2013-01-06  0.130482  0.549315 -0.151800  0.948822

Creating a DataFrame by passing a dict of objects that can be converted to a series-like structure:

In [9]: df2 = md.DataFrame({'A': 1.,
   ...:                     'B': md.Timestamp('20130102'),
   ...:                     'C': md.Series(1, index=list(range(4)), dtype='float32'),
   ...:                     'D': mt.array([3] * 4, dtype='int32'),
   ...:                     'E': 'foo'})
   ...: 

In [10]: df2.execute()
Out[10]: 
     A          B    C  D    E
0  1.0 2013-01-02  1.0  3  foo
1  1.0 2013-01-02  1.0  3  foo
2  1.0 2013-01-02  1.0  3  foo
3  1.0 2013-01-02  1.0  3  foo

The columns of the resulting DataFrame have different dtypes.

In [11]: df2.dtypes
Out[11]: 
A           float64
B    datetime64[ns]
C           float32
D             int32
E            object
dtype: object

Viewing data

Here is how to view the top and bottom rows of the frame:

In [12]: df.head().execute()
Out[12]: 
                   A         B         C         D
2013-01-01  0.572140  0.124694 -0.928270 -0.594956
2013-01-02  1.193670 -0.340070  0.612500 -1.947492
2013-01-03  0.333611  1.763351 -0.840694  0.566314
2013-01-04 -0.911812 -0.191421 -2.479027  0.046732
2013-01-05  2.386801 -0.914918  0.489357 -0.486204

In [13]: df.tail(3).execute()
Out[13]: 
                   A         B         C         D
2013-01-04 -0.911812 -0.191421 -2.479027  0.046732
2013-01-05  2.386801 -0.914918  0.489357 -0.486204
2013-01-06  0.130482  0.549315 -0.151800  0.948822

Display the index and the columns:

In [14]: df.index.execute()
Out[14]: 
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')

In [15]: df.columns.execute()
Out[15]: Index(['A', 'B', 'C', 'D'], dtype='object')

DataFrame.to_tensor() gives a Mars tensor representation of the underlying data. Note that this can be an expensive operation when your DataFrame has columns with different data types, which comes down to a fundamental difference between DataFrame and tensor: tensors have one dtype for the entire tensor, while DataFrames have one dtype per column. When you call DataFrame.to_tensor(), Mars DataFrame will find the tensor dtype that can hold all of the dtypes in the DataFrame. This may end up being object, which requires casting every value to a Python object.
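Because Mars mirrors the pandas API, the dtype promotion can be illustrated locally with plain pandas, whose DataFrame.to_numpy() is the counterpart of DataFrame.to_tensor() (a sketch, not Mars code):

```python
import numpy as np
import pandas as pd

# All-float frame: the common dtype is float64, so no casting is needed.
df_float = pd.DataFrame({'A': [1.0, 2.0], 'B': [3.0, 4.0]})
print(df_float.to_numpy().dtype)   # float64

# Mixed dtypes: the only dtype that can hold floats and strings
# together is object, so every value gets boxed as a Python object.
df_mixed = pd.DataFrame({'A': [1.0, 2.0], 'B': ['x', 'y']})
print(df_mixed.to_numpy().dtype)   # object
```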

For df, our DataFrame of all floating-point values, DataFrame.to_tensor() is fast and doesn’t require copying data.

In [16]: df.to_tensor().execute()
Out[16]: 
array([[ 0.57214028,  0.12469417, -0.9282697 , -0.59495576],
       [ 1.19367045, -0.34007038,  0.61249971, -1.94749212],
       [ 0.33361128,  1.76335129, -0.84069401,  0.56631384],
       [-0.91181164, -0.19142078, -2.47902704,  0.04673182],
       [ 2.38680148, -0.91491751,  0.48935718, -0.48620354],
       [ 0.13048177,  0.54931487, -0.15179998,  0.94882234]])

For df2, the DataFrame with multiple dtypes, DataFrame.to_tensor() is relatively expensive.

In [17]: df2.to_tensor().execute()
Out[17]: 
array([[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo']],
      dtype=object)

Note

DataFrame.to_tensor() does not include the index or column labels in the output.

describe() shows a quick statistic summary of your data:

In [18]: df.describe().execute()
Out[18]: 
              A         B         C         D
count  6.000000  6.000000  6.000000  6.000000
mean   0.617482  0.165159 -0.549656 -0.244464
std    1.106439  0.922215  1.143588  1.024468
min   -0.911812 -0.914918 -2.479027 -1.947492
25%    0.181264 -0.302908 -0.906376 -0.567768
50%    0.452876 -0.033363 -0.496247 -0.219736
75%    1.038288  0.443160  0.329068  0.436418
max    2.386801  1.763351  0.612500  0.948822

Sorting by an axis:

In [19]: df.sort_index(axis=1, ascending=False).execute()
Out[19]: 
                   D         C         B         A
2013-01-01 -0.594956 -0.928270  0.124694  0.572140
2013-01-02 -1.947492  0.612500 -0.340070  1.193670
2013-01-03  0.566314 -0.840694  1.763351  0.333611
2013-01-04  0.046732 -2.479027 -0.191421 -0.911812
2013-01-05 -0.486204  0.489357 -0.914918  2.386801
2013-01-06  0.948822 -0.151800  0.549315  0.130482

Sorting by values:

In [20]: df.sort_values(by='B').execute()
Out[20]: 
                   A         B         C         D
2013-01-05  2.386801 -0.914918  0.489357 -0.486204
2013-01-02  1.193670 -0.340070  0.612500 -1.947492
2013-01-04 -0.911812 -0.191421 -2.479027  0.046732
2013-01-01  0.572140  0.124694 -0.928270 -0.594956
2013-01-06  0.130482  0.549315 -0.151800  0.948822
2013-01-03  0.333611  1.763351 -0.840694  0.566314

Selection

Note

While standard Python / NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code, we recommend the optimized DataFrame data access methods, .at, .iat, .loc and .iloc.
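As a minimal sketch of why the label-based accessors are preferred for setting (shown in plain pandas, whose API Mars mirrors):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})

# .loc addresses row and column in a single indexing step; chained
# forms such as df['A'][0] = ... may silently write to a copy.
df.loc[0, 'A'] = 10
print(df.loc[0, 'A'])  # 10
```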

Getting

Selecting a single column, which yields a Series, equivalent to df.A:

In [21]: df['A'].execute()
Out[21]: 
2013-01-01    0.572140
2013-01-02    1.193670
2013-01-03    0.333611
2013-01-04   -0.911812
2013-01-05    2.386801
2013-01-06    0.130482
Freq: D, Name: A, dtype: float64

Selecting via [], which slices the rows.

In [22]: df[0:3].execute()
Out[22]: 
                   A         B         C         D
2013-01-01  0.572140  0.124694 -0.928270 -0.594956
2013-01-02  1.193670 -0.340070  0.612500 -1.947492
2013-01-03  0.333611  1.763351 -0.840694  0.566314

In [23]: df['20130102':'20130104'].execute()
Out[23]: 
                   A         B         C         D
2013-01-02  1.193670 -0.340070  0.612500 -1.947492
2013-01-03  0.333611  1.763351 -0.840694  0.566314
2013-01-04 -0.911812 -0.191421 -2.479027  0.046732

Selection by label

For getting a cross section using a label:

In [24]: df.loc['20130101'].execute()
Out[24]: 
A    0.572140
B    0.124694
C   -0.928270
D   -0.594956
Name: 2013-01-01 00:00:00, dtype: float64

Selecting on a multi-axis by label:

In [25]: df.loc[:, ['A', 'B']].execute()
Out[25]: 
                   A         B
2013-01-01  0.572140  0.124694
2013-01-02  1.193670 -0.340070
2013-01-03  0.333611  1.763351
2013-01-04 -0.911812 -0.191421
2013-01-05  2.386801 -0.914918
2013-01-06  0.130482  0.549315

Showing label slicing, both endpoints are included:

In [26]: df.loc['20130102':'20130104', ['A', 'B']].execute()
Out[26]: 
                   A         B
2013-01-02  1.193670 -0.340070
2013-01-03  0.333611  1.763351
2013-01-04 -0.911812 -0.191421

Reduction in the dimensions of the returned object:

In [27]: df.loc['20130102', ['A', 'B']].execute()
Out[27]: 
A    1.19367
B   -0.34007
Name: 2013-01-02 00:00:00, dtype: float64

For getting a scalar value:

In [28]: df.loc['20130101', 'A'].execute()
Out[28]: 0.5721402827747063

For getting fast access to a scalar (equivalent to the prior method):

In [29]: df.at['20130101', 'A'].execute()
Out[29]: 0.5721402827747063

Selection by position

Select via the position of the passed integers:

In [30]: df.iloc[3].execute()
Out[30]: 
A   -0.911812
B   -0.191421
C   -2.479027
D    0.046732
Name: 2013-01-04 00:00:00, dtype: float64

By integer slices, acting similar to NumPy/Python:

In [31]: df.iloc[3:5, 0:2].execute()
Out[31]: 
                   A         B
2013-01-04 -0.911812 -0.191421
2013-01-05  2.386801 -0.914918

By lists of integer position locations, similar to the NumPy/Python style:

In [32]: df.iloc[[1, 2, 4], [0, 2]].execute()
Out[32]: 
                   A         C
2013-01-02  1.193670  0.612500
2013-01-03  0.333611 -0.840694
2013-01-05  2.386801  0.489357

For slicing rows explicitly:

In [33]: df.iloc[1:3, :].execute()
Out[33]: 
                   A         B         C         D
2013-01-02  1.193670 -0.340070  0.612500 -1.947492
2013-01-03  0.333611  1.763351 -0.840694  0.566314

For slicing columns explicitly:

In [34]: df.iloc[:, 1:3].execute()
Out[34]: 
                   B         C
2013-01-01  0.124694 -0.928270
2013-01-02 -0.340070  0.612500
2013-01-03  1.763351 -0.840694
2013-01-04 -0.191421 -2.479027
2013-01-05 -0.914918  0.489357
2013-01-06  0.549315 -0.151800

For getting a value explicitly:

In [35]: df.iloc[1, 1].execute()
Out[35]: -0.3400703793352206

For getting fast access to a scalar (equivalent to the prior method):

In [36]: df.iat[1, 1].execute()
Out[36]: -0.3400703793352206

Boolean indexing

Using a single column’s values to select data.

In [37]: df[df['A'] > 0].execute()
Out[37]: 
                   A         B         C         D
2013-01-01  0.572140  0.124694 -0.928270 -0.594956
2013-01-02  1.193670 -0.340070  0.612500 -1.947492
2013-01-03  0.333611  1.763351 -0.840694  0.566314
2013-01-05  2.386801 -0.914918  0.489357 -0.486204
2013-01-06  0.130482  0.549315 -0.151800  0.948822

Selecting values from a DataFrame where a boolean condition is met.

In [38]: df[df > 0].execute()
Out[38]: 
                   A         B         C         D
2013-01-01  0.572140  0.124694       NaN       NaN
2013-01-02  1.193670       NaN  0.612500       NaN
2013-01-03  0.333611  1.763351       NaN  0.566314
2013-01-04       NaN       NaN       NaN  0.046732
2013-01-05  2.386801       NaN  0.489357       NaN
2013-01-06  0.130482  0.549315       NaN  0.948822

Operations

Stats

Operations in general exclude missing data.
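For instance, a reduction such as mean() skips NaN unless told otherwise (a plain-pandas sketch; Mars follows the same semantics):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
print(s.mean())               # 2.0 -- the NaN is excluded
print(s.mean(skipna=False))   # nan -- missing data propagates
```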

Performing a descriptive statistic:

In [39]: df.mean().execute()
Out[39]: 
A    0.617482
B    0.165159
C   -0.549656
D   -0.244464
dtype: float64

Same operation on the other axis:

In [40]: df.mean(1).execute()
Out[40]: 
2013-01-01   -0.206598
2013-01-02   -0.120348
2013-01-03    0.455646
2013-01-04   -0.883882
2013-01-05    0.368759
2013-01-06    0.369205
Freq: D, dtype: float64

Operating with objects that have different dimensionality and need alignment. In addition, Mars DataFrame automatically broadcasts along the specified dimension.

In [41]: s = md.Series([1, 3, 5, mt.nan, 6, 8], index=dates).shift(2)

In [42]: s.execute()
Out[42]: 
2013-01-01    NaN
2013-01-02    NaN
2013-01-03    1.0
2013-01-04    3.0
2013-01-05    5.0
2013-01-06    NaN
Freq: D, dtype: float64

In [43]: df.sub(s, axis='index').execute()
Out[43]: 
                   A         B         C         D
2013-01-01       NaN       NaN       NaN       NaN
2013-01-02       NaN       NaN       NaN       NaN
2013-01-03 -0.666389  0.763351 -1.840694 -0.433686
2013-01-04 -3.911812 -3.191421 -5.479027 -2.953268
2013-01-05 -2.613199 -5.914918 -4.510643 -5.486204
2013-01-06       NaN       NaN       NaN       NaN

Apply

Applying functions to the data:

In [44]: df.apply(lambda x: x.max() - x.min()).execute()
Out[44]: 
A    3.298613
B    2.678269
C    3.091527
D    2.896314
dtype: float64

String Methods

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.
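For example, str.contains interprets its pattern as a regular expression unless regex=False is passed (a plain-pandas sketch; Mars mirrors this behaviour):

```python
import pandas as pd

s = pd.Series(['cat', 'dog', 'c.t'])

# By default the pattern is a regular expression: '.' matches any char.
print(s.str.contains('c.t'))               # True, False, True

# regex=False requests a literal substring match instead.
print(s.str.contains('c.t', regex=False))  # False, False, True
```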

In [45]: s = md.Series(['A', 'B', 'C', 'Aaba', 'Baca', mt.nan, 'CABA', 'dog', 'cat'])

In [46]: s.str.lower().execute()
Out[46]: 
0       a
1       b
2       c
3    aaba
4    baca
5     NaN
6    caba
7     dog
8     cat
dtype: object

Merge

Concat

Mars DataFrame provides various facilities for easily combining Series and DataFrame objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.

Concatenating DataFrame objects together with concat():

In [47]: df = md.DataFrame(mt.random.randn(10, 4))

In [48]: df.execute()
Out[48]: 
          0         1         2         3
0  0.865336 -0.164171  1.037110  0.939320
1 -0.874971  0.944145  0.799537 -0.226670
2  0.050179 -0.635861  0.928079 -0.559748
3  0.776064  0.903772  0.631800 -0.403639
4  0.023813  0.882686  0.445662 -1.071524
5 -1.060269  1.027663  0.971989  1.407123
6  0.989096 -0.558727 -0.894881  1.040315
7  1.283829  0.133717 -0.924659 -0.930725
8 -0.033920 -1.279681 -0.097244 -0.642539
9  0.809765  0.973865 -0.028517  0.452735

# break it into pieces
In [49]: pieces = [df[:3], df[3:7], df[7:]]

In [50]: md.concat(pieces).execute()
Out[50]: 
          0         1         2         3
0  0.865336 -0.164171  1.037110  0.939320
1 -0.874971  0.944145  0.799537 -0.226670
2  0.050179 -0.635861  0.928079 -0.559748
3  0.776064  0.903772  0.631800 -0.403639
4  0.023813  0.882686  0.445662 -1.071524
5 -1.060269  1.027663  0.971989  1.407123
6  0.989096 -0.558727 -0.894881  1.040315
7  1.283829  0.133717 -0.924659 -0.930725
8 -0.033920 -1.279681 -0.097244 -0.642539
9  0.809765  0.973865 -0.028517  0.452735

Note

Adding a column to a DataFrame is relatively fast. However, adding a row requires a copy, and may be expensive. We recommend passing a pre-built list of records to the DataFrame constructor instead of building a DataFrame by iteratively appending records to it.
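A sketch of the recommended pattern (plain pandas shown; the same applies to Mars): accumulate plain Python records and construct the frame once.

```python
import pandas as pd

# Collect rows as plain dicts first...
records = [{'A': i, 'B': i * 2} for i in range(4)]

# ...then build the DataFrame in a single constructor call, instead of
# appending row by row, which copies the frame on every iteration.
df = pd.DataFrame(records)
print(df.shape)  # (4, 2)
```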

Join

SQL style merges. See the Database style joining section.

In [51]: left = md.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})

In [52]: right = md.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})

In [53]: left.execute()
Out[53]: 
   key  lval
0  foo     1
1  foo     2

In [54]: right.execute()
Out[54]: 
   key  rval
0  foo     4
1  foo     5

In [55]: md.merge(left, right, on='key').execute()
Out[55]: 
   key  lval  rval
0  foo     1     4
1  foo     1     5
2  foo     2     4
3  foo     2     5

Another example:

In [56]: left = md.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})

In [57]: right = md.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})

In [58]: left.execute()
Out[58]: 
   key  lval
0  foo     1
1  bar     2

In [59]: right.execute()
Out[59]: 
   key  rval
0  foo     4
1  bar     5

In [60]: md.merge(left, right, on='key').execute()
Out[60]: 
   key  lval  rval
0  foo     1     4
1  bar     2     5

Grouping

By “group by” we are referring to a process involving one or more of the following steps:

  • Splitting the data into groups based on some criteria
  • Applying a function to each group independently
  • Combining the results into a data structure

In [61]: df = md.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
   ....:                          'foo', 'bar', 'foo', 'foo'],
   ....:                    'B': ['one', 'one', 'two', 'three',
   ....:                          'two', 'two', 'one', 'three'],
   ....:                    'C': mt.random.randn(8),
   ....:                    'D': mt.random.randn(8)})
   ....: 

In [62]: df.execute()
Out[62]: 
     A      B         C         D
0  foo    one  0.738336 -0.097878
1  bar    one  0.524878 -1.555377
2  foo    two  0.974853  0.340334
3  bar  three  1.846379  0.266051
4  foo    two -1.127621 -0.692943
5  bar    two -0.976904 -0.189871
6  foo    one  1.433725  0.106069
7  foo  three  1.570558  0.069969

Grouping and then applying the sum() function to the resulting groups.

In [63]: df.groupby('A').sum().execute()
Out[63]: 
            C         D
A                      
bar  1.394353 -1.479196
foo  3.589852 -0.274449

Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.

In [64]: df.groupby(['A', 'B']).sum().execute()
Out[64]: 
                  C         D
A   B                        
foo one    2.172061  0.008192
    two   -0.152768 -0.352609
    three  1.570558  0.069969
bar one    0.524878 -1.555377
    two   -0.976904 -0.189871
    three  1.846379  0.266051

Plotting

We use the standard convention for referencing the matplotlib API:

In [65]: import matplotlib.pyplot as plt

In [66]: plt.close('all')

In [67]: ts = md.Series(mt.random.randn(1000),
   ....:                index=md.date_range('1/1/2000', periods=1000))
   ....: 

In [68]: ts = ts.cumsum()

In [69]: ts.plot()
Out[69]: <matplotlib.axes._subplots.AxesSubplot at 0x7f89818dc4a8>
[image: series_plot_basic.png]

On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:

In [70]: df = md.DataFrame(mt.random.randn(1000, 4), index=ts.index,
   ....:                   columns=['A', 'B', 'C', 'D'])
   ....: 

In [71]: df = df.cumsum()

In [72]: plt.figure()
Out[72]: <Figure size 640x480 with 0 Axes>

In [73]: df.plot()
Out[73]: <matplotlib.axes._subplots.AxesSubplot at 0x7f898189cef0>

In [74]: plt.legend(loc='best')
Out[74]: <matplotlib.legend.Legend at 0x7f89818a2470>
[image: frame_plot_basic.png]

Getting data in/out

CSV

Writing to a csv file.

In [75]: df.to_csv('foo.csv').execute()
Out[75]: 
Empty DataFrame
Columns: []
Index: []

Reading from a csv file.

In [76]: md.read_csv('foo.csv').execute()
Out[76]: 
     Unnamed: 0          A          B          C          D
0    2000-01-01  -0.835714   1.604884  -0.273930   0.408331
1    2000-01-02  -2.214988   1.924183  -0.669611  -1.671061
2    2000-01-03  -2.798588   1.333475   0.489577  -2.809427
3    2000-01-04  -3.653005   1.605684   2.354319  -2.949953
4    2000-01-05  -3.048120   3.977252   4.958628  -1.928357
..          ...        ...        ...        ...        ...
995  2002-09-22 -70.496656 -17.358709  83.185594 -10.073752
996  2002-09-23 -69.704160 -18.770357  82.685384  -8.525361
997  2002-09-24 -67.008692 -17.902152  83.345451  -7.996977
998  2002-09-25 -66.282040 -17.375870  85.056918  -7.616020
999  2002-09-26 -66.390527 -18.427182  84.591029  -7.547320

[1000 rows x 5 columns]
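The "Unnamed: 0" column above is the written index coming back as ordinary data. Naming the index on write lets read_csv restore it instead (a plain-pandas sketch using an in-memory buffer; Mars mirrors this API):

```python
import io

import pandas as pd

df = pd.DataFrame({'A': [1, 2]},
                  index=pd.date_range('2000-01-01', periods=2))

# Name the index column on write...
buf = io.StringIO(df.to_csv(index_label='date'))

# ...and tell read_csv to parse it back into the index.
restored = pd.read_csv(buf, index_col='date', parse_dates=['date'])
print(restored.index.equals(df.index))  # True
```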