10 minutes to Mars DataFrame#

This is a short introduction to Mars DataFrame, adapted from 10 minutes to pandas.

Customarily, we import as follows:

In [1]: import mars

In [2]: import mars.tensor as mt

In [3]: import mars.dataframe as md

Now create a new default session.

In [4]: mars.new_session()
Out[4]: <mars.deploy.oscar.session.SyncSession at 0x7f348d98a690>

Object creation#

Creating a Series by passing a list of values, letting it create a default integer index:

In [5]: s = md.Series([1, 3, 5, mt.nan, 6, 8])

In [6]: s.execute()
Out[6]: 
0    1.0
1    3.0
2    5.0
3    NaN
4    6.0
5    8.0
dtype: float64

Creating a DataFrame by passing a Mars tensor, with a datetime index and labeled columns:

In [7]: dates = md.date_range('20130101', periods=6)

In [8]: dates.execute()
Out[8]: 
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')

In [9]: df = md.DataFrame(mt.random.randn(6, 4), index=dates, columns=list('ABCD'))

In [10]: df.execute()
Out[10]: 
                   A         B         C         D
2013-01-01 -0.408528 -1.615864 -0.414608 -0.979481
2013-01-02 -0.490754  0.165001  0.953814  0.406053
2013-01-03  1.145544  0.437361  1.031433  0.605406
2013-01-04 -1.395899  0.031388 -0.313637  0.360562
2013-01-05  1.152373 -2.142283  0.454617 -1.062158
2013-01-06  2.122499  2.550671 -0.832217 -0.981746

Creating a DataFrame by passing a dict of objects that can be converted to series-like.

In [11]: df2 = md.DataFrame({'A': 1.,
   ....:                     'B': md.Timestamp('20130102'),
   ....:                     'C': md.Series(1, index=list(range(4)), dtype='float32'),
   ....:                     'D': mt.array([3] * 4, dtype='int32'),
   ....:                     'E': 'foo'})
   ....: 

In [12]: df2.execute()
Out[12]: 
     A          B    C  D    E
0  1.0 2013-01-02  1.0  3  foo
1  1.0 2013-01-02  1.0  3  foo
2  1.0 2013-01-02  1.0  3  foo
3  1.0 2013-01-02  1.0  3  foo

The columns of the resulting DataFrame have different dtypes.

In [13]: df2.dtypes
Out[13]: 
A           float64
B    datetime64[ns]
C           float32
D             int32
E            object
dtype: object

Viewing data#

Here is how to view the top and bottom rows of the frame:

In [14]: df.head().execute()
Out[14]: 
                   A         B         C         D
2013-01-01 -0.408528 -1.615864 -0.414608 -0.979481
2013-01-02 -0.490754  0.165001  0.953814  0.406053
2013-01-03  1.145544  0.437361  1.031433  0.605406
2013-01-04 -1.395899  0.031388 -0.313637  0.360562
2013-01-05  1.152373 -2.142283  0.454617 -1.062158

In [15]: df.tail(3).execute()
Out[15]: 
                   A         B         C         D
2013-01-04 -1.395899  0.031388 -0.313637  0.360562
2013-01-05  1.152373 -2.142283  0.454617 -1.062158
2013-01-06  2.122499  2.550671 -0.832217 -0.981746

Display the index, columns:

In [16]: df.index.execute()
Out[16]: 
DatetimeIndex(['2013-01-01', '2013-01-02', '2013-01-03', '2013-01-04',
               '2013-01-05', '2013-01-06'],
              dtype='datetime64[ns]', freq='D')

In [17]: df.columns.execute()
Out[17]: Index(['A', 'B', 'C', 'D'], dtype='object')

DataFrame.to_tensor() gives a Mars tensor representation of the underlying data. Note that this can be an expensive operation when your DataFrame has columns with different data types, which comes down to a fundamental difference between DataFrame and tensor: tensors have one dtype for the entire tensor, while DataFrames have one dtype per column. When you call DataFrame.to_tensor(), Mars DataFrame will find the tensor dtype that can hold all of the dtypes in the DataFrame. This may end up being object, which requires casting every value to a Python object.

For df, our DataFrame of all floating-point values, DataFrame.to_tensor() is fast and doesn’t require copying data.

In [18]: df.to_tensor().execute()
Out[18]: 
array([[-0.40852821, -1.61586429, -0.41460765, -0.97948149],
       [-0.49075401,  0.16500056,  0.95381411,  0.40605319],
       [ 1.14554439,  0.43736114,  1.03143266,  0.60540606],
       [-1.39589861,  0.03138783, -0.31363672,  0.36056206],
       [ 1.15237294, -2.14228337,  0.45461671, -1.0621575 ],
       [ 2.12249899,  2.55067103, -0.83221736, -0.98174604]])

For df2, the DataFrame with multiple dtypes, DataFrame.to_tensor() is relatively expensive.

In [19]: df2.to_tensor().execute()
Out[19]: 
array([[1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo'],
       [1.0, Timestamp('2013-01-02 00:00:00'), 1.0, 3, 'foo']],
      dtype=object)

Note

DataFrame.to_tensor() does not include the index or column labels in the output.

describe() shows a quick statistic summary of your data:

In [20]: df.describe().execute()
Out[20]: 
              A         B         C         D
count  6.000000  6.000000  6.000000  6.000000
mean   0.354206 -0.095621  0.146567 -0.275227
std    1.322780  1.665590  0.776435  0.807253
min   -1.395899 -2.142283 -0.832217 -1.062158
25%   -0.470198 -1.204051 -0.389365 -0.981180
50%    0.368508  0.098194  0.070490 -0.309460
75%    1.150666  0.369271  0.829015  0.394680
max    2.122499  2.550671  1.031433  0.605406

Sorting by an axis:

In [21]: df.sort_index(axis=1, ascending=False).execute()
Out[21]: 
                   D         C         B         A
2013-01-01 -0.979481 -0.414608 -1.615864 -0.408528
2013-01-02  0.406053  0.953814  0.165001 -0.490754
2013-01-03  0.605406  1.031433  0.437361  1.145544
2013-01-04  0.360562 -0.313637  0.031388 -1.395899
2013-01-05 -1.062158  0.454617 -2.142283  1.152373
2013-01-06 -0.981746 -0.832217  2.550671  2.122499

Sorting by values:

In [22]: df.sort_values(by='B').execute()
Out[22]: 
                   A         B         C         D
2013-01-05  1.152373 -2.142283  0.454617 -1.062158
2013-01-01 -0.408528 -1.615864 -0.414608 -0.979481
2013-01-04 -1.395899  0.031388 -0.313637  0.360562
2013-01-02 -0.490754  0.165001  0.953814  0.406053
2013-01-03  1.145544  0.437361  1.031433  0.605406
2013-01-06  2.122499  2.550671 -0.832217 -0.981746

Selection#

Note

While standard Python / NumPy expressions for selecting and setting are intuitive and come in handy for interactive work, for production code we recommend the optimized DataFrame data access methods, .at, .iat, .loc and .iloc.

Getting#

Selecting a single column, which yields a Series, equivalent to df.A:

In [23]: df['A'].execute()
Out[23]: 
2013-01-01   -0.408528
2013-01-02   -0.490754
2013-01-03    1.145544
2013-01-04   -1.395899
2013-01-05    1.152373
2013-01-06    2.122499
Freq: D, Name: A, dtype: float64

Selecting via [], which slices the rows.

In [24]: df[0:3].execute()
Out[24]: 
                   A         B         C         D
2013-01-01 -0.408528 -1.615864 -0.414608 -0.979481
2013-01-02 -0.490754  0.165001  0.953814  0.406053
2013-01-03  1.145544  0.437361  1.031433  0.605406

In [25]: df['20130102':'20130104'].execute()
Out[25]: 
                   A         B         C         D
2013-01-02 -0.490754  0.165001  0.953814  0.406053
2013-01-03  1.145544  0.437361  1.031433  0.605406
2013-01-04 -1.395899  0.031388 -0.313637  0.360562

Selection by label#

For getting a cross section using a label:

In [26]: df.loc['20130101'].execute()
Out[26]: 
A   -0.408528
B   -1.615864
C   -0.414608
D   -0.979481
Name: 2013-01-01 00:00:00, dtype: float64

Selecting on a multi-axis by label:

In [27]: df.loc[:, ['A', 'B']].execute()
Out[27]: 
                   A         B
2013-01-01 -0.408528 -1.615864
2013-01-02 -0.490754  0.165001
2013-01-03  1.145544  0.437361
2013-01-04 -1.395899  0.031388
2013-01-05  1.152373 -2.142283
2013-01-06  2.122499  2.550671

Showing label slicing, both endpoints are included:

In [28]: df.loc['20130102':'20130104', ['A', 'B']].execute()
Out[28]: 
                   A         B
2013-01-02 -0.490754  0.165001
2013-01-03  1.145544  0.437361
2013-01-04 -1.395899  0.031388
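The endpoint difference between label and positional slicing is easy to verify. Mars mirrors pandas indexing semantics, so a plain-pandas sketch (the small series here is illustrative, not one of the frames above):

```python
import pandas as pd

s = pd.Series([10, 20, 30, 40], index=list('abcd'))

# Label slicing with .loc includes both endpoints ...
print(s.loc['a':'c'].tolist())  # [10, 20, 30]

# ... while positional slicing with .iloc excludes the stop,
# exactly like ordinary Python slices.
print(s.iloc[0:2].tolist())  # [10, 20]
```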

Reduction in the dimensions of the returned object:

In [29]: df.loc['20130102', ['A', 'B']].execute()
Out[29]: 
A   -0.490754
B    0.165001
Name: 2013-01-02 00:00:00, dtype: float64

For getting a scalar value:

In [30]: df.loc['20130101', 'A'].execute()
Out[30]: -0.4085282147445123

For getting fast access to a scalar (equivalent to the prior method):

In [31]: df.at['20130101', 'A'].execute()
Out[31]: -0.4085282147445123

Selection by position#

Select via the position of the passed integers:

In [32]: df.iloc[3].execute()
Out[32]: 
A   -1.395899
B    0.031388
C   -0.313637
D    0.360562
Name: 2013-01-04 00:00:00, dtype: float64

By integer slices, acting similarly to NumPy/Python:

In [33]: df.iloc[3:5, 0:2].execute()
Out[33]: 
                   A         B
2013-01-04 -1.395899  0.031388
2013-01-05  1.152373 -2.142283

By lists of integer position locations, similar to the NumPy/Python style:

In [34]: df.iloc[[1, 2, 4], [0, 2]].execute()
Out[34]: 
                   A         C
2013-01-02 -0.490754  0.953814
2013-01-03  1.145544  1.031433
2013-01-05  1.152373  0.454617

For slicing rows explicitly:

In [35]: df.iloc[1:3, :].execute()
Out[35]: 
                   A         B         C         D
2013-01-02 -0.490754  0.165001  0.953814  0.406053
2013-01-03  1.145544  0.437361  1.031433  0.605406

For slicing columns explicitly:

In [36]: df.iloc[:, 1:3].execute()
Out[36]: 
                   B         C
2013-01-01 -1.615864 -0.414608
2013-01-02  0.165001  0.953814
2013-01-03  0.437361  1.031433
2013-01-04  0.031388 -0.313637
2013-01-05 -2.142283  0.454617
2013-01-06  2.550671 -0.832217

For getting a value explicitly:

In [37]: df.iloc[1, 1].execute()
Out[37]: 0.16500056492327506

For getting fast access to a scalar (equivalent to the prior method):

In [38]: df.iat[1, 1].execute()
Out[38]: 0.16500056492327506

Boolean indexing#

Using a single column’s values to select data.

In [39]: df[df['A'] > 0].execute()
Out[39]: 
                   A         B         C         D
2013-01-03  1.145544  0.437361  1.031433  0.605406
2013-01-05  1.152373 -2.142283  0.454617 -1.062158
2013-01-06  2.122499  2.550671 -0.832217 -0.981746

Selecting values from a DataFrame where a boolean condition is met.

In [40]: df[df > 0].execute()
Out[40]: 
                   A         B         C         D
2013-01-01       NaN       NaN       NaN       NaN
2013-01-02       NaN  0.165001  0.953814  0.406053
2013-01-03  1.145544  0.437361  1.031433  0.605406
2013-01-04       NaN  0.031388       NaN  0.360562
2013-01-05  1.152373       NaN  0.454617       NaN
2013-01-06  2.122499  2.550671       NaN       NaN

Operations#

Stats#

Operations in general exclude missing data.
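For instance, a NaN is skipped rather than propagated when computing a mean. Mars follows pandas semantics here, so the behaviour can be sketched with plain pandas (the series is illustrative):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

# The NaN is excluded: the mean is (1 + 3) / 2, not NaN.
print(s.mean())  # 2.0

# Pass skipna=False to propagate missing values instead.
print(s.mean(skipna=False))  # nan
```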

Performing a descriptive statistic:

In [41]: df.mean().execute()
Out[41]: 
A    0.354206
B   -0.095621
C    0.146567
D   -0.275227
dtype: float64

Same operation on the other axis:

In [42]: df.mean(1).execute()
Out[42]: 
2013-01-01   -0.854620
2013-01-02    0.258528
2013-01-03    0.804936
2013-01-04   -0.329396
2013-01-05   -0.399363
2013-01-06    0.714802
Freq: D, dtype: float64

Operating with objects that have different dimensionality and need alignment. In addition, Mars DataFrame automatically broadcasts along the specified dimension.

In [43]: s = md.Series([1, 3, 5, mt.nan, 6, 8], index=dates).shift(2)

In [44]: s.execute()
Out[44]: 
2013-01-01    NaN
2013-01-02    NaN
2013-01-03    1.0
2013-01-04    3.0
2013-01-05    5.0
2013-01-06    NaN
Freq: D, dtype: float64

In [45]: df.sub(s, axis='index').execute()
Out[45]: 
                   A         B         C         D
2013-01-01       NaN       NaN       NaN       NaN
2013-01-02       NaN       NaN       NaN       NaN
2013-01-03  0.145544 -0.562639  0.031433 -0.394594
2013-01-04 -4.395899 -2.968612 -3.313637 -2.639438
2013-01-05 -3.847627 -7.142283 -4.545383 -6.062158
2013-01-06       NaN       NaN       NaN       NaN

Apply#

Applying functions to the data:

In [46]: df.apply(lambda x: x.max() - x.min()).execute()
Out[46]: 
A    3.518398
B    4.692954
C    1.863650
D    1.667564
dtype: float64

String Methods#

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.

In [47]: s = md.Series(['A', 'B', 'C', 'Aaba', 'Baca', mt.nan, 'CABA', 'dog', 'cat'])

In [48]: s.str.lower().execute()
Out[48]: 
0       a
1       b
2       c
3    aaba
4    baca
5     NaN
6    caba
7     dog
8     cat
dtype: object
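The regular-expression default mentioned above shows up in methods such as str.contains. Mars mirrors the pandas string API, so a plain-pandas sketch (the pattern is chosen purely for illustration):

```python
import pandas as pd

s = pd.Series(['Aaba', 'Baca', 'dog', 'cat'])

# The pattern is treated as a regular expression by default:
# '^.a' matches any string whose second character is 'a'.
print(s.str.contains('^.a').tolist())  # [True, True, False, True]
```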

Merge#

Concat#

Mars DataFrame provides various facilities for easily combining Series and DataFrame objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.

Concatenating DataFrame objects together with concat():

In [49]: df = md.DataFrame(mt.random.randn(10, 4))

In [50]: df.execute()
Out[50]: 
          0         1         2         3
0  0.115075  0.884831  0.478299  0.416459
1  0.250367  0.826709  0.603764  0.811368
2 -1.710826 -0.622261 -0.583373  1.292714
3  0.215724  0.119656  1.232683  0.050237
4 -0.439354 -0.683713  0.005588  0.566589
5 -0.220359 -0.294231 -0.035997  0.134134
6  0.313627 -0.071105  0.176026  0.039439
7 -1.006887  1.401754  0.060687 -1.002063
8 -1.292104  0.252355 -0.904728  1.582943
9  1.014965 -0.848492  1.425587  0.562507

# break it into pieces
In [51]: pieces = [df[:3], df[3:7], df[7:]]

In [52]: md.concat(pieces).execute()
Out[52]: 
          0         1         2         3
0  0.115075  0.884831  0.478299  0.416459
1  0.250367  0.826709  0.603764  0.811368
2 -1.710826 -0.622261 -0.583373  1.292714
3  0.215724  0.119656  1.232683  0.050237
4 -0.439354 -0.683713  0.005588  0.566589
5 -0.220359 -0.294231 -0.035997  0.134134
6  0.313627 -0.071105  0.176026  0.039439
7 -1.006887  1.401754  0.060687 -1.002063
8 -1.292104  0.252355 -0.904728  1.582943
9  1.014965 -0.848492  1.425587  0.562507

Note

Adding a column to a DataFrame is relatively fast. However, adding a row requires a copy, and may be expensive. We recommend passing a pre-built list of records to the DataFrame constructor instead of building a DataFrame by iteratively appending records to it.
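A minimal sketch of the recommended pattern, using plain pandas for illustration (Mars accepts the same constructor arguments): collect the records first, then build the frame once.

```python
import pandas as pd

# Preferred: accumulate plain records, then construct in one shot.
records = [{'A': i, 'B': i ** 2} for i in range(3)]
df = pd.DataFrame(records)
print(df.shape)  # (3, 2)

# Adding a column afterwards is cheap ...
df['C'] = df['A'] + df['B']

# ... whereas growing the frame row by row (e.g. repeated concat of
# single-row frames) copies the data on every step.
```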

Join#

SQL style merges. See the Database style joining section.

In [53]: left = md.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})

In [54]: right = md.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})

In [55]: left.execute()
Out[55]: 
   key  lval
0  foo     1
1  foo     2

In [56]: right.execute()
Out[56]: 
   key  rval
0  foo     4
1  foo     5

In [57]: md.merge(left, right, on='key').execute()
Out[57]: 
   key  lval  rval
0  foo     1     4
1  foo     1     5
2  foo     2     4
3  foo     2     5

Another example:

In [58]: left = md.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})

In [59]: right = md.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})

In [60]: left.execute()
Out[60]: 
   key  lval
0  foo     1
1  bar     2

In [61]: right.execute()
Out[61]: 
   key  rval
0  foo     4
1  bar     5

In [62]: md.merge(left, right, on='key').execute()
Out[62]: 
   key  lval  rval
0  foo     1     4
1  bar     2     5

Grouping#

By “group by” we are referring to a process involving one or more of the following steps:

  • Splitting the data into groups based on some criteria

  • Applying a function to each group independently

  • Combining the results into a data structure
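These three steps can be made explicit with a plain-pandas sketch (Mars mirrors this API; the toy data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'key': ['a', 'b', 'a', 'b'], 'val': [1, 2, 3, 4]})

# Split: groupby yields one sub-frame per key.
groups = {k: g for k, g in df.groupby('key')}
print(sorted(groups))  # ['a', 'b']

# Apply + combine: sum each group, then assemble the results
# into a single Series indexed by key.
print(df.groupby('key')['val'].sum().to_dict())  # {'a': 4, 'b': 6}
```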

In [63]: df = md.DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
   ....:                          'foo', 'bar', 'foo', 'foo'],
   ....:                    'B': ['one', 'one', 'two', 'three',
   ....:                          'two', 'two', 'one', 'three'],
   ....:                    'C': mt.random.randn(8),
   ....:                    'D': mt.random.randn(8)})
   ....: 

In [64]: df.execute()
Out[64]: 
     A      B         C         D
0  foo    one -0.378475 -1.768423
1  bar    one  0.646488  0.670867
2  foo    two -0.799690 -0.994373
3  bar  three -0.429681 -1.156063
4  foo    two -1.411407 -0.895470
5  bar    two  1.815013  1.745927
6  foo    one  1.873860  0.868160
7  foo  three  0.303657 -1.404396

Grouping and then applying the sum() function to the resulting groups.

In [65]: df.groupby('A').sum().execute()
Out[65]: 
            C         D
A                      
bar  2.031820  1.260731
foo -0.412055 -4.194502

Grouping by multiple columns forms a hierarchical index, and again we can apply the sum function.

In [66]: df.groupby(['A', 'B']).sum().execute()
Out[66]: 
                  C         D
A   B                        
bar one    0.646488  0.670867
    three -0.429681 -1.156063
    two    1.815013  1.745927
foo one    1.495385 -0.900263
    three  0.303657 -1.404396
    two   -2.211097 -1.889843

Plotting#

We use the standard convention for referencing the matplotlib API:

In [67]: import matplotlib.pyplot as plt

In [68]: plt.close('all')

In [69]: ts = md.Series(mt.random.randn(1000),
   ....:                index=md.date_range('1/1/2000', periods=1000))
   ....: 

In [70]: ts = ts.cumsum()

In [71]: ts.plot()
Out[71]: <AxesSubplot:>
[figure: series_plot_basic.png]

On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:

In [72]: df = md.DataFrame(mt.random.randn(1000, 4), index=ts.index,
   ....:                   columns=['A', 'B', 'C', 'D'])
   ....: 

In [73]: df = df.cumsum()

In [74]: plt.figure()
Out[74]: <Figure size 640x480 with 0 Axes>

In [75]: df.plot()
Out[75]: <AxesSubplot:>

In [76]: plt.legend(loc='best')
Out[76]: <matplotlib.legend.Legend at 0x7f3493af4ad0>
[figure: frame_plot_basic.png]

Getting data in/out#

CSV#

In [77]: df.to_csv('foo.csv').execute()
Out[77]: 
Empty DataFrame
Columns: []
Index: []

Reading from a CSV file.

In [78]: md.read_csv('foo.csv').execute()
Out[78]: 
     Unnamed: 0          A         B          C          D
0    2000-01-01  -0.114199  1.435991   0.313251  -0.369235
1    2000-01-02  -0.436170  1.211855  -1.337571  -0.093541
2    2000-01-03   0.798615  3.331479  -0.737810   0.113401
3    2000-01-04   0.720034  4.373077  -1.629931   1.252530
4    2000-01-05   1.729162  4.733967  -2.373432   1.606382
..          ...        ...       ...        ...        ...
995  2002-09-22 -52.892286  4.551149 -31.389413  20.340402
996  2002-09-23 -53.597744  3.677439 -32.662184  20.555935
997  2002-09-24 -54.506083  2.744942 -31.397477  20.441688
998  2002-09-25 -53.458040  2.556464 -31.120635  20.500559
999  2002-09-26 -54.590758  4.216846 -29.428213  21.627531

[1000 rows x 5 columns]
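Note that the datetime index was written out as an ordinary first column, which comes back as Unnamed: 0. A sketch of one way to round-trip the index, using plain pandas for illustration (Mars read_csv accepts the same index_col parameter; an in-memory buffer stands in for the file):

```python
import io
import pandas as pd

df = pd.DataFrame({'A': [1.0, 2.0]},
                  index=pd.date_range('2000-01-01', periods=2))

# Write to an in-memory buffer instead of a file on disk.
buf = io.StringIO()
df.to_csv(buf)
buf.seek(0)

# index_col=0 restores the saved index rather than leaving it
# behind as an 'Unnamed: 0' column; parse_dates recovers the dtype.
back = pd.read_csv(buf, index_col=0, parse_dates=True)
print(list(back.columns))  # ['A']
```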