U
    /eM                     @   s   d Z ddlZddlZddlZddlmZmZm	Z	 ddl
mZmZmZ ddlmZ ddlmZ ddlmZ ddlmZmZ dd
dZd ddZdd Zdd Zdd Zdd Zdd Zdd Zdd Zd!ddZ dS )"ab  Determine new partition divisions using approximate percentiles.

We use a custom algorithm to calculate approximate, evenly-distributed
percentiles of arbitrarily-ordered data for any dtype in a distributed
fashion with one pass over the data.  This is used to determine new
partition divisions when changing the index of a dask.dataframe.  We claim
no statistical guarantees, but we use a variety of heuristics to try to
provide reliable, robust results that are "good enough" and can scale to
large number of partitions.

Our approach is similar to standard approaches such as t- and q-digest,
GK, and sampling-based algorithms, which consist of three parts:

1. **Summarize:** create summaries of subsets of data
2. **Merge:** combine summaries to make a new summary
3. **Compress:** periodically compress a summary into a smaller summary

We summarize the data in each partition by calculating several percentiles.
The value at each percentile is given a weight proportional to the length
of the partition and the differences between the current percentile and
the adjacent percentiles.  Merging summaries is simply a ``merge_sorted``
of the values and their weights, which we do with a reduction tree.

Percentiles is a good choice for our case, because we are given a numpy
array of the partition's data, and percentiles is a relatively cheap
operation.  Moreover, percentiles are, by definition, much less
susceptible to the underlying distribution of the data, so the weights
given to each value--even across partitions--should be comparable.

Let us describe this to a child of five.  We are given many small cubes
(of equal size) with numbers on them.  Split these into many piles.  This
is like the original data.  Let's sort and stack the cubes from one of the
piles.  Next, we are given a bunch of unlabeled blocks of different sizes,
and most are much larger than the the original cubes.  Stack these blocks
until they're the same height as our first stack.  Let's write a number on
each block of the new stack.  To do this, choose the number of the cube in
the first stack that is located in the middle of an unlabeled block.  We
are finished with this stack once all blocks have a number written on them.
Repeat this for all the piles of cubes.  Finished already?  Great!  Now
take all the stacks of the larger blocks you wrote on and throw them into
a single pile.  We'll be sorting these blocks next, which may be easier if
you carefully move the blocks over and organize... ah, nevermind--too late.
Okay, sort and stack all the blocks from that amazing, disorganized pile
you just made.  This will be very tall, so we had better stack it sideways
on the floor like so.  This will also make it easier for us to split the
stack into groups of approximately equal size, which is our final task...

This, in a nutshell, is the algorithm we deploy.  The main difference
is that we don't always assign a block the number at its median (ours
fluctuates around the median).  The numbers at the edges of the final
groups is what we use as divisions for repartitioning.  We also need
the overall min and max, so we take the 0th and 100th percentile of
each partition, and another sample near each edge so we don't give
disproportionate weights to extreme values.

Choosing appropriate percentiles to take in each partition is where things
get interesting.  The data is arbitrarily ordered, which means it may be
sorted, random, or follow some pathological distribution--who knows.  We
hope all partitions are of similar length, but we ought to expect some
variation in lengths.  The number of partitions may also be changing
significantly, which could affect the optimal choice of percentiles.  For
improved robustness, we use both evenly-distributed and random percentiles.
If the number of partitions isn't changing, then the total number of
percentiles across all partitions scales as ``npartitions**1.5``.  Although
we only have a simple compression operation (step 3 above) that combines
weights of equal values, a more sophisticated one could be added if needed,
such as for extremely large ``npartitions`` or if we find we need to
increase the sample size for each partition.

    N)is_datetime64_dtypeis_datetime64tz_dtypeis_integer_dtype)mergemerge_sortedtake)tokenize)Series)is_categorical_dtype)is_cupy_typerandom_state_data      ?c                 C   s  ddd| |  d   }|| | d d  |  }t |d|  d }t || d }|| d |krvtdd	|d S t|tjjstj|}tdd	|}	||d	 }
d
|d  d	d
|d   g}t|	|
|dd	gg}|  d|dd |dd   }|S )a	  Construct percentiles for a chunk for repartitioning.

    Adapt the number of total percentiles calculated based on the number
    of current and new partitions.  Returned percentiles include equally
    spaced percentiles between [0, 100], and random percentiles.  See
    detailed discussion below.

    Parameters
    ----------
    num_old: int
        Number of partitions of the current object
    num_new: int
        Number of partitions of the new object
    chunk_length: int
        Number of rows of the partition
    upsample : float
        Multiplicative factor to increase the number of samples

    Returns
    -------
    qs : numpy.ndarray of sorted percentiles between 0, 100

    Constructing ordered (i.e., not hashed) partitions is hard.  Calculating
    approximate percentiles for generic objects in an out-of-core fashion is
    also hard.  Fortunately, partition boundaries don't need to be perfect
    in order for partitioning to be effective, so we strive for a "good enough"
    method that can scale to many partitions and is reasonably well-behaved for
    a wide variety of scenarios.

    Two similar approaches come to mind: (1) take a subsample of every
    partition, then find the best new partitions for the combined subsamples;
    and (2) calculate equally-spaced percentiles on every partition (a
    relatively cheap operation), then merge the results.  We do both, but
    instead of random samples, we use random percentiles.

    If the number of partitions isn't changing, then the ratio of fixed
    percentiles to random percentiles is 2 to 1.  If repartitioning goes from
    a very high number of partitions to a very low number of partitions, then
    we use more random percentiles, because a stochastic approach will be more
    stable to potential correlations in the data that may cause a few equally-
    spaced partitions to under-sample the data.

    The more partitions there are, then the more total percentiles will get
    calculated across all partitions.  Squaring the number of partitions
    approximately doubles the number of total percentiles calculated, so
    num_total_percentiles ~ sqrt(num_partitions).  We assume each partition
    is approximately the same length.  This should provide adequate resolution
    and allow the number of partitions to scale.

    For numeric data, one could instead use T-Digest for floats and Q-Digest
    for ints to calculate approximate percentiles.  Our current method works
    for any dtype.
                ?   g?      r   d   <   N)	intnplinspace
isinstancerandomRandomStateZrandconcatenatesort)num_oldnum_newZchunk_lengthupsamplerandom_stateZrandom_percentageZnum_percentilesZ	num_fixedZ
num_randomZq_fixedZq_randomZq_edgesqs r$   E/tmp/pip-unpacked-wheel-dbjnr7gq/dask/dataframe/partitionquantiles.pysample_percentilesX   s    7r&   Fc                 C   sP   | dk rd}nt t| }| | }|s0|dk rHdt t| | d S |S dS )zGenerate tree width suitable for ``merge_sorted`` given N inputs

    The larger N is, the more tasks are reduced in a single task.

    In theory, this is designed so all tasks are of comparable effort.
        r      N)r   mathlog)NZ	to_binary
group_size
num_groupsr$   r$   r%   
tree_width   s    r.   c                 C   st   | | }|}| ||  }d| | }g }t |D ]>}|dk rH|| n||d  |d| 8 }|d| 7 }q0|S )zoSplit an integer N into evenly sized and spaced groups.

    >>> tree_groups(16, 6)
    [3, 2, 3, 3, 2, 3]
    r   r   r   )rangeappend)r+   r-   r,   ZdxZdyDrv_r$   r$   r%   tree_groups   s    r4   c           
         s   d t |}t|}i }|dkrt|}t||} fddt|D }t||D ]\}}	| tt||f||	< qT|}t|} d7  q|S )a  Create a task tree that merges all the keys with a reduction function.

    Parameters
    ----------
    func: callable
        Reduction function that accepts a single list of values to reduce.
    keys: iterable
        Keys to reduce from the source dask graph.
    token: object
        Included in each key of the returned dict.

    This creates a k-ary tree where k depends on the current level and is
    greater the further away a node is from the root node.  This reduces the
    total number of nodes (thereby reducing scheduler overhead), but still
    has beneficial properties of trees.

    For reasonable numbers of keys, N < 1e5, the total number of nodes in the
    tree is roughly ``N**0.78``.  For 1e5 < N < 2e5, is it roughly ``N**0.8``.
    r   r   c                    s   g | ]} |fqS r$   r$   ).0ileveltokenr$   r%   
<listcomp>   s     z%create_merge_tree.<locals>.<listcomp>)leniterr.   r4   r/   ziplistr   )
funckeysr9   Z
prev_widthZ	prev_keysr2   widthgroupsnumkeyr$   r7   r%   create_merge_tree   s    

rE   c                 C   sJ   |dkrdS t | dd}d| |dd |dd   }| | fS )a  Weigh percentile values by length and the difference between percentiles

    >>> percentiles = np.array([0., 25., 50., 90., 100.])
    >>> values = np.array([2, 3, 5, 8, 13])
    >>> length = 10
    >>> percentiles_to_weights(percentiles, values, length)
    ([2, 3, 5, 8, 13], [125.0, 250.0, 325.0, 250.0, 50.0])

    The weight of the first element, ``2``, is determined by the difference
    between the first and second percentiles, and then scaled by length:

    >>> 0.5 * length * (percentiles[1] - percentiles[0])
    125.0

    The second weight uses the difference of percentiles on both sides, so
    it will be twice the first weight if the percentiles are equally spaced:

    >>> 0.5 * length * (percentiles[2] - percentiles[0])
    250.0
    r   r$           r   r   Nr   )r   Zediff1dtolist)r#   valslengthZdiffweightsr$   r$   r%   percentiles_to_weights   s
     rK   c           
      C   s   dd | D } | sdS t dd | D  }g }g }|j}|j}t| \}}\}}	|D ]4\}}||krn|	|7 }	qT|| ||	 || }}	qT||kr|| ||	 ||fS )zMerge and sort percentile summaries that are already sorted.

    Each item is a tuple like ``(vals, weights)`` where vals and weights
    are lists.  We sort both by vals.

    Equal values will be combined, their weights summed together.
    c                 S   s   g | ]}|r|qS r$   r$   )r5   xr$   r$   r%   r:     s      z0merge_and_compress_summaries.<locals>.<listcomp>r$   c                 S   s   g | ]\}}t ||qS r$   )r=   )r5   rL   yr$   r$   r%   r:     s     )r   r0   next)
vals_and_weightsitrH   rJ   Zvals_appendZweights_appendvalZweightprev_valZprev_weightr$   r$   r%   merge_and_compress_summaries  s&    
rS   c                 C   sP  |\}}| sBzt jd|dW S  tk
r@   t jdt jd Y S X | \}}t |}t |}t||d krv|}n\t||d k rt |jt jrt|st 	|}t 
|d |d |d }	t |	||}nDt j
dt|d |t| d td}
||
 }t ||g}|  n| | }||k}|| }||  }||  }|t| }t 	|}t 
d|d |d }	t j||	dd}t j||	ddd }t |d| t ||}|| }t ||g}|  t|rtj||d |d }nVt|rt||j}n8d	t|kr2tj||d}n|j|krLtj||d}|S )
a  Calculate final approximate percentiles given weighted vals

    ``vals_and_weights`` is assumed to be sorted.  We take a cumulative
    sum of the weights, which makes them percentile-like (their scale is
    [0, N] instead of [0, 100]).  Next we find the divisions to create
    partitions of approximately equal size.

    It is possible for adjacent values of the result to be the same.  Since
    these determine the divisions of the new partitions, some partitions
    may be empty.  This can happen if we under-sample the data, or if there
    aren't enough unique values in the column.  Increasing ``upsample``
    keyword argument in ``df.set_index`` may help.
    N)dtyper   r   r   left)ZsiderightZ
datetime64)r   array	ExceptionZfloat_r;   
issubdtyperT   numberr
   Zcumsumr   Zinterpr   r   r   sumZsearchsortedmaximumZminimumpdZCategoricalZ
from_codesr   ZDatetimeIndexZtz_localizetzstr)rO   npartitions
dtype_inforT   inforH   rJ   r2   Z	q_weightsZq_targetZduplicated_indexZduplicated_valsZtarget_weightZ
jumbo_maskZ
jumbo_valsZtrimmed_valsZtrimmed_weightsZtrimmed_npartitionsrU   rV   lowerZtrimmedr$   r$   r%   process_val_weights,  sb    


 
  





rd   c              	   C   s*  ddl m} ddlm} t| }|dkr,dS tj|}t|||||}	| }
d}t	|
rf|
j
j}
d}nt|
jszt|
jr~d}z|
j|	d |dj}W n0 ttfk
r   |||
|
j|	|\}}Y nX t|
r|dkrt|
jtjrt||
j}|	d dkr|
 |d< t|	||}|S )	a  Summarize data using percentiles and derived weights.

    These summaries can be merged, compressed, and converted back into
    approximate percentiles.

    Parameters
    ----------
    df: pandas.Series
        Data to summarize
    num_old: int
        Number of partitions of the current object
    num_new: int
        Number of partitions of the new object
    upsample: float
        Scale factor to increase the number of percentiles calculated in
        each partition.  Use to improve accuracy.
    r   )percentile_lookup)
array_safer$   ZlinearZnearestr   )qinterpolation)Zdask.array.dispatchre   Zdask.array.utilsrf   r;   r   r   r   r&   r
   catcodesr   rT   r   Zquantilevalues	TypeErrorNotImplementedErrorr   rY   integerroundZastypeminrK   )dfr   r    r!   stateZ_percentilerf   rI   r"   r#   datarh   rH   r3   rO   r$   r$   r%   percentiles_summary  s:    rt   c                 C   s(   d }t | r| j}|j|jf}| j|fS )N)r
   rk   
categoriesZorderedrT   )rq   rb   rs   r$   r$   r%   ra     s
    ra   c                    sB  t  tstt}tddd }t |}|dkrPt|dttjj	 }t
 j|}  }d| }	|	dft|d fi}
d|  fddtt||D }d	| }ttt||}|s|ddftt|d gfi}t	|}d
| }|dftjt||	dff|d jfi}t j|
|||}ddg}||| j|S )z7Approximate quantiles of Series used for repartitioningr   r   Nr(   zre-quantiles-0-zre-quantiles-1-c              	      s,   i | ]$\}\}}|ft | j|fqS r$   )rt   r`   )r5   r6   rr   rD   rq   Zname1r`   r!   r$   r%   
<dictcomp>  s   	
z'partition_quantiles.<locals>.<dictcomp>zre-quantiles-2-zre-quantiles-3-rF   r   )r   r	   AssertionErrorr   r   r   r   ZiinfoZint32maxr   r`   Z__dask_keys__ra   	enumerater=   rE   rS   sortedr>   r]   rd   namer   ZdaskZ_meta)rq   r`   r!   r"   return_typer#   r9   Z
state_dataZdf_keysZname0Z	dtype_dskZval_dskZname2Z	merge_dskZ
merged_keyZname3Zlast_dskZdskZnew_divisionsr$   rv   r%   partition_quantiles  s>    	
r~   )r   N)F)r   N)!__doc__r)   Znumpyr   Zpandasr]   Zpandas.api.typesr   r   r   Ztlzr   r   r   Z	dask.baser   Zdask.dataframe.corer	   Zdask.dataframe.utilsr
   Z
dask.utilsr   r   r&   r.   r4   rE   rK   rS   rd   rt   ra   r~   r$   r$   r$   r%   <module>   s&   F
L
&Z7