Description
Pandas version checks
-  I have checked that this issue has not already been reported. 
-  I have confirmed this bug exists on the latest version of pandas. 
-  I have confirmed this bug exists on the main branch of pandas. 
Reproducible Example
```python
import pandas as pd
import numpy as np

print(pd.__version__)

interval_testing = pd.DataFrame(columns=['data', 'interval', 'data_in_interval'])
interval_testing.data = np.linspace(0, 1, 100) + 0.000499
interval_testing.interval = pd.cut(interval_testing.data, bins=13, precision=2)
# interval_testing.interval = pd.qcut(interval_testing.data, q=13, precision=2)
interval_testing.data_in_interval = [
    interval_testing.data[i] in interval_testing.interval[i]
    for i in range(len(interval_testing))
]
interval_testing.loc[interval_testing.data_in_interval == False]
```
Issue Description
Intro: pd.cut splits the data into bins. It has a parameter precision which controls the precision of the bins. E.g. with precision=2 the bins will be something like (0.02, 0.04] or (0.014, 0.028] (precision counts significant figures, not decimal places).
I had expected that (1) the bin edges would be rounded, and only then would (2) the data be binned into the rounded bins, so that all data would be binned correctly.
However, it appears to (1) bin the data and THEN (2) round the bins. The obvious problem with this is that some datapoints end up assigned to bins they don't fit into.
The output of the MRE code above shows this:
If we set precision=4 in the MRE, every datapoint is binned correctly for this particular dataset.
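To make the comparison concrete, here is a self-contained sketch (same synthetic data as the MRE) that counts how many values fall outside the interval label pd.cut gives them, at precision=2 versus precision=4:

```python
import numpy as np
import pandas as pd

# Same synthetic data as the MRE above.
data = pd.Series(np.linspace(0, 1, 100) + 0.000499)

def mislabelled(precision):
    """Count datapoints that do not lie inside their reported interval."""
    cut = pd.cut(data, bins=13, precision=precision)
    return sum(d not in iv for d, iv in zip(data, cut))

# precision=2 mislabels some points; precision=4 mislabels none here.
print(mislabelled(2), mislabelled(4))
```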
NOTE 1: The same problem exists with pd.qcut, which cuts the data into buckets based on data quantiles. There it could be argued that the current behaviour is desirable, because it puts the correct proportion of data in each bucket: with quartiles, the way it currently works means exactly 25% of the data lands in each bucket, whereas the way I am suggesting, a bucket can end up with more or less than 25%. That argument is much weaker for pd.cut, though. And in any case, I think correctly binning the data should always be the primary consideration, with bucket sizes secondary to that.
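The equal-proportion behaviour of pd.qcut described above can be seen in a small sketch (hypothetical data of my own, not from the MRE):

```python
import numpy as np
import pandas as pd

# qcut bins first and only rounds the labels afterwards, so each
# quartile bucket keeps exactly a quarter of the data: 100 evenly
# spaced values -> 25 per bucket, regardless of the label precision.
data = pd.Series(np.linspace(0, 1, 100))
quartiles = pd.qcut(data, q=4, precision=2)
print(quartiles.value_counts())
```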
NOTE 2: the pd.docs state
precision : int, default 3
The precision at which to store and display the bins labels.
which implies it acts as it does. Still, the docs could at least be clearer on this point: most users won't expect incorrectly binned data, and since a precision of 3 is used by default, users who never specified precision at all can get incorrect results.
Expected Behavior
Expected behaviour would be to put e.g. 0.152014 in the bin (0.15, 0.23], not in (0.077, 0.15]. I.e. define the rounded bins first, then do the binning.
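A possible workaround in that spirit (my own sketch, not pandas behaviour) is to obtain the raw edges with retbins=True, round them, and re-bin against the rounded edges, so every value lies inside its reported interval by construction:

```python
import numpy as np
import pandas as pd

data = pd.Series(np.linspace(0, 1, 100) + 0.000499)

# First pass only to obtain the raw bin edges.
_, raw_edges = pd.cut(data, bins=13, retbins=True)

# Round the edges to 2 decimal places, but widen the outer edges
# (floor/ceil) so the rounded bins still cover all of the data.
# Note: with very narrow bins, rounding could produce duplicate edges.
edges = np.round(raw_edges, 2)
edges[0] = np.floor(raw_edges[0] * 100) / 100
edges[-1] = np.ceil(raw_edges[-1] * 100) / 100

# Second pass: bin against the already-rounded edges.
binned = pd.cut(data, bins=edges)

# Every datapoint now lies inside its assigned interval.
assert all(d in iv for d, iv in zip(data, binned))
```

The trade-off is exactly the one noted above for qcut: bins defined this way may hold slightly more or fewer points than the unrounded bins would.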
Installed Versions
INSTALLED VERSIONS
commit : 2cb9652
 python : 3.9.4.final.0
 python-bits : 64
 OS : Windows
 OS-release : 10
 Version : 10.0.19041
 machine : AMD64
 processor : Intel64 Family 6 Model 126 Stepping 5, GenuineIntel
 byteorder : little
 LC_ALL : None
 LANG : None
 LOCALE : English_United Kingdom.1252
pandas : 1.2.4
 numpy : 1.23.1
 pytz : 2021.1
 dateutil : 2.8.1
 pip : 22.1.2
 setuptools : 63.1.0
 Cython : None
 pytest : None
 hypothesis : None
 sphinx : None
 blosc : None
 feather : None
 xlsxwriter : 1.3.8
 lxml.etree : None
 html5lib : None
 pymysql : None
 psycopg2 : None
 jinja2 : 3.1.2
 IPython : 8.4.0
 pandas_datareader: None
 bs4 : 4.9.3
 bottleneck : None
 fsspec : None
 fastparquet : None
 gcsfs : None
 matplotlib : 3.4.1
 numexpr : None
 odfpy : None
 openpyxl : 3.0.7
 pandas_gbq : None
 pyarrow : None
 pyxlsb : None
 s3fs : None
 scipy : 1.10.1
 sqlalchemy : None
 tables : None
 tabulate : 0.8.9
 xarray : None
 xlrd : None
 xlwt : None
 numba : None