Summary of method entries for bigframes.
bigframes.ml.metrics.accuracy_score
accuracy_score( y_true: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y_pred: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, normalize=True ) -> float
Accuracy classification score.
See more: bigframes.ml.metrics.accuracy_score
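The bigframes.ml.metrics functions mirror the scikit-learn metrics API while running the computation in BigQuery. As a sketch of the semantics only (plain Python, not the bigframes call): accuracy is the fraction of predictions that exactly match the labels, or the raw count when normalize=False.

```python
def accuracy(y_true, y_pred, normalize=True):
    # Count predictions that exactly match the true labels.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true) if normalize else correct

accuracy([1, 0, 1, 1], [1, 0, 0, 1])  # 0.75 (3 of 4 correct)
```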
bigframes.ml.metrics.auc
auc( x: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> float
Compute Area Under the Curve (AUC) using the trapezoidal rule.
See more: bigframes.ml.metrics.auc
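A minimal sketch of the trapezoidal rule this function applies (illustration only; the bigframes version operates on DataFrame/Series inputs):

```python
def trapezoid_auc(x, y):
    # Area under the curve via the trapezoidal rule, after sorting by x.
    pts = sorted(zip(x, y))
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

trapezoid_auc([0.0, 0.5, 1.0], [0.0, 0.8, 1.0])  # 0.65
```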
bigframes.ml.metrics.confusion_matrix
confusion_matrix( y_true: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y_pred: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> pandas.core.frame.DataFrame
Compute confusion matrix to evaluate the accuracy of a classification.
See more: bigframes.ml.metrics.confusion_matrix
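A plain-Python sketch of what the returned matrix contains, assuming the scikit-learn convention the signature suggests (rows index true labels, columns index predicted labels, both sorted):

```python
from collections import Counter

def confusion(y_true, y_pred):
    # Rows = true label, columns = predicted label, labels sorted.
    labels = sorted(set(y_true) | set(y_pred))
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

confusion([0, 1, 1, 0], [0, 1, 0, 0])  # [[2, 0], [1, 1]]
```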
bigframes.ml.metrics.f1_score
f1_score( y_true: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y_pred: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, average: str = "binary" ) -> pandas.core.series.Series
Compute the F1 score, also known as balanced F-score or F-measure.
See more: bigframes.ml.metrics.f1_score
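For the default average="binary" case, F1 is the harmonic mean of precision and recall on the positive class; a plain-Python sketch of that definition:

```python
def f1(y_true, y_pred, positive=1):
    # Binary F1: harmonic mean of precision and recall for the positive class.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

f1([1, 0, 1, 1], [1, 0, 0, 1])  # 0.8 (precision 1.0, recall 2/3)
```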
bigframes.ml.metrics.mean_squared_error
mean_squared_error( y_true: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y_pred: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> float
Mean squared error regression loss.
See more: bigframes.ml.metrics.mean_squared_error
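The loss is the mean of squared residuals; a plain-Python sketch:

```python
def mse(y_true, y_pred):
    # Mean of squared residuals between predictions and targets.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

mse([3, -0.5, 2, 7], [2.5, 0.0, 2, 8])  # 0.375
```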
bigframes.ml.metrics.precision_score
precision_score( y_true: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y_pred: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, average: str = "binary" ) -> pandas.core.series.Series
Compute the precision.
See more: bigframes.ml.metrics.precision_score
bigframes.ml.metrics.r2_score
r2_score( y_true: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y_pred: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, force_finite=True ) -> float
R² (coefficient of determination) regression score function.
See more: bigframes.ml.metrics.r2_score
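R² compares residual error against the variance of the targets; a plain-Python sketch of the standard definition (1 minus residual sum of squares over total sum of squares):

```python
def r2(y_true, y_pred):
    # 1 - (residual sum of squares) / (total sum of squares).
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

r2([3, -0.5, 2, 7], [2.5, 0.0, 2, 8])  # ~0.9486
```

A perfect prediction scores 1.0; predicting the mean everywhere scores 0.0.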
bigframes.ml.metrics.recall_score
recall_score( y_true: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y_pred: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, average: str = "binary" ) -> pandas.core.series.Series
Compute the recall.
See more: bigframes.ml.metrics.recall_score
bigframes.ml.metrics.roc_auc_score
roc_auc_score( y_true: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y_score: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> float
Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.
See more: bigframes.ml.metrics.roc_auc_score
bigframes.ml.metrics.roc_curve
roc_curve( y_true: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y_score: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, drop_intermediate: bool = True ) -> typing.Tuple[ bigframes.series.Series, bigframes.series.Series, bigframes.series.Series ]
Compute Receiver operating characteristic (ROC).
See more: bigframes.ml.metrics.roc_curve
bigframes.ml.metrics.pairwise.paired_cosine_distances
paired_cosine_distances( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], Y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.dataframe.DataFrame
Compute the paired cosine distances between X and Y.
See more: bigframes.ml.metrics.pairwise.paired_cosine_distances
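For each aligned pair of vectors, the cosine distance is 1 minus the cosine similarity; a plain-Python sketch for a single pair:

```python
import math

def cosine_distance(x, y):
    # 1 - cos(angle between x and y); 0 for parallel, 1 for orthogonal vectors.
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return 1.0 - dot / (nx * ny)

cosine_distance([1.0, 0.0], [0.0, 1.0])  # 1.0 (orthogonal)
```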
bigframes.ml.metrics.pairwise.paired_euclidean_distances
paired_euclidean_distances( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], Y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.dataframe.DataFrame
Compute the paired Euclidean distances between X and Y.
See more: bigframes.ml.metrics.pairwise.paired_euclidean_distances
bigframes.ml.metrics.pairwise.paired_manhattan_distance
paired_manhattan_distance( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], Y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.dataframe.DataFrame
Compute the L1 distances between the vectors in X and Y.
See more: bigframes.ml.metrics.pairwise.paired_manhattan_distance
bigframes.ml.model_selection.train_test_split
train_test_split( *arrays: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], test_size: typing.Optional[float] = None, train_size: typing.Optional[float] = None, random_state: typing.Optional[int] = None ) -> typing.List[typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series]]
Splits dataframes or series into random train and test subsets.
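A plain-Python sketch of the split semantics (shuffle once with an optional seed, then carve off the trailing test fraction); the bigframes version applies the same split consistently across all passed arrays:

```python
import random

def split(rows, test_size=0.25, random_state=None):
    # Shuffle row indices with an optional seed, then carve off the test slice.
    idx = list(range(len(rows)))
    random.Random(random_state).shuffle(idx)
    cut = int(round(len(rows) * (1 - test_size)))
    train = [rows[i] for i in idx[:cut]]
    test = [rows[i] for i in idx[cut:]]
    return train, test
```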
bigframes.pandas.concat
Concatenate BigQuery DataFrames objects along a particular axis.
See more: bigframes.pandas.concat
bigframes.pandas.cut
cut( x: bigframes.series.Series, bins: int, *, labels: typing.Optional[bool] = None ) -> bigframes.series.Series
Bin values into discrete intervals.
See more: bigframes.pandas.cut
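bigframes.pandas.cut follows pandas semantics; a minimal pandas sketch of equal-width binning (labels=False returns integer bin codes rather than interval labels):

```python
import pandas as pd

s = pd.Series([1, 5, 9])
codes = pd.cut(s, bins=2, labels=False)  # two equal-width bins over [1, 9]
list(codes)  # [0, 0, 1]
```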
bigframes.pandas.get_dummies
get_dummies( data: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], prefix: typing.Optional[typing.Union[typing.List, dict, str]] = None, prefix_sep: typing.Optional[typing.Union[typing.List, dict, str]] = "_", dummy_na: bool = False, columns: typing.Optional[typing.List] = None, drop_first: bool = False, dtype: typing.Optional[typing.Any] = None, ) -> bigframes.dataframe.DataFrame
Convert categorical variable into dummy/indicator variables.
See more: bigframes.pandas.get_dummies
bigframes.pandas.merge
merge( left: bigframes.dataframe.DataFrame, right: bigframes.dataframe.DataFrame, how: typing.Literal["inner", "left", "outer", "right", "cross"] = "inner", on: typing.Optional[str] = None, *, left_on: typing.Optional[str] = None, right_on: typing.Optional[str] = None, sort: bool = False, suffixes: tuple[str, str] = ("_x", "_y") ) -> bigframes.dataframe.DataFrame
Merge DataFrame objects with a database-style join.
See more: bigframes.pandas.merge
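The join semantics match pandas; a minimal pandas sketch of the how parameter (inner keeps only matching keys, outer keeps all keys from both sides):

```python
import pandas as pd

left = pd.DataFrame({"key": ["a", "b", "c"], "x": [1, 2, 3]})
right = pd.DataFrame({"key": ["b", "c", "d"], "y": [20, 30, 40]})

inner = pd.merge(left, right, on="key")               # keys in both: b, c
outer = pd.merge(left, right, on="key", how="outer")  # all keys: a, b, c, d
```

Overlapping non-key column names are disambiguated with the suffixes tuple, ("_x", "_y") by default.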
bigframes.pandas.qcut
qcut( x: bigframes.series.Series, q: int, *, labels: typing.Optional[bool] = None, duplicates: typing.Literal["drop", "error"] = "error" ) -> bigframes.series.Series
Quantile-based discretization function.
See more: bigframes.pandas.qcut
bigframes.pandas.read_csv
read_csv( filepath_or_buffer: typing.Union[str, typing.IO[bytes]], *, sep: typing.Optional[str] = ",", header: typing.Optional[int] = 0, names: typing.Optional[ typing.Union[ typing.MutableSequence[typing.Any], numpy.ndarray[typing.Any, typing.Any], typing.Tuple[typing.Any, ...], range, ] ] = None, index_col: typing.Optional[ typing.Union[ int, str, typing.Sequence[typing.Union[str, int]], typing.Literal[False] ] ] = None, usecols: typing.Optional[ typing.Union[ typing.MutableSequence[str], typing.Tuple[str, ...], typing.Sequence[int], pandas.core.series.Series, pandas.core.indexes.base.Index, numpy.ndarray[typing.Any, typing.Any], typing.Callable[[typing.Any], bool], ] ] = None, dtype: typing.Optional[typing.Dict] = None, engine: typing.Optional[ typing.Literal["c", "python", "pyarrow", "python-fwf", "bigquery"] ] = None, encoding: typing.Optional[str] = None, **kwargs ) -> bigframes.dataframe.DataFrame
Loads a DataFrame from a comma-separated values (CSV) file, locally or from Cloud Storage.
See more: bigframes.pandas.read_csv
bigframes.pandas.read_gbq
read_gbq( query_or_table: str, *, index_col: typing.Union[typing.Iterable[str], str] = (), columns: typing.Iterable[str] = (), configuration: typing.Optional[typing.Dict] = None, max_results: typing.Optional[int] = None, filters: typing.Union[ typing.Iterable[ typing.Tuple[ str, typing.Literal[ "in", "not in", "<", "<=", "==", "!=", ">=", ">", "LIKE" ], typing.Any, ] ], typing.Iterable[ typing.Iterable[ typing.Tuple[ str, typing.Literal[ "in", "not in", "<", "<=", "==", "!=", ">=", ">", "LIKE" ], typing.Any, ] ] ], ] = (), use_cache: typing.Optional[bool] = None, col_order: typing.Iterable[str] = () ) -> bigframes.dataframe.DataFrame
Loads a DataFrame from BigQuery.
See more: bigframes.pandas.read_gbq
bigframes.pandas.read_gbq_function
read_gbq_function(function_name: str)
Loads a BigQuery function from BigQuery.
See more: bigframes.pandas.read_gbq_function
bigframes.pandas.read_gbq_model
read_gbq_model(model_name: str)
Loads a BigQuery ML model from BigQuery.
See more: bigframes.pandas.read_gbq_model
bigframes.pandas.read_gbq_query
read_gbq_query( query: str, *, index_col: typing.Union[typing.Iterable[str], str] = (), columns: typing.Iterable[str] = (), configuration: typing.Optional[typing.Dict] = None, max_results: typing.Optional[int] = None, use_cache: typing.Optional[bool] = None, col_order: typing.Iterable[str] = () ) -> bigframes.dataframe.DataFrame
Turn a SQL query into a DataFrame.
See more: bigframes.pandas.read_gbq_query
bigframes.pandas.read_gbq_table
read_gbq_table( query: str, *, index_col: typing.Union[typing.Iterable[str], str] = (), columns: typing.Iterable[str] = (), max_results: typing.Optional[int] = None, filters: typing.Union[ typing.Iterable[ typing.Tuple[ str, typing.Literal[ "in", "not in", "<", "<=", "==", "!=", ">=", ">", "LIKE" ], typing.Any, ] ], typing.Iterable[ typing.Iterable[ typing.Tuple[ str, typing.Literal[ "in", "not in", "<", "<=", "==", "!=", ">=", ">", "LIKE" ], typing.Any, ] ] ], ] = (), use_cache: bool = True, col_order: typing.Iterable[str] = () ) -> bigframes.dataframe.DataFrame
Turn a BigQuery table into a DataFrame.
See more: bigframes.pandas.read_gbq_table
bigframes.pandas.read_json
read_json( path_or_buf: typing.Union[str, typing.IO[bytes]], *, orient: typing.Literal[ "split", "records", "index", "columns", "values", "table" ] = "columns", dtype: typing.Optional[typing.Dict] = None, encoding: typing.Optional[str] = None, lines: bool = False, engine: typing.Literal["ujson", "pyarrow", "bigquery"] = "ujson", **kwargs ) -> bigframes.dataframe.DataFrame
Convert a JSON string to DataFrame object.
See more: bigframes.pandas.read_json
bigframes.pandas.read_pandas
Loads DataFrame from a pandas DataFrame.
See more: bigframes.pandas.read_pandas
bigframes.pandas.read_parquet
read_parquet( path: typing.Union[str, typing.IO[bytes]], *, engine: str = "auto" ) -> bigframes.dataframe.DataFrame
Load a Parquet object from the file path (local or Cloud Storage), returning a DataFrame.
See more: bigframes.pandas.read_parquet
bigframes.pandas.read_pickle
read_pickle( filepath_or_buffer: FilePath | ReadPickleBuffer, compression: CompressionOptions = "infer", storage_options: StorageOptions = None, )
Load pickled BigFrames object (or any object) from file.
See more: bigframes.pandas.read_pickle
bigframes.pandas.remote_function
remote_function( input_types: typing.List[type], output_type: type, dataset: typing.Optional[str] = None, bigquery_connection: typing.Optional[str] = None, reuse: bool = True, name: typing.Optional[str] = None, packages: typing.Optional[typing.Sequence[str]] = None, cloud_function_service_account: typing.Optional[str] = None, cloud_function_kms_key_name: typing.Optional[str] = None, cloud_function_docker_repository: typing.Optional[str] = None, )
Decorator to turn a user defined function into a BigQuery remote function.
See more: bigframes.pandas.remote_function
bigframes.pandas.to_datetime
to_datetime( arg: typing.Union[ int, float, str, datetime.datetime, typing.Iterable, pandas.core.series.Series, pandas.core.frame.DataFrame, typing.Mapping, bigframes.series.Series, bigframes.dataframe.DataFrame, ], *, utc: bool = False, format: typing.Optional[str] = None, unit: typing.Optional[str] = None ) -> typing.Union[ pandas._libs.tslibs.timestamps.Timestamp, datetime.datetime, bigframes.series.Series ]
This function converts a scalar, array-like or Series to a datetime object.
See more: bigframes.pandas.to_datetime
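The conversion semantics mirror pandas.to_datetime; a minimal pandas sketch of the utc and format keywords:

```python
import pandas as pd

# Scalar input -> a single (here tz-aware) Timestamp.
ts = pd.to_datetime("2024-03-01 12:00", utc=True)

# Series input with an explicit format -> a datetime Series.
s = pd.to_datetime(pd.Series(["01/03/2024", "02/03/2024"]), format="%d/%m/%Y")
```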
bigframes.core.groupby.DataFrameGroupBy.agg
agg(func=None, **kwargs) -> bigframes.dataframe.DataFrame
Aggregate using one or more operations.
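The groupby aggregation API mirrors pandas; a minimal pandas sketch of passing multiple aggregations at once (one row per group, one column per aggregation):

```python
import pandas as pd

df = pd.DataFrame({"team": ["a", "a", "b"], "score": [1, 2, 5]})
out = df.groupby("team")["score"].agg(["sum", "mean"])
# team "a": sum 3, mean 1.5; team "b": sum 5, mean 5.0
```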
bigframes.core.groupby.DataFrameGroupBy.aggregate
aggregate(func=None, **kwargs) -> bigframes.dataframe.DataFrame
API documentation for the aggregate method.
bigframes.core.groupby.DataFrameGroupBy.all
all() -> bigframes.dataframe.DataFrame
Return True if all values in the group are true, else False.
bigframes.core.groupby.DataFrameGroupBy.any
any() -> bigframes.dataframe.DataFrame
Return True if any value in the group is true, else False.
bigframes.core.groupby.DataFrameGroupBy.count
count() -> bigframes.dataframe.DataFrame
Compute count of group, excluding missing values.
bigframes.core.groupby.DataFrameGroupBy.cumcount
cumcount(ascending: bool = True)
Number each item in each group from 0 to the length of that group - 1.
bigframes.core.groupby.DataFrameGroupBy.cummax
cummax( *args, numeric_only: bool = False, **kwargs ) -> bigframes.dataframe.DataFrame
Cumulative max for each group.
bigframes.core.groupby.DataFrameGroupBy.cummin
cummin( *args, numeric_only: bool = False, **kwargs ) -> bigframes.dataframe.DataFrame
Cumulative min for each group.
bigframes.core.groupby.DataFrameGroupBy.cumprod
cumprod(*args, **kwargs) -> bigframes.dataframe.DataFrame
Cumulative product for each group.
bigframes.core.groupby.DataFrameGroupBy.cumsum
cumsum( *args, numeric_only: bool = False, **kwargs ) -> bigframes.dataframe.DataFrame
Cumulative sum for each group.
bigframes.core.groupby.DataFrameGroupBy.diff
diff(periods=1) -> bigframes.series.Series
First discrete difference of element.
bigframes.core.groupby.DataFrameGroupBy.expanding
expanding(min_periods: int = 1) -> bigframes.core.window.Window
Provides expanding functionality.
bigframes.core.groupby.DataFrameGroupBy.kurt
kurt(*, numeric_only: bool = False) -> bigframes.dataframe.DataFrame
Return unbiased kurtosis over requested axis.
bigframes.core.groupby.DataFrameGroupBy.kurtosis
kurtosis(*, numeric_only: bool = False) -> bigframes.dataframe.DataFrame
API documentation for the kurtosis method.
bigframes.core.groupby.DataFrameGroupBy.max
max(numeric_only: bool = False, *args) -> bigframes.dataframe.DataFrame
Compute max of group values.
bigframes.core.groupby.DataFrameGroupBy.mean
mean(numeric_only: bool = False, *args) -> bigframes.dataframe.DataFrame
Compute mean of groups, excluding missing values.
bigframes.core.groupby.DataFrameGroupBy.median
median( numeric_only: bool = False, *, exact: bool = False ) -> bigframes.dataframe.DataFrame
Compute median of groups, excluding missing values.
bigframes.core.groupby.DataFrameGroupBy.min
min(numeric_only: bool = False, *args) -> bigframes.dataframe.DataFrame
Compute min of group values.
bigframes.core.groupby.DataFrameGroupBy.nunique
nunique() -> bigframes.dataframe.DataFrame
Return DataFrame with counts of unique elements in each position.
bigframes.core.groupby.DataFrameGroupBy.prod
prod(numeric_only: bool = False, min_count: int = 0)
Compute prod of group values.
bigframes.core.groupby.DataFrameGroupBy.rolling
rolling(window: int, min_periods=None) -> bigframes.core.window.Window
Returns a rolling grouper, providing rolling functionality per group.
bigframes.core.groupby.DataFrameGroupBy.shift
shift(periods=1) -> bigframes.series.Series
Shift each group by periods observations.
bigframes.core.groupby.DataFrameGroupBy.skew
skew(*, numeric_only: bool = False) -> bigframes.dataframe.DataFrame
Return unbiased skew within groups.
bigframes.core.groupby.DataFrameGroupBy.std
std(*, numeric_only: bool = False) -> bigframes.dataframe.DataFrame
Compute standard deviation of groups, excluding missing values.
bigframes.core.groupby.DataFrameGroupBy.sum
sum(numeric_only: bool = False, *args) -> bigframes.dataframe.DataFrame
Compute sum of group values.
bigframes.core.groupby.DataFrameGroupBy.var
var(*, numeric_only: bool = False) -> bigframes.dataframe.DataFrame
Compute variance of groups, excluding missing values.
bigframes.core.groupby.SeriesGroupBy.agg
agg( func=None, ) -> typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series]
Aggregate using one or more operations.
See more: bigframes.core.groupby.SeriesGroupBy.agg
bigframes.core.groupby.SeriesGroupBy.aggregate
aggregate( func=None, ) -> typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series]
API documentation for the aggregate method.
bigframes.core.groupby.SeriesGroupBy.all
all() -> bigframes.series.Series
Return True if all values in the group are true, else False.
See more: bigframes.core.groupby.SeriesGroupBy.all
bigframes.core.groupby.SeriesGroupBy.any
any() -> bigframes.series.Series
Return True if any value in the group is true, else False.
See more: bigframes.core.groupby.SeriesGroupBy.any
bigframes.core.groupby.SeriesGroupBy.count
count() -> bigframes.series.Series
Compute count of group, excluding missing values.
bigframes.core.groupby.SeriesGroupBy.cumcount
cumcount(*args, **kwargs) -> bigframes.series.Series
Number each item in each group from 0 to the length of that group - 1.
bigframes.core.groupby.SeriesGroupBy.cummax
cummax(*args, **kwargs) -> bigframes.series.Series
Cumulative max for each group.
bigframes.core.groupby.SeriesGroupBy.cummin
cummin(*args, **kwargs) -> bigframes.series.Series
Cumulative min for each group.
bigframes.core.groupby.SeriesGroupBy.cumprod
cumprod(*args, **kwargs) -> bigframes.series.Series
Cumulative product for each group.
bigframes.core.groupby.SeriesGroupBy.cumsum
cumsum(*args, **kwargs) -> bigframes.series.Series
Cumulative sum for each group.
bigframes.core.groupby.SeriesGroupBy.diff
diff(periods=1) -> bigframes.series.Series
First discrete difference of element.
bigframes.core.groupby.SeriesGroupBy.expanding
expanding(min_periods: int = 1) -> bigframes.core.window.Window
Provides expanding functionality.
bigframes.core.groupby.SeriesGroupBy.kurt
kurt(*args, **kwargs) -> bigframes.series.Series
Return unbiased kurtosis over requested axis.
bigframes.core.groupby.SeriesGroupBy.kurtosis
kurtosis(*args, **kwargs) -> bigframes.series.Series
API documentation for the kurtosis method.
bigframes.core.groupby.SeriesGroupBy.max
max(*args) -> bigframes.series.Series
Compute max of group values.
See more: bigframes.core.groupby.SeriesGroupBy.max
bigframes.core.groupby.SeriesGroupBy.mean
mean(*args) -> bigframes.series.Series
Compute mean of groups, excluding missing values.
bigframes.core.groupby.SeriesGroupBy.median
median(*args, **kwargs) -> bigframes.series.Series
Compute median of groups, excluding missing values.
bigframes.core.groupby.SeriesGroupBy.min
min(*args) -> bigframes.series.Series
Compute min of group values.
See more: bigframes.core.groupby.SeriesGroupBy.min
bigframes.core.groupby.SeriesGroupBy.nunique
nunique() -> bigframes.series.Series
Return number of unique elements in the group.
bigframes.core.groupby.SeriesGroupBy.prod
prod(*args) -> bigframes.series.Series
Compute prod of group values.
bigframes.core.groupby.SeriesGroupBy.rolling
rolling(window: int, min_periods=None) -> bigframes.core.window.Window
Returns a rolling grouper, providing rolling functionality per group.
bigframes.core.groupby.SeriesGroupBy.shift
shift(periods=1) -> bigframes.series.Series
Shift index by desired number of periods.
bigframes.core.groupby.SeriesGroupBy.skew
skew(*args, **kwargs) -> bigframes.series.Series
Return unbiased skew within groups.
bigframes.core.groupby.SeriesGroupBy.std
std(*args, **kwargs) -> bigframes.series.Series
Compute standard deviation of groups, excluding missing values.
See more: bigframes.core.groupby.SeriesGroupBy.std
bigframes.core.groupby.SeriesGroupBy.sum
sum(*args) -> bigframes.series.Series
Compute sum of group values.
See more: bigframes.core.groupby.SeriesGroupBy.sum
bigframes.core.groupby.SeriesGroupBy.var
var(*args, **kwargs) -> bigframes.series.Series
Compute variance of groups, excluding missing values.
See more: bigframes.core.groupby.SeriesGroupBy.var
bigframes.core.indexers.ILocDataFrameIndexer.__getitem__
__getitem__( key, ) -> typing.Union[bigframes.dataframe.DataFrame, pandas.core.series.Series]
Index dataframe using integer offsets.
See more: bigframes.core.indexers.ILocDataFrameIndexer.__getitem__
bigframes.core.indexers.IlocSeriesIndexer.__getitem__
__getitem__(key) -> typing.Union[typing.Any, bigframes.series.Series]
Index series using integer offsets.
bigframes.core.indexes.base.Index.all
all() -> bool
Return whether all elements are Truthy.
See more: bigframes.core.indexes.base.Index.all
bigframes.core.indexes.base.Index.any
any() -> bool
Return whether any element is Truthy.
See more: bigframes.core.indexes.base.Index.any
bigframes.core.indexes.base.Index.argmax
argmax() -> int
Return int position of the largest value in the Series.
See more: bigframes.core.indexes.base.Index.argmax
bigframes.core.indexes.base.Index.argmin
argmin() -> int
Return int position of the smallest value in the Series.
See more: bigframes.core.indexes.base.Index.argmin
bigframes.core.indexes.base.Index.astype
astype( dtype: typing.Union[ typing.Literal[ "boolean", "Float64", "Int64", "int64[pyarrow]", "string", "string[pyarrow]", "timestamp[us, tz=UTC][pyarrow]", "timestamp[us][pyarrow]", "date32[day][pyarrow]", "time64[us][pyarrow]", "decimal128(38, 9)[pyarrow]", "decimal256(76, 38)[pyarrow]", "binary[pyarrow]", ], pandas.core.arrays.boolean.BooleanDtype, pandas.core.arrays.floating.Float64Dtype, pandas.core.arrays.integer.Int64Dtype, pandas.core.arrays.string_.StringDtype, pandas.core.dtypes.dtypes.ArrowDtype, geopandas.array.GeometryDtype, ] ) -> bigframes.core.indexes.base.Index
Create an Index with values cast to dtypes.
See more: bigframes.core.indexes.base.Index.astype
bigframes.core.indexes.base.Index.copy
copy(name: typing.Optional[typing.Hashable] = None)
Make a copy of this object.
See more: bigframes.core.indexes.base.Index.copy
bigframes.core.indexes.base.Index.drop
drop(labels: typing.Any) -> bigframes.core.indexes.base.Index
Make new Index with passed list of labels deleted.
See more: bigframes.core.indexes.base.Index.drop
bigframes.core.indexes.base.Index.drop_duplicates
drop_duplicates(*, keep: str = "first") -> bigframes.core.indexes.base.Index
Return Index with duplicate values removed.
bigframes.core.indexes.base.Index.dropna
dropna(how: str = "any") -> bigframes.core.indexes.base.Index
Return Index without NA/NaN values.
See more: bigframes.core.indexes.base.Index.dropna
bigframes.core.indexes.base.Index.fillna
fillna(value=None) -> bigframes.core.indexes.base.Index
Fill NA/NaN values with the specified value.
See more: bigframes.core.indexes.base.Index.fillna
bigframes.core.indexes.base.Index.from_frame
from_frame( frame: typing.Union[bigframes.series.Series, bigframes.dataframe.DataFrame] ) -> bigframes.core.indexes.base.Index
API documentation for the from_frame method.
bigframes.core.indexes.base.Index.get_level_values
get_level_values(level) -> bigframes.core.indexes.base.Index
Return an Index of values for requested level.
See more: bigframes.core.indexes.base.Index.get_level_values
bigframes.core.indexes.base.Index.isin
isin(values) -> bigframes.core.indexes.base.Index
Return a boolean array where the index values are in values.
See more: bigframes.core.indexes.base.Index.isin
bigframes.core.indexes.base.Index.max
max() -> typing.Any
Return the maximum value of the Index.
See more: bigframes.core.indexes.base.Index.max
bigframes.core.indexes.base.Index.min
min() -> typing.Any
Return the minimum value of the Index.
See more: bigframes.core.indexes.base.Index.min
bigframes.core.indexes.base.Index.nunique
nunique() -> int
Return number of unique elements in the object.
bigframes.core.indexes.base.Index.rename
rename( name: typing.Union[str, typing.Sequence[str]] ) -> bigframes.core.indexes.base.Index
Alter Index or MultiIndex name.
See more: bigframes.core.indexes.base.Index.rename
bigframes.core.indexes.base.Index.sort_values
sort_values(*, ascending: bool = True, na_position: str = "last")
Return a sorted copy of the index.
bigframes.core.indexes.base.Index.to_numpy
to_numpy(dtype=None, **kwargs) -> numpy.ndarray
A NumPy ndarray representing the values in this Series or Index.
bigframes.core.indexes.base.Index.to_pandas
to_pandas() -> pandas.core.indexes.base.Index
Gets the Index as a pandas Index.
bigframes.core.indexes.base.Index.to_series
to_series( index: typing.Optional[bigframes.core.indexes.base.Index] = None, name: typing.Optional[typing.Hashable] = None, ) -> bigframes.series.Series
Create a Series with both index and values equal to the index keys.
bigframes.core.indexes.base.Index.transpose
transpose() -> bigframes.core.indexes.base.Index
Return the transpose, which is by definition self.
bigframes.core.indexes.base.Index.value_counts
value_counts( normalize: bool = False, sort: bool = True, ascending: bool = False, *, dropna: bool = True )
Return a Series containing counts of unique values.
bigframes.core.window.Window.count
count()
Calculate the window count of non-NULL observations.
See more: bigframes.core.window.Window.count
bigframes.core.window.Window.max
max()
Calculate the weighted window maximum.
See more: bigframes.core.window.Window.max
bigframes.core.window.Window.mean
mean()
Calculate the weighted window mean.
See more: bigframes.core.window.Window.mean
bigframes.core.window.Window.min
min()
Calculate the weighted window minimum.
See more: bigframes.core.window.Window.min
bigframes.core.window.Window.std
std()
Calculate the weighted window standard deviation.
See more: bigframes.core.window.Window.std
bigframes.core.window.Window.sum
sum()
Calculate the weighted window sum.
See more: bigframes.core.window.Window.sum
bigframes.core.window.Window.var
var()
Calculate the weighted window variance.
See more: bigframes.core.window.Window.var
bigframes.dataframe.DataFrame.__array_ufunc__
__array_ufunc__( ufunc: numpy.ufunc, method: str, *inputs, **kwargs ) -> bigframes.dataframe.DataFrame
Used to support numpy ufuncs.
bigframes.dataframe.DataFrame.__getitem__
__getitem__( key: typing.Union[ typing.Hashable, typing.Sequence[typing.Hashable], pandas.core.indexes.base.Index, bigframes.series.Series, ] )
Gets the specified column(s) from the DataFrame.
See more: bigframes.dataframe.DataFrame.__getitem__
bigframes.dataframe.DataFrame.__repr__
__repr__() -> str
Converts a DataFrame to a string.
See more: bigframes.dataframe.DataFrame.__repr__
bigframes.dataframe.DataFrame.__setitem__
__setitem__(key: str, value: SingleItemValue)
Modify or insert a column into the DataFrame.
See more: bigframes.dataframe.DataFrame.__setitem__
bigframes.dataframe.DataFrame.abs
abs() -> bigframes.dataframe.DataFrame
Return a Series/DataFrame with absolute numeric value of each element.
See more: bigframes.dataframe.DataFrame.abs
bigframes.dataframe.DataFrame.add
add( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
Get addition of DataFrame and other, element-wise (binary operator +).
See more: bigframes.dataframe.DataFrame.add
bigframes.dataframe.DataFrame.add_prefix
add_prefix( prefix: str, axis: int | str | None = None ) -> bigframes.dataframe.DataFrame
Prefix labels with string prefix.
See more: bigframes.dataframe.DataFrame.add_prefix
bigframes.dataframe.DataFrame.add_suffix
add_suffix( suffix: str, axis: int | str | None = None ) -> bigframes.dataframe.DataFrame
Suffix labels with string suffix.
See more: bigframes.dataframe.DataFrame.add_suffix
bigframes.dataframe.DataFrame.agg
agg( func: typing.Union[str, typing.Sequence[str]] ) -> bigframes.dataframe.DataFrame | bigframes.series.Series
Aggregate using one or more operations over columns.
See more: bigframes.dataframe.DataFrame.agg
bigframes.dataframe.DataFrame.aggregate
aggregate( func: typing.Union[str, typing.Sequence[str]] ) -> bigframes.dataframe.DataFrame | bigframes.series.Series
API documentation for the aggregate method.
See more: bigframes.dataframe.DataFrame.aggregate
bigframes.dataframe.DataFrame.align
align( other: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], join: str = "outer", axis: typing.Optional[typing.Union[str, int]] = None, ) -> typing.Tuple[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ]
Align two objects on their axes with the specified join method.
See more: bigframes.dataframe.DataFrame.align
bigframes.dataframe.DataFrame.all
all( axis: typing.Union[str, int] = 0, *, bool_only: bool = False ) -> bigframes.series.Series
Return whether all elements are True, potentially over an axis.
See more: bigframes.dataframe.DataFrame.all
bigframes.dataframe.DataFrame.any
any( *, axis: typing.Union[str, int] = 0, bool_only: bool = False ) -> bigframes.series.Series
Return whether any element is True, potentially over an axis.
See more: bigframes.dataframe.DataFrame.any
bigframes.dataframe.DataFrame.apply
apply(func, *, args: typing.Tuple = (), **kwargs)
Apply a function along an axis of the DataFrame.
See more: bigframes.dataframe.DataFrame.apply
bigframes.dataframe.DataFrame.applymap
applymap( func, na_action: typing.Optional[str] = None ) -> bigframes.dataframe.DataFrame
Apply a function to a Dataframe elementwise.
See more: bigframes.dataframe.DataFrame.applymap
bigframes.dataframe.DataFrame.assign
assign(**kwargs) -> bigframes.dataframe.DataFrame
Assign new columns to a DataFrame.
See more: bigframes.dataframe.DataFrame.assign
bigframes.dataframe.DataFrame.astype
astype( dtype: typing.Union[ typing.Literal[ "boolean", "Float64", "Int64", "int64[pyarrow]", "string", "string[pyarrow]", "timestamp[us, tz=UTC][pyarrow]", "timestamp[us][pyarrow]", "date32[day][pyarrow]", "time64[us][pyarrow]", "decimal128(38, 9)[pyarrow]", "decimal256(76, 38)[pyarrow]", "binary[pyarrow]", ], pandas.core.arrays.boolean.BooleanDtype, pandas.core.arrays.floating.Float64Dtype, pandas.core.arrays.integer.Int64Dtype, pandas.core.arrays.string_.StringDtype, pandas.core.dtypes.dtypes.ArrowDtype, geopandas.array.GeometryDtype, ] ) -> bigframes.dataframe.DataFrame
Cast a pandas object to a specified dtype.
See more: bigframes.dataframe.DataFrame.astype
bigframes.dataframe.DataFrame.bfill
bfill(*, limit: typing.Optional[int] = None) -> bigframes.dataframe.DataFrame
Fill NA/NaN values by using the next valid observation to fill the gap.
See more: bigframes.dataframe.DataFrame.bfill
bigframes.dataframe.DataFrame.combine
combine( other: bigframes.dataframe.DataFrame, func: typing.Callable[ [bigframes.series.Series, bigframes.series.Series], bigframes.series.Series ], fill_value=None, overwrite: bool = True, *, how: str = "outer" ) -> bigframes.dataframe.DataFrame
Perform column-wise combine with another DataFrame.
See more: bigframes.dataframe.DataFrame.combine
bigframes.dataframe.DataFrame.combine_first
combine_first(other: bigframes.dataframe.DataFrame)
Update null elements with value in the same location in other.
bigframes.dataframe.DataFrame.copy
copy() -> bigframes.dataframe.DataFrame
Make a copy of this object's indices and data.
See more: bigframes.dataframe.DataFrame.copy
bigframes.dataframe.DataFrame.corr
corr( method="pearson", min_periods=None, numeric_only=False ) -> bigframes.dataframe.DataFrame
Compute pairwise correlation of columns, excluding NA/null values.
See more: bigframes.dataframe.DataFrame.corr
bigframes.dataframe.DataFrame.count
count(*, numeric_only: bool = False) -> bigframes.series.Series
Count non-NA cells for each column.
See more: bigframes.dataframe.DataFrame.count
bigframes.dataframe.DataFrame.cov
cov(*, numeric_only: bool = False) -> bigframes.dataframe.DataFrame
Compute pairwise covariance of columns, excluding NA/null values.
See more: bigframes.dataframe.DataFrame.cov
bigframes.dataframe.DataFrame.cummax
cummax() -> bigframes.dataframe.DataFrame
Return cumulative maximum over columns.
See more: bigframes.dataframe.DataFrame.cummax
bigframes.dataframe.DataFrame.cummin
cummin() -> bigframes.dataframe.DataFrame
Return cumulative minimum over columns.
See more: bigframes.dataframe.DataFrame.cummin
bigframes.dataframe.DataFrame.cumprod
cumprod() -> bigframes.dataframe.DataFrame
Return cumulative product over columns.
See more: bigframes.dataframe.DataFrame.cumprod
bigframes.dataframe.DataFrame.cumsum
cumsum()
Return cumulative sum over columns.
See more: bigframes.dataframe.DataFrame.cumsum
bigframes.dataframe.DataFrame.describe
describe() -> bigframes.dataframe.DataFrame
Generate descriptive statistics.
See more: bigframes.dataframe.DataFrame.describe
bigframes.dataframe.DataFrame.diff
diff(periods: int = 1) -> bigframes.dataframe.DataFrame
First discrete difference of element.
See more: bigframes.dataframe.DataFrame.diff
bigframes.dataframe.DataFrame.div
div( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
API documentation for div method.
See more: bigframes.dataframe.DataFrame.div
bigframes.dataframe.DataFrame.divide
divide( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
API documentation for divide method.
See more: bigframes.dataframe.DataFrame.divide
bigframes.dataframe.DataFrame.dot
dot(other: _DataFrameOrSeries) -> _DataFrameOrSeries
Compute the matrix multiplication between the DataFrame and other.
See more: bigframes.dataframe.DataFrame.dot
bigframes.dataframe.DataFrame.drop
drop( labels: typing.Any = None, *, axis: typing.Union[int, str] = 0, index: typing.Any = None, columns: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] = None, level: typing.Optional[typing.Hashable] = None ) -> bigframes.dataframe.DataFrame
Drop specified labels from columns.
See more: bigframes.dataframe.DataFrame.drop
bigframes.dataframe.DataFrame.drop_duplicates
drop_duplicates( subset: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] = None, *, keep: str = "first" ) -> bigframes.dataframe.DataFrame
Return DataFrame with duplicate rows removed.
bigframes.dataframe.DataFrame.droplevel
droplevel( level: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]], axis: int | str = 0, )
Return DataFrame with requested index / column level(s) removed.
See more: bigframes.dataframe.DataFrame.droplevel
bigframes.dataframe.DataFrame.dropna
dropna( *, axis: int | str = 0, inplace: bool = False, how: str = "any", ignore_index=False ) -> bigframes.dataframe.DataFrame
Remove missing values.
See more: bigframes.dataframe.DataFrame.dropna
bigframes.dataframe.DataFrame.duplicated
duplicated(subset=None, keep: str = "first") -> bigframes.series.Series
Return boolean Series denoting duplicate rows.
See more: bigframes.dataframe.DataFrame.duplicated
bigframes.dataframe.DataFrame.eq
eq(other: typing.Any, axis: str | int = "columns") -> bigframes.dataframe.DataFrame
Get 'equal to' of DataFrame and other, element-wise (binary operator eq).
See more: bigframes.dataframe.DataFrame.eq
bigframes.dataframe.DataFrame.equals
equals( other: typing.Union[bigframes.series.Series, bigframes.dataframe.DataFrame] ) -> bool
Test whether two objects contain the same elements.
See more: bigframes.dataframe.DataFrame.equals
bigframes.dataframe.DataFrame.eval
eval(expr: str) -> bigframes.dataframe.DataFrame
Evaluate a string describing operations on DataFrame columns.
See more: bigframes.dataframe.DataFrame.eval
bigframes.dataframe.DataFrame.expanding
expanding(min_periods: int = 1) -> bigframes.core.window.Window
Provide expanding window calculations.
See more: bigframes.dataframe.DataFrame.expanding
bigframes.dataframe.DataFrame.explode
explode( column: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]], *, ignore_index: typing.Optional[bool] = False ) -> bigframes.dataframe.DataFrame
Transform each element of an array to a row, replicating index values.
See more: bigframes.dataframe.DataFrame.explode
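Sketched with plain pandas (bigframes mirrors this API), `explode` turns each array element into its own row:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "tags": [["a", "b"], ["c"]]})
# each list element becomes a row; the 'id' value is replicated per element
long_df = df.explode("tags", ignore_index=True)
```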
bigframes.dataframe.DataFrame.ffill
ffill(*, limit: typing.Optional[int] = None) -> bigframes.dataframe.DataFrame
Fill NA/NaN values by propagating the last valid observation to next valid.
See more: bigframes.dataframe.DataFrame.ffill
bigframes.dataframe.DataFrame.fillna
fillna(value=None) -> bigframes.dataframe.DataFrame
Fill NA/NaN values using the specified method.
See more: bigframes.dataframe.DataFrame.fillna
bigframes.dataframe.DataFrame.filter
filter( items: typing.Optional[typing.Iterable] = None, like: typing.Optional[str] = None, regex: typing.Optional[str] = None, axis: int | str | None = None, ) -> bigframes.dataframe.DataFrame
Subset the dataframe rows or columns according to the specified index labels.
See more: bigframes.dataframe.DataFrame.filter
bigframes.dataframe.DataFrame.first_valid_index
first_valid_index()
API documentation for first_valid_index method.
bigframes.dataframe.DataFrame.floordiv
floordiv( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
Get integer division of DataFrame and other, element-wise (binary operator //).
See more: bigframes.dataframe.DataFrame.floordiv
bigframes.dataframe.DataFrame.from_dict
from_dict( data: dict, orient: str = "columns", dtype=None, columns=None ) -> bigframes.dataframe.DataFrame
Construct DataFrame from dict of array-like or dicts.
See more: bigframes.dataframe.DataFrame.from_dict
bigframes.dataframe.DataFrame.from_records
from_records( data, index=None, exclude=None, columns=None, coerce_float: bool = False, nrows: typing.Optional[int] = None, ) -> bigframes.dataframe.DataFrame
Convert structured or record ndarray to DataFrame.
bigframes.dataframe.DataFrame.ge
ge(other: typing.Any, axis: str | int = "columns") -> bigframes.dataframe.DataFrame
Get 'greater than or equal to' of DataFrame and other, element-wise (binary operator >=).
See more: bigframes.dataframe.DataFrame.ge
bigframes.dataframe.DataFrame.get
get(key, default=None)
Get item from object for given key (ex: DataFrame column).
See more: bigframes.dataframe.DataFrame.get
bigframes.dataframe.DataFrame.groupby
groupby( by: typing.Optional[ typing.Union[ typing.Hashable, bigframes.series.Series, typing.Sequence[typing.Union[typing.Hashable, bigframes.series.Series]], ] ] = None, *, level: typing.Optional[ typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] ] = None, as_index: bool = True, dropna: bool = True ) -> bigframes.core.groupby.DataFrameGroupBy
Group DataFrame by columns.
See more: bigframes.dataframe.DataFrame.groupby
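A minimal split-apply-combine sketch with plain pandas (bigframes provides the same `groupby` entry point; column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"team": ["x", "x", "y"], "score": [1, 2, 10]})
# group rows by the 'team' key and sum each group's scores
totals = df.groupby("team")["score"].sum()
```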
bigframes.dataframe.DataFrame.gt
gt(other: typing.Any, axis: str | int = "columns") -> bigframes.dataframe.DataFrame
Get 'greater than' of DataFrame and other, element-wise (binary operator >).
See more: bigframes.dataframe.DataFrame.gt
bigframes.dataframe.DataFrame.head
head(n: int = 5) -> bigframes.dataframe.DataFrame
Return the first n rows.
See more: bigframes.dataframe.DataFrame.head
bigframes.dataframe.DataFrame.idxmax
idxmax() -> bigframes.series.Series
Return index of first occurrence of maximum over columns.
See more: bigframes.dataframe.DataFrame.idxmax
bigframes.dataframe.DataFrame.idxmin
idxmin() -> bigframes.series.Series
Return index of first occurrence of minimum over columns.
See more: bigframes.dataframe.DataFrame.idxmin
bigframes.dataframe.DataFrame.info
info( verbose: typing.Optional[bool] = None, buf=None, max_cols: typing.Optional[int] = None, memory_usage: typing.Optional[bool] = None, show_counts: typing.Optional[bool] = None, )
Print a concise summary of a DataFrame.
See more: bigframes.dataframe.DataFrame.info
bigframes.dataframe.DataFrame.interpolate
interpolate(method: str = "linear") -> bigframes.dataframe.DataFrame
Fill NaN values using an interpolation method.
bigframes.dataframe.DataFrame.isin
isin(values) -> bigframes.dataframe.DataFrame
Whether each element in the DataFrame is contained in values.
See more: bigframes.dataframe.DataFrame.isin
bigframes.dataframe.DataFrame.isna
isna() -> bigframes.dataframe.DataFrame
Detect missing values.
See more: bigframes.dataframe.DataFrame.isna
bigframes.dataframe.DataFrame.isnull
isnull() -> bigframes.dataframe.DataFrame
Detect missing values.
See more: bigframes.dataframe.DataFrame.isnull
bigframes.dataframe.DataFrame.items
items()
Iterate over (column name, Series) pairs.
See more: bigframes.dataframe.DataFrame.items
bigframes.dataframe.DataFrame.iterrows
iterrows() -> typing.Iterable[tuple[typing.Any, pandas.core.series.Series]]
Iterate over DataFrame rows as (index, Series) pairs.
See more: bigframes.dataframe.DataFrame.iterrows
bigframes.dataframe.DataFrame.itertuples
itertuples( index: bool = True, name: typing.Optional[str] = "Pandas" ) -> typing.Iterable[tuple[typing.Any, ...]]
Iterate over DataFrame rows as namedtuples.
See more: bigframes.dataframe.DataFrame.itertuples
bigframes.dataframe.DataFrame.join
join( other: bigframes.dataframe.DataFrame, *, on: typing.Optional[str] = None, how: str = "left" ) -> bigframes.dataframe.DataFrame
Join columns of another DataFrame.
See more: bigframes.dataframe.DataFrame.join
bigframes.dataframe.DataFrame.keys
keys() -> pandas.core.indexes.base.Index
Get the 'info axis'.
See more: bigframes.dataframe.DataFrame.keys
bigframes.dataframe.DataFrame.kurt
kurt(*, numeric_only: bool = False)
Return unbiased kurtosis over columns.
See more: bigframes.dataframe.DataFrame.kurt
bigframes.dataframe.DataFrame.kurtosis
kurtosis(*, numeric_only: bool = False)
API documentation for kurtosis method.
See more: bigframes.dataframe.DataFrame.kurtosis
bigframes.dataframe.DataFrame.le
le(other: typing.Any, axis: str | int = "columns") -> bigframes.dataframe.DataFrame
Get 'less than or equal to' of DataFrame and other, element-wise (binary operator <=).
See more: bigframes.dataframe.DataFrame.le
bigframes.dataframe.DataFrame.lt
lt(other: typing.Any, axis: str | int = "columns") -> bigframes.dataframe.DataFrame
Get 'less than' of DataFrame and other, element-wise (binary operator <).
See more: bigframes.dataframe.DataFrame.lt
bigframes.dataframe.DataFrame.map
map(func, na_action: typing.Optional[str] = None) -> bigframes.dataframe.DataFrame
Apply a function to a DataFrame elementwise.
See more: bigframes.dataframe.DataFrame.map
bigframes.dataframe.DataFrame.max
max( axis: typing.Union[str, int] = 0, *, numeric_only: bool = False ) -> bigframes.series.Series
Return the maximum of the values over the requested axis.
See more: bigframes.dataframe.DataFrame.max
bigframes.dataframe.DataFrame.mean
mean( axis: typing.Union[str, int] = 0, *, numeric_only: bool = False ) -> bigframes.series.Series
Return the mean of the values over the requested axis.
See more: bigframes.dataframe.DataFrame.mean
bigframes.dataframe.DataFrame.median
median( *, numeric_only: bool = False, exact: bool = False ) -> bigframes.series.Series
Return the median of the values over columns.
See more: bigframes.dataframe.DataFrame.median
bigframes.dataframe.DataFrame.melt
melt( id_vars: typing.Optional[typing.Iterable[typing.Hashable]] = None, value_vars: typing.Optional[typing.Iterable[typing.Hashable]] = None, var_name: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] = None, value_name: typing.Hashable = "value", )
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
See more: bigframes.dataframe.DataFrame.melt
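The wide-to-long unpivot can be sketched with plain pandas, which bigframes mirrors (the month columns here are illustrative):

```python
import pandas as pd

wide = pd.DataFrame({"id": [1, 2], "jan": [10, 30], "feb": [20, 40]})
# unpivot the month columns into (month, sales) pairs, keeping 'id' as the identifier
long = wide.melt(id_vars=["id"], var_name="month", value_name="sales")
```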
bigframes.dataframe.DataFrame.memory_usage
memory_usage(index: bool = True)
Return the memory usage of each column in bytes.
bigframes.dataframe.DataFrame.merge
merge( right: bigframes.dataframe.DataFrame, how: typing.Literal["inner", "left", "outer", "right", "cross"] = "inner", on: typing.Optional[ typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] ] = None, *, left_on: typing.Optional[ typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] ] = None, right_on: typing.Optional[ typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] ] = None, sort: bool = False, suffixes: tuple[str, str] = ("_x", "_y") ) -> bigframes.dataframe.DataFrame
Merge DataFrame objects with a database-style join.
See more: bigframes.dataframe.DataFrame.merge
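A minimal database-style join sketched with plain pandas (bigframes takes the same `on`/`how`/`suffixes` arguments; the key and column names are illustrative):

```python
import pandas as pd

left = pd.DataFrame({"key": [1, 2], "a": ["x", "y"]})
right = pd.DataFrame({"key": [2, 3], "b": ["u", "v"]})
# an inner join keeps only keys present in both frames
joined = left.merge(right, on="key", how="inner")
```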
bigframes.dataframe.DataFrame.min
min( axis: typing.Union[str, int] = 0, *, numeric_only: bool = False ) -> bigframes.series.Series
Return the minimum of the values over the requested axis.
See more: bigframes.dataframe.DataFrame.min
bigframes.dataframe.DataFrame.mod
mod( other: int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
Get modulo of DataFrame and other, element-wise (binary operator %).
See more: bigframes.dataframe.DataFrame.mod
bigframes.dataframe.DataFrame.mul
mul( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
Get multiplication of DataFrame and other, element-wise (binary operator *).
See more: bigframes.dataframe.DataFrame.mul
bigframes.dataframe.DataFrame.multiply
multiply( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
API documentation for multiply method.
See more: bigframes.dataframe.DataFrame.multiply
bigframes.dataframe.DataFrame.ne
ne(other: typing.Any, axis: str | int = "columns") -> bigframes.dataframe.DataFrame
Get 'not equal to' of DataFrame and other, element-wise (binary operator ne).
See more: bigframes.dataframe.DataFrame.ne
bigframes.dataframe.DataFrame.nlargest
nlargest( n: int, columns: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]], keep: str = "first", ) -> bigframes.dataframe.DataFrame
Return the first n rows ordered by columns in descending order.
See more: bigframes.dataframe.DataFrame.nlargest
bigframes.dataframe.DataFrame.notna
notna() -> bigframes.dataframe.DataFrame
Detect existing (non-missing) values.
See more: bigframes.dataframe.DataFrame.notna
bigframes.dataframe.DataFrame.notnull
notnull() -> bigframes.dataframe.DataFrame
Detect existing (non-missing) values.
See more: bigframes.dataframe.DataFrame.notnull
bigframes.dataframe.DataFrame.nsmallest
nsmallest( n: int, columns: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]], keep: str = "first", ) -> bigframes.dataframe.DataFrame
Return the first n rows ordered by columns in ascending order.
See more: bigframes.dataframe.DataFrame.nsmallest
bigframes.dataframe.DataFrame.nunique
nunique() -> bigframes.series.Series
Count number of distinct elements in each column.
See more: bigframes.dataframe.DataFrame.nunique
bigframes.dataframe.DataFrame.pct_change
pct_change(periods: int = 1) -> bigframes.dataframe.DataFrame
Fractional change between the current and a prior element.
See more: bigframes.dataframe.DataFrame.pct_change
bigframes.dataframe.DataFrame.peek
peek(n: int = 5, *, force: bool = True) -> pandas.core.frame.DataFrame
Preview n arbitrary rows from the dataframe.
See more: bigframes.dataframe.DataFrame.peek
bigframes.dataframe.DataFrame.pipe
pipe(func: Callable[..., T] | tuple[Callable[..., T], str], *args, **kwargs) -> T
Apply chainable functions that expect Series or DataFrames.
See more: bigframes.dataframe.DataFrame.pipe
bigframes.dataframe.DataFrame.pivot
pivot( *, columns: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]], index: typing.Optional[ typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] ] = None, values: typing.Optional[ typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] ] = None ) -> bigframes.dataframe.DataFrame
Return reshaped DataFrame organized by given index / column values.
See more: bigframes.dataframe.DataFrame.pivot
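Sketched with plain pandas (bigframes uses the same keyword-only `columns`/`index`/`values` arguments; the data are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "date": ["d1", "d1", "d2", "d2"],
    "city": ["nyc", "sf", "nyc", "sf"],
    "temp": [30, 55, 28, 54],
})
# reshape: one row per date, one column per city
table = df.pivot(index="date", columns="city", values="temp")
```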
bigframes.dataframe.DataFrame.pow
pow( other: int | bigframes.series.Series, axis: str | int = "columns" ) -> bigframes.dataframe.DataFrame
Get exponential power of DataFrame and other, element-wise (binary operator **).
See more: bigframes.dataframe.DataFrame.pow
bigframes.dataframe.DataFrame.prod
prod( axis: typing.Union[str, int] = 0, *, numeric_only: bool = False ) -> bigframes.series.Series
Return the product of the values over the requested axis.
See more: bigframes.dataframe.DataFrame.prod
bigframes.dataframe.DataFrame.product
product( axis: typing.Union[str, int] = 0, *, numeric_only: bool = False ) -> bigframes.series.Series
API documentation for product method.
See more: bigframes.dataframe.DataFrame.product
bigframes.dataframe.DataFrame.query
query(expr: str) -> bigframes.dataframe.DataFrame
Query the columns of a DataFrame with a boolean expression.
See more: bigframes.dataframe.DataFrame.query
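A minimal `query` sketch with plain pandas, which bigframes mirrors; the expression string references column names directly:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})
# keep only rows where the boolean expression over the columns is true
filtered = df.query("a >= 2 and b < 30")
```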
bigframes.dataframe.DataFrame.radd
radd( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
API documentation for radd method.
See more: bigframes.dataframe.DataFrame.radd
bigframes.dataframe.DataFrame.rank
rank( axis=0, method: str = "average", numeric_only=False, na_option: str = "keep", ascending=True, ) -> bigframes.dataframe.DataFrame
Compute numerical data ranks (1 through n) along axis.
See more: bigframes.dataframe.DataFrame.rank
bigframes.dataframe.DataFrame.rdiv
rdiv( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
API documentation for rdiv method.
See more: bigframes.dataframe.DataFrame.rdiv
bigframes.dataframe.DataFrame.reindex
reindex( labels=None, *, index=None, columns=None, axis: typing.Optional[typing.Union[str, int]] = None, validate: typing.Optional[bool] = None )
Conform DataFrame to new index with optional filling logic.
See more: bigframes.dataframe.DataFrame.reindex
bigframes.dataframe.DataFrame.reindex_like
reindex_like( other: bigframes.dataframe.DataFrame, *, validate: typing.Optional[bool] = None )
Return an object with matching indices as other object.
bigframes.dataframe.DataFrame.rename
rename( *, columns: typing.Mapping[typing.Hashable, typing.Hashable] ) -> bigframes.dataframe.DataFrame
Rename columns.
See more: bigframes.dataframe.DataFrame.rename
bigframes.dataframe.DataFrame.rename_axis
rename_axis( mapper: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]], **kwargs ) -> bigframes.dataframe.DataFrame
Set the name of the axis for the index.
bigframes.dataframe.DataFrame.reorder_levels
reorder_levels( order: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]], axis: int | str = 0, )
Rearrange index levels using input order.
bigframes.dataframe.DataFrame.replace
replace(to_replace: typing.Any, value: typing.Any = None, *, regex: bool = False)
Replace values given in to_replace with value.
See more: bigframes.dataframe.DataFrame.replace
bigframes.dataframe.DataFrame.reset_index
reset_index(*, drop: bool = False) -> bigframes.dataframe.DataFrame
Reset the index.
bigframes.dataframe.DataFrame.rfloordiv
rfloordiv( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
Get integer division of DataFrame and other, element-wise (binary operator //).
See more: bigframes.dataframe.DataFrame.rfloordiv
bigframes.dataframe.DataFrame.rmod
rmod( other: int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
Get modulo of DataFrame and other, element-wise (binary operator %).
See more: bigframes.dataframe.DataFrame.rmod
bigframes.dataframe.DataFrame.rmul
rmul( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
API documentation for rmul method.
See more: bigframes.dataframe.DataFrame.rmul
bigframes.dataframe.DataFrame.rolling
rolling(window: int, min_periods=None) -> bigframes.core.window.Window
Provide rolling window calculations.
See more: bigframes.dataframe.DataFrame.rolling
bigframes.dataframe.DataFrame.rpow
rpow( other: int | bigframes.series.Series, axis: str | int = "columns" ) -> bigframes.dataframe.DataFrame
Get exponential power of DataFrame and other, element-wise (binary operator rpow).
See more: bigframes.dataframe.DataFrame.rpow
bigframes.dataframe.DataFrame.rsub
rsub( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
Get subtraction of DataFrame and other, element-wise (binary operator -).
See more: bigframes.dataframe.DataFrame.rsub
bigframes.dataframe.DataFrame.rtruediv
rtruediv( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
Get floating division of DataFrame and other, element-wise (binary operator /).
See more: bigframes.dataframe.DataFrame.rtruediv
bigframes.dataframe.DataFrame.sample
sample( n: typing.Optional[int] = None, frac: typing.Optional[float] = None, *, random_state: typing.Optional[int] = None, sort: typing.Optional[typing.Union[bool, typing.Literal["random"]]] = "random" ) -> bigframes.dataframe.DataFrame
Return a random sample of items from an axis of object.
See more: bigframes.dataframe.DataFrame.sample
bigframes.dataframe.DataFrame.select_dtypes
select_dtypes(include=None, exclude=None) -> bigframes.dataframe.DataFrame
Return a subset of the DataFrame's columns based on the column dtypes.
bigframes.dataframe.DataFrame.set_index
set_index( keys: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]], append: bool = False, drop: bool = True, ) -> bigframes.dataframe.DataFrame
Set the DataFrame index using existing columns.
See more: bigframes.dataframe.DataFrame.set_index
bigframes.dataframe.DataFrame.shift
shift(periods: int = 1) -> bigframes.dataframe.DataFrame
Shift index by desired number of periods.
See more: bigframes.dataframe.DataFrame.shift
bigframes.dataframe.DataFrame.skew
skew(*, numeric_only: bool = False)
Return unbiased skew over columns.
See more: bigframes.dataframe.DataFrame.skew
bigframes.dataframe.DataFrame.sort_index
sort_index( ascending: bool = True, na_position: typing.Literal["first", "last"] = "last" ) -> bigframes.dataframe.DataFrame
Sort object by labels (along an axis).
See more: bigframes.dataframe.DataFrame.sort_index
bigframes.dataframe.DataFrame.sort_values
sort_values( by: typing.Union[str, typing.Sequence[str]], *, ascending: typing.Union[bool, typing.Sequence[bool]] = True, kind: str = "quicksort", na_position: typing.Literal["first", "last"] = "last" ) -> bigframes.dataframe.DataFrame
Sort by the values along row axis.
bigframes.dataframe.DataFrame.stack
stack(level: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] = -1)
Stack the prescribed level(s) from columns to index.
See more: bigframes.dataframe.DataFrame.stack
bigframes.dataframe.DataFrame.std
std( axis: typing.Union[str, int] = 0, *, numeric_only: bool = False ) -> bigframes.series.Series
Return sample standard deviation over columns.
See more: bigframes.dataframe.DataFrame.std
bigframes.dataframe.DataFrame.sub
sub( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
Get subtraction of DataFrame and other, element-wise (binary operator -).
See more: bigframes.dataframe.DataFrame.sub
bigframes.dataframe.DataFrame.subtract
subtract( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
API documentation for subtract method.
See more: bigframes.dataframe.DataFrame.subtract
bigframes.dataframe.DataFrame.sum
sum( axis: typing.Union[str, int] = 0, *, numeric_only: bool = False ) -> bigframes.series.Series
Return the sum of the values over the requested axis.
See more: bigframes.dataframe.DataFrame.sum
bigframes.dataframe.DataFrame.swaplevel
swaplevel(i: int = -2, j: int = -1, axis: int | str = 0)
Swap levels i and j in a MultiIndex.
See more: bigframes.dataframe.DataFrame.swaplevel
bigframes.dataframe.DataFrame.tail
tail(n: int = 5) -> bigframes.dataframe.DataFrame
Return the last n rows.
See more: bigframes.dataframe.DataFrame.tail
bigframes.dataframe.DataFrame.to_csv
to_csv( path_or_buf: str, sep=",", *, header: bool = True, index: bool = True ) -> None
Write object to a comma-separated values (csv) file on Cloud Storage.
See more: bigframes.dataframe.DataFrame.to_csv
bigframes.dataframe.DataFrame.to_dict
to_dict( orient: typing.Literal['dict', 'list', 'series', 'split', 'tight', 'records', 'index'] = 'dict', into: type[dict] = dict, **kwargs )
Convert the DataFrame to a dictionary.
See more: bigframes.dataframe.DataFrame.to_dict
bigframes.dataframe.DataFrame.to_excel
to_excel(excel_writer, sheet_name: str = "Sheet1", **kwargs) -> None
Write DataFrame to an Excel sheet.
See more: bigframes.dataframe.DataFrame.to_excel
bigframes.dataframe.DataFrame.to_gbq
to_gbq( destination_table: typing.Optional[str] = None, *, if_exists: typing.Optional[typing.Literal["fail", "replace", "append"]] = None, index: bool = True, ordering_id: typing.Optional[str] = None, clustering_columns: typing.Union[ pandas.core.indexes.base.Index, typing.Iterable[typing.Hashable] ] = () ) -> str
Write a DataFrame to a BigQuery table.
See more: bigframes.dataframe.DataFrame.to_gbq
bigframes.dataframe.DataFrame.to_html
to_html( buf=None, columns: typing.Optional[typing.Sequence[str]] = None, col_space=None, header: bool = True, index: bool = True, na_rep: str = "NaN", formatters=None, float_format=None, sparsify: bool | None = None, index_names: bool = True, justify: str | None = None, max_rows: int | None = None, max_cols: int | None = None, show_dimensions: bool = False, decimal: str = ".", bold_rows: bool = True, classes: str | list | tuple | None = None, escape: bool = True, notebook: bool = False, border: int | None = None, table_id: str | None = None, render_links: bool = False, encoding: str | None = None, ) -> str
Render a DataFrame as an HTML table.
See more: bigframes.dataframe.DataFrame.to_html
bigframes.dataframe.DataFrame.to_json
to_json( path_or_buf: str, orient: typing.Literal[ "split", "records", "index", "columns", "values", "table" ] = "columns", *, lines: bool = False, index: bool = True ) -> None
Convert the object to a JSON string, written to Cloud Storage.
See more: bigframes.dataframe.DataFrame.to_json
bigframes.dataframe.DataFrame.to_latex
to_latex( buf=None, columns: typing.Optional[typing.Sequence] = None, header: typing.Union[bool, typing.Sequence[str]] = True, index: bool = True, **kwargs ) -> str | None
Render object to a LaTeX tabular, longtable, or nested table.
See more: bigframes.dataframe.DataFrame.to_latex
bigframes.dataframe.DataFrame.to_markdown
to_markdown(buf=None, mode: str = "wt", index: bool = True, **kwargs) -> str | None
Print DataFrame in Markdown-friendly format.
bigframes.dataframe.DataFrame.to_numpy
to_numpy(dtype=None, copy=False, na_value=None, **kwargs) -> numpy.ndarray
Convert the DataFrame to a NumPy array.
See more: bigframes.dataframe.DataFrame.to_numpy
bigframes.dataframe.DataFrame.to_orc
to_orc(path=None, **kwargs) -> bytes | None
Write a DataFrame to the ORC format.
See more: bigframes.dataframe.DataFrame.to_orc
bigframes.dataframe.DataFrame.to_pandas
to_pandas( max_download_size: typing.Optional[int] = None, sampling_method: typing.Optional[str] = None, random_state: typing.Optional[int] = None, *, ordered: bool = True ) -> pandas.core.frame.DataFrame
Write DataFrame to pandas DataFrame.
See more: bigframes.dataframe.DataFrame.to_pandas
bigframes.dataframe.DataFrame.to_pandas_batches
to_pandas_batches() -> typing.Iterable[pandas.core.frame.DataFrame]
Stream DataFrame results to an iterable of pandas DataFrame.
bigframes.dataframe.DataFrame.to_parquet
to_parquet( path: str, *, compression: typing.Optional[typing.Literal["snappy", "gzip"]] = "snappy", index: bool = True ) -> None
Write a DataFrame to the binary Parquet format.
See more: bigframes.dataframe.DataFrame.to_parquet
bigframes.dataframe.DataFrame.to_pickle
to_pickle(path, **kwargs) -> None
Pickle (serialize) object to file.
See more: bigframes.dataframe.DataFrame.to_pickle
bigframes.dataframe.DataFrame.to_records
to_records( index: bool = True, column_dtypes=None, index_dtypes=None ) -> numpy.recarray
Convert DataFrame to a NumPy record array.
See more: bigframes.dataframe.DataFrame.to_records
bigframes.dataframe.DataFrame.to_string
to_string( buf=None, columns: typing.Optional[typing.Sequence[str]] = None, col_space=None, header: typing.Union[bool, typing.Sequence[str]] = True, index: bool = True, na_rep: str = "NaN", formatters=None, float_format=None, sparsify: bool | None = None, index_names: bool = True, justify: str | None = None, max_rows: int | None = None, max_cols: int | None = None, show_dimensions: bool = False, decimal: str = ".", line_width: int | None = None, min_rows: int | None = None, max_colwidth: int | None = None, encoding: str | None = None, ) -> str | None
Render a DataFrame to a console-friendly tabular output.
See more: bigframes.dataframe.DataFrame.to_string
bigframes.dataframe.DataFrame.truediv
truediv( other: float | int | bigframes.series.Series | bigframes.dataframe.DataFrame, axis: str | int = "columns", ) -> bigframes.dataframe.DataFrame
Get floating division of DataFrame and other, element-wise (binary operator /).
See more: bigframes.dataframe.DataFrame.truediv
bigframes.dataframe.DataFrame.unstack
unstack( level: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] = -1 )
Pivot a level of the (necessarily hierarchical) index labels.
See more: bigframes.dataframe.DataFrame.unstack
bigframes.dataframe.DataFrame.update
update(other, join: str = "left", overwrite=True, filter_func=None)
Modify in place using non-NA values from another DataFrame.
See more: bigframes.dataframe.DataFrame.update
bigframes.dataframe.DataFrame.value_counts
value_counts( subset: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] = None, normalize: bool = False, sort: bool = True, ascending: bool = False, dropna: bool = True, )
Return a Series containing counts of unique rows in the DataFrame.
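Sketched with plain pandas (bigframes mirrors this API), `value_counts` counts unique row combinations across the selected columns:

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "blue", "red"], "size": ["s", "s", "s"]})
# counts of unique (color, size) row combinations, sorted descending by default
counts = df.value_counts()
```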
bigframes.dataframe.DataFrame.var
var( axis: typing.Union[str, int] = 0, *, numeric_only: bool = False ) -> bigframes.series.Series
Return unbiased variance over requested axis.
See more: bigframes.dataframe.DataFrame.var
bigframes.ml.cluster.KMeans.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
See more: bigframes.ml.cluster.KMeans.__repr__
bigframes.ml.cluster.KMeans.detect_anomalies
detect_anomalies( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, contamination: float = 0.1 ) -> bigframes.dataframe.DataFrame
Detect the anomaly data points of the input.
bigframes.ml.cluster.KMeans.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Optional[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ] = None, ) -> bigframes.ml.base._T
Compute k-means clustering.
See more: bigframes.ml.cluster.KMeans.fit
bigframes.ml.cluster.KMeans.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.cluster.KMeans.get_params
bigframes.ml.cluster.KMeans.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict the closest cluster each sample in X belongs to.
See more: bigframes.ml.cluster.KMeans.predict
bigframes.ml.cluster.KMeans.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
See more: bigframes.ml.cluster.KMeans.register
bigframes.ml.cluster.KMeans.score
score( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y=None ) -> bigframes.dataframe.DataFrame
Calculate evaluation metrics of the model.
See more: bigframes.ml.cluster.KMeans.score
bigframes.ml.cluster.KMeans.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.cluster.KMeans
Save the model to BigQuery.
See more: bigframes.ml.cluster.KMeans.to_gbq
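The fit/predict pair indexed above computes k-means clustering in BigQuery. For orientation, the underlying algorithm (Lloyd's iteration) can be sketched in plain Python; this is an illustrative sketch, not the bigframes implementation:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid update."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    return centroids

def predict(points, centroids):
    """Closest-centroid labels, mirroring the predict() summary above."""
    return [
        min(range(len(centroids)), key=lambda i: math.dist(p, centroids[i]))
        for p in points
    ]
```

In bigframes the same two steps run as BigQuery ML model training and ML.PREDICT rather than in-process.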
bigframes.ml.compose.ColumnTransformer.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.compose.ColumnTransformer.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y=None ) -> bigframes.ml.compose.ColumnTransformer
Fit all transformers using X.
bigframes.ml.compose.ColumnTransformer.fit_transform
fit_transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Optional[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ] = None, ) -> bigframes.dataframe.DataFrame
Fit to data, then transform it.
See more: bigframes.ml.compose.ColumnTransformer.fit_transform
bigframes.ml.compose.ColumnTransformer.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.compose.ColumnTransformer.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.base._T
Save the transformer as a BigQuery model.
bigframes.ml.compose.ColumnTransformer.transform
transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Transform X separately by each transformer, concatenate results.
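"Transform X separately by each transformer, concatenate results" amounts to the loop below; this is a hypothetical pure-Python sketch over a dict-of-columns table, not the bigframes API:

```python
def column_transform(table, transformers):
    """Apply each (name, fn, columns) spec to its own columns and
    concatenate the resulting columns into one output table."""
    out = {}
    for name, fn, columns in transformers:
        for col in columns:
            out[f"{name}_{col}"] = [fn(v) for v in table[col]]
    return out

table = {"age": [10, 20], "city": ["nyc", "sf"]}
result = column_transform(
    table,
    [
        ("scaled", lambda v: v / 10, ["age"]),
        ("upper", str.upper, ["city"]),
    ],
)
```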
bigframes.ml.decomposition.PCA.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
See more: bigframes.ml.decomposition.PCA.__repr__
bigframes.ml.decomposition.PCA.detect_anomalies
detect_anomalies( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, contamination: float = 0.1 ) -> bigframes.dataframe.DataFrame
Detect anomalous data points in the input.
bigframes.ml.decomposition.PCA.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Optional[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ] = None, ) -> bigframes.ml.base._T
Fit the model according to the given training data.
See more: bigframes.ml.decomposition.PCA.fit
bigframes.ml.decomposition.PCA.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.decomposition.PCA.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Apply dimensionality reduction to X.
See more: bigframes.ml.decomposition.PCA.predict
bigframes.ml.decomposition.PCA.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
See more: bigframes.ml.decomposition.PCA.register
bigframes.ml.decomposition.PCA.score
score(X=None, y=None) -> bigframes.dataframe.DataFrame
Calculate evaluation metrics of the model.
See more: bigframes.ml.decomposition.PCA.score
bigframes.ml.decomposition.PCA.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.decomposition.PCA
Save the model to BigQuery.
See more: bigframes.ml.decomposition.PCA.to_gbq
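PCA's fit finds the directions of maximum variance. For two-dimensional data the first principal axis has a closed form, the eigenvector angle of the 2x2 covariance matrix; a stdlib-only sketch for intuition (not the BigQuery ML implementation):

```python
import math

def principal_angle(xs, ys):
    """Angle (radians) of the first principal axis of 2-D data, from the
    eigenvector of the 2x2 covariance matrix:
    tan(2*theta) = 2*cov_xy / (var_x - var_y)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var_x = sum((x - mx) ** 2 for x in xs) / n
    var_y = sum((y - my) ** 2 for y in ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    return 0.5 * math.atan2(2 * cov, var_x - var_y)
```

Points scattered along the line y = x, for example, give an angle of 45 degrees.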
bigframes.ml.ensemble.RandomForestClassifier.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.ensemble.RandomForestClassifier.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.ml.base._T
Build a forest of trees from the training set (X, y).
bigframes.ml.ensemble.RandomForestClassifier.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.ensemble.RandomForestClassifier.get_params
bigframes.ml.ensemble.RandomForestClassifier.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict class for X.
See more: bigframes.ml.ensemble.RandomForestClassifier.predict
bigframes.ml.ensemble.RandomForestClassifier.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
See more: bigframes.ml.ensemble.RandomForestClassifier.register
bigframes.ml.ensemble.RandomForestClassifier.score
score( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], )
Calculate evaluation metrics of the model.
See more: bigframes.ml.ensemble.RandomForestClassifier.score
bigframes.ml.ensemble.RandomForestClassifier.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.ensemble.RandomForestClassifier
Save the model to BigQuery.
See more: bigframes.ml.ensemble.RandomForestClassifier.to_gbq
bigframes.ml.ensemble.RandomForestRegressor.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.ensemble.RandomForestRegressor.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.ml.base._T
Build a forest of trees from the training set (X, y).
bigframes.ml.ensemble.RandomForestRegressor.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.ensemble.RandomForestRegressor.get_params
bigframes.ml.ensemble.RandomForestRegressor.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict regression target for X.
See more: bigframes.ml.ensemble.RandomForestRegressor.predict
bigframes.ml.ensemble.RandomForestRegressor.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
See more: bigframes.ml.ensemble.RandomForestRegressor.register
bigframes.ml.ensemble.RandomForestRegressor.score
score( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], )
Calculate evaluation metrics of the model.
bigframes.ml.ensemble.RandomForestRegressor.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.ensemble.RandomForestRegressor
Save the model to BigQuery.
See more: bigframes.ml.ensemble.RandomForestRegressor.to_gbq
bigframes.ml.ensemble.XGBClassifier.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
See more: bigframes.ml.ensemble.XGBClassifier.__repr__
bigframes.ml.ensemble.XGBClassifier.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.ml.base._T
Fit gradient boosting model.
See more: bigframes.ml.ensemble.XGBClassifier.fit
bigframes.ml.ensemble.XGBClassifier.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.ensemble.XGBClassifier.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict using the XGB model.
bigframes.ml.ensemble.XGBClassifier.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
bigframes.ml.ensemble.XGBClassifier.score
score( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], )
Return the mean accuracy on the given test data and labels.
bigframes.ml.ensemble.XGBClassifier.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.ensemble.XGBClassifier
Save the model to BigQuery.
bigframes.ml.ensemble.XGBRegressor.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
See more: bigframes.ml.ensemble.XGBRegressor.__repr__
bigframes.ml.ensemble.XGBRegressor.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.ml.base._T
Fit gradient boosting model.
See more: bigframes.ml.ensemble.XGBRegressor.fit
bigframes.ml.ensemble.XGBRegressor.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.ensemble.XGBRegressor.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict using the XGB model.
bigframes.ml.ensemble.XGBRegressor.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
bigframes.ml.ensemble.XGBRegressor.score
score( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], )
Calculate evaluation metrics of the model.
See more: bigframes.ml.ensemble.XGBRegressor.score
bigframes.ml.ensemble.XGBRegressor.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.ensemble.XGBRegressor
Save the model to BigQuery.
bigframes.ml.forecasting.ARIMAPlus.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
See more: bigframes.ml.forecasting.ARIMAPlus.__repr__
bigframes.ml.forecasting.ARIMAPlus.detect_anomalies
detect_anomalies( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, anomaly_prob_threshold: float = 0.95 ) -> bigframes.dataframe.DataFrame
Detect anomalous data points in the input.
See more: bigframes.ml.forecasting.ARIMAPlus.detect_anomalies
bigframes.ml.forecasting.ARIMAPlus.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.ml.base._T
Fit the time series model to the given training data.
See more: bigframes.ml.forecasting.ARIMAPlus.fit
bigframes.ml.forecasting.ARIMAPlus.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.forecasting.ARIMAPlus.predict
predict( X=None, *, horizon: int = 3, confidence_level: float = 0.95 ) -> bigframes.dataframe.DataFrame
Forecast time series at future horizon.
bigframes.ml.forecasting.ARIMAPlus.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
bigframes.ml.forecasting.ARIMAPlus.score
score( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.dataframe.DataFrame
Calculate evaluation metrics of the model.
See more: bigframes.ml.forecasting.ARIMAPlus.score
bigframes.ml.forecasting.ARIMAPlus.summary
summary(show_all_candidate_models: bool = False) -> bigframes.dataframe.DataFrame
Summary of the evaluation metrics of the time series model.
bigframes.ml.forecasting.ARIMAPlus.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.forecasting.ARIMAPlus
Save the model to BigQuery.
bigframes.ml.imported.ONNXModel.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
See more: bigframes.ml.imported.ONNXModel.__repr__
bigframes.ml.imported.ONNXModel.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.imported.ONNXModel.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict the result from input DataFrame.
See more: bigframes.ml.imported.ONNXModel.predict
bigframes.ml.imported.ONNXModel.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
See more: bigframes.ml.imported.ONNXModel.register
bigframes.ml.imported.ONNXModel.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.imported.ONNXModel
Save the model to BigQuery.
See more: bigframes.ml.imported.ONNXModel.to_gbq
bigframes.ml.imported.TensorFlowModel.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.imported.TensorFlowModel.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.imported.TensorFlowModel.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict the result from input DataFrame.
bigframes.ml.imported.TensorFlowModel.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
bigframes.ml.imported.TensorFlowModel.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.imported.TensorFlowModel
Save the model to BigQuery.
bigframes.ml.imported.XGBoostModel.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
See more: bigframes.ml.imported.XGBoostModel.__repr__
bigframes.ml.imported.XGBoostModel.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.imported.XGBoostModel.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict the result from input DataFrame.
bigframes.ml.imported.XGBoostModel.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
bigframes.ml.imported.XGBoostModel.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.imported.XGBoostModel
Save the model to BigQuery.
bigframes.ml.linear_model.LinearRegression.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.linear_model.LinearRegression.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.ml.base._T
Fit linear model.
bigframes.ml.linear_model.LinearRegression.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.linear_model.LinearRegression.get_params
bigframes.ml.linear_model.LinearRegression.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict using the linear model.
See more: bigframes.ml.linear_model.LinearRegression.predict
bigframes.ml.linear_model.LinearRegression.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
See more: bigframes.ml.linear_model.LinearRegression.register
bigframes.ml.linear_model.LinearRegression.score
score( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.dataframe.DataFrame
Calculate evaluation metrics of the model.
bigframes.ml.linear_model.LinearRegression.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.linear_model.LinearRegression
Save the model to BigQuery.
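LinearRegression's fit solves least squares; in the one-feature case the fitted slope and intercept have the textbook closed form. A stdlib sketch of that math for intuition (bigframes itself delegates training to BigQuery ML):

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature:
    slope = cov(x, y) / var(x); intercept = mean_y - slope * mean_x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

def predict_line(xs, slope, intercept):
    """Mirror of predict(): apply the fitted linear model."""
    return [slope * x + intercept for x in xs]
```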
bigframes.ml.linear_model.LogisticRegression.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.linear_model.LogisticRegression.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.ml.base._T
Fit the model according to the given training data.
bigframes.ml.linear_model.LogisticRegression.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.linear_model.LogisticRegression.get_params
bigframes.ml.linear_model.LogisticRegression.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict class labels for samples in X.
See more: bigframes.ml.linear_model.LogisticRegression.predict
bigframes.ml.linear_model.LogisticRegression.register
register(vertex_ai_model_id: typing.Optional[str] = None) -> bigframes.ml.base._T
Register the model to Vertex AI.
See more: bigframes.ml.linear_model.LogisticRegression.register
bigframes.ml.linear_model.LogisticRegression.score
score( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], ) -> bigframes.dataframe.DataFrame
Return the mean accuracy on the given test data and labels.
See more: bigframes.ml.linear_model.LogisticRegression.score
bigframes.ml.linear_model.LogisticRegression.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.linear_model.LogisticRegression
Save the model to BigQuery.
See more: bigframes.ml.linear_model.LogisticRegression.to_gbq
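The classifier score entries above report mean accuracy, i.e. the fraction of predictions that match the labels; the metric itself is just:

```python
def mean_accuracy(y_true, y_pred):
    """Fraction of positions where the prediction equals the label."""
    matches = sum(t == p for t, p in zip(y_true, y_pred))
    return matches / len(y_true)
```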
bigframes.ml.llm.GeminiTextGenerator.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.llm.GeminiTextGenerator.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.llm.GeminiTextGenerator.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, temperature: float = 0.9, max_output_tokens: int = 8192, top_k: int = 40, top_p: float = 1.0 ) -> bigframes.dataframe.DataFrame
Predict the result from input DataFrame.
bigframes.ml.llm.GeminiTextGenerator.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.llm.GeminiTextGenerator
Save the model to BigQuery.
bigframes.ml.llm.PaLM2TextEmbeddingGenerator.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.llm.PaLM2TextEmbeddingGenerator.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.llm.PaLM2TextEmbeddingGenerator.get_params
bigframes.ml.llm.PaLM2TextEmbeddingGenerator.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict the result from input DataFrame.
See more: bigframes.ml.llm.PaLM2TextEmbeddingGenerator.predict
bigframes.ml.llm.PaLM2TextEmbeddingGenerator.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.llm.PaLM2TextEmbeddingGenerator
Save the model to BigQuery.
See more: bigframes.ml.llm.PaLM2TextEmbeddingGenerator.to_gbq
bigframes.ml.llm.PaLM2TextGenerator.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
See more: bigframes.ml.llm.PaLM2TextGenerator.__repr__
bigframes.ml.llm.PaLM2TextGenerator.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.llm.PaLM2TextGenerator.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], *, temperature: float = 0.0, max_output_tokens: int = 128, top_k: int = 40, top_p: float = 0.95 ) -> bigframes.dataframe.DataFrame
Predict the result from input DataFrame.
bigframes.ml.llm.PaLM2TextGenerator.to_gbq
to_gbq( model_name: str, replace: bool = False ) -> bigframes.ml.llm.PaLM2TextGenerator
Save the model to BigQuery.
bigframes.ml.pipeline.Pipeline.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
See more: bigframes.ml.pipeline.Pipeline.__repr__
bigframes.ml.pipeline.Pipeline.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Optional[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ] = None, ) -> bigframes.ml.pipeline.Pipeline
Fit the model.
See more: bigframes.ml.pipeline.Pipeline.fit
bigframes.ml.pipeline.Pipeline.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.pipeline.Pipeline.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Transform the data, then predict with the final estimator.
See more: bigframes.ml.pipeline.Pipeline.predict
bigframes.ml.pipeline.Pipeline.score
score( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Optional[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ] = None, ) -> bigframes.dataframe.DataFrame
Transform the data, then score with the final estimator.
See more: bigframes.ml.pipeline.Pipeline.score
bigframes.ml.pipeline.Pipeline.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.pipeline.Pipeline
Save the pipeline to BigQuery.
See more: bigframes.ml.pipeline.Pipeline.to_gbq
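A pipeline's fit runs each transformer's fit_transform in order and then fits the final estimator on the transformed data; predict applies the transforms and then the estimator. A minimal sklearn-style sketch with hypothetical step objects (Doubler, MeanModel are stand-ins, not bigframes classes):

```python
class Pipeline:
    """Chain of transformers ending in an estimator (sketch)."""

    def __init__(self, steps):
        self.steps = steps  # [(name, obj), ...]; last obj is the estimator

    def fit(self, X, y=None):
        for _, transformer in self.steps[:-1]:
            X = transformer.fit_transform(X, y)
        self.steps[-1][1].fit(X, y)
        return self

    def predict(self, X):
        for _, transformer in self.steps[:-1]:
            X = transformer.transform(X)
        return self.steps[-1][1].predict(X)

class Doubler:
    def fit_transform(self, X, y=None):
        return [x * 2 for x in X]

    def transform(self, X):
        return [x * 2 for x in X]

class MeanModel:
    def fit(self, X, y=None):
        self.mean = sum(X) / len(X)

    def predict(self, X):
        return [self.mean for _ in X]

pipe = Pipeline([("double", Doubler()), ("model", MeanModel())])
pipe.fit([1, 2, 3])
```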
bigframes.ml.preprocessing.KBinsDiscretizer.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.preprocessing.KBinsDiscretizer.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y=None ) -> bigframes.ml.preprocessing.KBinsDiscretizer
Fit the estimator.
bigframes.ml.preprocessing.KBinsDiscretizer.fit_transform
fit_transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Optional[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ] = None, ) -> bigframes.dataframe.DataFrame
Fit to data, then transform it.
See more: bigframes.ml.preprocessing.KBinsDiscretizer.fit_transform
bigframes.ml.preprocessing.KBinsDiscretizer.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.preprocessing.KBinsDiscretizer.get_params
bigframes.ml.preprocessing.KBinsDiscretizer.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.base._T
Save the transformer as a BigQuery model.
See more: bigframes.ml.preprocessing.KBinsDiscretizer.to_gbq
bigframes.ml.preprocessing.KBinsDiscretizer.transform
transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Discretize the data.
See more: bigframes.ml.preprocessing.KBinsDiscretizer.transform
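KBinsDiscretizer's "discretize the data" step, in the simplest uniform-width strategy, maps each value to the index of an equal-width bin between the fitted min and max; an illustrative stdlib sketch:

```python
def uniform_bins(values, n_bins):
    """Assign each value an equal-width bin index in [0, n_bins - 1]."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins
    out = []
    for v in values:
        idx = int((v - lo) / width)
        out.append(min(idx, n_bins - 1))  # the top edge falls into the last bin
    return out
```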
bigframes.ml.preprocessing.LabelEncoder.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.preprocessing.LabelEncoder.fit
fit( y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.ml.preprocessing.LabelEncoder
Fit label encoder.
bigframes.ml.preprocessing.LabelEncoder.fit_transform
fit_transform( y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Fit label encoder and return encoded labels.
See more: bigframes.ml.preprocessing.LabelEncoder.fit_transform
bigframes.ml.preprocessing.LabelEncoder.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.preprocessing.LabelEncoder.get_params
bigframes.ml.preprocessing.LabelEncoder.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.base._T
Save the transformer as a BigQuery model.
bigframes.ml.preprocessing.LabelEncoder.transform
transform( y: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Transform y using label encoding.
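Label encoding assigns each distinct label an integer code, conventionally in sorted order; a minimal sketch of the fit/transform pair summarized above:

```python
def fit_labels(y):
    """Map each distinct label to an integer code, in sorted order."""
    return {label: code for code, label in enumerate(sorted(set(y)))}

def encode_labels(y, mapping):
    """Transform labels to their fitted integer codes."""
    return [mapping[label] for label in y]
```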
bigframes.ml.preprocessing.MaxAbsScaler.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.preprocessing.MaxAbsScaler.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y=None ) -> bigframes.ml.preprocessing.MaxAbsScaler
Compute the maximum absolute value to be used for later scaling.
bigframes.ml.preprocessing.MaxAbsScaler.fit_transform
fit_transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Optional[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ] = None, ) -> bigframes.dataframe.DataFrame
Fit to data, then transform it.
See more: bigframes.ml.preprocessing.MaxAbsScaler.fit_transform
bigframes.ml.preprocessing.MaxAbsScaler.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.preprocessing.MaxAbsScaler.get_params
bigframes.ml.preprocessing.MaxAbsScaler.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.base._T
Save the transformer as a BigQuery model.
bigframes.ml.preprocessing.MaxAbsScaler.transform
transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Scale the data.
bigframes.ml.preprocessing.MinMaxScaler.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.preprocessing.MinMaxScaler.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y=None ) -> bigframes.ml.preprocessing.MinMaxScaler
Compute the minimum and maximum to be used for later scaling.
bigframes.ml.preprocessing.MinMaxScaler.fit_transform
fit_transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Optional[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ] = None, ) -> bigframes.dataframe.DataFrame
Fit to data, then transform it.
See more: bigframes.ml.preprocessing.MinMaxScaler.fit_transform
bigframes.ml.preprocessing.MinMaxScaler.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.preprocessing.MinMaxScaler.get_params
bigframes.ml.preprocessing.MinMaxScaler.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.base._T
Save the transformer as a BigQuery model.
bigframes.ml.preprocessing.MinMaxScaler.transform
transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Scale the data.
bigframes.ml.preprocessing.OneHotEncoder.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.preprocessing.OneHotEncoder.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y=None ) -> bigframes.ml.preprocessing.OneHotEncoder
Fit OneHotEncoder to X.
bigframes.ml.preprocessing.OneHotEncoder.fit_transform
fit_transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Optional[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ] = None, ) -> bigframes.dataframe.DataFrame
Fit to data, then transform it.
See more: bigframes.ml.preprocessing.OneHotEncoder.fit_transform
bigframes.ml.preprocessing.OneHotEncoder.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.preprocessing.OneHotEncoder.get_params
bigframes.ml.preprocessing.OneHotEncoder.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.base._T
Save the transformer as a BigQuery model.
bigframes.ml.preprocessing.OneHotEncoder.transform
transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Transform X using one-hot encoding.
See more: bigframes.ml.preprocessing.OneHotEncoder.transform
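One-hot encoding replaces a categorical column with one 0/1 indicator column per distinct category; a stdlib sketch of the transform described above:

```python
def one_hot(values):
    """Return (categories, rows): each row is a 0/1 indicator vector with
    a single 1 at the position of that row's category."""
    categories = sorted(set(values))
    rows = [[1 if v == c else 0 for c in categories] for v in values]
    return categories, rows
```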
bigframes.ml.preprocessing.StandardScaler.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
bigframes.ml.preprocessing.StandardScaler.fit
fit( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y=None ) -> bigframes.ml.preprocessing.StandardScaler
Compute the mean and std to be used for later scaling.
bigframes.ml.preprocessing.StandardScaler.fit_transform
fit_transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series], y: typing.Optional[ typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ] = None, ) -> bigframes.dataframe.DataFrame
Fit to data, then transform it.
See more: bigframes.ml.preprocessing.StandardScaler.fit_transform
bigframes.ml.preprocessing.StandardScaler.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
See more: bigframes.ml.preprocessing.StandardScaler.get_params
bigframes.ml.preprocessing.StandardScaler.to_gbq
to_gbq(model_name: str, replace: bool = False) -> bigframes.ml.base._T
Save the transformer as a BigQuery model.
bigframes.ml.preprocessing.StandardScaler.transform
transform( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Perform standardization by centering and scaling.
See more: bigframes.ml.preprocessing.StandardScaler.transform
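The three scalers indexed above differ only in the statistics their fit step records: MaxAbsScaler divides by the maximum absolute value, MinMaxScaler maps [min, max] onto [0, 1], and StandardScaler centers by the mean and divides by the standard deviation. A stdlib sketch of all three transforms (illustrative, not the BigQuery ML implementations):

```python
import math

def max_abs_scale(xs):
    """MaxAbsScaler: divide by max |x|; preserves sign and sparsity."""
    m = max(abs(x) for x in xs)
    return [x / m for x in xs]

def min_max_scale(xs):
    """MinMaxScaler: map [min, max] linearly onto [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def standard_scale(xs):
    """StandardScaler: center by the mean, divide by the std deviation."""
    mean = sum(xs) / len(xs)
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
    return [(x - mean) / std for x in xs]
```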
bigframes.ml.remote.VertexAIModel.__repr__
__repr__()
Print the estimator's constructor with all non-default parameter values.
See more: bigframes.ml.remote.VertexAIModel.__repr__
bigframes.ml.remote.VertexAIModel.get_params
get_params(deep: bool = True) -> typing.Dict[str, typing.Any]
Get parameters for this estimator.
bigframes.ml.remote.VertexAIModel.predict
predict( X: typing.Union[bigframes.dataframe.DataFrame, bigframes.series.Series] ) -> bigframes.dataframe.DataFrame
Predict the result from the input DataFrame.
bigframes.operations.datetimes.DatetimeMethods.floor
floor(freq: str) -> bigframes.series.Series
Perform floor operation on the data to the specified freq.
See more: bigframes.operations.datetimes.DatetimeMethods.floor
bigframes.operations.datetimes.DatetimeMethods.normalize
normalize() -> bigframes.series.Series
Convert times to midnight.
See more: bigframes.operations.datetimes.DatetimeMethods.normalize
bigframes.operations.datetimes.DatetimeMethods.strftime
strftime(date_format: str) -> bigframes.series.Series
Convert to string Series using specified date_format.
See more: bigframes.operations.datetimes.DatetimeMethods.strftime
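The three datetime accessors above have direct stdlib analogues: floor truncates to a frequency, normalize zeroes the time-of-day, and strftime formats with a date_format string. For orientation, on a plain datetime rather than a bigframes Series:

```python
from datetime import datetime

ts = datetime(2024, 3, 5, 14, 37, 52)

# floor("h"): truncate to the containing hour
floored = ts.replace(minute=0, second=0, microsecond=0)

# normalize(): convert the timestamp to midnight
normalized = ts.replace(hour=0, minute=0, second=0, microsecond=0)

# strftime(date_format): render using a format string
text = ts.strftime("%Y-%m-%d %H:%M")
```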
bigframes.operations.plotting.PlotAccessor.area
area( x: typing.Optional[typing.Hashable] = None, y: typing.Optional[typing.Hashable] = None, stacked: bool = True, **kwargs )
Draw a stacked area plot.
bigframes.operations.plotting.PlotAccessor.hist
hist(by: typing.Optional[typing.Sequence[str]] = None, bins: int = 10, **kwargs)
Draw one histogram of the DataFrame’s columns.
bigframes.operations.plotting.PlotAccessor.line
line( x: typing.Optional[typing.Hashable] = None, y: typing.Optional[typing.Hashable] = None, **kwargs )
Plot Series or DataFrame as lines.
bigframes.operations.plotting.PlotAccessor.scatter
scatter( x: typing.Optional[typing.Hashable] = None, y: typing.Optional[typing.Hashable] = None, s: typing.Optional[ typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] ] = None, c: typing.Optional[ typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]] ] = None, **kwargs )
Create a scatter plot with varying marker point size and color.
See more: bigframes.operations.plotting.PlotAccessor.scatter
bigframes.operations.plotting.PlotAccessor
PlotAccessor(data)
Make plots of Series or DataFrame with the matplotlib backend.
bigframes.operations.strings.StringMethods.capitalize
capitalize() -> bigframes.series.Series
Convert strings in the Series/Index to be capitalized.
See more: bigframes.operations.strings.StringMethods.capitalize
bigframes.operations.strings.StringMethods.cat
cat( others: typing.Union[str, bigframes.series.Series], *, join: typing.Literal["outer", "left"] = "left" ) -> bigframes.series.Series
Concatenate strings in the Series/Index with given separator.
bigframes.operations.strings.StringMethods.center
center(width: int, fillchar: str = " ") -> bigframes.series.Series
Pad left and right side of strings in the Series/Index.
bigframes.operations.strings.StringMethods.contains
contains( pat, case: bool = True, flags: int = 0, *, regex: bool = True ) -> bigframes.series.Series
Test if pattern or regex is contained within a string of a Series or Index.
See more: bigframes.operations.strings.StringMethods.contains
bigframes.operations.strings.StringMethods.endswith
endswith(pat: typing.Union[str, tuple[str, ...]]) -> bigframes.series.Series
Test if the end of each string element matches a pattern.
See more: bigframes.operations.strings.StringMethods.endswith
bigframes.operations.strings.StringMethods.extract
extract(pat: str, flags: int = 0) -> bigframes.dataframe.DataFrame
Extract capture groups in the regex pat as columns in a DataFrame.
See more: bigframes.operations.strings.StringMethods.extract
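The string accessor follows pandas `.str` semantics: `contains` returns a boolean Series, while `extract` turns regex capture groups into DataFrame columns. A pandas sketch with made-up values:

```python
import pandas as pd

s = pd.Series(["order-123", "order-456", "refund-789"])

mask = s.str.contains(r"^order", regex=True)  # anchored regex match
ids = s.str.extract(r"(\w+)-(\d+)")           # two capture groups -> two columns

print(mask.tolist())        # [True, True, False]
print(ids.iloc[0].tolist())  # ['order', '123']
```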
bigframes.operations.strings.StringMethods.find
find( sub: str, start: typing.Optional[int] = None, end: typing.Optional[int] = None ) -> bigframes.series.Series
Return lowest indexes in each strings in the Series/Index.
bigframes.operations.strings.StringMethods.fullmatch
fullmatch(pat, case=True, flags=0) -> bigframes.series.Series
Determine if each string entirely matches a regular expression.
See more: bigframes.operations.strings.StringMethods.fullmatch
bigframes.operations.strings.StringMethods.get
get(i: int) -> bigframes.series.Series
Extract element from each component at specified position or with specified key.
bigframes.operations.strings.StringMethods.isalnum
isalnum() -> bigframes.series.Series
Check whether all characters in each string are alphanumeric.
See more: bigframes.operations.strings.StringMethods.isalnum
bigframes.operations.strings.StringMethods.isalpha
isalpha() -> bigframes.series.Series
Check whether all characters in each string are alphabetic.
See more: bigframes.operations.strings.StringMethods.isalpha
bigframes.operations.strings.StringMethods.isdecimal
isdecimal() -> bigframes.series.Series
Check whether all characters in each string are decimal.
See more: bigframes.operations.strings.StringMethods.isdecimal
bigframes.operations.strings.StringMethods.isdigit
isdigit() -> bigframes.series.Series
Check whether all characters in each string are digits.
See more: bigframes.operations.strings.StringMethods.isdigit
bigframes.operations.strings.StringMethods.islower
islower() -> bigframes.series.Series
Check whether all characters in each string are lowercase.
See more: bigframes.operations.strings.StringMethods.islower
bigframes.operations.strings.StringMethods.isnumeric
isnumeric() -> bigframes.series.Series
Check whether all characters in each string are numeric.
See more: bigframes.operations.strings.StringMethods.isnumeric
bigframes.operations.strings.StringMethods.isspace
isspace() -> bigframes.series.Series
Check whether all characters in each string are whitespace.
See more: bigframes.operations.strings.StringMethods.isspace
bigframes.operations.strings.StringMethods.isupper
isupper() -> bigframes.series.Series
Check whether all characters in each string are uppercase.
See more: bigframes.operations.strings.StringMethods.isupper
bigframes.operations.strings.StringMethods.len
len() -> bigframes.series.Series
Compute the length of each element in the Series/Index.
bigframes.operations.strings.StringMethods.ljust
ljust(width, fillchar=" ") -> bigframes.series.Series
Pad right side of strings in the Series/Index up to width.
bigframes.operations.strings.StringMethods.lower
lower() -> bigframes.series.Series
Convert strings in the Series/Index to lowercase.
bigframes.operations.strings.StringMethods.lstrip
lstrip() -> bigframes.series.Series
Remove leading characters.
bigframes.operations.strings.StringMethods.match
match(pat, case=True, flags=0) -> bigframes.series.Series
Determine if each string starts with a match of a regular expression.
bigframes.operations.strings.StringMethods.pad
pad(width, side="left", fillchar=" ") -> bigframes.series.Series
Pad strings in the Series/Index up to width.
bigframes.operations.strings.StringMethods.repeat
repeat(repeats: int) -> bigframes.series.Series
Duplicate each string in the Series or Index.
bigframes.operations.strings.StringMethods.replace
replace( pat: typing.Union[str, re.Pattern], repl: str, *, case: typing.Optional[bool] = None, flags: int = 0, regex: bool = False ) -> bigframes.series.Series
Replace each occurrence of pattern/regex in the Series/Index.
See more: bigframes.operations.strings.StringMethods.replace
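Note that the signature above defaults to `regex=False`, i.e. literal substring replacement. The same behavior in pandas (illustrative values only):

```python
import pandas as pd

s = pd.Series(["2024-01-01", "2024-02-01"])

# Literal (non-regex) replacement, matching the regex=False default above
dotted = s.str.replace("-", ".", regex=False)
print(dotted.tolist())  # ['2024.01.01', '2024.02.01']
```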
bigframes.operations.strings.StringMethods.reverse
reverse() -> bigframes.series.Series
Reverse strings in the Series.
See more: bigframes.operations.strings.StringMethods.reverse
bigframes.operations.strings.StringMethods.rjust
rjust(width, fillchar=" ") -> bigframes.series.Series
Pad left side of strings in the Series/Index up to width.
bigframes.operations.strings.StringMethods.rstrip
rstrip() -> bigframes.series.Series
Remove trailing characters.
bigframes.operations.strings.StringMethods.slice
slice( start: typing.Optional[int] = None, stop: typing.Optional[int] = None ) -> bigframes.series.Series
Slice substrings from each element in the Series or Index.
bigframes.operations.strings.StringMethods.startswith
startswith(pat: typing.Union[str, tuple[str, ...]]) -> bigframes.series.Series
Test if the start of each string element matches a pattern.
See more: bigframes.operations.strings.StringMethods.startswith
bigframes.operations.strings.StringMethods.strip
strip() -> bigframes.series.Series
Remove leading and trailing characters.
bigframes.operations.strings.StringMethods.upper
upper() -> bigframes.series.Series
Convert strings in the Series/Index to uppercase.
bigframes.operations.strings.StringMethods.zfill
zfill(width: int) -> bigframes.series.Series
Pad strings in the Series/Index by prepending '0' characters.
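The padding family (`zfill`, `ljust`, `rjust`, `center`, `pad`) behaves as in pandas: strings already at or beyond `width` are returned unchanged. A quick pandas illustration:

```python
import pandas as pd

s = pd.Series(["7", "42", "1234"])

print(s.str.zfill(4).tolist())       # ['0007', '0042', '1234']
print(s.str.rjust(4, "*").tolist())  # ['***7', '**42', '1234']
```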
bigframes.operations.structs.StructAccessor.explode
explode() -> bigframes.dataframe.DataFrame
Extract all child fields of a struct as a DataFrame.
See more: bigframes.operations.structs.StructAccessor.explode
bigframes.operations.structs.StructAccessor.field
field(name_or_index: str | int) -> bigframes.series.Series
Extract a child field of a struct as a Series.
bigframes.pandas.NamedAgg
NamedAgg(column, aggfunc)
Create new instance of NamedAgg(column, aggfunc).
See more: bigframes.pandas.NamedAgg
bigframes.series.Series.__array_ufunc__
__array_ufunc__( ufunc: numpy.ufunc, method: str, *inputs, **kwargs ) -> bigframes.series.Series
Used to support numpy ufuncs.
See more: bigframes.series.Series.array_ufunc
bigframes.series.Series.__rmatmul__
__rmatmul__(other)
Matrix multiplication using the binary @ operator in Python >= 3.5.
See more: bigframes.series.Series.rmatmul
bigframes.series.Series.abs
abs() -> bigframes.series.Series
Return a Series/DataFrame with absolute numeric value of each element.
See more: bigframes.series.Series.abs
bigframes.series.Series.add
add(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return addition of Series and other, element-wise (binary operator add).
See more: bigframes.series.Series.add
bigframes.series.Series.add_prefix
add_prefix(prefix: str, axis: int | str | None = None) -> bigframes.series.Series
Prefix labels with string prefix.
See more: bigframes.series.Series.add_prefix
bigframes.series.Series.add_suffix
add_suffix(suffix: str, axis: int | str | None = None) -> bigframes.series.Series
Suffix labels with string suffix.
See more: bigframes.series.Series.add_suffix
bigframes.series.Series.agg
agg( func: typing.Union[str, typing.Sequence[str]] ) -> typing.Union[typing.Any, bigframes.series.Series]
Aggregate using one or more operations over the specified axis.
See more: bigframes.series.Series.agg
bigframes.series.Series.aggregate
aggregate( func: typing.Union[str, typing.Sequence[str]] ) -> typing.Union[typing.Any, bigframes.series.Series]
Aggregate using one or more operations over the specified axis (alias for agg).
See more: bigframes.series.Series.aggregate
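`agg` (and its alias `aggregate`) accepts a single operation name or a sequence of them, returning a scalar or a Series respectively, as in pandas:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])

total = s.agg("sum")             # single op -> scalar
extrema = s.agg(["min", "max"])  # list of ops -> Series indexed by op name

print(total)              # 10
print(extrema.tolist())   # [1, 4]
```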
bigframes.series.Series.all
all() -> bool
Return whether all elements are True, potentially over an axis.
See more: bigframes.series.Series.all
bigframes.series.Series.any
any() -> bool
Return whether any element is True, potentially over an axis.
See more: bigframes.series.Series.any
bigframes.series.Series.apply
apply( func, by_row: typing.Union[typing.Literal["compat"], bool] = "compat" ) -> bigframes.series.Series
Invoke function on values of a Series.
See more: bigframes.series.Series.apply
bigframes.series.Series.argmax
argmax() -> int
Return int position of the largest value in the Series.
See more: bigframes.series.Series.argmax
bigframes.series.Series.argmin
argmin() -> int
Return int position of the smallest value in the Series.
See more: bigframes.series.Series.argmin
bigframes.series.Series.astype
astype( dtype: typing.Union[ typing.Literal[ "boolean", "Float64", "Int64", "int64[pyarrow]", "string", "string[pyarrow]", "timestamp[us, tz=UTC][pyarrow]", "timestamp[us][pyarrow]", "date32[day][pyarrow]", "time64[us][pyarrow]", "decimal128(38, 9)[pyarrow]", "decimal256(76, 38)[pyarrow]", "binary[pyarrow]", ], pandas.core.arrays.boolean.BooleanDtype, pandas.core.arrays.floating.Float64Dtype, pandas.core.arrays.integer.Int64Dtype, pandas.core.arrays.string_.StringDtype, pandas.core.dtypes.dtypes.ArrowDtype, geopandas.array.GeometryDtype, ] ) -> bigframes.series.Series
Cast the object to the specified dtype.
See more: bigframes.series.Series.astype
bigframes.series.Series.between
between(left, right, inclusive="both")
Return boolean Series equivalent to left <= series <= right.
See more: bigframes.series.Series.between
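`between` is boundary-inclusive by default; the `inclusive` argument ("both", "neither", "left", "right") controls which endpoints count, mirroring pandas:

```python
import pandas as pd

s = pd.Series([1, 5, 10])

both = s.between(2, 10)                        # inclusive="both" (default)
neither = s.between(2, 10, inclusive="neither")

print(both.tolist())     # [False, True, True]
print(neither.tolist())  # [False, True, False]
```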
bigframes.series.Series.bfill
bfill(*, limit: typing.Optional[int] = None) -> bigframes.series.Series
Fill NA/NaN values by using the next valid observation to fill the gap.
See more: bigframes.series.Series.bfill
bigframes.series.Series.clip
clip(lower, upper)
Trim values at input threshold(s).
See more: bigframes.series.Series.clip
bigframes.series.Series.copy
copy() -> bigframes.series.Series
Make a copy of this object's indices and data.
See more: bigframes.series.Series.copy
bigframes.series.Series.corr
corr(other: bigframes.series.Series, method="pearson", min_periods=None) -> float
Compute the correlation with the other Series.
See more: bigframes.series.Series.corr
bigframes.series.Series.count
count() -> int
Return number of non-NA/null observations in the Series.
See more: bigframes.series.Series.count
bigframes.series.Series.cov
cov(other: bigframes.series.Series) -> float
Compute covariance with Series, excluding missing values.
See more: bigframes.series.Series.cov
bigframes.series.Series.cummax
cummax() -> bigframes.series.Series
Return cumulative maximum over a DataFrame or Series axis.
See more: bigframes.series.Series.cummax
bigframes.series.Series.cummin
cummin() -> bigframes.series.Series
Return cumulative minimum over a DataFrame or Series axis.
See more: bigframes.series.Series.cummin
bigframes.series.Series.cumprod
cumprod() -> bigframes.series.Series
Return cumulative product over a DataFrame or Series axis.
See more: bigframes.series.Series.cumprod
bigframes.series.Series.cumsum
cumsum() -> bigframes.series.Series
Return cumulative sum over a DataFrame or Series axis.
See more: bigframes.series.Series.cumsum
bigframes.series.Series.diff
diff(periods: int = 1) -> bigframes.series.Series
First discrete difference of element.
See more: bigframes.series.Series.diff
bigframes.series.Series.div
div(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return floating division of Series and other, element-wise (alias for truediv).
See more: bigframes.series.Series.div
bigframes.series.Series.divide
divide(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return floating division of Series and other, element-wise (alias for truediv).
See more: bigframes.series.Series.divide
bigframes.series.Series.divmod
divmod(other) -> typing.Tuple[bigframes.series.Series, bigframes.series.Series]
Return integer division and modulo of Series and other, element-wise (binary operator divmod).
See more: bigframes.series.Series.divmod
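As the return annotation indicates, `divmod` yields a pair of Series — the integer quotient and the remainder — which can be unpacked directly, as in pandas:

```python
import pandas as pd

s = pd.Series([7, 9])

quotient, remainder = s.divmod(2)

print(quotient.tolist())   # [3, 4]
print(remainder.tolist())  # [1, 1]
```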
bigframes.series.Series.dot
dot(other)
Compute the dot product between the Series and the columns of other.
See more: bigframes.series.Series.dot
bigframes.series.Series.drop
drop( labels: typing.Any = None, *, axis: typing.Union[int, str] = 0, index: typing.Any = None, columns: typing.Union[typing.Hashable, typing.Iterable[typing.Hashable]] = None, level: typing.Optional[typing.Union[str, int]] = None ) -> bigframes.series.Series
Return Series with specified index labels removed.
See more: bigframes.series.Series.drop
bigframes.series.Series.drop_duplicates
drop_duplicates(*, keep: str = "first") -> bigframes.series.Series
Return Series with duplicate values removed.
See more: bigframes.series.Series.drop_duplicates
bigframes.series.Series.droplevel
droplevel( level: typing.Union[str, int, typing.Sequence[typing.Union[str, int]]], axis: int | str = 0, )
Return Series with requested index / column level(s) removed.
See more: bigframes.series.Series.droplevel
bigframes.series.Series.dropna
dropna( *, axis: int = 0, inplace: bool = False, how: typing.Optional[str] = None, ignore_index: bool = False ) -> bigframes.series.Series
Return a new Series with missing values removed.
See more: bigframes.series.Series.dropna
bigframes.series.Series.duplicated
duplicated(keep: str = "first") -> bigframes.series.Series
Indicate duplicate Series values.
See more: bigframes.series.Series.duplicated
bigframes.series.Series.eq
eq(other: object) -> bigframes.series.Series
Return equal of Series and other, element-wise (binary operator eq).
See more: bigframes.series.Series.eq
bigframes.series.Series.equals
equals( other: typing.Union[bigframes.series.Series, bigframes.dataframe.DataFrame] ) -> bool
Test whether two objects contain the same elements.
See more: bigframes.series.Series.equals
bigframes.series.Series.expanding
expanding(min_periods: int = 1) -> bigframes.core.window.Window
Provide expanding window calculations.
See more: bigframes.series.Series.expanding
bigframes.series.Series.explode
explode(*, ignore_index: typing.Optional[bool] = False) -> bigframes.series.Series
Transform each element of a list-like to a row.
See more: bigframes.series.Series.explode
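`explode` expands each list-like element into its own row, repeating the index label for every produced row, matching pandas semantics:

```python
import pandas as pd

s = pd.Series([[1, 2], [3]])

flat = s.explode()

print(flat.tolist())        # [1, 2, 3]
print(flat.index.tolist())  # [0, 0, 1] -- index labels repeat per source row
```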
bigframes.series.Series.ffill
ffill(*, limit: typing.Optional[int] = None) -> bigframes.series.Series
Fill NA/NaN values by propagating the last valid observation to next valid.
See more: bigframes.series.Series.ffill
bigframes.series.Series.fillna
fillna(value=None) -> bigframes.series.Series
Fill NA/NaN values using the specified method.
See more: bigframes.series.Series.fillna
bigframes.series.Series.filter
filter( items: typing.Optional[typing.Iterable] = None, like: typing.Optional[str] = None, regex: typing.Optional[str] = None, axis: typing.Optional[typing.Union[str, int]] = None, ) -> bigframes.series.Series
Subset the dataframe rows or columns according to the specified index labels.
See more: bigframes.series.Series.filter
bigframes.series.Series.floordiv
floordiv(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return integer division of Series and other, element-wise (binary operator floordiv).
See more: bigframes.series.Series.floordiv
bigframes.series.Series.ge
ge(other) -> bigframes.series.Series
Get 'greater than or equal to' of Series and other, element-wise (binary operator >=).
See more: bigframes.series.Series.ge
bigframes.series.Series.get
get(key, default=None)
Get item from object for given key (ex: DataFrame column).
See more: bigframes.series.Series.get
bigframes.series.Series.groupby
groupby( by: typing.Union[ typing.Hashable, bigframes.series.Series, typing.Sequence[typing.Union[typing.Hashable, bigframes.series.Series]], ] = None, axis=0, level: typing.Optional[ typing.Union[int, str, typing.Sequence[int], typing.Sequence[str]] ] = None, as_index: bool = True, *, dropna: bool = True ) -> bigframes.core.groupby.SeriesGroupBy
Group Series using a mapper or by a Series of columns.
See more: bigframes.series.Series.groupby
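Grouping a Series by an index level and aggregating works as in pandas; a minimal sketch with made-up labels:

```python
import pandas as pd

s = pd.Series([10, 20, 30, 40], index=["a", "a", "b", "b"])

# Group by the (only) index level and sum within each group
totals = s.groupby(level=0).sum()

print(totals.to_dict())  # {'a': 30, 'b': 70}
```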
bigframes.series.Series.gt
gt(other) -> bigframes.series.Series
Get 'greater than' of Series and other, element-wise (binary operator >).
See more: bigframes.series.Series.gt
bigframes.series.Series.head
head(n: int = 5) -> bigframes.series.Series
Return the first n rows.
See more: bigframes.series.Series.head
bigframes.series.Series.idxmax
idxmax() -> typing.Hashable
Return the row label of the maximum value.
See more: bigframes.series.Series.idxmax
bigframes.series.Series.idxmin
idxmin() -> typing.Hashable
Return the row label of the minimum value.
See more: bigframes.series.Series.idxmin
bigframes.series.Series.interpolate
interpolate(method: str = "linear") -> bigframes.series.Series
Fill NaN values using an interpolation method.
See more: bigframes.series.Series.interpolate
bigframes.series.Series.isin
isin(values) -> "Series" | None
Whether elements in Series are contained in values.
See more: bigframes.series.Series.isin
bigframes.series.Series.isna
isna() -> bigframes.series.Series
Detect missing values.
See more: bigframes.series.Series.isna
bigframes.series.Series.isnull
isnull() -> bigframes.series.Series
Detect missing values.
See more: bigframes.series.Series.isnull
bigframes.series.Series.kurt
kurt()
Return unbiased kurtosis over requested axis.
See more: bigframes.series.Series.kurt
bigframes.series.Series.kurtosis
kurtosis()
Return unbiased kurtosis over requested axis (alias for kurt).
See more: bigframes.series.Series.kurtosis
bigframes.series.Series.le
le(other) -> bigframes.series.Series
Get 'less than or equal to' of Series and other, element-wise (binary operator <=).
See more: bigframes.series.Series.le
bigframes.series.Series.lt
lt(other) -> bigframes.series.Series
Get 'less than' of Series and other, element-wise (binary operator <).
See more: bigframes.series.Series.lt
bigframes.series.Series.map
map( arg: typing.Union[typing.Mapping, bigframes.series.Series], na_action: typing.Optional[str] = None, *, verify_integrity: bool = False ) -> bigframes.series.Series
Map values of Series according to an input mapping or function.
See more: bigframes.series.Series.map
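`map` substitutes each value via a mapping (values missing from the mapping become NA), following pandas behavior:

```python
import pandas as pd

s = pd.Series(["cat", "dog", "cat"])

mapped = s.map({"cat": "feline", "dog": "canine"})

print(mapped.tolist())  # ['feline', 'canine', 'feline']
```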
bigframes.series.Series.mask
mask(cond, other=None) -> bigframes.series.Series
Replace values where the condition is True.
See more: bigframes.series.Series.mask
bigframes.series.Series.max
max() -> typing.Any
Return the maximum of the values over the requested axis.
See more: bigframes.series.Series.max
bigframes.series.Series.mean
mean() -> float
Return the mean of the values over the requested axis.
See more: bigframes.series.Series.mean
bigframes.series.Series.median
median(*, exact: bool = False) -> float
Return the median of the values over the requested axis.
See more: bigframes.series.Series.median
bigframes.series.Series.min
min() -> typing.Any
Return the minimum of the values over the requested axis.
See more: bigframes.series.Series.min
bigframes.series.Series.mod
mod(other) -> bigframes.series.Series
Return modulo of Series and other, element-wise (binary operator mod).
See more: bigframes.series.Series.mod
bigframes.series.Series.mode
mode() -> bigframes.series.Series
Return the mode(s) of the Series.
See more: bigframes.series.Series.mode
bigframes.series.Series.mul
mul(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return multiplication of Series and other, element-wise (binary operator mul).
See more: bigframes.series.Series.mul
bigframes.series.Series.multiply
multiply(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return multiplication of Series and other, element-wise (alias for mul).
See more: bigframes.series.Series.multiply
bigframes.series.Series.ne
ne(other: object) -> bigframes.series.Series
Return not equal of Series and other, element-wise (binary operator ne).
See more: bigframes.series.Series.ne
bigframes.series.Series.nlargest
nlargest(n: int = 5, keep: str = "first") -> bigframes.series.Series
Return the largest n elements.
See more: bigframes.series.Series.nlargest
bigframes.series.Series.notna
notna() -> bigframes.series.Series
Detect existing (non-missing) values.
See more: bigframes.series.Series.notna
bigframes.series.Series.notnull
notnull() -> bigframes.series.Series
Detect existing (non-missing) values.
See more: bigframes.series.Series.notnull
bigframes.series.Series.nsmallest
nsmallest(n: int = 5, keep: str = "first") -> bigframes.series.Series
Return the smallest n elements.
See more: bigframes.series.Series.nsmallest
bigframes.series.Series.nunique
nunique() -> int
Return number of unique elements in the object.
See more: bigframes.series.Series.nunique
bigframes.series.Series.pad
pad(*, limit: typing.Optional[int] = None) -> bigframes.series.Series
Fill NA/NaN values by propagating the last valid observation forward (alias for ffill).
See more: bigframes.series.Series.pad
bigframes.series.Series.pct_change
pct_change(periods: int = 1) -> bigframes.series.Series
Fractional change between the current and a prior element.
See more: bigframes.series.Series.pct_change
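`pct_change` computes the fractional change from each element to the prior one, so the first element is always NA, as in pandas (the first value is the baseline):

```python
import pandas as pd

s = pd.Series([100.0, 110.0, 99.0])

change = s.pct_change()  # ~ [NaN, 0.1, -0.1]

print(change.round(2).tolist()[1:])  # [0.1, -0.1]
```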
bigframes.series.Series.pipe
pipe(func: Callable[..., T] | tuple[Callable[..., T], str], *args, **kwargs) -> T
Apply chainable functions that expect Series or DataFrames.
See more: bigframes.series.Series.pipe
bigframes.series.Series.pow
pow(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return exponential power of Series and other, element-wise (binary operator pow).
See more: bigframes.series.Series.pow
bigframes.series.Series.prod
prod() -> float
Return the product of the values over the requested axis.
See more: bigframes.series.Series.prod
bigframes.series.Series.product
product() -> float
Return the product of the values over the requested axis (alias for prod).
See more: bigframes.series.Series.product
bigframes.series.Series.radd
radd(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return addition of Series and other, element-wise (binary operator radd).
See more: bigframes.series.Series.radd
bigframes.series.Series.rank
rank( axis=0, method: str = "average", numeric_only=False, na_option: str = "keep", ascending: bool = True, ) -> bigframes.series.Series
Compute numerical data ranks (1 through n) along axis.
See more: bigframes.series.Series.rank
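The `method` argument controls how ties are ranked; with the default `"average"`, tied values share the mean of their positional ranks, as in pandas:

```python
import pandas as pd

s = pd.Series([10, 30, 20, 30])

avg_ranks = s.rank()              # ties get the average of ranks 3 and 4
min_ranks = s.rank(method="min")  # ties get the lowest rank in the group

print(avg_ranks.tolist())  # [1.0, 3.5, 2.0, 3.5]
print(min_ranks.tolist())  # [1.0, 3.0, 2.0, 3.0]
```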
bigframes.series.Series.rdiv
rdiv(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return floating division of Series and other, element-wise (alias for rtruediv).
See more: bigframes.series.Series.rdiv
bigframes.series.Series.rdivmod
rdivmod(other) -> typing.Tuple[bigframes.series.Series, bigframes.series.Series]
Return integer division and modulo of Series and other, element-wise (binary operator rdivmod).
See more: bigframes.series.Series.rdivmod
bigframes.series.Series.reindex
reindex(index=None, *, validate: typing.Optional[bool] = None)
Conform Series to new index with optional filling logic.
See more: bigframes.series.Series.reindex
bigframes.series.Series.reindex_like
reindex_like( other: bigframes.series.Series, *, validate: typing.Optional[bool] = None )
Return an object with matching indices as other object.
See more: bigframes.series.Series.reindex_like
bigframes.series.Series.rename
rename( index: typing.Union[typing.Hashable, typing.Mapping[typing.Any, typing.Any]] = None, **kwargs ) -> bigframes.series.Series
Alter Series index labels or name.
See more: bigframes.series.Series.rename
bigframes.series.Series.rename_axis
rename_axis( mapper: typing.Union[typing.Hashable, typing.Sequence[typing.Hashable]], **kwargs ) -> bigframes.series.Series
Set the name of the axis for the index or columns.
See more: bigframes.series.Series.rename_axis
bigframes.series.Series.reorder_levels
reorder_levels( order: typing.Union[str, int, typing.Sequence[typing.Union[str, int]]], axis: int | str = 0, )
Rearrange index levels using input order.
See more: bigframes.series.Series.reorder_levels
bigframes.series.Series.replace
replace(to_replace: typing.Any, value: typing.Any = None, *, regex: bool = False)
Replace values given in to_replace with value.
See more: bigframes.series.Series.replace
bigframes.series.Series.reset_index
reset_index( *, name: typing.Optional[str] = None, drop: bool = False ) -> bigframes.dataframe.DataFrame | bigframes.series.Series
Generate a new DataFrame or Series with the index reset.
See more: bigframes.series.Series.reset_index
bigframes.series.Series.rfloordiv
rfloordiv(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return integer division of Series and other, element-wise (binary operator rfloordiv).
See more: bigframes.series.Series.rfloordiv
bigframes.series.Series.rmod
rmod(other) -> bigframes.series.Series
Return modulo of Series and other, element-wise (binary operator rmod).
See more: bigframes.series.Series.rmod
bigframes.series.Series.rmul
rmul(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return multiplication of Series and other, element-wise (binary operator rmul).
See more: bigframes.series.Series.rmul
bigframes.series.Series.rolling
rolling(window: int, min_periods=None) -> bigframes.core.window.Window
Provide rolling window calculations.
See more: bigframes.series.Series.rolling
bigframes.series.Series.round
round(decimals=0) -> bigframes.series.Series
Round each value in a Series to the given number of decimals.
See more: bigframes.series.Series.round
bigframes.series.Series.rpow
rpow(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return exponential power of Series and other, element-wise (binary operator rpow).
See more: bigframes.series.Series.rpow
bigframes.series.Series.rsub
rsub(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return subtraction of Series and other, element-wise (binary operator rsub).
See more: bigframes.series.Series.rsub
bigframes.series.Series.rtruediv
rtruediv(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return floating division of Series and other, element-wise (binary operator rtruediv).
See more: bigframes.series.Series.rtruediv
bigframes.series.Series.sample
sample( n: typing.Optional[int] = None, frac: typing.Optional[float] = None, *, random_state: typing.Optional[int] = None, sort: typing.Optional[typing.Union[bool, typing.Literal["random"]]] = "random" ) -> bigframes.series.Series
Return a random sample of items from an axis of object.
See more: bigframes.series.Series.sample
bigframes.series.Series.shift
shift(periods: int = 1) -> bigframes.series.Series
Shift index by desired number of periods.
See more: bigframes.series.Series.shift
bigframes.series.Series.skew
skew()
Return unbiased skew over requested axis.
See more: bigframes.series.Series.skew
bigframes.series.Series.sort_index
sort_index( *, axis=0, ascending=True, na_position="last" ) -> bigframes.series.Series
Sort Series by index labels.
See more: bigframes.series.Series.sort_index
bigframes.series.Series.sort_values
sort_values( *, axis=0, ascending=True, kind: str = "quicksort", na_position="last" ) -> bigframes.series.Series
Sort by the values.
See more: bigframes.series.Series.sort_values
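With `na_position="last"` (the default), missing values sort to the end regardless of `ascending`, matching pandas:

```python
import pandas as pd

s = pd.Series([3, 1, None, 2])

ordered = s.sort_values()  # NA lands last by default

print(ordered.dropna().tolist())  # [1.0, 2.0, 3.0]
```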
bigframes.series.Series.std
std() -> float
Return sample standard deviation over requested axis.
See more: bigframes.series.Series.std
bigframes.series.Series.sub
sub(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return subtraction of Series and other, element-wise (binary operator sub).
See more: bigframes.series.Series.sub
bigframes.series.Series.subtract
subtract(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return subtraction of Series and other, element-wise (alias for sub).
See more: bigframes.series.Series.subtract
bigframes.series.Series.sum
sum() -> float
Return the sum of the values over the requested axis.
See more: bigframes.series.Series.sum
bigframes.series.Series.swaplevel
swaplevel(i: int = -2, j: int = -1)
Swap levels i and j in a MultiIndex.
See more: bigframes.series.Series.swaplevel
bigframes.series.Series.tail
tail(n: int = 5) -> bigframes.series.Series
Return the last n rows.
See more: bigframes.series.Series.tail
bigframes.series.Series.to_csv
to_csv( path_or_buf: str, sep=",", *, header: bool = True, index: bool = True ) -> None
Write object to a comma-separated values (csv) file on Cloud Storage.
See more: bigframes.series.Series.to_csv
bigframes.series.Series.to_dict
to_dict(into: type[dict] = &lt;class 'dict'&gt;) -> dict
Convert Series to {label -> value} dict or dict-like object.
See more: bigframes.series.Series.to_dict
bigframes.series.Series.to_excel
to_excel(excel_writer, sheet_name="Sheet1", **kwargs) -> None
Write Series to an Excel sheet.
See more: bigframes.series.Series.to_excel
bigframes.series.Series.to_frame
to_frame(name: typing.Hashable = None) -> bigframes.dataframe.DataFrame
Convert Series to DataFrame.
See more: bigframes.series.Series.to_frame
bigframes.series.Series.to_json
to_json( path_or_buf: str, orient: typing.Literal[ "split", "records", "index", "columns", "values", "table" ] = "columns", *, lines: bool = False, index: bool = True ) -> None
Convert the object to a JSON string, written to Cloud Storage.
See more: bigframes.series.Series.to_json
bigframes.series.Series.to_latex
to_latex( buf=None, columns=None, header=True, index=True, **kwargs ) -> typing.Optional[str]
Render object to a LaTeX tabular, longtable, or nested table.
See more: bigframes.series.Series.to_latex
bigframes.series.Series.to_list
to_list() -> list
Return a list of the values.
See more: bigframes.series.Series.to_list
bigframes.series.Series.to_markdown
to_markdown( buf: typing.Optional[typing.IO[str]] = None, mode: str = "wt", index: bool = True, **kwargs ) -> typing.Optional[str]
Print Series in Markdown-friendly format.
See more: bigframes.series.Series.to_markdown
bigframes.series.Series.to_numpy
to_numpy(dtype=None, copy=False, na_value=None, **kwargs) -> numpy.ndarray
A NumPy ndarray representing the values in this Series or Index.
See more: bigframes.series.Series.to_numpy
bigframes.series.Series.to_pandas
to_pandas( max_download_size: typing.Optional[int] = None, sampling_method: typing.Optional[str] = None, random_state: typing.Optional[int] = None, *, ordered: bool = True ) -> pandas.core.series.Series
Download the data and return it as a pandas Series.
See more: bigframes.series.Series.to_pandas
bigframes.series.Series.to_pickle
to_pickle(path, **kwargs) -> None
Pickle (serialize) object to file.
See more: bigframes.series.Series.to_pickle
bigframes.series.Series.to_string
to_string( buf=None, na_rep="NaN", float_format=None, header=True, index=True, length=False, dtype=False, name=False, max_rows=None, min_rows=None, ) -> typing.Optional[str]
Render a string representation of the Series.
See more: bigframes.series.Series.to_string
bigframes.series.Series.to_xarray
to_xarray()
Return an xarray object from the pandas object.
See more: bigframes.series.Series.to_xarray
bigframes.series.Series.tolist
tolist() -> list
Return a list of the values.
See more: bigframes.series.Series.tolist
bigframes.series.Series.transpose
transpose() -> bigframes.series.Series
Return the transpose, which is by definition self.
See more: bigframes.series.Series.transpose
bigframes.series.Series.truediv
truediv(other: float | int | bigframes.series.Series) -> bigframes.series.Series
Return floating division of Series and other, element-wise (binary operator truediv).
See more: bigframes.series.Series.truediv
bigframes.series.Series.unique
unique() -> bigframes.series.Series
Return unique values of Series object.
See more: bigframes.series.Series.unique
bigframes.series.Series.unstack
unstack( level: typing.Union[str, int, typing.Sequence[typing.Union[str, int]]] = -1 )
Unstack, also known as pivot, Series with MultiIndex to produce DataFrame.
See more: bigframes.series.Series.unstack
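The pivot behavior can be sketched in pandas, which bigframes mirrors: the chosen index level becomes the columns of the resulting DataFrame, with missing combinations filled with NA.

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [("a", "x"), ("a", "y"), ("b", "x")], names=["outer", "inner"]
)
s = pd.Series([1, 2, 3], index=idx)

# Pivot the innermost index level ("inner") into columns.
df = s.unstack(level=-1)
```

Here `("b", "y")` has no value in the input, so `df.loc["b", "y"]` is NA.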
bigframes.series.Series.value_counts
value_counts( normalize: bool = False, sort: bool = True, ascending: bool = False, *, dropna: bool = True )
Return a Series containing counts of unique values.
See more: bigframes.series.Series.value_counts
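A pandas sketch of the parameter semantics: `dropna=True` excludes missing values from the counts, and `normalize=True` returns proportions instead of counts.

```python
import pandas as pd

s = pd.Series(["a", "b", "a", "a", None])

counts = s.value_counts()               # dropna=True: the None is excluded
freqs = s.value_counts(normalize=True)  # proportions of non-null values
```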
bigframes.series.Series.var
var() -> float
Return unbiased variance over requested axis.
See more: bigframes.series.Series.var
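"Unbiased" here means the sample variance, dividing by `n - 1` rather than `n`; a pandas sketch:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# Sample variance: sum((x - mean)**2) / (n - 1)
# mean = 2.5, squared deviations sum to 5.0, n - 1 = 3 -> 5/3
v = s.var()
```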
bigframes.series.Series.where
where(cond, other=None)
Replace values where the condition is False.
See more: bigframes.series.Series.where
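Note the direction of the condition: values where `cond` is True are kept, and values where it is False are replaced with `other`. A pandas sketch:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])

# Keep values where s > 2; replace the rest with 0.
masked = s.where(s > 2, other=0)
```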
bigframes.session.Session.close
close()
No-op.
See more: bigframes.session.Session.close
bigframes.session.Session.read_csv
read_csv( filepath_or_buffer: str | IO["bytes"], *, sep: Optional[str] = ",", header: Optional[int] = 0, names: Optional[ Union[MutableSequence[Any], np.ndarray[Any, Any], Tuple[Any, ...], range] ] = None, index_col: Optional[ Union[int, str, Sequence[Union[str, int]], Literal[False]] ] = None, usecols: Optional[ Union[ MutableSequence[str], Tuple[str, ...], Sequence[int], pandas.Series, pandas.Index, np.ndarray[Any, Any], Callable[[Any], bool], ] ] = None, dtype: Optional[Dict] = None, engine: Optional[ Literal["c", "python", "pyarrow", "python-fwf", "bigquery"] ] = None, encoding: Optional[str] = None, **kwargs ) -> dataframe.DataFrame
Loads a DataFrame from a comma-separated values (CSV) file, either local or from Cloud Storage.
See more: bigframes.session.Session.read_csv
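The pandas-compatible parameters (`sep`, `header`, `index_col`, `usecols`, `dtype`) behave as in `pandas.read_csv`; a local sketch of those parameters (bigframes additionally accepts `gs://` paths and `engine="bigquery"` per the signature above, which this local example does not exercise):

```python
import io

import pandas as pd

csv_data = "id,name,score\n1,alice,0.9\n2,bob,0.8\n"

df = pd.read_csv(
    io.StringIO(csv_data),
    sep=",",                      # field delimiter
    header=0,                     # first row holds column names
    index_col="id",               # use the "id" column as the index
    usecols=["id", "name", "score"],
    dtype={"score": float},       # force a column's dtype
)
```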
bigframes.session.Session.read_gbq
read_gbq( query_or_table: str, *, index_col: Iterable[str] | str = (), columns: Iterable[str] = (), configuration: Optional[Dict] = None, max_results: Optional[int] = None, filters: third_party_pandas_gbq.FiltersType = (), use_cache: Optional[bool] = None, col_order: Iterable[str] = () ) -> dataframe.DataFrame
Loads a DataFrame from BigQuery.
See more: bigframes.session.Session.read_gbq
bigframes.session.Session.read_gbq_function
read_gbq_function(function_name: str)
Loads a BigQuery function from BigQuery.
See more: bigframes.session.Session.read_gbq_function
bigframes.session.Session.read_gbq_model
read_gbq_model(model_name: str)
Loads a BigQuery ML model from BigQuery.
See more: bigframes.session.Session.read_gbq_model
bigframes.session.Session.read_gbq_query
read_gbq_query( query: str, *, index_col: Iterable[str] | str = (), columns: Iterable[str] = (), configuration: Optional[Dict] = None, max_results: Optional[int] = None, use_cache: Optional[bool] = None, col_order: Iterable[str] = () ) -> dataframe.DataFrame
Turn a SQL query into a DataFrame.
See more: bigframes.session.Session.read_gbq_query
bigframes.session.Session.read_gbq_table
read_gbq_table( query: str, *, index_col: Iterable[str] | str = (), columns: Iterable[str] = (), max_results: Optional[int] = None, filters: third_party_pandas_gbq.FiltersType = (), use_cache: bool = True, col_order: Iterable[str] = () ) -> dataframe.DataFrame
Turn a BigQuery table into a DataFrame.
See more: bigframes.session.Session.read_gbq_table
bigframes.session.Session.read_json
read_json( path_or_buf: str | IO["bytes"], *, orient: Literal[ "split", "records", "index", "columns", "values", "table" ] = "columns", dtype: Optional[Dict] = None, encoding: Optional[str] = None, lines: bool = False, engine: Literal["ujson", "pyarrow", "bigquery"] = "ujson", **kwargs ) -> dataframe.DataFrame
Convert a JSON string to DataFrame object.
See more: bigframes.session.Session.read_json
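The `lines=True` mode reads newline-delimited JSON, one record per line (the same format BigQuery load jobs accept); a pandas sketch:

```python
import io

import pandas as pd

# Newline-delimited JSON: one object per line.
ndjson = '{"user": "alice", "n": 1}\n{"user": "bob", "n": 2}\n'

df = pd.read_json(io.StringIO(ndjson), lines=True)
```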
bigframes.session.Session.read_pandas
Loads DataFrame from a pandas DataFrame.
See more: bigframes.session.Session.read_pandas
bigframes.session.Session.read_parquet
read_parquet( path: str | IO["bytes"], *, engine: str = "auto" ) -> dataframe.DataFrame
Load a Parquet object from the file path (local or Cloud Storage), returning a DataFrame.
See more: bigframes.session.Session.read_parquet
bigframes.session.Session.read_pickle
read_pickle( filepath_or_buffer: FilePath | ReadPickleBuffer, compression: CompressionOptions = "infer", storage_options: StorageOptions = None, )
Load pickled BigFrames object (or any object) from file.
See more: bigframes.session.Session.read_pickle
bigframes.session.Session.remote_function
remote_function( input_types: typing.List[type], output_type: type, dataset: typing.Optional[str] = None, bigquery_connection: typing.Optional[str] = None, reuse: bool = True, name: typing.Optional[str] = None, packages: typing.Optional[typing.Sequence[str]] = None, cloud_function_service_account: typing.Optional[str] = None, cloud_function_kms_key_name: typing.Optional[str] = None, cloud_function_docker_repository: typing.Optional[str] = None, )
Decorator to turn a user defined function into a BigQuery remote function.