Add deep belief network #13646
Conversation
Implement a multi-layer DBN constructed by stacking Restricted Boltzmann Machines trained with contrastive divergence. The implementation uses Gibbs sampling for binary units and manual weight updates with NumPy, without external deep learning frameworks. Includes layer-wise pretraining, a reconstruction method, and visualization of original vs reconstructed samples. This code serves as an educational and foundational contribution for unsupervised feature learning and can be extended for fine-tuning deep neural networks.
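For readers unfamiliar with the training procedure described above, here is a minimal sketch of a single CD-1 update for one RBM in plain NumPy; it is illustrative only, and the function and variable names are hypothetical rather than taken from the PR's code:

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))


def cd1_step(v0, weights, hidden_bias, visible_bias, learning_rate=0.01):
    """One CD-1 update on a batch of binary visible vectors v0 (batch x n_visible)."""
    # Positive phase: hidden probabilities and samples driven by the data.
    h0_probs = sigmoid(v0 @ weights + hidden_bias)
    h0_samples = (rng.random(h0_probs.shape) < h0_probs).astype(float)
    # Negative phase: one Gibbs step down to the visible layer and back up.
    v1_probs = sigmoid(h0_samples @ weights.T + visible_bias)
    v1_samples = (rng.random(v1_probs.shape) < v1_probs).astype(float)
    h1_probs = sigmoid(v1_samples @ weights + hidden_bias)
    # Approximate gradient: data statistics minus model statistics.
    batch_size = v0.shape[0]
    weights += learning_rate * (v0.T @ h0_probs - v1_samples.T @ h1_probs) / batch_size
    hidden_bias += learning_rate * np.mean(h0_probs - h1_probs, axis=0)
    visible_bias += learning_rate * np.mean(v0 - v1_samples, axis=0)
    # Mean squared reconstruction error, useful for monitoring pretraining.
    return float(np.mean((v0 - v1_probs) ** 2))
```

Layer-wise pretraining then repeats this update for each stacked RBM, feeding the hidden representation of one trained layer as the visible input of the next.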
This reverts commit d212889.
…iance
Performed extensive refactoring to conform to PEP8 and Ruff linting rules across the entire DBN-RBM implementation.
- Fixed line lengths and wrapped docstrings for readability.
- Replaced legacy NumPy random calls with numpy.random.Generator for modern style.
- Marked unused variables by prefixing with underscore to eliminate warnings.
- Sorted and cleaned import statements.
- Renamed variables and arguments for proper casing to adhere to style guidelines.
- Improved code formatting, spacing, and consistency.
- Added doctests.
No functional changes were introduced, only stylistic and maintainability improvements.
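For reference, the numpy.random.Generator change mentioned in this commit replaces the legacy global-state API with an explicit generator object; a small sketch of the pattern (not the PR's exact code):

```python
import numpy as np

# Legacy style (implicit global state):
# np.random.seed(0)
# samples = np.random.random((2, 3))

# Modern style with an explicit Generator instance:
rng = np.random.default_rng(0)
samples = rng.random((2, 3))          # uniform floats in [0, 1)
noise = rng.standard_normal((2, 3))   # standard normal draws
```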
Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.
algorithms-keeper commands and options

algorithms-keeper actions can be triggered by commenting on this PR:
- @algorithms-keeper review to trigger the checks for only added pull request files
- @algorithms-keeper review-all to trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.

NOTE: Commands are in beta and so this feature is restricted only to a member or owner of the organization.
n_visible: int,
n_hidden: int,
learning_rate: float = 0.01,
k: int = 1,
Please provide descriptive name for the parameter: k
self.hidden_bias = np.zeros(n_hidden)
self.visible_bias = np.zeros(n_visible)

def sigmoid(self, x: np.ndarray) -> np.ndarray:
Please provide descriptive name for the parameter: x
"""
return (self.rng.random(probs.shape) < probs).astype(float)

def sample_hidden_given_visible(
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function sample_hidden_given_visible
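Since the sampler is stochastic, one practical approach is a doctest that checks only output shapes and the binary range of the samples rather than exact values. A sketch, in which the class name RBM and the constructor defaults are assumptions inferred from the diff:

```python
>>> import numpy as np
>>> rbm = RBM(n_visible=4, n_hidden=2)  # class name assumed; defaults from the diff
>>> probs, samples = rbm.sample_hidden_given_visible(np.zeros((3, 4)))
>>> probs.shape == samples.shape == (3, 2)
True
>>> bool(np.all((samples == 0) | (samples == 1)))
True
```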
return (self.rng.random(probs.shape) < probs).astype(float)

def sample_hidden_given_visible(
    self, v: np.ndarray
Please provide descriptive name for the parameter: v
hid_samples = self.sample_prob(hid_probs)
return hid_probs, hid_samples

def sample_visible_given_hidden(
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function sample_visible_given_hidden
samples = self.sample_prob(probs)
return probs, samples

def sample_v(
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function sample_v
return probs, samples

def sample_v(
    self, y: np.ndarray, w: np.ndarray, vb: np.ndarray
Please provide descriptive name for the parameter: y
Please provide descriptive name for the parameter: w
samples = self.sample_prob(probs)
return probs, samples

def generate_input_for_layer(self, layer_index: int, x: np.ndarray) -> np.ndarray:
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function generate_input_for_layer
Please provide descriptive name for the parameter: x
samples.append(x_dash)
return np.mean(np.stack(samples, axis=0), axis=0)

def train_dbn(self, x: np.ndarray) -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function train_dbn
Please provide descriptive name for the parameter: x
self.layer_params[idx]["vb"] = rbm.visible_bias
print(f"Finished training layer {idx + 1}/{len(self.layers)}")

def reconstruct(self, x: np.ndarray) -> tuple[np.ndarray, np.ndarray, float]:
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function reconstruct
Please provide descriptive name for the parameter: x
"""
return (self.rng.random(probabilities.shape) < probabilities).astype(float)

def sample_hidden_given_visible(
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function sample_hidden_given_visible
hid_samples = self.sample_prob(hid_probs)
return hid_probs, hid_samples

def sample_visible_given_hidden(
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function sample_visible_given_hidden
vis_samples = self.sample_prob(vis_probs)
return vis_probs, vis_samples

def contrastive_divergence(self, visible_zero: np.ndarray) -> float:
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function contrastive_divergence
"""
return 1.0 / (1.0 + np.exp(-input_array))

def sample_prob(self, probabilities: np.ndarray) -> np.ndarray:
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function sample_prob
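For sample_prob in particular, the probability extremes 0.0 and 1.0 are deterministic regardless of the seed, because the generator draws values in [0, 1). A doctest along these lines would therefore be stable (class name and constructor arguments assumed):

```python
>>> import numpy as np
>>> rbm = RBM(n_visible=2, n_hidden=2)  # hypothetical instantiation
>>> rbm.sample_prob(np.array([0.0, 1.0]))
array([0., 1.])
```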
rng = np.random.default_rng()
return (rng.random(probabilities.shape) < probabilities).astype(float)

def sample_h(
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function sample_h
samples = self.sample_prob(probs)
return probs, samples

def sample_v(
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function sample_v
samples = self.sample_prob(probs)
return probs, samples

def generate_input_for_layer(
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function generate_input_for_layer
samples.append(x_dash)
return np.mean(np.stack(samples, axis=0), axis=0)

def train_dbn(self, training_data: np.ndarray) -> None:
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function train_dbn
self.layer_params[idx]["vb"] = rbm.visible_bias
print(f"Finished training layer {idx + 1}/{len(self.layers)}")

def reconstruct(
As there is no test file in this pull request nor any test function or class in the file neural_network/deep_belief_network.py, please provide doctest for the function reconstruct
Describe your change:
This pull request adds a Deep Belief Network (DBN) implementation constructed by stacking Restricted Boltzmann Machines (RBMs) trained via contrastive divergence. The implementation uses Gibbs sampling for binary units and manual NumPy weight updates, without relying on external deep learning frameworks.
This generative probabilistic model enriches the repository with a fundamental unsupervised feature learning algorithm, complementing existing discriminative models.
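For illustration, a rough usage sketch of the intended flow; the DBN class name, constructor signature, and layer sizes below are assumptions, while train_dbn and reconstruct mirror the methods shown in the diff:

```python
import numpy as np

rng = np.random.default_rng(0)
data = (rng.random((100, 64)) > 0.5).astype(float)  # toy binary dataset

dbn = DBN(layer_sizes=[64, 32, 16])   # hypothetical constructor
dbn.train_dbn(data)                   # greedy layer-wise RBM pretraining
originals, reconstructions, error = dbn.reconstruct(data[:10])
print(f"Mean reconstruction error: {error:.4f}")
```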
Fixes #13643
Checklist: