
Conversation

Collaborator

@HydrogenSulfate HydrogenSulfate commented Jan 15, 2025

Add C++ inference with LAMMPS.

Summary by CodeRabbit

  • New Features

    • Enhanced the model export process with improved input and output handling and clearer logging.
  • Chores

    • Updated pre-commit hook settings and streamlined plugin download messages.
  • Refactor

    • Improved consistency by enforcing uniform data types and simplifying runtime mode checks across key components.
Contributor

coderabbitai bot commented Jan 15, 2025

📝 Walkthrough


This pull request updates several files to improve configuration consistency and type safety. The pre-commit hook configuration is modified by expanding the exclusion regex and disabling the cmake-lint hook. The freeze function in the entrypoint now converts model methods to static graphs with refined input specifications and logging. Several descriptor and network files have been updated to explicitly specify tensor data types and simplify dynamic mode checks for improved consistency. Additionally, the LAMMPS plugin CMake configuration now uses a new source path.
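For readers unfamiliar with the Paddle export path, here is a minimal sketch of the dynamic-to-static conversion this walkthrough describes. It mirrors the to_static/InputSpec/jit.save pattern shown in the diffs further down, but ToyModel, its layer sizes, and the save path are illustrative placeholders, not the PR's actual freeze code.

import paddle
from paddle.static import InputSpec

# Minimal sketch only (not the PR's freeze() implementation): convert a bound
# forward method with paddle.jit.to_static and symbolic (-1) atom dimensions,
# then save the static program for deployment.
class ToyModel(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.linear = paddle.nn.Linear(3, 1)

    def forward(self, coord):
        # coord: [nframes, natoms, 3] -> one scalar per frame
        return self.linear(coord).sum(axis=[1, 2])

model = ToyModel()
model.forward = paddle.jit.to_static(
    model.forward,
    input_spec=[InputSpec([1, -1, 3], dtype="float32", name="coord")],
    full_graph=True,
)
paddle.jit.save(model, "./toy_frozen")  # writes the static program + parameters

The real freeze function additionally specifies atype, box, nlist, and mapping inputs, as the sequence diagram and review diffs below show.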

Changes

Files Change Summary
.pre-commit-config.yaml Updated the end-of-file-fixer hook exclusion pattern to include dpa1.*\.json and commented out the cmake-lint hook.
deepmd/pd/entrypoints/main.py Modified the freeze function to add detailed input specifications and convert forward and forward_lower methods to static graphs, update model saving, and adjust output logging.
deepmd/pd/model/descriptor/se_a.py
deepmd/pd/model/descriptor/repformers.py
deepmd/pd/model/descriptor/se_atten.py
deepmd/pd/model/descriptor/se_t_tebd.py
deepmd/pd/model/descriptor/descriptor.py
Updated tensor assignments by specifying data types with paddle.to_tensor(..., dtype=...) to ensure consistency in computed statistics and parameter sharing (see the sketch after this table).
deepmd/pd/utils/nlist.py
deepmd/pd/model/network/layernorm.py
Simplified dynamic mode checks by replacing paddle.framework.in_dynamic_mode() with paddle.in_dynamic_mode().
source/lmp/plugin/CMakeLists.txt Modified the LAMMPS plugin setup to update the download path message and assign LAMMPS_SOURCE_ROOT to ${CMAKE_BINARY_DIR}/_deps/lammps_download-src.
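A minimal sketch of the two code patterns summarized in the table above (the statistic values here are made up; the real call sites live in the descriptor and network files listed in the table):

import paddle

# Illustrative values only; the real statistics come from the descriptors.
stat_mean = [0.1, 0.2, 0.3]

# Explicit dtype keeps computed statistics at a fixed precision regardless of
# Paddle's default dtype.
mean_tensor = paddle.to_tensor(stat_mean, dtype="float64")

# Simplified dynamic-mode check: the public alias instead of
# paddle.framework.in_dynamic_mode().
if paddle.in_dynamic_mode():
    print(mean_tensor.numpy())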

Sequence Diagram(s)

sequenceDiagram
    participant User as User/Script
    participant Freeze as freeze Function
    participant Paddle as paddle.jit.to_static
    User->>Freeze: Call freeze(model, output, head)
    Freeze->>Freeze: Check for model.forward and model.forward_lower
    Freeze->>Paddle: Convert model.forward using InputSpec (coord, atype, box, nlist)
    Freeze->>Paddle: Convert model.forward_lower using InputSpec (including mapping)
    Freeze->>Freeze: Update output filename and save model
    Freeze->>User: Log output & return confirmation

Suggested labels

Core

Suggested reviewers

  • njzjz
  • iProzd
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🔭 Outside diff range comments (1)
deepmd/pd/utils/nlist.py (1)

Line range hint 244-251: Bug: Similar condition issue in build_directional_neighbor_list.

The same issue exists here: the decomp.numel(coord_neig) > 0 condition always evaluates to True.

Apply similar fix:

- if decomp.numel(coord_neig) > 0:
+ if coord_neig.shape[1] > 0:
      xmax = paddle.max(coord_cntl) + 2.0 * rcut
  else:
      xmax = (
          paddle.zeros([1], dtype=coord_neig.dtype, device=coord_neig.place)
          + 2.0 * rcut
      )
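For context, a standalone snippet illustrating the shape-based emptiness check the suggested fix relies on (illustrative only, assuming a Paddle build that supports zero-size tensors; coord_neig here is a dummy tensor, not the repository's variable):

import paddle

# Dummy empty neighbor-coordinate tensor with layout [nframes, nnei, 3].
coord_neig = paddle.zeros([1, 0, 3], dtype="float64")

# The suggested fix branches on the neighbor dimension itself:
print(coord_neig.shape[1] > 0)  # False when there are no neighbors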
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3e9cf88 and a584e70.

📒 Files selected for processing (7)
  • .pre-commit-config.yaml (3 hunks)
  • deepmd/pd/entrypoints/main.py (1 hunks)
  • deepmd/pd/model/descriptor/se_a.py (2 hunks)
  • deepmd/pd/utils/decomp.py (1 hunks)
  • deepmd/pd/utils/nlist.py (3 hunks)
  • source/api_cc/src/DeepPotPD.cc (1 hunks)
  • source/lmp/plugin/CMakeLists.txt (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (21)
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Test Python (6, 3.12)
  • GitHub Check: Test Python (6, 3.9)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (5, 3.9)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Analyze (python)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Test C++ (false)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Test C++ (true)
🔇 Additional comments (7)
deepmd/pd/entrypoints/main.py (2)

365-371: Good documentation of input/output shapes.

The documentation clearly specifies the expected shapes and data types for the model's outputs.


373-384: LGTM: Well-structured static graph conversion.

The input specifications for the model's forward method are clear and comprehensive.

source/api_cc/src/DeepPotPD.cc (1)

141-141: Clarify why message passing is forcibly disabled.

The code overrides the value retrieved from buffer with 0, effectively disabling message passing. This change needs explanation:

  1. Is this a temporary workaround?
  2. What are the implications of disabling message passing?
  3. Should this be configurable?

Please document the rationale for this change.

deepmd/pd/model/descriptor/se_a.py (1)

22-22: LGTM: New import for decomp utility.

The addition of the decomp import aligns with the standardization of tensor element counting across the codebase.

.pre-commit-config.yaml (2)

68-74: Confirm if prettier and bibtex-tidy hooks can be safely disabled.

The removal of these formatting hooks could impact code consistency. Please verify:

  1. Are there alternative formatting tools in place?
  2. Is this a temporary or permanent change?

Also applies to: 86-104

✅ Verification successful

The prettier and bibtex-tidy hooks can be safely disabled

The codebase maintains robust formatting through other tools:

  • Source code formatting is handled by language-specific tools (ruff, clang-format, etc.)
  • Basic validation remains via pre-commit-hooks (check-yaml, trailing whitespace, etc.)
  • The disabled hooks only affected documentation and metadata files, not critical code
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for files that would be affected by removed hooks

# Files that would be affected by prettier
echo "=== Files affected by prettier ==="
fd -e md -e yaml -e css --exclude "source/3rdparty" --exclude ".github/workflows" --exclude ".clang-format"

# Files that would be affected by bibtex-tidy
echo "=== Files affected by bibtex-tidy ==="
fd -e bib

Length of output: 3266


Script:

#!/bin/bash
# Check for alternative formatting tools and temporary indicators
echo "=== Checking pre-commit config for other formatters ==="
cat .pre-commit-config.yaml

echo -e "\n=== Checking GitHub workflows for formatting jobs ==="
fd -e yml -e yaml . .github/workflows/

echo -e "\n=== Searching for TODO/FIXME comments about formatting ==="
rg -i "todo|fixme|temporary|format" .pre-commit-config.yaml

Length of output: 5949


10-10: Verify the need for excluding dpa1.*.json files.

The addition of dpa1.*\.json$ to exclusions suggests new file types being introduced. Please confirm if these files require end-of-file fixing exemption.

✅ Verification successful

Exclusion of dpa1*.json files is appropriate

The dpa1*.json files are test fixtures and example configurations used across multiple test suites. These files need to maintain specific formatting for test comparisons, making their exclusion from end-of-file fixing appropriate.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for dpa1 JSON files and their EOL status

# Find any dpa1 JSON files
fd "dpa1.*\.json$"

# Check if these files have specific EOL requirements
for f in $(fd "dpa1.*\.json$"); do
  echo "=== $f ==="
  tail -c1 "$f" | xxd
done

Length of output: 1410


Script:

#!/bin/bash
# Check file endings and content of dpa1 JSON files
for f in $(fd "dpa1.*\.json$"); do
  echo "=== $f ==="
  echo "Last line:"
  tail -n 1 "$f" | cat -A
  echo -e "\nFirst few lines:"
  head -n 3 "$f"
done

Length of output: 67986


Script:

#!/bin/bash
# Check if files are tracked and look for related generation scripts
echo "=== Git status of dpa1 files ==="
for f in $(fd "dpa1.*\.json$"); do
  echo "$f:"
  git check-ignore "$f" || echo "tracked"
done

echo -e "\n=== Looking for related build/generation scripts ==="
rg -l "dpa1.*\.json" --type make --type python --type shell

Length of output: 1162


Script:

#!/bin/bash
# Search for related code with correct file type flags
echo "=== Looking for related code ==="
rg -l "dpa1.*json" -g "*.py" -g "*.sh" -g "Makefile" -g "*.mk"

echo -e "\n=== Check for JSON schema or validation ==="
rg -l "dpa1" -g "*.py" -g "*.json" | grep -v "dpa1.*\.json$"

Length of output: 3232

source/lmp/plugin/CMakeLists.txt (1)

11-12: LGTM: Improved clarity in LAMMPS download location message.

The change to use ${lammps_download_SOURCE_DIR} provides more accurate information about the download location, as it directly references the FetchContent-managed directory.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
examples/water/se_atten/input_torch.json (1)

29-29: LGTM! Explicit precision settings will help with inference consistency.

The addition of explicit float32 precision settings to both the descriptor and fitting_net sections is a good practice, especially for C++ inference. This ensures consistent numerical behavior between training and production environments.

Consider documenting the performance implications of float32 vs float64 precision in the model documentation, particularly for LAMMPS integration scenarios.

Also applies to: 39-39

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a584e70 and 51f583b.

📒 Files selected for processing (7)
  • deepmd/pd/model/descriptor/descriptor.py (1 hunks)
  • deepmd/pd/model/descriptor/repformers.py (1 hunks)
  • deepmd/pd/model/descriptor/se_a.py (3 hunks)
  • deepmd/pd/model/descriptor/se_atten.py (1 hunks)
  • deepmd/pd/model/descriptor/se_t_tebd.py (1 hunks)
  • examples/water/se_atten/input_torch.json (2 hunks)
  • source/api_cc/src/DeepPotPD.cc (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • deepmd/pd/model/descriptor/repformers.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • source/api_cc/src/DeepPotPD.cc
  • deepmd/pd/model/descriptor/se_a.py
👮 Files not reviewed due to content moderation or server errors (3)
  • deepmd/pd/model/descriptor/descriptor.py
  • deepmd/pd/model/descriptor/se_t_tebd.py
  • deepmd/pd/model/descriptor/se_atten.py
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: Analyze (python)
  • GitHub Check: Build source distribution
  • GitHub Check: Analyze (c-cpp)
🔇 Additional comments (1)
examples/water/se_atten/input_torch.json (1)

28-28: LGTM! The JSON stays valid.

The comma added after the temperature setting is needed because a new entry (the precision setting) now follows it; strict JSON does not allow true trailing commas, so this is a separating comma rather than a trailing one.


codecov bot commented Jan 16, 2025

Codecov Report

Attention: Patch coverage is 50.00000% with 9 lines in your changes missing coverage. Please review.

Project coverage is 84.77%. Comparing base (774a844) to head (72bc05b).
Report is 76 commits behind head on devel.

Files with missing lines Patch % Lines
deepmd/pd/entrypoints/main.py 0.00% 6 Missing ⚠️
deepmd/pd/model/descriptor/repformers.py 50.00% 1 Missing ⚠️
deepmd/pd/model/descriptor/se_atten.py 50.00% 1 Missing ⚠️
deepmd/pd/model/descriptor/se_t_tebd.py 50.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            devel    #4556      +/-   ##
==========================================
- Coverage   84.78%   84.77%   -0.01%
==========================================
  Files         688      688
  Lines       66089    66093       +4
  Branches     3540     3539       -1
==========================================
+ Hits        56032    56033       +1
- Misses       8916     8920       +4
+ Partials     1141     1140       -1

☔ View full report in Codecov by Sentry.
Member

njzjz commented Mar 5, 2025

What is the status of this PR?

Collaborator Author

What is the status of this PR?

This PR will be updated soon, as #4617 has been merged.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
deepmd/pd/entrypoints/main.py (2)

367-379: Proper implementation of static graph conversion for forward method.

The code now conditionally converts the model's forward method to a static graph, which can provide significant performance benefits. The detailed InputSpec definitions ensure the graph captures all the necessary input information.

One suggestion for further improvement: Consider adding error handling in case the conversion fails, as static graph conversion can sometimes encounter issues with complex models.

  if hasattr(model, "forward"):
-     model.forward = paddle.jit.to_static(
-         model.forward,
-         input_spec=[
-             InputSpec([1, -1, 3], dtype="float64", name="coord"),  # coord
-             InputSpec([1, -1], dtype="int64", name="atype"),  # atype
-             InputSpec([1, -1, 9], dtype="float64", name="box"),  # box
-             None,  # fparam
-             None,  # aparam
-             True,  # do_atomic_virial
-         ],
-         full_graph=True,
-     )
+     try:
+         model.forward = paddle.jit.to_static(
+             model.forward,
+             input_spec=[
+                 InputSpec([1, -1, 3], dtype="float64", name="coord"),  # coord
+                 InputSpec([1, -1], dtype="int64", name="atype"),  # atype
+                 InputSpec([1, -1, 9], dtype="float64", name="box"),  # box
+                 None,  # fparam
+                 None,  # aparam
+                 True,  # do_atomic_virial
+             ],
+             full_graph=True,
+         )
+         log.info("Successfully converted model.forward to static graph")
+     except Exception as e:
+         log.warning(f"Failed to convert model.forward to static graph: {e}")
+         log.warning("Falling back to dynamic execution for model.forward")

387-401: Proper implementation of static graph conversion for forward_lower method.

The code conditionally converts the model's forward_lower method to a static graph, similar to the forward method. The inclusion of a mapping input specification enhances the graph's ability to handle the required inputs.

Similar to the previous suggestion, consider adding error handling for the static graph conversion:

  if hasattr(model, "forward_lower"):
-     model.forward_lower = paddle.jit.to_static(
-         model.forward_lower,
-         input_spec=[
-             InputSpec([1, -1, 3], dtype="float64", name="coord"),  # extended_coord
-             InputSpec([1, -1], dtype="int32", name="atype"),  # extended_atype
-             InputSpec([1, -1, -1], dtype="int32", name="nlist"),  # nlist
-             InputSpec([1, -1], dtype="int64", name="mapping"),  # mapping
-             None,  # fparam
-             None,  # aparam
-             True,  # do_atomic_virial
-             None,  # comm_dict
-         ],
-         full_graph=True,
-     )
+     try:
+         model.forward_lower = paddle.jit.to_static(
+             model.forward_lower,
+             input_spec=[
+                 InputSpec([1, -1, 3], dtype="float64", name="coord"),  # extended_coord
+                 InputSpec([1, -1], dtype="int32", name="atype"),  # extended_atype
+                 InputSpec([1, -1, -1], dtype="int32", name="nlist"),  # nlist
+                 InputSpec([1, -1], dtype="int64", name="mapping"),  # mapping
+                 None,  # fparam
+                 None,  # aparam
+                 True,  # do_atomic_virial
+                 None,  # comm_dict
+             ],
+             full_graph=True,
+         )
+         log.info("Successfully converted model.forward_lower to static graph")
+     except Exception as e:
+         log.warning(f"Failed to convert model.forward_lower to static graph: {e}")
+         log.warning("Falling back to dynamic execution for model.forward_lower")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 51f583b and 0a8ce44.

📒 Files selected for processing (6)
  • .pre-commit-config.yaml (2 hunks)
  • deepmd/pd/entrypoints/main.py (1 hunks)
  • deepmd/pd/model/descriptor/se_a.py (2 hunks)
  • deepmd/pd/model/network/layernorm.py (1 hunks)
  • deepmd/pd/utils/nlist.py (2 hunks)
  • source/lmp/plugin/CMakeLists.txt (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • deepmd/pd/utils/nlist.py
  • source/lmp/plugin/CMakeLists.txt
  • deepmd/pd/model/descriptor/se_a.py
  • .pre-commit-config.yaml
⏰ Context from checks skipped due to timeout of 90000ms (19)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (5, 3.9)
  • GitHub Check: Test Python (6, 3.9)
  • GitHub Check: Test Python (6, 3.12)
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Build C library (2.14, >=2.5.0rc0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Analyze (python)
  • GitHub Check: Test C++ (false)
  • GitHub Check: Test C++ (true)
🔇 Additional comments (4)
deepmd/pd/model/network/layernorm.py (1)

104-104: Simplified dynamic mode check to use the direct API.

This change updates the code to use the simplified paddle.in_dynamic_mode() API instead of the more verbose paddle.framework.in_dynamic_mode(). The functionality remains the same, but the code is now more concise and aligned with Paddle's recommended API usage.

deepmd/pd/entrypoints/main.py (3)

359-366: Good documentation of output tensors with shape and type information.

The added comments provide excellent documentation about the expected output shapes and data types for the model.forward method. This will help developers understand the structure of the data flowing through the model.


380-386: Good documentation of output tensors with shape and type information for forward_lower.

Similar to the previous documentation block, this provides clear information about the expected output shape and data types for the model.forward_lower method, which is helpful for developers.


405-405: Updated model saving to save the model directly.

The code now saves the model directly instead of a JIT model, which aligns with the changes made to convert the methods to static graphs earlier in the function.
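As a usage note, assuming the toy model from the walkthrough sketch above was saved to ./toy_frozen, the saved static program can be reloaded and called like a regular layer (paths and shapes are illustrative; in the real freeze flow, the frozen artifact is what the C++/LAMMPS side ultimately consumes):

import paddle

# Reload the static program written by paddle.jit.save and run it on dummy inputs.
loaded = paddle.jit.load("./toy_frozen")
coord = paddle.rand([1, 4, 3], dtype="float32")
print(loaded(coord))  # one energy-like value per frame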

@HydrogenSulfate HydrogenSulfate changed the title [WIP] Add dpa1 + lammps inference Add dpa1 + lammps inference Mar 5, 2025
@njzjz njzjz changed the title Add dpa1 + lammps inference feat(pd):Add dpa1 + lammps inference Mar 5, 2025
@njzjz njzjz changed the title feat(pd):Add dpa1 + lammps inference feat(pd): Add dpa1 + lammps inference Mar 5, 2025
@njzjz njzjz added this pull request to the merge queue Mar 5, 2025
Merged via the queue into deepmodeling:devel with commit c2843b7 Mar 5, 2025
60 checks passed