
Conversation

@jan-janssen
Member

@jan-janssen jan-janssen commented Aug 24, 2025

Summary by CodeRabbit

  • New Features

    • Optional executor support across the library, enabling pluggable execution backends.
    • Added top-level init_function export for easier initialization.
    • Standardized multi-core result handling for consistent returns.
  • Refactor

    • Migrated concurrent execution to an executor-based model for improved reliability.
    • Streamlined lifecycle management and file handling.
  • Behavior Changes

    • In multi-core runs, aggregated results are returned on the main rank; non-root ranks return empty placeholders.
  • Tests

    • Added end-to-end tests for the new executor-based LAMMPS interface.
  • Chores

    • Updated dependency: executorlib 1.2.0 → 1.3.0.
@coderabbitai

coderabbitai bot commented Aug 24, 2025

Walkthrough

The PR introduces an executor-based execution model, adds top-level exports, centralizes Future result handling, adjusts MPI data return to root-only, updates tests to the new API, and bumps an executorlib dependency pin. Legacy async/queue paths and the standalone MPI wrapper entrypoint are removed.

Changes

Cohort / File(s) Summary
Dependency pin
.ci_support/environment-old.yml
Update executorlib from 1.2.0 to 1.3.0.
Top-level export
pylammpsmpi/__init__.py
Export init_function from pylammpsmpi.wrapper.concurrent via import and __all__.
MPI root-centric behavior
pylammpsmpi/mpi/lmpmpi.py
Remove standalone MPI-LAMMPS launcher code and __main__ path; modify data gathering so only rank 0 returns aggregated/converted data; non-root ranks return empty placeholders.
Unified Future handling helper
pylammpsmpi/wrapper/base.py
Add get_result(future, cores) and replace direct future.result() across many wrappers; minor delegation tweak for natoms.
Executor-based concurrent refactor
pylammpsmpi/wrapper/concurrent.py
Replace thread/queue async flow with pluggable executor model; add helpers: init_function, start_lmp, shutdown_lmp, call_function_lmp; accept optional executor in LammpsConcurrent.__init__; remove execute_async; file I/O guard and safe shutdown adjustments.
Executor integration in wrappers
pylammpsmpi/wrapper/extended.py, pylammpsmpi/wrapper/ase.py
Add optional executor parameter to LammpsLibrary and LammpsASELibrary and forward it; switch result retrieval to get_result(..., cores=self.cores).
Tests for executor interface
tests/test_executor.py, tests/test_pylammpsmpi_local.py
Add tests using init_function, start_lmp, call_function_lmp, shutdown_lmp; add SingleNodeExecutor path for LammpsLibrary and validate version retrieval.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    actor User
    participant L as LammpsLibrary
    participant C as LammpsConcurrent
    participant E as Executor
    participant W as Worker(ParallelLammps)
    User->>L: instantiate(cores, executor=?)
    L->>C: new(cores, executor=?)
    alt no executor provided
        C->>E: create SingleNodeExecutor(workers=1, init=init_function)
        E-->>C: executor ready
    end
    note over E,W: init_function creates ParallelLammps instance
    User->>L: lmp.some_call(args)
    L->>C: submit task(dict{command,args})
    C->>E: submit(call_function_lmp, input_dict)
    E->>W: call_function_lmp(lmp, input_dict)
    W-->>E: result
    E-->>C: Future
    C-->>L: Future
    L->>L: get_result(Future, cores)
    L-->>User: value
    User->>L: close/shutdown
    L->>C: shutdown
    C->>E: submit(shutdown_lmp)
    E->>W: shutdown_lmp(lmp)
    E-->>C: ack
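For orientation, here is a minimal Python sketch of the same flow using the helpers named in the diagram and exercised by the new tests. The keyword names (argument_lst, input_dict) are taken from the review comments further down and should be read as illustrative rather than authoritative.

from pylammpsmpi import init_function
from pylammpsmpi.wrapper.concurrent import call_function_lmp, shutdown_lmp, start_lmp

# init_function() returns the worker-side state dict; an executor created with it
# injects the "lmp" entry (a ParallelLammps instance) into every submitted call.
lmp = init_function()["lmp"]
start_lmp(lmp=lmp, argument_lst=[])  # start LAMMPS with default command-line arguments
version = call_function_lmp(
    lmp=lmp, input_dict={"command": "get_version", "args": []}
)
print(version)  # a YYYYMMDD integer, gathered on MPI rank 0
shutdown_lmp(lmp=lmp)  # close the LAMMPS instance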

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Suggested labels

dependencies

Pre-merge checks (3 passed)

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check ✅ Passed The title "Use executorlib directly" is concise, directly reflects the primary change in the PR (switching internal execution to use executorlib for MPI startup), and is clear enough for a reviewer scanning the history to understand the main intent of the changeset. It avoids noise and accurately targets the core change described in the diffs and PR objectives.
Docstring Coverage ✅ Passed No functions found in the changes. Docstring coverage check skipped.

Poem

I thump my paw—new futures hop and run,
Executors hum, the queues are done.
Root gathers carrots, workers keep mum,
A whisker-twitch, results do come.
With gentle hops through LAMMPS we glide—
Version squeaks back, ears high with pride. 🥕



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
pylammpsmpi/wrapper/concurrent.py (2)

277-305: generate_atoms → create_atoms call is broken (missing n and wrong kwarg name)

generate_atoms() calls create_atoms(ids=...) but create_atoms() expects id and a required positional/keyword n. This will raise at runtime.

Apply this diff to compute n and pass correct argument names:

 def generate_atoms(
     self, ids=None, type=None, x=None, v=None, image=None, shrinkexceed=False
 ) -> Future:
 @@
-        return self.create_atoms(
-            ids=ids, type=type, x=x, v=v, image=image, shrinkexceed=shrinkexceed
-        )
+        # Determine number of atoms
+        if ids is not None:
+            n = len(ids)
+        elif x is not None:
+            n = len(x)
+        else:
+            raise TypeError("Either 'ids' or 'x' must be provided to infer atom count.")
+        return self.create_atoms(
+            n=n, id=ids, type=type, x=x, v=v, image=image, shrinkexceed=shrinkexceed
+        )

387-394: Fix command name in installed_packages property

The installed_packages property in pylammpsmpi/wrapper/concurrent.py is calling the wrong command string. There is no get_installed_packages command registered in the worker’s select_cmd mapping—only installed_packages exists—so this will fail at runtime.

Affected location:

  • File: pylammpsmpi/wrapper/concurrent.py
    Lines: 392–393

Proposed diff:

 @property
 def installed_packages(self):
-    return self._send_and_receive_dict(command="get_installed_packages", data=[])
+    return self._send_and_receive_dict(command="installed_packages", data=[])

This ensures the property’s command name matches the installed_packages function registered in the MPI worker.

🧹 Nitpick comments (4)
pylammpsmpi/wrapper/concurrent.py (4)

96-109: Executor configuration: sensible defaults; minor robustness nit

Configuration looks correct for a single LAMMPS worker bound to N MPI ranks. One small robustness tweak: normalizing cmdargs before constructing the worker args is already handled; in addition, consider validating that working_directory exists before passing it to the executor, to avoid late failures in a remote context.

Example:

-        self.working_directory = working_directory
+        self.working_directory = working_directory
+        if not os.path.isdir(self.working_directory):
+            os.makedirs(self.working_directory, exist_ok=True)

140-142: Improve FileNotFoundError message

Include the missing path to aid debugging.

-        if not os.path.exists(inputfile):
-            raise FileNotFoundError("Input file does not exist")
+        if not os.path.exists(inputfile):
+            raise FileNotFoundError(f"Input file does not exist: {inputfile}")

306-338: create_atoms signature uses type/id names; OK but consider avoiding builtins

Using type and id shadows Python builtins. Not a blocker, but if you can, consider atom_type and atom_id to reduce confusion (API compatibility permitting).


630-631: Harden del against partially-initialized instances

If construction fails before _exe is set, __del__ may raise. Use getattr with a default.

-        if self._exe is not None:
+        if getattr(self, "_exe", None) is not None:
             self.close()
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c26b4f5 and c9aed54.

📒 Files selected for processing (1)
  • pylammpsmpi/wrapper/concurrent.py (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
pylammpsmpi/wrapper/concurrent.py (2)
pylammpsmpi/mpi/lmpmpi.py (2)
  • select_cmd (376-426)
  • command (371-373)
pylammpsmpi/wrapper/base.py (1)
  • command (426-439)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: unittest_openmpi (3.12)
  • GitHub Check: unittest_openmpi (3.11)
  • GitHub Check: unittest_mpich (3.13)
  • GitHub Check: unittest_openmpi (3.13)
  • GitHub Check: unittest_mpich (3.11)
  • GitHub Check: unittest_mpich (3.12)
  • GitHub Check: unittest_old
  • GitHub Check: coverage
🔇 Additional comments (6)
pylammpsmpi/wrapper/concurrent.py (6)

4-9: Imports and executorlib adoption look good

Moving to SingleNodeExecutor and keeping Future typing is clean and keeps the public surface stable.


58-61: Explicit start wrapper is fine

Starting the LAMMPS instance via a dedicated function makes the init path explicit for the executor.


111-113: Submit path is concise and consistent

Returning the Future directly keeps the wrapper behavior consistent with the rest of the codebase.


475-496: Command dispatch is correct and covers list/multiline cases

The routing to commands_string vs command is sound and preserves previous semantics.


529-540: gather_atoms arguments are packed correctly

Appending len(ids) before ids for subset mode matches the typical LAMMPS FFI calling convention.


548-557: scatter_atoms subset packing looks correct

Mirrors the gather subset pattern; looks good.

Comment on lines 22 to 52
def init_function():
    from lammps import lammps  # noqa: PLC0415
    from mpi4py import MPI  # noqa: PLC0415

    from pylammpsmpi.mpi.lmpmpi import select_cmd  # noqa: PLC0415

    class ParallelLammps:
        def __init__(self):
            self._job = None

        def start(self, argument_lst):
            args = ["-screen", "none"]
            if len(argument_lst) > 0:
                args.extend(argument_lst)
            self._job = lammps(cmdargs=args)

        def shutdown(self):
            if self._job is not None:
                self._job.close()

        def command(self, input_dict):
            output = select_cmd(input_dict["command"])(
                job=self._job, funct_args=input_dict["args"]
            )
            if MPI.COMM_WORLD.rank == 0:
                return output
            else:
                return True

    return {"lmp": ParallelLammps()}


🛠️ Refactor suggestion

Guard against unknown LAMMPS commands in the worker

If select_cmd(input_dict["command"]) returns None (typo or unsupported command), the current code will raise a TypeError when attempting to call it. Better to fail fast with a clear error.

Apply this diff in ParallelLammps.command:

-        def command(self, input_dict):
-            output = select_cmd(input_dict["command"])(
-                job=self._job, funct_args=input_dict["args"]
-            )
+        def command(self, input_dict):
+            func = select_cmd(input_dict["command"])
+            if func is None:
+                raise ValueError(f"Unknown LAMMPS command: {input_dict['command']}")
+            output = func(job=self._job, funct_args=input_dict["args"])
             if MPI.COMM_WORLD.rank == 0:
                 return output
             else:
                 return True
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def init_function():
    from lammps import lammps  # noqa: PLC0415
    from mpi4py import MPI  # noqa: PLC0415
    from pylammpsmpi.mpi.lmpmpi import select_cmd  # noqa: PLC0415

    class ParallelLammps:
        def __init__(self):
            self._job = None

        def start(self, argument_lst):
            args = ["-screen", "none"]
            if len(argument_lst) > 0:
                args.extend(argument_lst)
            self._job = lammps(cmdargs=args)

        def shutdown(self):
            if self._job is not None:
                self._job.close()

        def command(self, input_dict):
            output = select_cmd(input_dict["command"])(
                job=self._job, funct_args=input_dict["args"]
            )
            if MPI.COMM_WORLD.rank == 0:
                return output
            else:
                return True

    return {"lmp": ParallelLammps()}

        def command(self, input_dict):
            func = select_cmd(input_dict["command"])
            if func is None:
                raise ValueError(f"Unknown LAMMPS command: {input_dict['command']}")
            output = func(job=self._job, funct_args=input_dict["args"])
            if MPI.COMM_WORLD.rank == 0:
                return output
            else:
                return True
🤖 Prompt for AI Agents
In pylammpsmpi/wrapper/concurrent.py around lines 22 to 52, guard the ParallelLammps.command method against unknown/unsupported LAMMPS commands by checking the result of select_cmd(input_dict["command"]) before calling it; if the selector returns None, raise a clear ValueError (or RuntimeError) that includes the offending command name and context (e.g., available/expected commands) so the worker fails fast and with an informative message, otherwise call the command as before and preserve the MPI rank 0 return behavior. 
Comment on lines 623 to 627
        with contextlib.suppress(AttributeError):
            self._exe.submit(shutdown_lmp).result()
        self._exe.shutdown(wait=True)
        self._exe = None


🛠️ Refactor suggestion

Make close() resilient to worker errors and always shutdown the executor

If the remote shutdown_lmp raises, executor.shutdown() may be skipped and leak resources. Wrap in try/finally and suppress remote exceptions.

-        with contextlib.suppress(AttributeError):
-            self._exe.submit(shutdown_lmp).result()
-        self._exe.shutdown(wait=True)
-        self._exe = None
+        try:
+            fut = None
+            with contextlib.suppress(AttributeError):
+                fut = self._exe.submit(shutdown_lmp)
+            if fut is not None:
+                with contextlib.suppress(Exception):
+                    fut.result()
+        finally:
+            with contextlib.suppress(Exception):
+                self._exe.shutdown(wait=True)
+            self._exe = None
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
        with contextlib.suppress(AttributeError):
            self._exe.submit(shutdown_lmp).result()
        self._exe.shutdown(wait=True)
        self._exe = None
        try:
            fut = None
            with contextlib.suppress(AttributeError):
                fut = self._exe.submit(shutdown_lmp)
            if fut is not None:
                with contextlib.suppress(Exception):
                    fut.result()
        finally:
            with contextlib.suppress(Exception):
                self._exe.shutdown(wait=True)
            self._exe = None
🤖 Prompt for AI Agents
In pylammpsmpi/wrapper/concurrent.py around lines 623-627, the current close() calls self._exe.submit(shutdown_lmp).result() inside a contextlib.suppress(AttributeError) so if the remote shutdown_lmp raises another exception the executor.shutdown(wait=True) may be skipped and leak resources; change the code to call the submit/result inside a try block and always call executor.shutdown(wait=True) in a finally block, suppressing any exceptions from the remote call (except AttributeError is already suppressed) so that shutdown is guaranteed to run, and then set self._exe = None after shutdown. 
@jan-janssen jan-janssen marked this pull request as draft August 24, 2025 07:18
@codecov

codecov bot commented Aug 24, 2025

Codecov Report

❌ Patch coverage is 84.21053% with 15 lines in your changes missing coverage. Please review.
✅ Project coverage is 84.47%. Comparing base (f47095f) to head (bcc80ab).
⚠️ Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
pylammpsmpi/wrapper/base.py 70.00% 9 Missing ⚠️
pylammpsmpi/mpi/lmpmpi.py 62.50% 3 Missing ⚠️
pylammpsmpi/wrapper/concurrent.py 95.83% 2 Missing ⚠️
pylammpsmpi/wrapper/extended.py 83.33% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #371      +/-   ##
==========================================
+ Coverage   81.26%   84.47%   +3.21%
==========================================
  Files           6        6
  Lines         790      773      -17
==========================================
+ Hits          642      653      +11
+ Misses        148      120      -28

☔ View full report in Codecov by Sentry.
@jan-janssen jan-janssen marked this pull request as ready for review August 24, 2025 10:36
@jan-janssen
Member Author

@srmnitc and @raynol-dsouza I updated the internals of pylammpsmpi to use executorlib directly rather than just its communication interface. This allows users to use the SlurmJobExecutor or FluxJobExecutor directly if SLURM or flux is required to start the MPI-parallel process, rather than relying on mpiexec alone. The tests pass and the coverage has increased. Still, as you two are the main users of pylammpsmpi, I wanted to give you a heads-up before I merge these changes and release a new version. Can you give me feedback by the end of the week on whether these changes work with your applications?
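A rough sketch of what this enables, assuming the executor is created with init_function() so the worker holds the ParallelLammps instance; the executor class name follows the comment above, while the exact executorlib constructor arguments and the input file name are assumptions:

from executorlib import SlurmJobExecutor

from pylammpsmpi import LammpsLibrary, init_function

# Hypothetical setup: one worker started through SLURM, LAMMPS bound to 8 MPI ranks.
exe = SlurmJobExecutor(
    max_workers=1,
    block_allocation=True,
    init_function=init_function,
    resource_dict={"cores": 8},
)
lmp = LammpsLibrary(cores=8, executor=exe)
lmp.file("in.simple")  # placeholder path to a LAMMPS input script
print(lmp.version)
lmp.close()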


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
pylammpsmpi/wrapper/concurrent.py (2)

44-51: Guard against unknown LAMMPS commands in worker dispatch

Calling the selector result without validation can raise a TypeError for typos/unsupported commands. Fail fast with a clear error.

-        def command(self, input_dict):
-            output = select_cmd(input_dict["command"])(
-                job=self._job, funct_args=input_dict["args"]
-            )
+        def command(self, input_dict):
+            func = select_cmd(input_dict["command"])
+            if func is None:
+                raise ValueError(f"Unknown LAMMPS command: {input_dict['command']}")
+            output = func(job=self._job, funct_args=input_dict["args"])
             if MPI.COMM_WORLD.rank == 0:
                 return output
             else:
                 return True

630-634: Always shutdown the executor even if remote shutdown raises

If submit()/result() raises, executor.shutdown may be skipped, leaking resources. Use try/finally and suppress remote errors.

-        with contextlib.suppress(AttributeError):
-            self._exe.submit(shutdown_lmp).result()
-        self._exe.shutdown(wait=True)
-        self._exe = None
+        try:
+            fut = None
+            with contextlib.suppress(AttributeError):
+                fut = self._exe.submit(shutdown_lmp)
+            if fut is not None:
+                with contextlib.suppress(Exception):
+                    fut.result()
+        finally:
+            with contextlib.suppress(Exception):
+                self._exe.shutdown(wait=True)
+            self._exe = None
🧹 Nitpick comments (10)
pylammpsmpi/mpi/lmpmpi.py (1)

427-433: Prefer itertools.chain for flattening gathered lists

Slight readability/perf win and avoids building an intermediate list in Python loops.

Apply within this function:

 def _gather_data_from_all_processors(data):
-    data_gather = MPI.COMM_WORLD.gather(data, root=0)
-    if MPI.COMM_WORLD.rank == 0:
-        return [v for vl in data_gather for v in vl]
+    data_gather = MPI.COMM_WORLD.gather(data, root=0)
+    if MPI.COMM_WORLD.rank == 0:
+        return list(chain.from_iterable(data_gather))
     else:
         return []

And add this import at the top of the file:

from itertools import chain
pylammpsmpi/wrapper/concurrent.py (3)

147-149: Use isfile to avoid directories passing the existence check

More precise check; prevents passing a directory path.

-        if not os.path.exists(inputfile):
+        if not os.path.isfile(inputfile):
             raise FileNotFoundError("Input file does not exist")

23-53: Optional: add a barrier during shutdown for MPI ranks

Not strictly necessary, but a barrier can make shutdown sequencing more predictable across ranks in tightly coupled runs.

         def shutdown(self):
             if self._job is not None:
-                self._job.close()
+                self._job.close()
+                MPI.COMM_WORLD.Barrier()
             self._job = None

495-503: Accept tuple-of-commands and avoid split cost for single-line strings

Minor ergonomics/readability improvement.

-        if isinstance(cmd, list):
-            return self._send_and_receive_dict(
-                command="commands_string", data="\n".join(cmd)
-            )
-        elif len(cmd.split("\n")) > 1:
+        if isinstance(cmd, (list, tuple)):
+            return self._send_and_receive_dict(
+                command="commands_string", data="\n".join(cmd)
+            )
+        elif "\n" in cmd:
             return self._send_and_receive_dict(command="commands_string", data=cmd)
         else:
             return self._send_and_receive_dict(command="command", data=cmd)
tests/test_executor.py (1)

56-66: Future-proof the version assertion

Hard-coding a finite list of release dates will make the test brittle when a new LAMMPS version drops. Assert shape/type and a sane lower bound instead.

-        self.assertTrue(
-            call_function_lmp(
-                lmp=lmp, input_dict={"command": "get_version", "args": []}
-            )
-            in [20220623, 20230802, 20231121, 20240207, 20240627, 20240829]
-        )
+        ver = call_function_lmp(
+            lmp=lmp, input_dict={"command": "get_version", "args": []}
+        )
+        self.assertIsInstance(ver, int)
+        # LAMMPS versions are YYYYMMDD integers; accept reasonably recent builds.
+        self.assertGreaterEqual(ver, 20160101)
pylammpsmpi/wrapper/ase.py (3)

67-69: Avoid passing None for working_directory

If working_directory is None, some backends may expect a string path and raise. Safest to default to "." here.

Apply this minimal diff:

-                working_directory=working_directory,
+                working_directory=working_directory or ".",

If you prefer, I can run a repo-wide check to confirm whether LammpsBase tolerates None.


29-29: Clarify/guard executor usage when cores == 1

Executor is only used for cores > 1; if a caller passes executor with cores == 1 it’s silently ignored. Either document this explicitly or warn once to avoid confusion.

-        executor: Executor to use for parallel execution (default: None)
+        executor: Executor to use for parallel execution when cores > 1 (default: None)

Outside the changed lines, consider:

# near the start of __init__
if executor is not None and self._cores == 1:
    warnings.warn("executor is ignored for cores == 1 in LammpsASELibrary.", stacklevel=2)

Also applies to: 41-41


11-11: Optional: make BaseExecutor import type-only to keep soft dependency

If you want executorlib to be optional at runtime for users who never touch the multi-core path, wrap the import under TYPE_CHECKING and quote the annotation. Otherwise, current direct import is fine given the PR’s direction.

tests/test_pylammpsmpi_local.py (1)

137-141: Future-proof the version assertion

Enumerating fixed version IDs is brittle and will fail when newer LAMMPS releases land. Assert type and a lower bound instead.

-        self.assertTrue(
-            self.lmp.version
-            in [20220623, 20230802, 20231121, 20240207, 20240627, 20240829]
-        )
+        v = self.lmp.version
+        self.assertIsInstance(v, int)
+        self.assertGreaterEqual(v, 20220623)

Optional follow-up: apply the same assertion style to TestLocalLammpsLibrary.test_version for consistency.

pylammpsmpi/wrapper/extended.py (1)

242-253: Docstring tweak to reflect backend and new parameter

Docstring still says “provided by LammpsBase” while this class composes LammpsConcurrent (via LammpsBase elsewhere). Also, add the cores > 1 note for executor to match behavior.

-    Top level class which manages the lammps library provided by LammpsBase
+    Top level class which manages the LAMMPS library via the concurrent backend.
 @@
-        executor: Executor to use for parallel execution (default: None)
+        executor: Executor to use for parallel execution when cores > 1 (default: None)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c9aed54 and eff4421.

📒 Files selected for processing (9)
  • .ci_support/environment-old.yml (1 hunks)
  • pylammpsmpi/__init__.py (1 hunks)
  • pylammpsmpi/mpi/lmpmpi.py (3 hunks)
  • pylammpsmpi/wrapper/ase.py (4 hunks)
  • pylammpsmpi/wrapper/base.py (20 hunks)
  • pylammpsmpi/wrapper/concurrent.py (5 hunks)
  • pylammpsmpi/wrapper/extended.py (5 hunks)
  • tests/test_executor.py (2 hunks)
  • tests/test_pylammpsmpi_local.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • .ci_support/environment-old.yml
🧰 Additional context used
🧬 Code graph analysis (6)
pylammpsmpi/__init__.py (2)
pylammpsmpi/wrapper/concurrent.py (2)
  • LammpsConcurrent (70-638)
  • init_function (23-53)
pylammpsmpi/wrapper/extended.py (1)
  • LammpsLibrary (242-331)
tests/test_executor.py (1)
pylammpsmpi/wrapper/concurrent.py (4)
  • init_function (23-53)
  • shutdown_lmp (65-67)
  • call_function_lmp (56-57)
  • start_lmp (60-62)
tests/test_pylammpsmpi_local.py (3)
pylammpsmpi/wrapper/extended.py (2)
  • LammpsLibrary (242-331)
  • close (316-320)
pylammpsmpi/wrapper/concurrent.py (4)
  • init_function (23-53)
  • file (134-149)
  • close (618-633)
  • version (123-132)
pylammpsmpi/wrapper/base.py (2)
  • file (40-51)
  • version (30-38)
pylammpsmpi/wrapper/base.py (2)
pylammpsmpi/wrapper/concurrent.py (24)
  • version (123-132)
  • extract_setting (152-153)
  • extract_global (155-169)
  • extract_box (171-180)
  • extract_atom (182-196)
  • extract_fix (198-212)
  • extract_variable (214-228)
  • get_natoms (242-251)
  • set_variable (253-267)
  • has_exceptions (347-356)
  • has_gzip_support (359-368)
  • has_png_support (371-380)
  • has_jpeg_support (383-392)
  • has_ffmpeg_support (395-396)
  • installed_packages (399-400)
  • get_neighlist (407-414)
  • find_pair_neighlist (416-438)
  • find_fix_neighlist (440-451)
  • find_compute_neighlist (453-464)
  • get_neighlist_size (466-475)
  • get_neighlist_element_neighbors (477-480)
  • gather_atoms (504-546)
  • get_thermo (565-580)
  • extract_compute (583-616)
pylammpsmpi/mpi/lmpmpi.py (23)
  • extract_setting (102-103)
  • extract_global (106-107)
  • extract_box (110-111)
  • extract_atom (114-140)
  • extract_fix (143-144)
  • extract_variable (147-164)
  • get_natoms (167-168)
  • set_variable (171-172)
  • has_exceptions (265-266)
  • has_gzip_support (269-270)
  • has_png_support (273-274)
  • has_jpeg_support (277-278)
  • has_ffmpeg_support (281-282)
  • installed_packages (285-286)
  • get_neighlist (294-295)
  • find_pair_neighlist (298-299)
  • find_fix_neighlist (302-303)
  • find_compute_neighlist (306-307)
  • get_neighlist_size (310-311)
  • get_neighlist_element_neighbors (314-315)
  • gather_atoms (180-200)
  • get_thermo (318-319)
  • extract_compute (42-80)
pylammpsmpi/wrapper/extended.py (3)
pylammpsmpi/wrapper/concurrent.py (4)
  • LammpsConcurrent (70-638)
  • get_thermo (565-580)
  • command (44-51)
  • command (482-502)
pylammpsmpi/wrapper/base.py (3)
  • get_result (21-25)
  • get_thermo (435-450)
  • command (375-388)
pylammpsmpi/mpi/lmpmpi.py (2)
  • get_thermo (318-319)
  • command (369-371)
pylammpsmpi/wrapper/concurrent.py (2)
pylammpsmpi/mpi/lmpmpi.py (2)
  • select_cmd (374-424)
  • command (369-371)
pylammpsmpi/wrapper/base.py (1)
  • command (375-388)
🔇 Additional comments (12)
pylammpsmpi/mpi/lmpmpi.py (2)

75-78: Rank-conditional return: verify downstream expectations and document contract

For style==1, rank 0 returns converted data while non-root returns the raw gathered placeholder (currently []). This is fine with the executor path (which only consumes rank 0), but direct usage of pylammpsmpi.mpi.lmpmpi without the wrapper may observe empty returns on non-root ranks.

  • Please add a brief note to the function docstring clarifying “root-only meaningful return”.
  • Confirm no callers expect a meaningful value on non-root ranks. If any do, consider returning a consistent sentinel (e.g., None) to make the contract explicit.
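For illustration, the root-only contract described above means a direct caller of the MPI layer sees something like this (a sketch mirroring the _gather_data_from_all_processors helper quoted earlier in this review):

from mpi4py import MPI

def gather_on_root(local_chunk):
    # Every rank contributes its local data; only rank 0 receives the flattened
    # aggregate, all other ranks get an empty placeholder list.
    gathered = MPI.COMM_WORLD.gather(local_chunk, root=0)
    if MPI.COMM_WORLD.rank == 0:
        return [v for part in gathered for v in part]
    return []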

154-157: Per-variable extraction now root-only: ensure compatibility with existing callers

When funct_args[2] == 1, non-root ranks now return an empty array. This aligns with the executor aggregation pattern, but could break direct MPI-rank-local consumers.

  • Double-check downstream usage (especially in any legacy MPI paths) and add docstring notes mirroring extract_compute.
pylammpsmpi/wrapper/base.py (1)

38-38: Consistent centralization via get_result — nice cleanup

Great use of get_result across the API to abstract executor aggregation details. This simplifies call sites and keeps MPI/executor specifics localized.

Also applies to: 66-66, 80-80, 95-95, 110-110, 124-124, 138-138, 159-159, 173-173, 281-281, 285-285, 289-289, 293-293, 297-297, 301-301, 313-313, 335-335, 368-368, 371-373, 422-424, 450-450, 485-490

pylammpsmpi/wrapper/concurrent.py (1)

100-116: Executor injection and resource wiring — good extensibility

Allowing an external executor and passing MPI-related resources cleanly through resource_dict makes the design flexible without complicating call sites.

pylammpsmpi/__init__.py (1)

3-3: Publicly exporting init_function — LGTM

This aligns the top-level API with the new executor-based workflow and simplifies test and user access.

Also applies to: 6-6

tests/test_executor.py (1)

8-13: Direct import of new concurrency helpers — LGTM

Tests now exercise the start/call/shutdown flow directly; this provides good coverage of the executor-side worker surface.

pylammpsmpi/wrapper/ase.py (1)

66-69: Executor wiring for multi-core path — LGTM

Passing the executor through to LammpsBase in the multi-core branch looks correct and aligns with the PR objective to use executorlib directly.

tests/test_pylammpsmpi_local.py (2)

112-131: Executor-backed path coverage — LGTM

Good addition to ensure the executor-backed LammpsLibrary path initializes, loads input, and returns version under SingleNodeExecutor.


9-11: init_function export verified

  • pylammpsmpi/__init__.py imports init_function from pylammpsmpi.wrapper.concurrent and includes it in __all__.
  • The actual def init_function(): implementation exists in pylammpsmpi/wrapper/concurrent.py (line 23).

No further changes needed.

pylammpsmpi/wrapper/extended.py (3)

6-9: Imports — LGTM

Pulling BaseExecutor and central get_result into this module is appropriate and keeps async handling consistent.


262-274: Executor parameter propagation — LGTM

Accepting an Optional[BaseExecutor] and passing it through to LammpsConcurrent aligns with the executorlib integration plan.


286-312: Centralized Future handling via get_result — check all call sites return Future

The wrappers now uniformly call get_result(future=..., cores=self.cores). Ensure every attribute/method referenced here in LammpsConcurrent returns a Future; otherwise get_result will raise.

If any of these return immediate values instead of Futures, we can add a small adapter in LammpsConcurrent or guard in get_result.
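If that turns out to be necessary, the adapter could be as small as this (hypothetical helper, not part of the PR):

from concurrent.futures import Future
from typing import Any

def as_future(value: Any) -> Future:
    # Wrap an immediate value in an already-completed Future so that
    # get_result() can treat every call site uniformly.
    if isinstance(value, Future):
        return value
    fut: Future = Future()
    fut.set_result(value)
    return fut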

Comment on lines +21 to +26
def get_result(future: Future, cores: int) -> Any:
    if cores > 1:
        return future.result()[0]
    else:
        return future.result()


🛠️ Refactor suggestion

Harden get_result against non-list results and empty aggregates

Future results from custom/external executors or edge cases may not be indexable or could be empty. Guard the access to avoid IndexError/TypeError.

-def get_result(future: Future, cores: int) -> Any:
-    if cores > 1:
-        return future.result()[0]
-    else:
-        return future.result()
+def get_result(future: Future, cores: int) -> Any:
+    res = future.result()
+    if cores > 1 and isinstance(res, (list, tuple)) and len(res) > 0:
+        return res[0]
+    return res
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
def get_result(future: Future, cores: int) -> Any:
    if cores > 1:
        return future.result()[0]
    else:
        return future.result()

def get_result(future: Future, cores: int) -> Any:
    res = future.result()
    if cores > 1 and isinstance(res, (list, tuple)) and len(res) > 0:
        return res[0]
    return res
🤖 Prompt for AI Agents
In pylammpsmpi/wrapper/base.py around lines 21 to 26, get_result currently assumes future.result() is indexable and non-empty when cores>1; instead, assign result = future.result() and, if cores>1, check whether result is a Sequence (from collections.abc) but not a str/bytes/bytearray and has len(result) > 0 before returning result[0]; otherwise return result as-is (this avoids IndexError/TypeError for non-list/empty aggregates and preserves non-sequence results). 
@raynol-dsouza
Contributor

@jan-janssen I did a preliminary check, and all of my scripts work as expected. I will update later this week if something breaks. :)

@srmnitc
Member

srmnitc commented Aug 31, 2025

@jan-janssen I have not gotten around to testing it yet. Quick question: does anything change for the mode - local?

@jan-janssen
Member Author

@jan-janssen I have not gotten around to testing it yet. Quick question: does anything change for the mode - local?

Yes, serial execution is not affected, but whenever MPI-parallel LAMMPS is used it is now handled via executorlib.
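Concretely, the MPI-parallel case now goes through an executorlib executor along these lines; this is a sketch based on the review comments above (block_allocation, init_function and resource_dict keywords), not a verbatim excerpt of the implementation:

from executorlib import SingleNodeExecutor

from pylammpsmpi import init_function
from pylammpsmpi.wrapper.concurrent import call_function_lmp, start_lmp

# One blocking worker; the single LAMMPS instance inside it spans 4 MPI ranks.
exe = SingleNodeExecutor(
    max_workers=1,
    block_allocation=True,
    init_function=init_function,
    resource_dict={"cores": 4},
)
exe.submit(start_lmp, argument_lst=[]).result()
fut = exe.submit(call_function_lmp, input_dict={"command": "get_natoms", "args": []})
print(fut.result())
exe.shutdown(wait=True)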

@srmnitc
Member

srmnitc commented Aug 31, 2025

@jan-janssen thanks. This definitely needs to be checked from the calphy side. I will try to do this by tomorrow EOD. I hope that's ok.

@jan-janssen
Member Author

@srmnitc Did you have the chance to test these changes with calphy?


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pylammpsmpi/wrapper/concurrent.py (1)

149-151: Don’t check file existence locally for remote executors.

os.path.exists(inputfile) runs on the client, not on the remote worker (Slurm/Flux). This wrongly blocks valid remote paths.

-        if not os.path.exists(inputfile):
-            raise FileNotFoundError("Input file does not exist")
+        # Skip local existence checks; worker CWD may differ. LAMMPS will error if missing remotely.
♻️ Duplicate comments (3)
pylammpsmpi/wrapper/base.py (1)

21-26: Harden get_result against non-sequence/empty results

Repeat of earlier feedback; guards avoid IndexError/TypeError with custom executors or edge cases.

-def get_result(future: Future, cores: int) -> Any:
-    if cores > 1:
-        return future.result()[0]
-    else:
-        return future.result()
+def get_result(future: Future, cores: int) -> Any:
+    res = future.result()
+    if cores > 1 and isinstance(res, (list, tuple)) and len(res) > 0:
+        return res[0]
+    return res
pylammpsmpi/wrapper/concurrent.py (2)

44-51: Guard against unknown LAMMPS commands to avoid TypeError.

Calling the result of select_cmd(...) without checking can explode if it returns None.

-        def command(self, input_dict):
-            output = select_cmd(input_dict["command"])(
-                job=self._job, funct_args=input_dict["args"]
-            )
+        def command(self, input_dict):
+            func = select_cmd(input_dict["command"])
+            if func is None:
+                raise ValueError(f"Unknown LAMMPS command: {input_dict['command']}")
+            output = func(job=self._job, funct_args=input_dict["args"])
             if MPI.COMM_WORLD.rank == 0:
                 return output
             else:
                 return True

632-636: Always shut down executor even if remote shutdown raises.

Ensure no resource leaks by using try/finally and suppress remote errors.

-        with contextlib.suppress(AttributeError):
-            self._exe.submit(shutdown_lmp).result()
-        if not self._external_executor:
-            self._exe.shutdown(wait=True)
-        self._exe = None
+        try:
+            fut = None
+            with contextlib.suppress(AttributeError):
+                fut = self._exe.submit(shutdown_lmp)
+            if fut is not None:
+                with contextlib.suppress(Exception):
+                    fut.result()
+        finally:
+            if not self._external_executor:
+                with contextlib.suppress(Exception):
+                    self._exe.shutdown(wait=True)
+            self._exe = None
🧹 Nitpick comments (13)
pylammpsmpi/mpi/lmpmpi.py (2)

75-78: Non-root return type inconsistency (list vs ndarray).

extract_compute(style=1) now returns a Python list on non-root, while extract_variable returns an empty ndarray. Suggest making both return np.ndarray for consistency.

-    else:
-        return val
+    else:
+        # Keep type consistent with extract_variable non-root path
+        return np.array([])

427-433: Make gather robust and memory-friendly; handle empty partitions.

The list-flatten approach can be slow and awkward for 2D arrays; also guard against the “all-empty” case.

 def _gather_data_from_all_processors(data):
     data_gather = MPI.COMM_WORLD.gather(data, root=0)
     if MPI.COMM_WORLD.rank == 0:
-        return [v for vl in data_gather for v in vl]
+        # Tolerate empty local partitions and return ndarray when possible
+        non_empty = [a for a in data_gather if hasattr(a, "size") and a.size > 0]
+        if not non_empty:
+            return np.array([])
+        return np.concatenate(non_empty, axis=0)
     else:
         return []
pylammpsmpi/wrapper/ase.py (1)

29-30: Docstring: add explicit type for executor.

Minor clarity: match signature types in the docstring.

-        executor: Executor to use for parallel execution (default: None)
+        executor (Optional[BaseExecutor]): Executor to use for parallel execution (default: None).
tests/test_pylammpsmpi_local.py (1)

112-134: Tighten assertion style and avoid version-list duplication

  • Prefer assertIn over membership checks for clearer failures.

Apply within this hunk:

-        self.assertTrue(
-            lmp.version
-            in [20220623, 20230802, 20231121, 20240207, 20240627, 20240829]
-        )
+        self.assertIn(
+            lmp.version,
+            [20220623, 20230802, 20231121, 20240207, 20240627, 20240829],
+        )

Also consider updating the earlier TestLocalLammpsLibrary.test_version (Lines 27–31) similarly to keep consistency.

pylammpsmpi/__init__.py (1)

6-6: Sort all (RUF022) for consistency

Minor nit to satisfy tooling and keep exports tidy.

-__all__ = ["LammpsLibrary", "LammpsConcurrent", "LammpsBase", "init_function"]
+__all__ = ["LammpsBase", "LammpsConcurrent", "LammpsLibrary", "init_function"]
tests/test_executor.py (1)

56-67: Use assertIn for the version check

Improves readability and failure messages.

-        self.assertTrue(
-            call_function_lmp(
-                lmp=lmp, input_dict={"command": "get_version", "args": []}
-            )
-            in [20220623, 20230802, 20231121, 20240207, 20240627, 20240829]
-        )
+        self.assertIn(
+            call_function_lmp(
+                lmp=lmp, input_dict={"command": "get_version", "args": []}
+            ),
+            [20220623, 20230802, 20231121, 20240207, 20240627, 20240829],
+        )
pylammpsmpi/wrapper/extended.py (1)

301-306: Small list build nit (RUF005): prefer unpacking over concatenation

More idiomatic and a touch faster.

-        args = [name] + list(args)
+        args = [name, *args]
pylammpsmpi/wrapper/base.py (1)

452-490: Avoid shadowing built-in name 'type'

Not urgent, but consider renaming the parameter for readability and lint happiness (e.g., value_type).

pylammpsmpi/wrapper/concurrent.py (5)

24-27: Drop unused noqa directives (Ruff RUF100).

These # noqa: PLC0415 markers are flagged as unused. Remove them to satisfy linting.

-    from lammps import lammps  # noqa: PLC0415
-    from mpi4py import MPI  # noqa: PLC0415
+    from lammps import lammps
+    from mpi4py import MPI

-    from pylammpsmpi.mpi.lmpmpi import select_cmd  # noqa: PLC0415
+    from pylammpsmpi.mpi.lmpmpi import select_cmd

76-76: Tighten typing for cmdargs.

Use Optional with element type for better clarity/tooling.

-        cmdargs: list = None,
+        cmdargs: Optional[list[str]] = None,

640-641: Make destructor exception-safe.

Avoid noisy teardown issues at interpreter shutdown.

-        if self._exe is not None:
-            self.close()
+        if self._exe is not None:
+            with contextlib.suppress(Exception):
+                self.close()

316-346: Avoid shadowing built-ins (id, type) in signatures.

Not a blocker, but confusing for readers and tooling.

Consider renaming parameters to atom_ids and atom_types (with deprecation aliasing if this is public API).


145-151: Docstring return types drifted from the new Future-based API.

Several sections still claim “Returns: None” or raw values but now return Futures. Update to reduce user confusion.

Happy to submit a follow-up PR updating these docstrings consistently.

Also applies to: 493-501, 524-529, 307-314, 338-346

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f47095f and bcc80ab.

📒 Files selected for processing (9)
  • .ci_support/environment-old.yml (1 hunks)
  • pylammpsmpi/__init__.py (1 hunks)
  • pylammpsmpi/mpi/lmpmpi.py (3 hunks)
  • pylammpsmpi/wrapper/ase.py (4 hunks)
  • pylammpsmpi/wrapper/base.py (20 hunks)
  • pylammpsmpi/wrapper/concurrent.py (5 hunks)
  • pylammpsmpi/wrapper/extended.py (5 hunks)
  • tests/test_executor.py (2 hunks)
  • tests/test_pylammpsmpi_local.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
tests/test_pylammpsmpi_local.py (3)
pylammpsmpi/wrapper/extended.py (2)
  • LammpsLibrary (242-331)
  • close (316-320)
pylammpsmpi/wrapper/concurrent.py (4)
  • init_function (23-53)
  • file (136-151)
  • version (125-134)
  • close (620-636)
pylammpsmpi/wrapper/base.py (2)
  • file (40-51)
  • version (30-38)
tests/test_executor.py (1)
pylammpsmpi/wrapper/concurrent.py (4)
  • init_function (23-53)
  • shutdown_lmp (65-67)
  • call_function_lmp (56-57)
  • start_lmp (60-62)
pylammpsmpi/wrapper/concurrent.py (2)
pylammpsmpi/mpi/lmpmpi.py (2)
  • select_cmd (374-424)
  • command (369-371)
pylammpsmpi/wrapper/base.py (1)
  • command (375-388)
pylammpsmpi/wrapper/extended.py (3)
pylammpsmpi/wrapper/concurrent.py (4)
  • LammpsConcurrent (70-641)
  • get_thermo (567-582)
  • command (44-51)
  • command (484-504)
pylammpsmpi/wrapper/base.py (3)
  • get_result (21-25)
  • get_thermo (435-450)
  • command (375-388)
pylammpsmpi/mpi/lmpmpi.py (2)
  • get_thermo (318-319)
  • command (369-371)
pylammpsmpi/wrapper/base.py (2)
pylammpsmpi/wrapper/concurrent.py (24)
  • version (125-134)
  • extract_setting (154-155)
  • extract_global (157-171)
  • extract_box (173-182)
  • extract_atom (184-198)
  • extract_fix (200-214)
  • extract_variable (216-230)
  • get_natoms (244-253)
  • set_variable (255-269)
  • has_exceptions (349-358)
  • has_gzip_support (361-370)
  • has_png_support (373-382)
  • has_jpeg_support (385-394)
  • has_ffmpeg_support (397-398)
  • installed_packages (401-402)
  • get_neighlist (409-416)
  • find_pair_neighlist (418-440)
  • find_fix_neighlist (442-453)
  • find_compute_neighlist (455-466)
  • get_neighlist_size (468-477)
  • get_neighlist_element_neighbors (479-482)
  • gather_atoms (506-548)
  • get_thermo (567-582)
  • extract_compute (585-618)
pylammpsmpi/mpi/lmpmpi.py (23)
  • extract_setting (102-103)
  • extract_global (106-107)
  • extract_box (110-111)
  • extract_atom (114-140)
  • extract_fix (143-144)
  • extract_variable (147-164)
  • get_natoms (167-168)
  • set_variable (171-172)
  • has_exceptions (265-266)
  • has_gzip_support (269-270)
  • has_png_support (273-274)
  • has_jpeg_support (277-278)
  • has_ffmpeg_support (281-282)
  • installed_packages (285-286)
  • get_neighlist (294-295)
  • find_pair_neighlist (298-299)
  • find_fix_neighlist (302-303)
  • find_compute_neighlist (306-307)
  • get_neighlist_size (310-311)
  • get_neighlist_element_neighbors (314-315)
  • gather_atoms (180-200)
  • get_thermo (318-319)
  • extract_compute (42-80)
pylammpsmpi/__init__.py (1)
pylammpsmpi/wrapper/concurrent.py (2)
  • LammpsConcurrent (70-641)
  • init_function (23-53)
🪛 Ruff (0.12.2)
pylammpsmpi/wrapper/concurrent.py

24-24: Unused noqa directive (non-enabled: PLC0415)

Remove unused noqa directive

(RUF100)


25-25: Unused noqa directive (non-enabled: PLC0415)

Remove unused noqa directive

(RUF100)


27-27: Unused noqa directive (non-enabled: PLC0415)

Remove unused noqa directive

(RUF100)

pylammpsmpi/wrapper/extended.py

302-302: Consider [name, *list(args)] instead of concatenation

Replace with [name, *list(args)]

(RUF005)

pylammpsmpi/__init__.py

6-6: __all__ is not sorted

Apply an isort-style sorting to __all__

(RUF022)

🔇 Additional comments (12)
.ci_support/environment-old.yml (1)

8-8: Original comment incorrect — executorlib is pinned to 1.7.1 and consistent.

  • pyproject.toml:28 -> "executorlib==1.7.1"
  • binder/environment.yml:10 -> "- executorlib =1.7.1"
  • No references to 1.3.0 or mismatched pins found.

Likely an incorrect or invalid review comment.

pylammpsmpi/mpi/lmpmpi.py (1)

154-157: Root-only data propagation looks good.

Returning a real array only on rank 0 is consistent with the new root-centric model and reduces redundant allocations on workers. Ensure downstream callers don’t rely on non-root data being populated.

pylammpsmpi/wrapper/ase.py (2)

66-69: Passing executor to LammpsBase is correct.

This wires the new execution model without affecting single-core behavior. Looks good.


11-11: Import path verified — BaseExecutor is exported from executorlib v1.3.0 package root.
No change required; from executorlib import BaseExecutor is correct.

tests/test_pylammpsmpi_local.py (1)

9-11: Top-level imports for executor usage look good

Importing SingleNodeExecutor and init_function where needed is correct and aligns with the new API.

pylammpsmpi/__init__.py (1)

3-3: Exporting init_function at the top level is a sensible addition

This unlocks a simpler import path for users/tests.

pylammpsmpi/wrapper/extended.py (2)

252-274: Executor parameter plumbed through correctly

Constructor wiring of executor → LammpsConcurrent is correct and keeps backwards compatibility.


276-306: Confirm callers don't rely on None-returning command/func wrappers

getattr now returns get_result(...) for func_list/command_list entries (instead of possibly None). Search found call sites invoking .file/.command/.clear/.reset_box without .result() — these callers may assume a None return. Update callers/tests to use .result() or restore None-return behavior for side-effect-only commands.

Inspect: tests/test_pylammpsmpi_local.py:21,98-99,128; tests/test_base.py:24,38,101-107; tests/test_concurrent.py:21,87-88,92-93; tests/test_exception.py:51; pylammpsmpi/wrapper/concurrent.py:57.

pylammpsmpi/wrapper/base.py (2)

38-38: LGTM: centralized result handling

Consistent use of get_result across properties and accessors simplifies multi-core vs single-core flows.

Also applies to: 66-66, 80-80, 95-95, 110-110, 124-124, 138-138, 159-159, 173-173, 281-302, 313-314, 335-336, 346-347, 357-359, 368-369, 371-373, 422-424, 450-450, 485-490


1-26: Verify property usage against concurrent layer — confirmed (natoms exception)

pylammpsmpi/wrapper/concurrent.py implements version, has_exceptions, has_gzip_support, has_png_support, has_jpeg_support, has_ffmpeg_support and installed_packages as @Property; natoms is defined as a regular method (def natoms(...)), and LammpsBase calls get_natoms() (not super().natoms), so the natoms item in the original comment is incorrect.

Likely an incorrect or invalid review comment.

pylammpsmpi/wrapper/concurrent.py (2)

100-111: LGTM: sensible SingleNodeExecutor defaults.

block_allocation=True, max_workers=1, and passing cores/cwd/oversubscribe via resource_dict align with the executor-backed design.


56-67: LGTM: thin bridging helpers keep the worker surface minimal.

call_function_lmp, start_lmp, and shutdown_lmp are clear and match the injected lmp object model.

Comment on lines +115 to +117
        if cmdargs is None:
            cmdargs = []
        self._exe.submit(start_lmp, argument_lst=cmdargs).result()

💡 Verification agent

🧩 Analysis chain

Fail fast if an external executor wasn’t initialized with init_function().

If a user supplies an external executor that didn’t bind lmp, start_lmp(...).result() raises a cryptic TypeError. Wrap and re-raise with guidance.

-        self._exe.submit(start_lmp, argument_lst=cmdargs).result()
+        try:
+            self._exe.submit(start_lmp, argument_lst=cmdargs).result()
+        except TypeError as e:
+            # Likely missing injected 'lmp' from init_function()
+            raise ValueError(
+                "Provided executor must be created with init_function() from "
+                "pylammpsmpi.wrapper.concurrent so that 'lmp' is available for "
+                "dependency injection."
+            ) from e

🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "PWD: $(pwd)"
echo

# Try to find the target file
file=$(rg --files | rg 'pylammpsmpi/wrapper/concurrent.py' || true)
if [ -n "$file" ]; then
  echo "Found: $file"
  echo "---- file (first 240 lines) ----"
  nl -ba "$file" | sed -n '1,240p'
else
  echo "pylammpsmpi/wrapper/concurrent.py not found; searching for any concurrent.py files..."
  files=$(rg --files | rg 'concurrent.py' || true)
  if [ -z "$files" ]; then
    echo "No concurrent.py files found in repository."
  else
    for f in $files; do
      echo "---- $f (first 240 lines) ----"
      nl -ba "$f" | sed -n '1,240p'
    done
  fi
fi

echo
echo "---- Searching for start_lmp, init_function, initializer, submit, argument_lst, lmp ----"
rg -n --hidden --no-ignore -S '\bstart_lmp\b' -C5 pylammpsmpi || true
rg -n --hidden --no-ignore -S '\binit_function\b|\binitializer\b|\binitargs\b|\binit_worker\b' -C3 pylammpsmpi || true
rg -n --hidden --no-ignore -S '\.submit\(' -C3 pylammpsmpi || true
rg -n --hidden --no-ignore -S '\bargument_lst\b|\bargument_list\b' -C3 pylammpsmpi || true
rg -n --hidden --no-ignore -S '\blmp\b' -C3 pylammpsmpi || true

Length of output: 140


🏁 Script executed:

#!/bin/bash
# Robust inspection script (avoids 'nl', uses awk/sed, falls back if rg missing)
set -euo pipefail
echo "PWD: $(pwd)"
echo

# find the target file
if command -v rg >/dev/null 2>&1; then
  file=$(rg --files | rg 'pylammpsmpi/wrapper/concurrent.py' || true)
else
  file=$(find . -type f -path '*/pylammpsmpi/wrapper/concurrent.py' -print -quit || true)
fi

if [ -n "$file" ]; then
  echo "Found: $file"
  echo "---- $file (lines 1-260) ----"
  awk '{printf "%5d: %s\n", NR, $0}' "$file" | sed -n '1,260p'
else
  echo "pylammpsmpi/wrapper/concurrent.py not found; searching for any concurrent.py files..."
  if command -v rg >/dev/null 2>&1; then
    files=$(rg --files | rg 'concurrent.py' || true)
  else
    files=$(find . -type f -name concurrent.py || true)
  fi
  if [ -z "$files" ]; then
    echo "No concurrent.py files found in repository."
  else
    for f in $files; do
      echo "---- $f (lines 1-260) ----"
      awk '{printf "%5d: %s\n", NR, $0}' "$f" | sed -n '1,260p'
    done
  fi
fi

echo
echo "---- Searching for definitions/usages: start_lmp, init_function, initializer, submit, argument_lst, lmp ----"
if command -v rg >/dev/null 2>&1; then
  rg -n --hidden --no-ignore -S '\bstart_lmp\b' -C3 || true
  rg -n --hidden --no-ignore -S '\binit_function\b|\binitializer\b|\binit_worker\b' -C3 || true
  rg -n --hidden --no-ignore -S '\.submit\(' -C3 || true
  rg -n --hidden --no-ignore -S '\bargument_lst\b|\bargument_list\b' -C3 || true
  rg -n --hidden --no-ignore -S '\blmp\b' -C3 || true
  rg -n --hidden --no-ignore -S 'def start_lmp\(|def init_function\(' -C3 || true
else
  grep -RIn --exclude-dir=.git -E --line-number 'start_lmp|init_function|initializer|init_worker|\.submit\(|argument_lst|argument_list|\blmp\b|def start_lmp\(|def init_function\(' . || true
fi

Length of output: 108036


Fail fast if external executor wasn't initialized with init_function().

Wrap the startup submit call to catch a missing injected lmp and raise a clear ValueError instead of a cryptic TypeError.
File: pylammpsmpi/wrapper/concurrent.py: lines 115–117

-        self._exe.submit(start_lmp, argument_lst=cmdargs).result()
+        try:
+            self._exe.submit(start_lmp, argument_lst=cmdargs).result()
+        except TypeError as e:
+            # Likely missing injected 'lmp' from init_function()
+            raise ValueError(
+                "Provided executor must be created with init_function() from "
+                "pylammpsmpi.wrapper.concurrent so that 'lmp' is available for "
+                "dependency injection."
+            ) from e
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
        if cmdargs is None:
            cmdargs = []
        self._exe.submit(start_lmp, argument_lst=cmdargs).result()

        if cmdargs is None:
            cmdargs = []
        try:
            self._exe.submit(start_lmp, argument_lst=cmdargs).result()
        except TypeError as e:
            # Likely missing injected 'lmp' from init_function()
            raise ValueError(
                "Provided executor must be created with init_function() from "
                "pylammpsmpi.wrapper.concurrent so that 'lmp' is available for "
                "dependency injection."
            ) from e
🤖 Prompt for AI Agents
In pylammpsmpi/wrapper/concurrent.py around lines 115-117, the call to self._exe.submit(start_lmp, argument_lst=cmdargs).result() can raise a cryptic TypeError when the external executor wasn't initialized with init_function() (missing injected lmp); wrap the submit/result call in a try/except that catches TypeError and raises a clear ValueError instead (include a short message instructing the caller to initialize the executor with init_function and include the original exception as the __cause__ using "from" to preserve traceback). 
Comment on lines 309 to 312
        elif name in prop_list:
            fut = getattr(self.lmp, name)
            return fut.result()
            return get_result(future=fut, cores=self.cores)


🛠️ Refactor suggestion

Harden property dispatch (callable vs. attribute) and ensure natoms works

getattr(self.lmp, name) may return a callable (method) or raise for names like "natoms" if only get_natoms() exists. Call if callable; fall back to get_natoms for natoms. Prevents AttributeError and Future-type mismatches at runtime.

-        elif name in prop_list:
-            fut = getattr(self.lmp, name)
-            return get_result(future=fut, cores=self.cores)
+        elif name in prop_list:
+            # Some entries may be methods returning a Future, others true properties.
+            if name == "natoms" and not hasattr(self.lmp, "natoms"):
+                fut = self.lmp.get_natoms()
+            else:
+                attr = getattr(self.lmp, name)
+                fut = attr() if callable(attr) else attr
+            return get_result(future=fut, cores=self.cores)

If you prefer, I can open a small follow-up PR to apply this change and add a unit test asserting LammpsLibrary().natoms returns an int.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
        elif name in prop_list:
            fut = getattr(self.lmp, name)
            return fut.result()
            return get_result(future=fut, cores=self.cores)

        elif name in prop_list:
            # Some entries may be methods returning a Future, others true properties.
            if name == "natoms" and not hasattr(self.lmp, "natoms"):
                fut = self.lmp.get_natoms()
            else:
                attr = getattr(self.lmp, name)
                fut = attr() if callable(attr) else attr
            return get_result(future=fut, cores=self.cores)
🤖 Prompt for AI Agents
In pylammpsmpi/wrapper/extended.py around lines 309-312, the current dispatch blindly uses getattr(self.lmp, name) which can be a callable or raise for properties like "natoms"; update it to first try getattr(self.lmp, name), if that returns a callable invoke it and pass the result to get_result, and if getattr raises or name == "natoms" and no attribute exists, call getattr(self.lmp, "get_natoms")() as a fallback and pass that integer result to get_result; ensure you only pass the final value (not an uncalled method) into get_result to avoid Future-type mismatches. 
@srmnitc
Member

srmnitc commented Sep 11, 2025

@srmnitc Did you have the chance to test these changes with calphy?

@jan-janssen sorry about the delay from my side. I have tested it now, and everything works fine. Thanks!

@jan-janssen jan-janssen merged commit 8b45941 into main Sep 11, 2025
19 checks passed
@jan-janssen jan-janssen deleted the executorlib branch September 11, 2025 09:22
This was referenced Sep 24, 2025
