Use executorlib directly #371
Conversation
for more information, see https://pre-commit.ci
Walkthrough

The PR introduces an executor-based execution model, adds top-level exports, centralizes Future result handling, adjusts the MPI data return to root-only, updates tests to the new API, and bumps the executorlib dependency pin. Legacy async/queue paths and the standalone MPI wrapper entrypoint are removed.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant L as LammpsLibrary
    participant C as LammpsConcurrent
    participant E as Executor
    participant W as Worker(ParallelLammps)
    User->>L: instantiate(cores, executor=?)
    L->>C: new(cores, executor=?)
    alt no executor provided
        C->>E: create SingleNodeExecutor(workers=1, init=init_function)
        E-->>C: executor ready
    end
    note over E,W: init_function creates ParallelLammps instance
    User->>L: lmp.some_call(args)
    L->>C: submit task(dict with command and args)
    C->>E: submit(call_function_lmp, input_dict)
    E->>W: call_function_lmp(lmp, input_dict)
    W-->>E: result
    E-->>C: Future
    C-->>L: Future
    L->>L: get_result(Future, cores)
    L-->>User: value
    User->>L: close/shutdown
    L->>C: shutdown
    C->>E: submit(shutdown_lmp)
    E->>W: shutdown_lmp(lmp)
    E-->>C: ack
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
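The submit → Future → get_result flow in the diagram above can be sketched with a minimal stand-in: a stdlib `ThreadPoolExecutor` plays the role of executorlib's `SingleNodeExecutor`, and the function names mirror the PR's `call_function_lmp`/`get_result`. The worker body here is a hypothetical echo, not a real LAMMPS binding.

```python
from concurrent.futures import Future, ThreadPoolExecutor

def call_function_lmp(input_dict):
    # Worker side: dispatch on the command name; a real worker would call
    # into LAMMPS here. This echo merely stands in for that call.
    return {"command": input_dict["command"], "n_args": len(input_dict["args"])}

def get_result(future: Future, cores: int):
    # Client side: with cores > 1 the aggregate holds one entry per MPI rank,
    # so only the root rank's value is returned.
    res = future.result()
    return res[0] if cores > 1 else res

with ThreadPoolExecutor(max_workers=1) as exe:
    fut = exe.submit(call_function_lmp, {"command": "get_natoms", "args": []})
    print(get_result(fut, cores=1))  # -> {'command': 'get_natoms', 'n_args': 0}
```

The same client code works unchanged whether the executor runs threads, processes, or MPI-backed workers; only `get_result` needs to know how many ranks contributed.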
Pre-merge checks: ✅ 3 passed
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
pylammpsmpi/wrapper/concurrent.py (2)
277-305: `generate_atoms` → `create_atoms` call is broken (missing `n` and wrong kwarg name)

`generate_atoms()` calls `create_atoms(ids=...)`, but `create_atoms()` expects `id` and a required positional/keyword `n`. This will raise at runtime. Apply this diff to compute `n` and pass correct argument names:

```diff
 def generate_atoms(
     self, ids=None, type=None, x=None, v=None, image=None, shrinkexceed=False
 ) -> Future:
@@
-    return self.create_atoms(
-        ids=ids, type=type, x=x, v=v, image=image, shrinkexceed=shrinkexceed
-    )
+    # Determine number of atoms
+    if ids is not None:
+        n = len(ids)
+    elif x is not None:
+        n = len(x)
+    else:
+        raise TypeError("Either 'ids' or 'x' must be provided to infer atom count.")
+    return self.create_atoms(
+        n=n, id=ids, type=type, x=x, v=v, image=image, shrinkexceed=shrinkexceed
+    )
```
387-394: Fix command name in `installed_packages` property

The `installed_packages` property in `pylammpsmpi/wrapper/concurrent.py` is calling the wrong command string. There is no `get_installed_packages` command registered in the worker's `select_cmd` mapping; only `installed_packages` exists, so this will fail at runtime.

Affected location:
- File: pylammpsmpi/wrapper/concurrent.py
- Lines: 392–393

Proposed diff:

```diff
 @property
 def installed_packages(self):
-    return self._send_and_receive_dict(command="get_installed_packages", data=[])
+    return self._send_and_receive_dict(command="installed_packages", data=[])
```

This ensures the property's command name matches the `installed_packages` function registered in the MPI worker.
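The mismatch is easy to reproduce with a toy version of the worker's dispatch table. The mapping below is a hypothetical miniature, not the real `select_cmd` from `pylammpsmpi/mpi/lmpmpi.py`, but it shows why the client-side command string must match the registered name exactly:

```python
def installed_packages(job, funct_args):
    # Placeholder payload; the real handler queries the LAMMPS instance.
    return ["MANYBODY", "KSPACE"]

_COMMANDS = {"installed_packages": installed_packages}

def select_cmd(name):
    # dict.get returns None for unregistered names; a caller that invokes
    # the lookup result unconditionally then raises TypeError.
    return _COMMANDS.get(name)

print(select_cmd("installed_packages") is None)      # -> False
print(select_cmd("get_installed_packages") is None)  # -> True: the mismatch flagged above
```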
🧹 Nitpick comments (4)
pylammpsmpi/wrapper/concurrent.py (4)
96-109: Executor configuration: sensible defaults; minor robustness nit

Configuration looks correct for a single LAMMPS worker bound to N MPI ranks. Normalizing `cmdargs` before constructing args in the worker is already handled; consider additionally validating that `working_directory` exists before passing it to the executor, to avoid late failures in a remote context. Example:

```diff
-        self.working_directory = working_directory
+        self.working_directory = working_directory
+        if not os.path.isdir(self.working_directory):
+            os.makedirs(self.working_directory, exist_ok=True)
```
140-142: Improve FileNotFoundError message

Include the missing path to aid debugging.

```diff
-        if not os.path.exists(inputfile):
-            raise FileNotFoundError("Input file does not exist")
+        if not os.path.exists(inputfile):
+            raise FileNotFoundError(f"Input file does not exist: {inputfile}")
```
306-338: `create_atoms` signature uses `type`/`id` names; OK but consider avoiding builtins

Using `type` and `id` shadows Python builtins. Not a blocker, but if you can, consider `atom_type` and `atom_id` to reduce confusion (API compatibility permitting).
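As an illustration of the shadowing concern (hypothetical function names, not the project's API): inside a function whose parameters are named `id` and `type`, the builtins of the same name are unreachable for the body of that function.

```python
def make_labels(id=None, type=None):
    # `id` and `type` shadow the builtins in this scope, so a call like
    # type(x) inside this body would invoke the argument, not the builtin.
    return [f"{i}:{t}" for i, t in zip(id, type)]

def make_labels_safe(atom_id=None, atom_type=None):
    # Renamed parameters leave the builtins usable, e.g. for checks:
    assert isinstance(atom_id, list)
    return [f"{i}:{t}" for i, t in zip(atom_id, atom_type)]

print(make_labels(id=[1, 2], type=["H", "O"]))        # -> ['1:H', '2:O']
print(make_labels_safe(atom_id=[1], atom_type=["H"]))  # -> ['1:H']
```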
630-631: Harden `__del__` against partially-initialized instances

If construction fails before `_exe` is set, `__del__` may raise. Use `getattr` with a default.

```diff
-        if self._exe is not None:
+        if getattr(self, "_exe", None) is not None:
             self.close()
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- pylammpsmpi/wrapper/concurrent.py (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
pylammpsmpi/wrapper/concurrent.py (2)
- pylammpsmpi/mpi/lmpmpi.py (2): select_cmd (376-426), command (371-373)
- pylammpsmpi/wrapper/base.py (1): command (426-439)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: unittest_openmpi (3.12)
- GitHub Check: unittest_openmpi (3.11)
- GitHub Check: unittest_mpich (3.13)
- GitHub Check: unittest_openmpi (3.13)
- GitHub Check: unittest_mpich (3.11)
- GitHub Check: unittest_mpich (3.12)
- GitHub Check: unittest_old
- GitHub Check: coverage
🔇 Additional comments (6)
pylammpsmpi/wrapper/concurrent.py (6)
4-9: Imports and executorlib adoption look good

Moving to SingleNodeExecutor and keeping Future typing is clean and keeps the public surface stable.

58-61: Explicit start wrapper is fine

Starting the LAMMPS instance via a dedicated function makes the init path explicit for the executor.

111-113: Submit path is concise and consistent

Returning the Future directly keeps the wrapper behavior consistent with the rest of the codebase.

475-496: Command dispatch is correct and covers list/multiline cases

The routing to commands_string vs command is sound and preserves previous semantics.

529-540: gather_atoms arguments are packed correctly

Appending `len(ids)` before `ids` for subset mode matches the typical LAMMPS FFI calling convention.

548-557: scatter_atoms subset packing looks correct

Mirrors the gather subset pattern; looks good.
```python
def init_function():
    from lammps import lammps  # noqa: PLC0415
    from mpi4py import MPI  # noqa: PLC0415

    from pylammpsmpi.mpi.lmpmpi import select_cmd  # noqa: PLC0415

    class ParallelLammps:
        def __init__(self):
            self._job = None

        def start(self, argument_lst):
            args = ["-screen", "none"]
            if len(argument_lst) > 0:
                args.extend(argument_lst)
            self._job = lammps(cmdargs=args)

        def shutdown(self):
            if self._job is not None:
                self._job.close()

        def command(self, input_dict):
            output = select_cmd(input_dict["command"])(
                job=self._job, funct_args=input_dict["args"]
            )
            if MPI.COMM_WORLD.rank == 0:
                return output
            else:
                return True

    return {"lmp": ParallelLammps()}
```
🛠️ Refactor suggestion
Guard against unknown LAMMPS commands in the worker
If select_cmd(input_dict["command"]) returns None (typo or unsupported command), the current code will raise a TypeError when attempting to call it. Better to fail fast with a clear error.
Apply this diff in ParallelLammps.command:
```diff
-    def command(self, input_dict):
-        output = select_cmd(input_dict["command"])(
-            job=self._job, funct_args=input_dict["args"]
-        )
+    def command(self, input_dict):
+        func = select_cmd(input_dict["command"])
+        if func is None:
+            raise ValueError(f"Unknown LAMMPS command: {input_dict['command']}")
+        output = func(job=self._job, funct_args=input_dict["args"])
         if MPI.COMM_WORLD.rank == 0:
             return output
         else:
             return True
```
```python
        with contextlib.suppress(AttributeError):
            self._exe.submit(shutdown_lmp).result()
        self._exe.shutdown(wait=True)
        self._exe = None
```
🛠️ Refactor suggestion
Make close() resilient to worker errors and always shutdown the executor
If the remote shutdown_lmp raises, executor.shutdown() may be skipped and leak resources. Wrap in try/finally and suppress remote exceptions.
```diff
-        with contextlib.suppress(AttributeError):
-            self._exe.submit(shutdown_lmp).result()
-        self._exe.shutdown(wait=True)
-        self._exe = None
+        try:
+            fut = None
+            with contextlib.suppress(AttributeError):
+                fut = self._exe.submit(shutdown_lmp)
+            if fut is not None:
+                with contextlib.suppress(Exception):
+                    fut.result()
+        finally:
+            with contextlib.suppress(Exception):
+                self._exe.shutdown(wait=True)
+            self._exe = None
```
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #371      +/-   ##
==========================================
+ Coverage   81.26%   84.47%   +3.21%
==========================================
  Files           6        6
  Lines         790      773      -17
==========================================
+ Hits          642      653      +11
+ Misses        148      120      -28
```

☔ View full report in Codecov by Sentry.
@srmnitc and @raynol-dsouza I updated the internals of
Actionable comments posted: 1
♻️ Duplicate comments (2)
pylammpsmpi/wrapper/concurrent.py (2)
44-51: Guard against unknown LAMMPS commands in worker dispatch

Calling the selector result without validation can raise a TypeError for typos/unsupported commands. Fail fast with a clear error.

```diff
-    def command(self, input_dict):
-        output = select_cmd(input_dict["command"])(
-            job=self._job, funct_args=input_dict["args"]
-        )
+    def command(self, input_dict):
+        func = select_cmd(input_dict["command"])
+        if func is None:
+            raise ValueError(f"Unknown LAMMPS command: {input_dict['command']}")
+        output = func(job=self._job, funct_args=input_dict["args"])
         if MPI.COMM_WORLD.rank == 0:
             return output
         else:
             return True
```
630-634: Always shutdown the executor even if remote shutdown raises

If submit()/result() raises, executor.shutdown may be skipped, leaking resources. Use try/finally and suppress remote errors.

```diff
-        with contextlib.suppress(AttributeError):
-            self._exe.submit(shutdown_lmp).result()
-        self._exe.shutdown(wait=True)
-        self._exe = None
+        try:
+            fut = None
+            with contextlib.suppress(AttributeError):
+                fut = self._exe.submit(shutdown_lmp)
+            if fut is not None:
+                with contextlib.suppress(Exception):
+                    fut.result()
+        finally:
+            with contextlib.suppress(Exception):
+                self._exe.shutdown(wait=True)
+            self._exe = None
```
🧹 Nitpick comments (10)
pylammpsmpi/mpi/lmpmpi.py (1)
427-433: Prefer itertools.chain for flattening gathered lists

Slight readability/perf win and avoids building an intermediate list in Python loops. Apply within this function:

```diff
 def _gather_data_from_all_processors(data):
-    data_gather = MPI.COMM_WORLD.gather(data, root=0)
-    if MPI.COMM_WORLD.rank == 0:
-        return [v for vl in data_gather for v in vl]
+    data_gather = MPI.COMM_WORLD.gather(data, root=0)
+    if MPI.COMM_WORLD.rank == 0:
+        return list(chain.from_iterable(data_gather))
     else:
         return []
```

And add this import at the top of the file:

```python
from itertools import chain
```

pylammpsmpi/wrapper/concurrent.py (4)
147-149: Use isfile to avoid directories passing the existence check

More precise check; prevents passing a directory path.

```diff
-        if not os.path.exists(inputfile):
+        if not os.path.isfile(inputfile):
             raise FileNotFoundError("Input file does not exist")
```
23-53: Optional: add a barrier during shutdown for MPI ranks

Not strictly necessary, but a barrier can make shutdown sequencing more predictable across ranks in tightly coupled runs.

```diff
     def shutdown(self):
         if self._job is not None:
-            self._job.close()
+            self._job.close()
+            MPI.COMM_WORLD.Barrier()
         self._job = None
```
495-503: Accept tuple-of-commands and avoid split cost for single-line strings

Minor ergonomics/readability improvement.

```diff
-        if isinstance(cmd, list):
-            return self._send_and_receive_dict(
-                command="commands_string", data="\n".join(cmd)
-            )
-        elif len(cmd.split("\n")) > 1:
+        if isinstance(cmd, (list, tuple)):
+            return self._send_and_receive_dict(
+                command="commands_string", data="\n".join(cmd)
+            )
+        elif "\n" in cmd:
             return self._send_and_receive_dict(command="commands_string", data=cmd)
         else:
             return self._send_and_receive_dict(command="command", data=cmd)
```

tests/test_executor.py (1)
56-66: Future-proof the version assertion

Hard-coding a finite list of release dates will make the test brittle when a new LAMMPS version drops. Assert shape/type and a sane lower bound instead.

```diff
-        self.assertTrue(
-            call_function_lmp(
-                lmp=lmp, input_dict={"command": "get_version", "args": []}
-            )
-            in [20220623, 20230802, 20231121, 20240207, 20240627, 20240829]
-        )
+        ver = call_function_lmp(
+            lmp=lmp, input_dict={"command": "get_version", "args": []}
+        )
+        self.assertIsInstance(ver, int)
+        # LAMMPS versions are YYYYMMDD integers; accept reasonably recent builds.
+        self.assertGreaterEqual(ver, 20160101)
```

pylammpsmpi/wrapper/ase.py (3)
67-69: Avoid passing None for working_directory

If working_directory is None, some backends may expect a string path and raise. Safest to default to "." here. Apply this minimal diff:

```diff
-            working_directory=working_directory,
+            working_directory=working_directory or ".",
```

If you prefer, I can run a repo-wide check to confirm whether LammpsBase tolerates None.
29-29: Clarify/guard executor usage when cores == 1

Executor is only used for cores > 1; if a caller passes executor with cores == 1 it is silently ignored. Either document this explicitly or warn once to avoid confusion.

```diff
-    executor: Executor to use for parallel execution (default: None)
+    executor: Executor to use for parallel execution when cores > 1 (default: None)
```

Outside the changed lines, consider:

```python
# near the start of __init__
if executor is not None and self._cores == 1:
    warnings.warn("executor is ignored for cores == 1 in LammpsASELibrary.", stacklevel=2)
```

Also applies to: 41-41
11-11: Optional: make BaseExecutor import type-only to keep soft dependency

If you want executorlib to be optional at runtime for users who never touch the multi-core path, wrap the import under TYPE_CHECKING and quote the annotation. Otherwise, the current direct import is fine given the PR's direction.
tests/test_pylammpsmpi_local.py (1)
137-141: Future-proof the version assertion

Enumerating fixed version IDs is brittle and will fail when newer LAMMPS releases land. Assert type and a lower bound instead.

```diff
-        self.assertTrue(
-            self.lmp.version
-            in [20220623, 20230802, 20231121, 20240207, 20240627, 20240829]
-        )
+        v = self.lmp.version
+        self.assertIsInstance(v, int)
+        self.assertGreaterEqual(v, 20220623)
```

Optional follow-up: apply the same assertion style to TestLocalLammpsLibrary.test_version for consistency.
pylammpsmpi/wrapper/extended.py (1)
242-253: Docstring tweak to reflect backend and new parameterDocstring still says “provided by LammpsBase” while this class composes LammpsConcurrent (via LammpsBase elsewhere). Also, add the cores > 1 note for executor to match behavior.
- Top level class which manages the lammps library provided by LammpsBase + Top level class which manages the LAMMPS library via the concurrent backend. @@ - executor: Executor to use for parallel execution (default: None) + executor: Executor to use for parallel execution when cores > 1 (default: None)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (9)
- .ci_support/environment-old.yml (1 hunks)
- pylammpsmpi/__init__.py (1 hunks)
- pylammpsmpi/mpi/lmpmpi.py (3 hunks)
- pylammpsmpi/wrapper/ase.py (4 hunks)
- pylammpsmpi/wrapper/base.py (20 hunks)
- pylammpsmpi/wrapper/concurrent.py (5 hunks)
- pylammpsmpi/wrapper/extended.py (5 hunks)
- tests/test_executor.py (2 hunks)
- tests/test_pylammpsmpi_local.py (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- .ci_support/environment-old.yml
🧰 Additional context used
🧬 Code graph analysis (6)
pylammpsmpi/__init__.py (2)
- pylammpsmpi/wrapper/concurrent.py (2): LammpsConcurrent (70-638), init_function (23-53)
- pylammpsmpi/wrapper/extended.py (1): LammpsLibrary (242-331)
tests/test_executor.py (1)
- pylammpsmpi/wrapper/concurrent.py (4): init_function (23-53), shutdown_lmp (65-67), call_function_lmp (56-57), start_lmp (60-62)
tests/test_pylammpsmpi_local.py (3)
- pylammpsmpi/wrapper/extended.py (2): LammpsLibrary (242-331), close (316-320)
- pylammpsmpi/wrapper/concurrent.py (4): init_function (23-53), file (134-149), close (618-633), version (123-132)
- pylammpsmpi/wrapper/base.py (2): file (40-51), version (30-38)
pylammpsmpi/wrapper/base.py (2)
- pylammpsmpi/wrapper/concurrent.py (24): version (123-132), extract_setting (152-153), extract_global (155-169), extract_box (171-180), extract_atom (182-196), extract_fix (198-212), extract_variable (214-228), get_natoms (242-251), set_variable (253-267), has_exceptions (347-356), has_gzip_support (359-368), has_png_support (371-380), has_jpeg_support (383-392), has_ffmpeg_support (395-396), installed_packages (399-400), get_neighlist (407-414), find_pair_neighlist (416-438), find_fix_neighlist (440-451), find_compute_neighlist (453-464), get_neighlist_size (466-475), get_neighlist_element_neighbors (477-480), gather_atoms (504-546), get_thermo (565-580), extract_compute (583-616)
- pylammpsmpi/mpi/lmpmpi.py (23): extract_setting (102-103), extract_global (106-107), extract_box (110-111), extract_atom (114-140), extract_fix (143-144), extract_variable (147-164), get_natoms (167-168), set_variable (171-172), has_exceptions (265-266), has_gzip_support (269-270), has_png_support (273-274), has_jpeg_support (277-278), has_ffmpeg_support (281-282), installed_packages (285-286), get_neighlist (294-295), find_pair_neighlist (298-299), find_fix_neighlist (302-303), find_compute_neighlist (306-307), get_neighlist_size (310-311), get_neighlist_element_neighbors (314-315), gather_atoms (180-200), get_thermo (318-319), extract_compute (42-80)
pylammpsmpi/wrapper/extended.py (3)
- pylammpsmpi/wrapper/concurrent.py (4): LammpsConcurrent (70-638), get_thermo (565-580), command (44-51), command (482-502)
- pylammpsmpi/wrapper/base.py (3): get_result (21-25), get_thermo (435-450), command (375-388)
- pylammpsmpi/mpi/lmpmpi.py (2): get_thermo (318-319), command (369-371)
pylammpsmpi/wrapper/concurrent.py (2)
- pylammpsmpi/mpi/lmpmpi.py (2): select_cmd (374-424), command (369-371)
- pylammpsmpi/wrapper/base.py (1): command (426-439)
🔇 Additional comments (12)
pylammpsmpi/mpi/lmpmpi.py (2)
75-78: Rank-conditional return: verify downstream expectations and document contract

For style==1, rank 0 returns converted data while non-root returns the raw gathered placeholder (currently []). This is fine with the executor path (which only consumes rank 0), but direct usage of pylammpsmpi.mpi.lmpmpi without the wrapper may observe empty returns on non-root ranks.
- Please add a brief note to the function docstring clarifying “root-only meaningful return”.
- Confirm no callers expect a meaningful value on non-root ranks. If any do, consider returning a consistent sentinel (e.g., None) to make the contract explicit.
154-157: Per-variable extraction now root-only: ensure compatibility with existing callers

When funct_args[2] == 1, non-root ranks now return an empty array. This aligns with the executor aggregation pattern, but could break direct MPI-rank-local consumers.
- Double-check downstream usage (especially in any legacy MPI paths) and add docstring notes mirroring extract_compute.
pylammpsmpi/wrapper/base.py (1)
38-38: Consistent centralization via get_result is a nice cleanup

Great use of get_result across the API to abstract executor aggregation details. This simplifies call sites and keeps MPI/executor specifics localized.
Also applies to: 66-66, 80-80, 95-95, 110-110, 124-124, 138-138, 159-159, 173-173, 281-281, 285-285, 289-289, 293-293, 297-297, 301-301, 313-313, 335-335, 368-368, 371-373, 422-424, 450-450, 485-490
pylammpsmpi/wrapper/concurrent.py (1)
100-116: Executor injection and resource wiring give good extensibility

Allowing an external executor and passing MPI-related resources cleanly through resource_dict makes the design flexible without complicating call sites.
pylammpsmpi/__init__.py (1)
3-3: Publicly exporting init_function looks good

This aligns the top-level API with the new executor-based workflow and simplifies test and user access.
Also applies to: 6-6
tests/test_executor.py (1)
8-13: Direct import of new concurrency helpers looks good

Tests now exercise the start/call/shutdown flow directly; this provides good coverage of the executor-side worker surface.
pylammpsmpi/wrapper/ase.py (1)
66-69: Executor wiring for multi-core path looks good

Passing the executor through to LammpsBase in the multi-core branch looks correct and aligns with the PR objective to use executorlib directly.
tests/test_pylammpsmpi_local.py (2)
112-131: Executor-backed path coverage looks good

Good addition to ensure the executor-backed LammpsLibrary path initializes, loads input, and returns version under SingleNodeExecutor.
9-11: init_function export verified
- `pylammpsmpi/__init__.py` imports `init_function` from `pylammpsmpi.wrapper.concurrent` and includes it in `__all__`.
- The actual `def init_function():` implementation exists in `pylammpsmpi/wrapper/concurrent.py` (line 23).

No further changes needed.
pylammpsmpi/wrapper/extended.py (3)
6-9: Imports look good

Pulling BaseExecutor and central get_result into this module is appropriate and keeps async handling consistent.
262-274: Executor parameter propagation looks good

Accepting an Optional[BaseExecutor] and passing it through to LammpsConcurrent aligns with the executorlib integration plan.
286-312: Centralized Future handling via get_result: check all call sites return Future

The wrappers now uniformly call get_result(future=..., cores=self.cores). Ensure every attribute/method referenced here in LammpsConcurrent returns a Future; otherwise get_result will raise.
If any of these return immediate values instead of Futures, we can add a small adapter in LammpsConcurrent or guard in get_result.
```python
def get_result(future: Future, cores: int) -> Any:
    if cores > 1:
        return future.result()[0]
    else:
        return future.result()
```
🛠️ Refactor suggestion
Harden get_result against non-list results and empty aggregates
Future results from custom/external executors or edge cases may not be indexable or could be empty. Guard the access to avoid IndexError/TypeError.
```diff
-def get_result(future: Future, cores: int) -> Any:
-    if cores > 1:
-        return future.result()[0]
-    else:
-        return future.result()
+def get_result(future: Future, cores: int) -> Any:
+    res = future.result()
+    if cores > 1 and isinstance(res, (list, tuple)) and len(res) > 0:
+        return res[0]
+    return res
```
@jan-janssen I did a preliminary check, and all of my scripts work as expected. I will update later this week if something breaks. :)
@jan-janssen I have not gotten around to testing it yet. Quick question: does anything change for the mode - local?
Yes, the serial execution is not affected but whenever we use MPI parallel LAMMPS it is now handled via executorlib. |
@jan-janssen thanks. This definitely needs to be checked from the calphy side. I will try to do this by tomorrow EOD. I hope that's ok.

@srmnitc Did you have the chance to test these changes with calphy?
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pylammpsmpi/wrapper/concurrent.py (1)
149-151: Don’t check file existence locally for remote executors.
`os.path.exists(inputfile)` runs on the client, not on the remote worker (Slurm/Flux). This wrongly blocks valid remote paths.

```diff
-        if not os.path.exists(inputfile):
-            raise FileNotFoundError("Input file does not exist")
+        # Skip local existence checks; worker CWD may differ. LAMMPS will error if missing remotely.
```
♻️ Duplicate comments (3)
pylammpsmpi/wrapper/base.py (1)
21-26: Harden get_result against non-sequence/empty results

Repeat of earlier feedback; guards avoid IndexError/TypeError with custom executors or edge cases.

```diff
-def get_result(future: Future, cores: int) -> Any:
-    if cores > 1:
-        return future.result()[0]
-    else:
-        return future.result()
+def get_result(future: Future, cores: int) -> Any:
+    res = future.result()
+    if cores > 1 and isinstance(res, (list, tuple)) and len(res) > 0:
+        return res[0]
+    return res
```

pylammpsmpi/wrapper/concurrent.py (2)
44-51: Guard against unknown LAMMPS commands to avoid TypeError

Calling the result of `select_cmd(...)` without checking can explode if it returns None.

```diff
-    def command(self, input_dict):
-        output = select_cmd(input_dict["command"])(
-            job=self._job, funct_args=input_dict["args"]
-        )
+    def command(self, input_dict):
+        func = select_cmd(input_dict["command"])
+        if func is None:
+            raise ValueError(f"Unknown LAMMPS command: {input_dict['command']}")
+        output = func(job=self._job, funct_args=input_dict["args"])
         if MPI.COMM_WORLD.rank == 0:
             return output
         else:
             return True
```
632-636: Always shut down the executor even if remote shutdown raises

If submit()/result() raises, executor.shutdown may be skipped, leaking resources. Use try/finally and suppress remote errors.

```diff
-        with contextlib.suppress(AttributeError):
-            self._exe.submit(shutdown_lmp).result()
-        if not self._external_executor:
-            self._exe.shutdown(wait=True)
-        self._exe = None
+        try:
+            fut = None
+            with contextlib.suppress(AttributeError):
+                fut = self._exe.submit(shutdown_lmp)
+            if fut is not None:
+                with contextlib.suppress(Exception):
+                    fut.result()
+        finally:
+            if not self._external_executor:
+                with contextlib.suppress(Exception):
+                    self._exe.shutdown(wait=True)
+            self._exe = None
```
🧹 Nitpick comments (13)
pylammpsmpi/mpi/lmpmpi.py (2)
75-78: Non-root return type inconsistency (list vs ndarray).

extract_compute(style=1) now returns a Python list on non-root, while extract_variable returns an empty ndarray. Suggest making both return np.ndarray for consistency.
```diff
-    else:
-        return val
+    else:
+        # Keep type consistent with extract_variable non-root path
+        return np.array([])
```
427-433: Make gather robust and memory-friendly; handle empty partitions.

The list-flatten approach can be slow and awkward for 2D arrays; also guard against the “all-empty” case.
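A numpy-free sketch of the suggested empty-partition handling (`gather_root` is a hypothetical stand-in for the root-rank branch; the real code would filter ndarrays by `.size` and use `np.concatenate`):

```python
def gather_root(data_gather):
    # Root-rank branch: skip empty per-rank partitions, then flatten the
    # remainder in rank order (np.concatenate does this for ndarrays).
    non_empty = [part for part in data_gather if len(part) > 0]
    if not non_empty:
        return []
    return [value for part in non_empty for value in part]

print(gather_root([[0, 1, 2], [], [3, 4]]))  # [0, 1, 2, 3, 4]
print(gather_root([[], []]))                 # []
```

The key behaviors are the same as in the proposed diff: empty local partitions never break the merge, and an all-empty gather returns an empty container instead of raising.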
```diff
 def _gather_data_from_all_processors(data):
     data_gather = MPI.COMM_WORLD.gather(data, root=0)
     if MPI.COMM_WORLD.rank == 0:
-        return [v for vl in data_gather for v in vl]
+        # Tolerate empty local partitions and return ndarray when possible
+        non_empty = [a for a in data_gather if hasattr(a, "size") and a.size > 0]
+        if not non_empty:
+            return np.array([])
+        return np.concatenate(non_empty, axis=0)
     else:
         return []
```

pylammpsmpi/wrapper/ase.py (1)
29-30: Docstring: add explicit type for executor.

Minor clarity: match signature types in the docstring.
```diff
-        executor: Executor to use for parallel execution (default: None)
+        executor (Optional[BaseExecutor]): Executor to use for parallel execution (default: None).
```
112-134: Tighten assertion style and avoid version-list duplication
- Prefer assertIn over membership checks for clearer failures.
Apply within this hunk:
```diff
-        self.assertTrue(
-            lmp.version
-            in [20220623, 20230802, 20231121, 20240207, 20240627, 20240829]
-        )
+        self.assertIn(
+            lmp.version,
+            [20220623, 20230802, 20231121, 20240207, 20240627, 20240829],
+        )
```

Also consider updating the earlier TestLocalLammpsLibrary.test_version (Lines 27–31) similarly to keep consistency.
pylammpsmpi/__init__.py (1)
6-6: Sort `__all__` (RUF022) for consistency

Minor nit to satisfy tooling and keep exports tidy.
```diff
-__all__ = ["LammpsLibrary", "LammpsConcurrent", "LammpsBase", "init_function"]
+__all__ = ["LammpsBase", "LammpsConcurrent", "LammpsLibrary", "init_function"]
```
56-67: Use assertIn for the version check

Improves readability and failure messages.
```diff
-        self.assertTrue(
-            call_function_lmp(
-                lmp=lmp, input_dict={"command": "get_version", "args": []}
-            )
-            in [20220623, 20230802, 20231121, 20240207, 20240627, 20240829]
-        )
+        self.assertIn(
+            call_function_lmp(
+                lmp=lmp, input_dict={"command": "get_version", "args": []}
+            ),
+            [20220623, 20230802, 20231121, 20240207, 20240627, 20240829],
+        )
```
301-306: Small list build nit (RUF005): prefer unpacking over concatenation

More idiomatic and a touch faster.
```diff
-        args = [name] + list(args)
+        args = [name, *args]
```
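Both spellings build the same list; a quick check (the values are illustrative, not from the codebase):

```python
name = "run"
args = (10, "post", "no")

# Concatenation vs. unpacking: identical result, the second form is the
# Ruff-preferred (RUF005) and slightly faster spelling.
a = [name] + list(args)
b = [name, *args]

print(a == b)  # True
print(b)       # ['run', 10, 'post', 'no']
```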
452-490: Avoid shadowing built-in name 'type'

Not urgent, but consider renaming the parameter for readability and lint happiness (e.g., value_type).
pylammpsmpi/wrapper/concurrent.py (5)
24-27: Drop unused noqa directives (Ruff RUF100).

These `# noqa: PLC0415` markers are flagged as unused. Remove them to satisfy linting.

```diff
-    from lammps import lammps  # noqa: PLC0415
-    from mpi4py import MPI  # noqa: PLC0415
+    from lammps import lammps
+    from mpi4py import MPI

-    from pylammpsmpi.mpi.lmpmpi import select_cmd  # noqa: PLC0415
+    from pylammpsmpi.mpi.lmpmpi import select_cmd
```
76-76: Tighten typing for cmdargs.

Use Optional with element type for better clarity/tooling.
```diff
-        cmdargs: list = None,
+        cmdargs: Optional[list[str]] = None,
```
640-641: Make destructor exception-safe.

Avoid noisy teardown issues at interpreter shutdown.
```diff
-        if self._exe is not None:
-            self.close()
+        if self._exe is not None:
+            with contextlib.suppress(Exception):
+                self.close()
```
316-346: Avoid shadowing built-ins (`id`, `type`) in signatures.

Not a blocker, but confusing for readers and tooling. Consider renaming parameters to `atom_ids` and `atom_types` (with deprecation aliasing if this is public API).
145-151: Docstring return types drifted from the new Future-based API.

Several sections still claim “Returns: None” or raw values but now return `Future`s. Update to reduce user confusion.

Happy to submit a follow-up PR updating these docstrings consistently.
Also applies to: 493-501, 524-529, 307-314, 338-346
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- .ci_support/environment-old.yml (1 hunks)
- pylammpsmpi/__init__.py (1 hunks)
- pylammpsmpi/mpi/lmpmpi.py (3 hunks)
- pylammpsmpi/wrapper/ase.py (4 hunks)
- pylammpsmpi/wrapper/base.py (20 hunks)
- pylammpsmpi/wrapper/concurrent.py (5 hunks)
- pylammpsmpi/wrapper/extended.py (5 hunks)
- tests/test_executor.py (2 hunks)
- tests/test_pylammpsmpi_local.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (6)
tests/test_pylammpsmpi_local.py (3)
pylammpsmpi/wrapper/extended.py (2)
- LammpsLibrary (242-331)
- close (316-320)

pylammpsmpi/wrapper/concurrent.py (4)

- init_function (23-53)
- file (136-151)
- version (125-134)
- close (620-636)

pylammpsmpi/wrapper/base.py (2)

- file (40-51)
- version (30-38)
tests/test_executor.py (1)
pylammpsmpi/wrapper/concurrent.py (4)
- init_function (23-53)
- shutdown_lmp (65-67)
- call_function_lmp (56-57)
- start_lmp (60-62)
pylammpsmpi/wrapper/concurrent.py (2)
pylammpsmpi/mpi/lmpmpi.py (2)
- select_cmd (374-424)
- command (369-371)

pylammpsmpi/wrapper/base.py (1)

- command (375-388)
pylammpsmpi/wrapper/extended.py (3)
pylammpsmpi/wrapper/concurrent.py (4)
- LammpsConcurrent (70-641)
- get_thermo (567-582)
- command (44-51)
- command (484-504)

pylammpsmpi/wrapper/base.py (3)

- get_result (21-25)
- get_thermo (435-450)
- command (375-388)

pylammpsmpi/mpi/lmpmpi.py (2)

- get_thermo (318-319)
- command (369-371)
pylammpsmpi/wrapper/base.py (2)
pylammpsmpi/wrapper/concurrent.py (24)
- version (125-134)
- extract_setting (154-155)
- extract_global (157-171)
- extract_box (173-182)
- extract_atom (184-198)
- extract_fix (200-214)
- extract_variable (216-230)
- get_natoms (244-253)
- set_variable (255-269)
- has_exceptions (349-358)
- has_gzip_support (361-370)
- has_png_support (373-382)
- has_jpeg_support (385-394)
- has_ffmpeg_support (397-398)
- installed_packages (401-402)
- get_neighlist (409-416)
- find_pair_neighlist (418-440)
- find_fix_neighlist (442-453)
- find_compute_neighlist (455-466)
- get_neighlist_size (468-477)
- get_neighlist_element_neighbors (479-482)
- gather_atoms (506-548)
- get_thermo (567-582)
- extract_compute (585-618)

pylammpsmpi/mpi/lmpmpi.py (23)

- extract_setting (102-103)
- extract_global (106-107)
- extract_box (110-111)
- extract_atom (114-140)
- extract_fix (143-144)
- extract_variable (147-164)
- get_natoms (167-168)
- set_variable (171-172)
- has_exceptions (265-266)
- has_gzip_support (269-270)
- has_png_support (273-274)
- has_jpeg_support (277-278)
- has_ffmpeg_support (281-282)
- installed_packages (285-286)
- get_neighlist (294-295)
- find_pair_neighlist (298-299)
- find_fix_neighlist (302-303)
- find_compute_neighlist (306-307)
- get_neighlist_size (310-311)
- get_neighlist_element_neighbors (314-315)
- gather_atoms (180-200)
- get_thermo (318-319)
- extract_compute (42-80)
pylammpsmpi/__init__.py (1)
pylammpsmpi/wrapper/concurrent.py (2)
- LammpsConcurrent (70-641)
- init_function (23-53)
🪛 Ruff (0.12.2)
pylammpsmpi/wrapper/concurrent.py
24-24: Unused noqa directive (non-enabled: PLC0415)
Remove unused noqa directive
(RUF100)
25-25: Unused noqa directive (non-enabled: PLC0415)
Remove unused noqa directive
(RUF100)
27-27: Unused noqa directive (non-enabled: PLC0415)
Remove unused noqa directive
(RUF100)
pylammpsmpi/wrapper/extended.py
302-302: Consider [name, *list(args)] instead of concatenation
Replace with [name, *list(args)]
(RUF005)
pylammpsmpi/__init__.py
6-6: __all__ is not sorted
Apply an isort-style sorting to __all__
(RUF022)
🔇 Additional comments (12)
.ci_support/environment-old.yml (1)
8-8: Original comment incorrect — executorlib is pinned to 1.7.1 and consistent.
- pyproject.toml:28 -> "executorlib==1.7.1"
- binder/environment.yml:10 -> "- executorlib =1.7.1"
- No references to 1.3.0 or mismatched pins found.
Likely an incorrect or invalid review comment.
pylammpsmpi/mpi/lmpmpi.py (1)
154-157: Root-only data propagation looks good.

Returning a real array only on rank 0 is consistent with the new root-centric model and reduces redundant allocations on workers. Ensure downstream callers don’t rely on non-root data being populated.
pylammpsmpi/wrapper/ase.py (2)
66-69: Passing executor to LammpsBase is correct.

This wires the new execution model without affecting single-core behavior. Looks good.
11-11: Import path verified — BaseExecutor is exported from the executorlib v1.3.0 package root.

No change required; `from executorlib import BaseExecutor` is correct.
9-11: Top-level imports for executor usage look good

Importing SingleNodeExecutor and init_function where needed is correct and aligns with the new API.
pylammpsmpi/__init__.py (1)
3-3: Exporting init_function at the top level is a sensible addition

This unlocks a simpler import path for users/tests.
pylammpsmpi/wrapper/extended.py (2)
252-274: Executor parameter plumbed through correctly

Constructor wiring of executor → LammpsConcurrent is correct and keeps backwards compatibility.
276-306: Confirm callers don't rely on None-returning command/func wrappers

getattr now returns get_result(...) for func_list/command_list entries (instead of possibly None). Search found call sites invoking .file/.command/.clear/.reset_box without .result() — these callers may assume a None return. Update callers/tests to use .result() or restore None-return behavior for side-effect-only commands.
Inspect: tests/test_pylammpsmpi_local.py:21,98-99,128; tests/test_base.py:24,38,101-107; tests/test_concurrent.py:21,87-88,92-93; tests/test_exception.py:51; pylammpsmpi/wrapper/concurrent.py:57.
pylammpsmpi/wrapper/base.py (2)
38-38: LGTM: centralized result handling

Consistent use of get_result across properties and accessors simplifies multi-core vs single-core flows.
Also applies to: 66-66, 80-80, 95-95, 110-110, 124-124, 138-138, 159-159, 173-173, 281-302, 313-314, 335-336, 346-347, 357-359, 368-369, 371-373, 422-424, 450-450, 485-490
1-26: Verify property usage against concurrent layer — confirmed (natoms exception)

pylammpsmpi/wrapper/concurrent.py implements version, has_exceptions, has_gzip_support, has_png_support, has_jpeg_support, has_ffmpeg_support, and installed_packages as @property; natoms is defined as a regular method (def natoms(...)), and LammpsBase calls get_natoms() (not super().natoms), so the natoms item in the original comment is incorrect.
Likely an incorrect or invalid review comment.
pylammpsmpi/wrapper/concurrent.py (2)
100-111: LGTM: sensible SingleNodeExecutor defaults.
`block_allocation=True`, `max_workers=1`, and passing cores/cwd/oversubscribe via `resource_dict` align with the executor-backed design.
56-67: LGTM: thin bridging helpers keep the worker surface minimal.
`call_function_lmp`, `start_lmp`, and `shutdown_lmp` are clear and match the injected `lmp` object model.
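The worker-side injection pattern these helpers rely on can be sketched with a stock `ThreadPoolExecutor` (all names here are illustrative stand-ins for the pylammpsmpi/executorlib machinery, and a plain dict stands in for the LAMMPS instance):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the per-worker state that init_function() binds.
_worker_state = {}

def init_function():
    # Runs once per worker before any task; binds the long-lived object
    # ("lmp" in pylammpsmpi).
    _worker_state["lmp"] = {"version": 20240829}

def call_function(input_dict):
    # Thin bridge: dispatch one command dict against the injected object.
    lmp = _worker_state["lmp"]
    if input_dict["command"] == "get_version":
        return lmp["version"]
    raise ValueError(f"Unknown command: {input_dict['command']}")

with ThreadPoolExecutor(max_workers=1, initializer=init_function) as exe:
    fut = exe.submit(call_function, {"command": "get_version", "args": []})
    print(fut.result())  # 20240829
```

The design point is that the expensive object lives for the worker's lifetime, while each submitted task is a small stateless dispatch against it — the same shape as `call_function_lmp` operating on the injected `lmp`.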
```python
        if cmdargs is None:
            cmdargs = []
        self._exe.submit(start_lmp, argument_lst=cmdargs).result()
```
💡 Verification agent
🧩 Analysis chain
Fail fast if an external executor wasn’t initialized with init_function().
If a user supplies an external executor that didn’t bind `lmp`, `start_lmp(...).result()` raises a cryptic TypeError. Wrap and re-raise with guidance.

```diff
-        self._exe.submit(start_lmp, argument_lst=cmdargs).result()
+        try:
+            self._exe.submit(start_lmp, argument_lst=cmdargs).result()
+        except TypeError as e:
+            # Likely missing injected 'lmp' from init_function()
+            raise ValueError(
+                "Provided executor must be created with init_function() from "
+                "pylammpsmpi.wrapper.concurrent so that 'lmp' is available for "
+                "dependency injection."
+            ) from e
```

🏁 Script executed:
- self._exe.submit(start_lmp, argument_lst=cmdargs).result() + try: + self._exe.submit(start_lmp, argument_lst=cmdargs).result() + except TypeError as e: + # Likely missing injected 'lmp' from init_function() + raise ValueError( + "Provided executor must be created with init_function() from " + "pylammpsmpi.wrapper.concurrent so that 'lmp' is available for " + "dependency injection." + ) from e🏁 Script executed:
#!/bin/bash set -euo pipefail echo "PWD: $(pwd)" echo # Try to find the target file file=$(rg --files | rg 'pylammpsmpi/wrapper/concurrent.py' || true) if [ -n "$file" ]; then echo "Found: $file" echo "---- file (first 240 lines) ----" nl -ba "$file" | sed -n '1,240p' else echo "pylammpsmpi/wrapper/concurrent.py not found; searching for any concurrent.py files..." files=$(rg --files | rg 'concurrent.py' || true) if [ -z "$files" ]; then echo "No concurrent.py files found in repository." else for f in $files; do echo "---- $f (first 240 lines) ----" nl -ba "$f" | sed -n '1,240p' done fi fi echo echo "---- Searching for start_lmp, init_function, initializer, submit, argument_lst, lmp ----" rg -n --hidden --no-ignore -S '\bstart_lmp\b' -C5 pylammpsmpi || true rg -n --hidden --no-ignore -S '\binit_function\b|\binitializer\b|\binitargs\b|\binit_worker\b' -C3 pylammpsmpi || true rg -n --hidden --no-ignore -S '\.submit\(' -C3 pylammpsmpi || true rg -n --hidden --no-ignore -S '\bargument_lst\b|\bargument_list\b' -C3 pylammpsmpi || true rg -n --hidden --no-ignore -S '\blmp\b' -C3 pylammpsmpi || trueLength of output: 140
🏁 Script executed:
```bash
#!/bin/bash
# Robust inspection script (avoids 'nl', uses awk/sed, falls back if rg missing)
set -euo pipefail
echo "PWD: $(pwd)"
echo
# find the target file
if command -v rg >/dev/null 2>&1; then
  file=$(rg --files | rg 'pylammpsmpi/wrapper/concurrent.py' || true)
else
  file=$(find . -type f -path '*/pylammpsmpi/wrapper/concurrent.py' -print -quit || true)
fi
if [ -n "$file" ]; then
  echo "Found: $file"
  echo "---- $file (lines 1-260) ----"
  awk '{printf "%5d: %s\n", NR, $0}' "$file" | sed -n '1,260p'
else
  echo "pylammpsmpi/wrapper/concurrent.py not found; searching for any concurrent.py files..."
  if command -v rg >/dev/null 2>&1; then
    files=$(rg --files | rg 'concurrent.py' || true)
  else
    files=$(find . -type f -name concurrent.py || true)
  fi
  if [ -z "$files" ]; then
    echo "No concurrent.py files found in repository."
  else
    for f in $files; do
      echo "---- $f (lines 1-260) ----"
      awk '{printf "%5d: %s\n", NR, $0}' "$f" | sed -n '1,260p'
    done
  fi
fi
echo
echo "---- Searching for definitions/usages: start_lmp, init_function, initializer, submit, argument_lst, lmp ----"
if command -v rg >/dev/null 2>&1; then
  rg -n --hidden --no-ignore -S '\bstart_lmp\b' -C3 || true
  rg -n --hidden --no-ignore -S '\binit_function\b|\binitializer\b|\binit_worker\b' -C3 || true
  rg -n --hidden --no-ignore -S '\.submit\(' -C3 || true
  rg -n --hidden --no-ignore -S '\bargument_lst\b|\bargument_list\b' -C3 || true
  rg -n --hidden --no-ignore -S '\blmp\b' -C3 || true
  rg -n --hidden --no-ignore -S 'def start_lmp\(|def init_function\(' -C3 || true
else
  grep -RIn --exclude-dir=.git -E 'start_lmp|init_function|initializer|init_worker|\.submit\(|argument_lst|argument_list|\blmp\b|def start_lmp\(|def init_function\(' . || true
fi
```
Fail fast if external executor wasn't initialized with init_function().
Wrap the startup submit call to catch a missing injected lmp and raise a clear ValueError instead of a cryptic TypeError.
File: pylammpsmpi/wrapper/concurrent.py: lines 115–117
```diff
-        self._exe.submit(start_lmp, argument_lst=cmdargs).result()
+        try:
+            self._exe.submit(start_lmp, argument_lst=cmdargs).result()
+        except TypeError as e:
+            # Likely missing injected 'lmp' from init_function()
+            raise ValueError(
+                "Provided executor must be created with init_function() from "
+                "pylammpsmpi.wrapper.concurrent so that 'lmp' is available for "
+                "dependency injection."
+            ) from e
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 if cmdargs is None:
     cmdargs = []
-self._exe.submit(start_lmp, argument_lst=cmdargs).result()
+try:
+    self._exe.submit(start_lmp, argument_lst=cmdargs).result()
+except TypeError as e:
+    # Likely missing injected 'lmp' from init_function()
+    raise ValueError(
+        "Provided executor must be created with init_function() from "
+        "pylammpsmpi.wrapper.concurrent so that 'lmp' is available for "
+        "dependency injection."
+    ) from e
```
🤖 Prompt for AI Agents
In pylammpsmpi/wrapper/concurrent.py around lines 115-117, the call to self._exe.submit(start_lmp, argument_lst=cmdargs).result() can raise a cryptic TypeError when the external executor wasn't initialized with init_function() (missing injected lmp); wrap the submit/result call in a try/except that catches TypeError and raises a clear ValueError instead (include a short message instructing the caller to initialize the executor with init_function and include the original exception as the __cause__ using "from" to preserve traceback).

```diff
     elif name in prop_list:
         fut = getattr(self.lmp, name)
-        return fut.result()
+        return get_result(future=fut, cores=self.cores)
```
🛠️ Refactor suggestion
Harden property dispatch (callable vs. attribute) and ensure natoms works
getattr(self.lmp, name) may return a callable (method) or raise for names like "natoms" if only get_natoms() exists. Call if callable; fall back to get_natoms for natoms. Prevents AttributeError and Future-type mismatches at runtime.
```diff
-        elif name in prop_list:
-            fut = getattr(self.lmp, name)
-            return get_result(future=fut, cores=self.cores)
+        elif name in prop_list:
+            # Some entries may be methods returning a Future, others true properties.
+            if name == "natoms" and not hasattr(self.lmp, "natoms"):
+                fut = self.lmp.get_natoms()
+            else:
+                attr = getattr(self.lmp, name)
+                fut = attr() if callable(attr) else attr
+            return get_result(future=fut, cores=self.cores)
```

If you prefer, I can open a small follow-up PR to apply this change and add a unit test asserting LammpsLibrary().natoms returns an int.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 elif name in prop_list:
-    fut = getattr(self.lmp, name)
-    return fut.result()
-    return get_result(future=fut, cores=self.cores)
+    # Some entries may be methods returning a Future, others true properties.
+    if name == "natoms" and not hasattr(self.lmp, "natoms"):
+        fut = self.lmp.get_natoms()
+    else:
+        attr = getattr(self.lmp, name)
+        fut = attr() if callable(attr) else attr
+    return get_result(future=fut, cores=self.cores)
```
🤖 Prompt for AI Agents
In pylammpsmpi/wrapper/extended.py around lines 309-312, the current dispatch blindly uses getattr(self.lmp, name) which can be a callable or raise for properties like "natoms"; update it to first try getattr(self.lmp, name), if that returns a callable invoke it and pass the result to get_result, and if getattr raises or name == "natoms" and no attribute exists, call getattr(self.lmp, "get_natoms")() as a fallback and pass that integer result to get_result; ensure you only pass the final value (not an uncalled method) into get_result to avoid Future-type mismatches.
@jan-janssen sorry about the delay from my side. I tested now, and everything works fine. Thanks!
Summary by CodeRabbit
New Features
Refactor
Behavior Changes
Tests
Chores