upgrade to java 25 & polish workflow #1312
base: main
Conversation
**Walkthrough**

Updated README JDK requirement; added a reports placeholder; adjusted Gatling test defaults and behavior (Kafka init delay, HTTP protocol settings, user/load schedules, pauses, defensive checks, and assertions) across multiple simulation classes.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks: ❌ 1 failed (inconclusive) | ✅ 2 passed
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `.github/workflows/gatling-tests.yml` is excluded by `!**/*.yml`
- `gatling-tests/pom.xml` is excluded by `!**/*.xml`
📒 Files selected for processing (1)
- `gatling-tests/ReadMe.md` (1 hunk)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Run Unit & Integration Tests with jdk 25
Actionable comments posted: 0
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
gatling-tests/src/test/java/simulation/CreateProductSimulation.java (1)
**183-204: Session marked as failed but `updateInventory` still executes.**

When inventory data is missing, the session is marked as failed (line 199), but `exec(updateInventory)` on line 202 runs unconditionally. This will cause `getInventoryId()` to throw a `RuntimeException` when it receives null/empty input, resulting in a confusing error rather than a clean skip.

Add `exitHereIfFailed()` or use `doIf()` to guard the update:

```diff
               return session.markAsFailed();
             }
           })
+          .exitHereIfFailed()
           .exec(updateInventory)
```
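The control flow being recommended can be shown outside the Gatling DSL. This is a plain-Java sketch of the "skip dependent steps when the session is failed" pattern; the `Session` class and step names here are simplified stand-ins, not Gatling's actual API:

```java
// Sketch: guard a dependent step behind a session-failure check, analogous
// to exitHereIfFailed()/doIf() in the Gatling DSL. Session is a stand-in.
import java.util.ArrayList;
import java.util.List;

public class GuardSketch {
    static class Session {
        private boolean failed;
        Session markAsFailed() { this.failed = true; return this; }
        boolean isFailed() { return failed; }
    }

    public static void main(String[] args) {
        List<String> executed = new ArrayList<>();
        Session session = new Session().markAsFailed(); // inventory data was missing

        // Without a guard, updateInventory would run and throw on null input.
        // With the guard, the step is skipped cleanly:
        if (!session.isFailed()) {
            executed.add("updateInventory");
        }

        System.out.println(executed.isEmpty() ? "skipped cleanly" : "ran updateInventory");
    }
}
```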
🧹 Nitpick comments (4)
gatling-tests/src/test/java/simulation/BaseSimulation.java (1)
**48-50: Consider removing `inferHtmlResources()` for JSON API tests.**

`inferHtmlResources()` is designed to automatically fetch embedded HTML resources (CSS, JS, images) from HTML responses. Since this simulation tests JSON APIs exclusively, this setting has no practical effect and may add slight overhead during response parsing. `silentResources()` complements `inferHtmlResources()` to suppress logging of those inferred resources, so it's also redundant here.

```diff
         .acceptEncodingHeader("gzip, deflate")
         .enableHttp2() // Enable HTTP/2 for better performance
-        .inferHtmlResources() // Better resource inference
-        .silentResources(); // Don't log resource requests separately
+        ;
```

gatling-tests/src/test/java/simulation/StressTestSimulation.java (1)
**459-464: Avoid `System.exit(1)` in simulation code.**

Calling `System.exit(1)` terminates the JVM abruptly, which can:

- Prevent Gatling from generating reports
- Interfere with CI/CD pipeline result collection
- Skip cleanup/shutdown hooks

Gatling's assertion framework (which you've configured in `getDefaultAssertions()`) already handles smoke test failures gracefully via `details("Smoke Test").successfulRequests().percent().is(100.0)`.

```diff
         .exec(
             session -> {
               if (session.isFailed()) {
                 LOGGER.error("Smoke test failed - stopping simulation");
-                System.exit(1); // Force exit if smoke test failed
+                // Assertion framework will catch the failure
               }
               LOGGER.info("All smoke test scenarios passed successfully!");
```

gatling-tests/src/test/java/simulation/CreateProductSimulation.java (2)
**88-88: Use `Duration` for consistency with other pauses.**

This pause uses a raw millisecond integer while other pauses in the file use `Duration.ofMillis()`.

```diff
         .check(bodyString().saveAs("inventoryResponseBody")))
-    .pause(1000) // Add a pause to ensure the response is processed
+    .pause(Duration.ofMillis(1000)) // Add a pause to ensure the response is processed
```
**212-263: Duplicate validation logic between `getBodyAsString` and `getInventoryId`.**

Both methods perform identical null/empty checks and JSON parsing validation. Consider consolidating into a single validation call before invoking these methods, or reuse `validateInventoryResponse()` consistently. Since `validateInventoryResponse()` already exists (lines 300-319), refactor `getBodyAsString()` to use it:

```diff
   private String getBodyAsString(String inventoryResponseBody) {
-    if (inventoryResponseBody == null || inventoryResponseBody.trim().isEmpty()) {
-      LOGGER.error("Empty inventory response body");
-      throw new RuntimeException("Empty inventory response body");
-    }
+    validateInventoryResponse(inventoryResponseBody, "Preparing inventory update request");
     try {
-      // Log the raw response to help with debugging
-      if (LOGGER.isDebugEnabled()) {
-        LOGGER.debug("Processing inventory response: {}", inventoryResponseBody);
-      }
-
-      // Add additional validation before parsing
-      if (inventoryResponseBody.equals("{}") || inventoryResponseBody.equals("[]")) {
-        LOGGER.error("Invalid empty JSON object or array: {}", inventoryResponseBody);
-        throw new RuntimeException("Empty JSON structure in inventory response");
-      }
-
-      try {
-        // First validate that it's valid JSON before trying to deserialize
-        OBJECT_MAPPER.readTree(inventoryResponseBody);
-      } catch (Exception e) {
-        LOGGER.error("Invalid JSON format: {}", e.getMessage());
-        throw new RuntimeException("Invalid JSON format in inventory response", e);
-      }
-
       InventoryResponseDTO inventoryResponseDTO =
```

Also applies to: 271-298
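The consolidation suggested above can be sketched in dependency-free Java. Note the real code validates JSON with Jackson's `OBJECT_MAPPER.readTree()`; here that step is replaced by a crude structural check so the sketch stands alone:

```java
// Sketch of a single shared validation helper replacing the duplicated
// null/empty and trivial-JSON checks. The startsWith check is a stand-in
// for Jackson's readTree(), which a real implementation would use.
public class ValidationSketch {
    static void validateInventoryResponse(String body, String context) {
        if (body == null || body.trim().isEmpty()) {
            throw new RuntimeException("Empty inventory response body (" + context + ")");
        }
        String trimmed = body.trim();
        if (trimmed.equals("{}") || trimmed.equals("[]")) {
            throw new RuntimeException("Empty JSON structure in inventory response (" + context + ")");
        }
        // Stand-in for OBJECT_MAPPER.readTree(body): reject obviously non-JSON input.
        if (!(trimmed.startsWith("{") || trimmed.startsWith("["))) {
            throw new RuntimeException("Invalid JSON format in inventory response (" + context + ")");
        }
    }

    public static void main(String[] args) {
        validateInventoryResponse("{\"inventoryId\": 42}", "update request");
        System.out.println("valid body accepted");
        try {
            validateInventoryResponse("{}", "update request");
        } catch (RuntimeException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Both `getBodyAsString` and `getInventoryId` would then call this one helper before parsing, so a malformed body fails in exactly one place.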
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `.github/workflows/gatling-tests.yml` is excluded by `!**/*.yml`
- `gatling-tests/pom.xml` is excluded by `!**/*.xml`
📒 Files selected for processing (3)
- `gatling-tests/src/test/java/simulation/BaseSimulation.java` (2 hunks)
- `gatling-tests/src/test/java/simulation/CreateProductSimulation.java` (8 hunks)
- `gatling-tests/src/test/java/simulation/StressTestSimulation.java` (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Run Unit & Integration Tests with jdk 25
🔇 Additional comments (7)
gatling-tests/src/test/java/simulation/BaseSimulation.java (1)
**31-32: LGTM on Kafka init delay adjustment.**

Reducing from 15 to 10 seconds aligns with the documented 5-8 second initialization window while still providing adequate buffer.
gatling-tests/src/test/java/simulation/StressTestSimulation.java (3)
**41-48: LGTM on load configuration adjustments.**

The reduced defaults (50 users, 3-min ramp, 5-min plateau, 1-min cooldown) align with the stated goal of "sustainable load" testing. Users can still override via system properties.
**56-63: LGTM on think time configurations.**

The tighter think-time windows (1-3s user think time, 300ms-1s page think time) create a more realistic interaction pattern while maintaining sustainable throughput.
**371-371: Verify 500ms is sufficient for product availability.**

The pause was reduced from 2 seconds to 500ms. If the system relies on eventual consistency (e.g., Kafka message processing, async inventory updates), this short pause may cause intermittent failures in subsequent `browseCatalog` checks.

Consider monitoring test stability and increasing the pause if flakiness occurs.
gatling-tests/src/test/java/simulation/CreateProductSimulation.java (3)
**7-7: LGTM on `details` import addition.**

Enables per-request assertion metrics.
**36-43: LGTM on configuration parameter adjustments.**

Reducing test duration and adjusting user counts for sustainable throughput is appropriate. The shorter test (30s) with lower concurrent users (5) should still validate the workflow effectively.
**331-359: LGTM on enhanced assertion configuration.**

The injection profile with Kafka initialization delay and the comprehensive assertions (global SLAs + per-request metrics) provide good coverage for validating service performance.
Actionable comments posted: 0
🧹 Nitpick comments (2)
gatling-tests/src/test/java/simulation/CreateProductSimulation.java (2)
**53-59: Text block indentation includes leading whitespace in the payload.**

Java text blocks preserve the leading spaces. The current indentation adds 4 spaces at the start of each JSON line, which will be included in the HTTP request body. While most JSON parsers handle this, it's cleaner to align with the closing `"""` to strip common leading whitespace.

```diff
         StringBody(
             """
-    {
-      "productCode": "#{productCode}",
-      "productName": "#{productName}",
-      "price": #{price},
-      "description": "Performance test product"
-    }
-    """))
+            {
+              "productCode": "#{productCode}",
+              "productName": "#{productName}",
+              "price": #{price},
+              "description": "Performance test product"
+            }
+            """))
```

Alternatively, use a compact single-line JSON or align the text block content with the closing delimiter to leverage automatic indentation stripping.
**350-359: Redundant success/failure assertions.**

Lines 353-354 have redundant assertions: `successfulRequests().percent().gt(95.0)` and `failedRequests().percent().lt(5.0)` are logically equivalent (success + failure = 100%). Consider keeping only one for clarity.

```diff
         global().responseTime()
             .percentile(99)
             .lt(8000), // 99% of responses under 8s
         global().successfulRequests().percent().gt(95.0), // More than 95% success
-        global().failedRequests().percent().lt(5.0), // Less than 5% failed requests
         // Request-specific assertions for detailed metrics
         details("Create product").responseTime().mean().lt(500),
```

The per-request assertions are a good addition for granular performance tracking.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `gatling-tests/src/test/java/simulation/CreateProductSimulation.java` (7 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Codacy Static Code Analysis
- GitHub Check: Run Unit & Integration Tests with jdk 25
🔇 Additional comments (3)
gatling-tests/src/test/java/simulation/CreateProductSimulation.java (3)
**36-43: LGTM - Reasonable defaults for sustainable throughput testing.**

The reduced default values (5 users, 10s ramp, 30s duration) make the test faster for local development while still being configurable via system properties for CI/production load testing scenarios.
**178-204: LGTM - Realistic think-time pauses.**

The 100-200ms pauses between steps simulate realistic user behavior without significantly extending test duration. The slightly longer pause (200ms) before the critical inventory update is appropriate.
**332-342: LGTM - Well-structured injection profile with warm-up phase.**

The injection profile properly sequences:

- Single user for Kafka initialization
- Delay for Kafka to stabilize
- Gradual ramp-up
- Constant arrival rate

The comment on line 340 correctly clarifies that `constantUsersPerSec` is an arrival rate (5 new users/sec), not concurrent users.
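The arrival-rate vs. concurrency distinction matters when sizing load: by Little's Law, steady-state concurrent users ≈ arrival rate × mean session duration. A quick sketch with an assumed session time (the 4-second figure is illustrative, not taken from the simulation):

```java
// Arrival rate vs. concurrency: constantUsersPerSec(5) starts 5 new virtual
// users every second. By Little's Law, steady-state concurrency equals the
// arrival rate times the mean session duration. The 4 s session duration
// below is an assumed figure for illustration only.
public class LittlesLawSketch {
    public static void main(String[] args) {
        double arrivalRatePerSec = 5.0;   // constantUsersPerSec(5)
        double meanSessionSeconds = 4.0;  // assumed active time per virtual user
        double concurrentUsers = arrivalRatePerSec * meanSessionSeconds;
        System.out.println("~" + (int) concurrentUsers + " concurrent users in steady state");
    }
}
```

So a 5-users/sec arrival rate only implies 5 concurrent users if each session lasts exactly one second; longer sessions multiply the in-flight population accordingly.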
No description provided.