
Commit b8b9739

docs: Correct architecture description for the Inference stack
This correction aligns the text with the diagram, clarifying that the "Inference stack" consumes the Knowledge Base, which is deployed by the "Ingestion stack". This improves accuracy and clarity for anyone reading or implementing the project.
1 parent 4381a0e commit b8b9739

File tree

1 file changed (+1, -1)


samples/rfp-answer-generation/README.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ Components deployed by this stack include:
 
 ### Inference stack
 
-The inference stack configures a workflow for processing new, incoming RFPs. It breaks the RFP down into questions, then prompts the LLM to use context from the Knowledge Base (deployed in the inference stack) to answer each question.
+The inference stack configures a workflow for processing new, incoming RFPs. It breaks the RFP down into questions, then prompts the LLM to use context from the Knowledge Base (which is deployed in the ingestion stack) to answer each question.
 
 The generated answer and the original question are saved into an Amazon DynamoDB table.
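The corrected paragraph describes a three-step workflow: split the RFP into questions, answer each question with Knowledge Base context, and persist the question/answer pair to DynamoDB. The loop could be sketched as below. This is a minimal illustration, not the sample's actual implementation: the function names, the naive sentence splitter, and the `kb_id`, `model_arn`, and `table_name` values are all hypothetical; the Bedrock call uses the boto3 `bedrock-agent-runtime` client's `retrieve_and_generate` API, which performs retrieval-augmented generation against a Knowledge Base.

```python
import re


def split_rfp_into_questions(rfp_text: str) -> list[str]:
    """Naively split an RFP document into its individual questions."""
    sentences = re.split(r"(?<=[.?!])\s+", rfp_text)
    return [s.strip() for s in sentences if s.strip().endswith("?")]


def answer_rfp(rfp_text: str, kb_id: str, model_arn: str, table_name: str) -> None:
    """Answer each RFP question with Knowledge Base context, then save to DynamoDB."""
    import boto3  # imported lazily so the splitter above has no AWS dependency

    agent = boto3.client("bedrock-agent-runtime")
    table = boto3.resource("dynamodb").Table(table_name)
    for question in split_rfp_into_questions(rfp_text):
        # Retrieval-augmented generation against the Knowledge Base
        # that the *ingestion* stack deployed.
        resp = agent.retrieve_and_generate(
            input={"text": question},
            retrieveAndGenerateConfiguration={
                "type": "KNOWLEDGE_BASE",
                "knowledgeBaseConfiguration": {
                    "knowledgeBaseId": kb_id,
                    "modelArn": model_arn,
                },
            },
        )
        # Save the original question alongside the generated answer,
        # matching the "saved into an Amazon DynamoDB table" step above.
        table.put_item(Item={"question": question, "answer": resp["output"]["text"]})
```

Note the question splitter is deliberately simplistic; a production workflow would likely use an LLM or document parser to extract questions from a structured RFP.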

0 commit comments
