Curie LLM Setup Guide

This guide provides step-by-step instructions for setting up, configuring, and developing with the Curie LLM system, including local development, AWS configuration, and deployment procedures.

Prerequisites

Required Software

  • Python 3.9+ for Lambda development
  • Node.js 18+ for frontend integration
  • AWS CLI configured with appropriate credentials
  • Docker (optional, for local testing)
  • Git for version control

AWS Services Access

  • AWS Lambda - Function execution
  • AWS Bedrock - AI model access
  • Amazon DynamoDB - Product database
  • Amazon S3 - Asset storage
  • CloudFront - CDN for 3D models
  • API Gateway - HTTP routing

Required Permissions

Your AWS user/role needs the following permissions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:*",
        "bedrock:*",
        "bedrock-agent:*",
        "bedrock-agent-runtime:*",
        "dynamodb:GetItem",
        "dynamodb:Query",
        "s3:GetObject",
        "s3:ListBucket",
        "apigateway:*"
      ],
      "Resource": "*"
    }
  ]
}

Backend Setup (Lambda Function)

1. Clone the Repository

git clone https://github.com/curievision/curie-shopping-api.git
cd curie-shopping-api

2. Navigate to LLM Function

cd ShoppingApi/LLM/ProcessUserPrompt

3. Create Virtual Environment

python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate

4. Install Dependencies

pip install -r requirements.txt

requirements.txt:

boto3>=1.26.0
jstyleson>=0.0.2
cachetools>=5.0.0

5. Environment Configuration

Create a .env file for local development:

# .env
BEDROCK_REGION=us-east-2
BEDROCK_FLOW_ID=6PICZHUL9X
BEDROCK_FLOW_ALIAS_ID=N46HHFYPMJ
DYNAMODB_TABLE=curie-products
AWS_PROFILE=your-aws-profile
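
In the handler, these values are typically read from `os.environ`. A minimal sketch (the `load_config` helper is hypothetical; the defaults mirror the `.env` values above):

```python
# Read the configuration set in .env; defaults are illustrative.
import os

def load_config() -> dict:
    return {
        "region": os.environ.get("BEDROCK_REGION", "us-east-2"),
        "flow_id": os.environ["BEDROCK_FLOW_ID"],
        "flow_alias_id": os.environ["BEDROCK_FLOW_ALIAS_ID"],
        "table": os.environ.get("DYNAMODB_TABLE", "curie-products"),
    }
```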

6. Local Testing

Create a test script test_local.py:

import json

# Load environment variables before importing the handler, so the handler
# sees them at import time (requires: pip install python-dotenv)
from dotenv import load_dotenv
load_dotenv()

from lambda_process_user_prompt import lambda_handler

def test_local():
    # Test event
    event = {
        'body': json.dumps({
            'prompt': 'comfortable running shoes under $150'
        })
    }

    context = {}  # Mock context

    try:
        response = lambda_handler(event, context)
        print("Response:", json.dumps(response, indent=2))
    except Exception as e:
        print(f"Error: {e}")

if __name__ == "__main__":
    test_local()

Run the test:

python test_local.py

AWS Bedrock Configuration

1. Enable Bedrock Models

In the AWS Console, navigate to Amazon Bedrock and enable the required models:

  • Titan Text Embeddings v2 (for vector search)
  • Claude 3 or other text models (for agent flows)

2. Create Bedrock Agent Flow

  1. Go to Amazon Bedrock → Agent Builder → Flows
  2. Create a new flow with the following configuration:

Flow Structure:

Flow Name: curie-product-search
Description: Natural language product search flow

Nodes:
- Input Node: FlowInputNode
  - Output: document (string)

- Processing Node: ProductSearchNode
  - Input: user_query from FlowInputNode
  - Logic: Parse query and search products
  - Output: product_results (array)

- Output Node: FlowOutputNode
  - Input: product_results from ProductSearchNode
  - Format: JSON with results array
  3. Deploy the Flow and note the Flow ID and Alias ID
  4. Update your environment variables with the new IDs
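
Once the flow is deployed, the Lambda invokes it through boto3's `bedrock-agent-runtime` client. A minimal sketch of what that call might look like (the helper names are illustrative; the `FlowInputNode` name matches the flow structure above, and the payload shape follows boto3's `invoke_flow` API):

```python
# Sketch of invoking the deployed flow; IDs come from the environment
# variables configured earlier. Helper names are hypothetical.
import os

def build_flow_inputs(prompt: str) -> list:
    """Build the inputs payload expected by invoke_flow."""
    return [{
        "content": {"document": prompt},
        "nodeName": "FlowInputNode",
        "nodeOutputName": "document",
    }]

def invoke_product_search_flow(prompt: str) -> str:
    import boto3  # imported here so build_flow_inputs stays testable offline
    client = boto3.client(
        "bedrock-agent-runtime",
        region_name=os.environ.get("BEDROCK_REGION", "us-east-2"),
    )
    response = client.invoke_flow(
        flowIdentifier=os.environ["BEDROCK_FLOW_ID"],
        flowAliasIdentifier=os.environ["BEDROCK_FLOW_ALIAS_ID"],
        inputs=build_flow_inputs(prompt),
    )
    # The response is an event stream; collect the flow's output document(s)
    parts = []
    for event in response["responseStream"]:
        if "flowOutputEvent" in event:
            parts.append(str(event["flowOutputEvent"]["content"]["document"]))
    return "".join(parts)
```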

3. Configure IAM Permissions

Create an IAM role for the Lambda function:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "bedrock-agent-runtime:InvokeFlow"
      ],
      "Resource": "arn:aws:bedrock-agent:us-east-2:*:agent-alias/*/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-2:*:table/curie-products",
        "arn:aws:dynamodb:us-east-2:*:table/curie-products/index/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::curie-product-*",
        "arn:aws:s3:::curie-product-*/*"
      ]
    }
  ]
}

Frontend Setup (React Integration)

1. Navigate to Destination Site

cd ../../../curie-destination-site

2. Install Dependencies

npm install
# or
pnpm install

3. Configure API Endpoint

Update src/utils/llm.ts with your API Gateway endpoint:

const API_ENDPOINT = "https://your-api-id.execute-api.us-east-2.amazonaws.com/prod/prompt";

4. Environment Variables

Create .env.local:

# .env.local
VITE_LLM_API_ENDPOINT=https://your-api-id.execute-api.us-east-2.amazonaws.com/prod/prompt
VITE_API_BASE_URL=https://your-api-base-url.com

5. Start Development Server

npm run dev
# or
pnpm dev

Navigate to http://localhost:3000/chat to test the chat interface.
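
The chat page consumes JSON from this endpoint. A sketch of the response shape it might expect (interface fields are assumptions; align them with the deployed Lambda's actual output):

```typescript
// Hypothetical response shape for the /prompt endpoint; adjust field
// names to match the deployed API.
interface ServerProduct {
  ID: string;
  name: string;
  price?: number;
}

interface LlmResponse {
  results: ServerProduct[];
}

// Parse the raw response body, falling back to an empty list if the
// results array is missing or malformed.
function parseLlmResponse(raw: string): ServerProduct[] {
  const data = JSON.parse(raw) as LlmResponse;
  return Array.isArray(data.results) ? data.results : [];
}
```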

Deployment

Automated Deployment (GitHub Actions)

The system deploys automatically via GitHub Actions when code is pushed to the main branch.

Deployment Configuration (from deploy.yml):

- name: ProcessUserPromptLambda
  files:
    - ShoppingApi/LLM/ProcessUserPrompt/lambda_process_user_prompt.py
    - utils/enrichproduct.py
  requirements: ShoppingApi/LLM/ProcessUserPrompt/requirements.txt

Manual Deployment

1. Package Lambda Function

cd ShoppingApi/LLM/ProcessUserPrompt

# Create deployment package
rm -rf package
mkdir package

# Install dependencies
pip install -r requirements.txt --target package/

# Copy function code
cp lambda_process_user_prompt.py package/
cp ../../../utils/enrichproduct.py package/

# Create ZIP file
cd package
zip -r ../ProcessUserPromptLambda.zip .
cd ..

2. Deploy to AWS Lambda

# Update function code
aws lambda update-function-code \
  --function-name ProcessUserPromptLambda \
  --zip-file fileb://ProcessUserPromptLambda.zip \
  --region us-east-2

3. Update Environment Variables

aws lambda update-function-configuration \
  --function-name ProcessUserPromptLambda \
  --environment Variables='{
    "BEDROCK_REGION":"us-east-2",
    "BEDROCK_FLOW_ID":"your-flow-id",
    "BEDROCK_FLOW_ALIAS_ID":"your-alias-id",
    "DYNAMODB_TABLE":"curie-products"
  }' \
  --region us-east-2

Development Workflow

1. Local Development Setup

# Backend development
cd curie-shopping-api/ShoppingApi/LLM/ProcessUserPrompt
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Frontend development
cd curie-destination-site
npm install
npm run dev

2. Testing Changes

Backend Testing:

# test_lambda.py
import json
from lambda_process_user_prompt import lambda_handler

def test_various_queries():
    queries = [
        "comfortable running shoes",
        "Nike basketball shoes under $200",
        "waterproof hiking boots",
        "casual sneakers for everyday wear"
    ]

    for query in queries:
        event = {'body': json.dumps({'prompt': query})}
        response = lambda_handler(event, {})

        print(f"\nQuery: {query}")
        print(f"Status: {response['statusCode']}")

        if response['statusCode'] == 200:
            body = json.loads(response['body'])
            print(f"Results: {len(body.get('results', []))}")
        else:
            print(f"Error: {response['body']}")

if __name__ == "__main__":
    test_various_queries()

Frontend Testing:

// Test the chat component
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import Chat from '../components/Chat';

test('chat handles user input and displays results', async () => {
  render(<Chat />);

  const input = screen.getByPlaceholderText('Looking for new kicks?');
  const sendButton = screen.getByText('Send');

  fireEvent.change(input, { target: { value: 'running shoes' } });
  fireEvent.click(sendButton);

  await waitFor(() => {
    expect(screen.getByText(/curie agent/i)).toBeInTheDocument();
  });
});

3. Debugging

Lambda Function Debugging:

import json
import logging

# Enhanced logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

def lambda_handler(event, context):
    logger.debug(f"Received event: {json.dumps(event)}")

    try:
        # Your code here
        pass
    except Exception as e:
        logger.error(f"Error details: {str(e)}", exc_info=True)
        raise

Frontend Debugging:

// Enable detailed logging
const getLlmSuggestions = async (query: string): Promise<ServerProduct[]> => {
  console.log("🔍 Querying LLM with:", query);

  try {
    const response = await fetch(API_ENDPOINT, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt: query }),
    });

    console.log("📡 Response status:", response.status);

    const data = await response.json();
    console.log("📦 Response data:", data);

    return data.results;
  } catch (error) {
    console.error("❌ LLM API Error:", error);
    throw error;
  }
};

Monitoring and Troubleshooting

CloudWatch Logs

Monitor Lambda function logs:

# View recent logs
aws logs tail /aws/lambda/ProcessUserPromptLambda --follow

# Filter for errors
aws logs filter-log-events \
  --log-group-name /aws/lambda/ProcessUserPromptLambda \
  --filter-pattern "ERROR"

Common Issues

1. Bedrock Access Denied

Error: AccessDeniedException: User is not authorized to perform: bedrock-agent-runtime:InvokeFlow

Solution:

  • Verify IAM permissions include bedrock-agent-runtime:InvokeFlow
  • Check that Bedrock models are enabled in your AWS region
  • Ensure the flow ID and alias ID are correct

2. DynamoDB Access Issues

Error: ResourceNotFoundException: Requested resource not found

Solution:

  • Verify the DynamoDB table name in environment variables
  • Check IAM permissions for DynamoDB access
  • Ensure the table exists in the correct region

3. Frontend CORS Issues

Error: Access to fetch at '...' from origin '...' has been blocked by CORS policy

Solution:

  • Verify API Gateway CORS configuration
  • Check that Lambda function returns proper CORS headers
  • Ensure the API endpoint URL is correct
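
With a Lambda proxy integration, the function itself must return the CORS headers. A minimal sketch of a response helper (the wildcard origin is illustrative; restrict it to the destination site's domain in production):

```python
# Hypothetical helper that wraps a payload with CORS headers for a
# Lambda proxy integration. The allowed origin should be tightened
# for production.
import json

def cors_response(status_code: int, payload: dict) -> dict:
    return {
        "statusCode": status_code,
        "headers": {
            "Access-Control-Allow-Origin": "*",
            "Access-Control-Allow-Headers": "Content-Type",
            "Access-Control-Allow-Methods": "OPTIONS,POST",
        },
        "body": json.dumps(payload),
    }
```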

4. Empty Results

Issue: API returns 200 but with empty results array

Debugging:

# Add debugging to enrichment process
def enrich_product_data(product):
    logger.info(f"Enriching product: {product.get('ID', 'Unknown')}")

    try:
        # Enrichment logic
        result = enrich_result(product)
        logger.info(f"Enrichment successful for {result.get('ID')}")
        return result
    except Exception as e:
        logger.error(f"Enrichment failed for {product.get('ID')}: {str(e)}")
        return product

Performance Optimization

Lambda Function Optimization

# Connection reuse
import boto3

# Initialize clients outside the handler so warm invocations reuse them
bedrock_client = boto3.client("bedrock-agent-runtime", region_name=BEDROCK_REGION)
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(DYNAMODB_TABLE)

def lambda_handler(event, context):
    # Use pre-initialized clients
    pass

Frontend Optimization

// Debounce user input
import { useMemo } from 'react';
import { debounce } from 'lodash';

const Chat: React.FC = () => {
  const debouncedSearch = useMemo(
    () => debounce(async (query: string) => {
      if (query.trim()) {
        await handleSend(query);
      }
    }, 500),
    []
  );

  // Use the debounced search for auto-suggestions
};

Security Considerations

API Security

  • Input Validation: Always validate and sanitize user input
  • Rate Limiting: Throttle requests at API Gateway and debounce on the client
  • Error Handling: Don't expose internal system details in error messages
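
The input-validation point can be sketched as a small guard run before the prompt reaches Bedrock (the length limit and error messages are illustrative, not from the codebase):

```python
# Hypothetical prompt validation; the 500-character limit is illustrative.
MAX_PROMPT_LEN = 500

def validate_prompt(prompt) -> str:
    """Return a cleaned prompt or raise ValueError for bad input."""
    if not isinstance(prompt, str):
        raise ValueError("prompt must be a string")
    prompt = prompt.strip()
    if not prompt:
        raise ValueError("prompt must not be empty")
    if len(prompt) > MAX_PROMPT_LEN:
        raise ValueError(f"prompt longer than {MAX_PROMPT_LEN} characters")
    return prompt
```

Rejecting bad input early keeps malformed requests from consuming a Bedrock flow invocation, and the ValueError message can be mapped to a 400 response without exposing internals.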

AWS Security

  • Least Privilege: Grant minimal required permissions
  • VPC Configuration: Consider running Lambda in VPC for enhanced security
  • Encryption: Enable encryption at rest for DynamoDB and S3

Frontend Security

  • Environment Variables: Never expose AWS credentials in frontend code
  • Content Security Policy: Implement CSP headers
  • Input Sanitization: Sanitize all user inputs before display

This setup guide provides comprehensive instructions for developing and deploying the Curie LLM system. For API details and integration examples, see the API Reference.