Integrate Amazon Bedrock with Amazon Redshift ML for generative AI applications

Amazon Redshift has enhanced its Redshift ML feature to support integration of large language models (LLMs). As part of these enhancements, Redshift ML now offers native integration with Amazon Bedrock, so you can invoke LLMs with simple SQL commands alongside your data in Amazon Redshift and build generative AI applications quickly. This combination lets you harness the capabilities of LLMs and incorporate them seamlessly into your analytical workflows.

With this new integration, you can now perform generative AI tasks such as language translation, text summarization, text generation, customer classification, and sentiment analysis on your Redshift data using popular foundation models (FMs) such as Anthropic’s Claude, Amazon Titan, Meta’s Llama 2, and Mistral AI. You can use the CREATE EXTERNAL MODEL command to point to a text-based model in Amazon Bedrock, requiring no model training or provisioning. You can invoke these models using familiar SQL commands, making it more straightforward than ever to integrate generative AI capabilities into your data analytics workflows.

Solution overview

To illustrate this new Redshift machine learning (ML) feature, we will build a solution to generate personalized diet plans for patients based on their conditions and medications. The following figure shows the steps to build and run the solution.

The steps to build and run the solution are the following:

  1. Load sample patients’ data
  2. Prepare the prompt
  3. Enable LLM access
  4. Create a model that references the LLM model on Amazon Bedrock
  5. Send the prompt and generate a personalized patient diet plan

Prerequisites

  1. An AWS account.
  2. An Amazon Redshift Serverless workgroup or provisioned data warehouse. For setup instructions, see Creating a workgroup with a namespace or Create a sample Amazon Redshift data warehouse, respectively. The Amazon Bedrock integration is supported on both Amazon Redshift provisioned and Amazon Redshift Serverless.
  3. An AWS Identity and Access Management (IAM) role created or updated for Amazon Redshift ML integration with Amazon Bedrock.
  4. The IAM role associated with your Redshift data warehouse.
  5. The required permissions to create models (a minimal GRANT example follows this list).
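For step 5, the following is a minimal sketch of granting that privilege; the user name analytics_user is hypothetical.

-- Grant the privilege required to create models with Redshift ML
-- (replace analytics_user with your database user, role, or group)
GRANT CREATE MODEL TO analytics_user;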

Implementation

The following are the solution implementation steps. The sample data used in the implementation is for illustration only. The same implementation approach can be adapted to your specific data sets and use cases.

You can download a SQL notebook to run the implementation steps in Redshift Query Editor V2. If you’re using another SQL editor, you can copy and paste the SQL queries either from the content of this post or from the notebook.

Load sample patients’ data:

  1. Open Amazon Redshift Query Editor V2 or another SQL editor of your choice and connect to the Redshift data warehouse.
  2. Run the following SQL to create the patientsinfo table.
-- Create the patientsinfo table
CREATE TABLE patientsinfo (
    pid integer ENCODE az64,
    pname varchar(100),
    condition varchar(100) ENCODE lzo,
    medication varchar(100) ENCODE lzo
);

  3. Download the sample file, upload it into your S3 bucket, and load the data into the patientsinfo table using the following COPY command.
-- Load sample data
COPY patientsinfo
FROM 's3://<<your_s3_bucket>>/sample_patientsinfo.csv'
IAM_ROLE DEFAULT
csv
DELIMITER ','
IGNOREHEADER 1;
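If you prefer not to stage a file in Amazon S3, you can seed a few rows directly instead; the following INSERT is a sketch with hypothetical sample values.

-- Optional: insert a few illustrative rows instead of using COPY
-- (the values below are hypothetical sample data)
INSERT INTO patientsinfo (pid, pname, condition, medication) VALUES
(101, 'Jane Doe', 'Type 2 diabetes', 'Metformin'),
(101, 'Jane Doe', 'Hypertension', 'Lisinopril'),
(102, 'John Smith', 'High cholesterol', 'Atorvastatin');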

Prepare the prompt:

  1. Run the following SQL to aggregate patient conditions and medications.
SELECT
    pname,
    listagg(distinct condition, ',') within group (order by pid) over (partition by pid) as conditions,
    listagg(distinct medication, ',') within group (order by pid) over (partition by pid) as medications
FROM patientsinfo;

The following is the sample output showing aggregated conditions and medications. The output includes multiple rows, which will be grouped in the next step.

  2. Build the prompt to combine patient, conditions, and medications data.
SELECT
    pname || ' has ' || conditions || ' taking ' || medications as patient_prompt
FROM (
    SELECT pname,
        listagg(distinct condition, ',') within group (order by pid) over (partition by pid) as conditions,
        listagg(distinct medication, ',') within group (order by pid) over (partition by pid) as medications
    FROM patientsinfo
) agg
GROUP BY 1;

The following is the sample output showing the fully built prompt, which concatenates the patient, conditions, and medications into a single column value.

  3. Create a materialized view with the preceding SQL query as its definition. This step isn’t mandatory; you’re creating the materialized view for readability and to make the prompt easier to reuse in later steps. Note that you might see a message indicating that materialized views with column aliases won’t be incrementally refreshed. You can safely ignore this message for the purpose of this illustration.
CREATE MATERIALIZED VIEW mv_prompts AUTO REFRESH YES
AS
(
    SELECT pid,
        pname || ' has ' || conditions || ' taking ' || medications as patient_prompt
    FROM (
        SELECT pname, pid,
            listagg(distinct condition, ',') within group (order by pid) over (partition by pid) as conditions,
            listagg(distinct medication, ',') within group (order by pid) over (partition by pid) as medications
        FROM patientsinfo
    ) agg
    GROUP BY 1, 2
);

  4. Run the following SQL to review the sample output.
SELECT * FROM mv_prompts limit 5;

The following is a sample output from the materialized view.
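Because this materialized view won’t be incrementally refreshed, you can refresh it manually after loading new patient data. The following command is a sketch of that step.

-- Refresh the materialized view after new rows are loaded into patientsinfo
REFRESH MATERIALIZED VIEW mv_prompts;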

Enable LLM access:

Perform the following steps to enable model access in Amazon Bedrock.

  1. Navigate to the Amazon Bedrock console.
  2. In the navigation pane, choose Model Access.

  3. Choose Enable specific models.
    You must have the required IAM permissions to enable access to available Amazon Bedrock FMs.

  4. For this illustration, use Anthropic’s Claude model. Enter Claude in the search box and select Claude from the list. Choose Next to proceed.

  5. Review the selection and choose Submit.

Create a model referencing the LLM model on Amazon Bedrock:

  1. Navigate back to Amazon Redshift Query Editor V2 or, if you didn’t use Query Editor V2, to the SQL editor you used to connect to the Redshift data warehouse.
  2. Run the following SQL to create an external model referencing the anthropic.claude-v2 model on Amazon Bedrock. See Amazon Bedrock model IDs for how to find the model ID.
CREATE EXTERNAL MODEL patient_recommendations
FUNCTION patient_recommendations_func
IAM_ROLE '<<provide the arn of IAM role created in pre-requisites>>'
MODEL_TYPE BEDROCK
SETTINGS (
    MODEL_ID 'anthropic.claude-v2',
    PROMPT 'Generate personalized diet plan for following patient:');
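Optionally, you can verify the model definition with SHOW MODEL, which displays metadata such as the function name and model ID.

-- Optional: review the external model definition
SHOW MODEL patient_recommendations;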

Send the prompt and generate a personalized patient diet plan:

  1. Run the following SQL to pass the prompt to the function created in the previous step.
SELECT patient_recommendations_func(patient_prompt) 
FROM mv_prompts limit 2;

  2. You will get the output with the generated diet plan. You can copy the cells and paste them into a text editor, or, if you’re using Redshift Query Editor V2, export the output to view the results in a spreadsheet.

You will need to expand the row size to see the complete text.
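If you want to persist the generated plans rather than only viewing them, you can write the function output to a table. The following is a sketch; the table name patient_diet_plans is hypothetical.

-- Optional: store the generated plans alongside the patient ID
-- (patient_diet_plans is a hypothetical table name)
CREATE TABLE patient_diet_plans AS
SELECT pid, patient_recommendations_func(patient_prompt) AS diet_plan
FROM mv_prompts;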

Additional customization options

The previous example demonstrates a straightforward integration of Amazon Redshift with Amazon Bedrock. However, you can further customize this integration to suit your specific needs and requirements.

  • Inference functions as leader-only functions: Amazon Bedrock model inference functions can run as leader node-only functions when the query doesn’t reference tables. This can be helpful if you want to quickly ask an LLM a question.

You can run the following SQL with no FROM clause. It runs as a leader node-only function because it doesn’t need to fetch data and pass it to the model.

SELECT patient_recommendations_func('Generate diet plan for pre-diabetes');

This will return a generic 7-day diet plan for pre-diabetes. The following figure is an output sample generated by the preceding function call.

  • Inference with UNIFIED request type models: In this mode, you can pass additional optional parameters along with the input text to customize the response. Amazon Redshift maps these parameters to the corresponding parameters of the Converse API.

In the following example, we’re setting the temperature parameter to a custom value. The parameter temperature affects the randomness and creativity of the model’s outputs. The default value is 1 (the range is 0–1.0).

SELECT patient_recommendations_func(patient_prompt,object('temperature', 0.2)) 
FROM mv_prompts
WHERE pid=101;

The following is a sample output with a temperature of 0.2. The output includes recommendations to drink fluids and avoid certain foods.

Regenerate the predictions, this time setting the temperature to 0.8 for the same patient.

SELECT patient_recommendations_func(patient_prompt,object('temperature', 0.8)) 
FROM mv_prompts
WHERE pid=101;

The following is a sample output with a temperature of 0.8. The output still includes recommendations on fluid intake and foods to avoid, but is more specific in those recommendations.

Note that the output won’t be the same every time you run a particular query. The point is to illustrate that the model’s behavior is influenced by changing parameters.
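If you want more repeatable output when comparing runs, you can lower the temperature toward 0, as in the following sketch; even then, identical output across runs isn’t guaranteed.

SELECT patient_recommendations_func(patient_prompt, object('temperature', 0))
FROM mv_prompts
WHERE pid=101;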

  • Inference with RAW request type models: CREATE EXTERNAL MODEL supports Amazon Bedrock-hosted models, including those that aren’t supported by the Amazon Bedrock Converse API. In those cases, the REQUEST_TYPE needs to be RAW, and the request needs to be constructed during inference. The request is a combination of a prompt and optional parameters.

Make sure that you enable access to the Titan Text G1 – Express model in Amazon Bedrock before running the following example. Follow the same steps described earlier in Enable LLM access to enable access to this model.

-- Create model with REQUEST_TYPE as RAW

CREATE EXTERNAL MODEL titan_raw
FUNCTION func_titan_raw
IAM_ROLE '<<provide the arn of IAM role created in pre-requisites>>'
MODEL_TYPE BEDROCK
SETTINGS (
MODEL_ID 'amazon.titan-text-express-v1',
REQUEST_TYPE RAW,
RESPONSE_TYPE SUPER);

-- Need to construct the request during inference.
SELECT func_titan_raw(object(
    'inputText', 'Generate personalized diet plan for following: ' || patient_prompt,
    'textGenerationConfig', object('temperature', 0.5, 'maxTokenCount', 500)))
FROM mv_prompts limit 1;

The following figure shows the sample output.
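Because the RAW response is returned as SUPER, you can navigate into it with PartiQL to extract only the generated text. The following is a sketch that assumes the Titan Text response keeps the text under results[0].outputText; verify the shape against your own output, and note that enable_case_sensitive_identifier must be enabled so the camel-case keys resolve.

-- Sketch: extract only the generated text from the SUPER response
SET enable_case_sensitive_identifier TO true;

SELECT t.resp."results"[0]."outputText" AS diet_plan
FROM (
    SELECT func_titan_raw(object(
        'inputText', 'Generate personalized diet plan for following: ' || patient_prompt,
        'textGenerationConfig', object('temperature', 0.5, 'maxTokenCount', 500))) AS resp
    FROM mv_prompts limit 1
) t;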

  • Fetch run metrics with RESPONSE_TYPE as SUPER: If you need more information about a request, such as total tokens, you can set RESPONSE_TYPE to SUPER when you create the model.
-- Create Model specifying RESPONSE_TYPE as SUPER.

CREATE EXTERNAL MODEL patient_recommendations_v2
FUNCTION patient_recommendations_func_v2
IAM_ROLE '<<provide the arn of IAM role created in pre-requisites>>'
MODEL_TYPE BEDROCK
SETTINGS (
MODEL_ID 'anthropic.claude-v2',
PROMPT 'Generate personalized diet plan for following patient:',
RESPONSE_TYPE SUPER);

-- Run the inference function
SELECT patient_recommendations_func_v2(patient_prompt)
FROM mv_prompts limit 1;

The following figure shows the output, which includes the input tokens, output tokens, and latency metrics.

Considerations and best practices

There are a few things to keep in mind when using the methods described in this post:

  • Inference queries might generate throttling exceptions because of the limited runtime quotas for Amazon Bedrock. Amazon Redshift retries requests multiple times, but queries can still be throttled because throughput for non-provisioned models might be variable.
  • The throughput of inference queries is limited by the runtime quotas of the different models offered by Amazon Bedrock in different AWS Regions. If you find that the throughput isn’t enough for your application, you can request a quota increase for your account. For more information, see Quotas for Amazon Bedrock.
  • If you need stable and consistent throughput, consider getting provisioned throughput for the model that you need from Amazon Bedrock. For more information, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock.
  • Using Amazon Redshift ML with Amazon Bedrock incurs additional costs. The cost is model- and Region-specific and depends on the number of input and output tokens that the model will process. For more information, see Amazon Bedrock Pricing.

Cleanup

To avoid incurring future charges, delete the Redshift Serverless instance or Redshift provisioned data warehouse created as part of the prerequisite steps.
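If you keep the data warehouse but want to remove the objects created in this walkthrough, the following statements are a sketch of that cleanup.

-- Drop the objects created in this post
DROP MODEL patient_recommendations;
DROP MODEL patient_recommendations_v2;
DROP MODEL titan_raw;
DROP MATERIALIZED VIEW mv_prompts;
DROP TABLE patientsinfo;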

Conclusion

In this post, you learned how to use the Amazon Redshift ML feature to invoke LLMs on Amazon Bedrock from Amazon Redshift. You were provided with step-by-step instructions to implement this integration using an illustrative dataset, and you explored options to further customize the integration to meet your specific needs. We encourage you to try the Redshift ML integration with Amazon Bedrock and share your feedback with us.


About the Authors

Satesh Sonti is a Sr. Analytics Specialist Solutions Architect based out of Atlanta, specialized in building enterprise data services, data warehousing, and analytics solutions. He has over 19 years of experience in building data assets and leading complex data services for banking and insurance clients across the globe.

Nikos Koulouris is a Software Development Engineer at AWS. He received his PhD from University of California, San Diego and he has been working in the areas of databases and analytics.
