
Deploying Confluent Connectors on AWS ECS


Apache Kafka has become a cornerstone for building scalable, distributed, and fault-tolerant data pipelines. Confluent, a company founded by the creators of Kafka, provides a powerful platform that extends Kafka with additional capabilities, including connectors. Connectors integrate Kafka with various data sources and sinks, facilitating the seamless movement of data between Kafka topics and external systems. In this blog article, we'll explore how to deploy Confluent self-managed connectors on AWS ECS using CloudFormation.

Prerequisites


Before diving into the deployment process, make sure the following prerequisites are in place:

  1. An AWS account with appropriate permissions.
  2. Docker installed locally for building and testing containers.
  3. The AWS Command Line Interface (CLI) installed for managing ECS resources.
  4. Confluent Platform installed locally or on a separate cluster.
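
The tooling prerequisites can be verified quickly from a terminal. A minimal sketch (the versions and messages printed will differ on your machine):

```shell
#!/bin/sh
# Check that the required local tooling is present before starting.
for tool in aws docker; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool found: $("$tool" --version 2>&1 | head -1)"
    else
        echo "$tool is missing - install it before continuing"
    fi
done

# Confirm AWS credentials are configured (prints a warning if they are not).
if aws sts get-caller-identity >/dev/null 2>&1; then
    echo "AWS credentials OK"
else
    echo "AWS credentials not configured"
fi
```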

Step 1: Prepare Docker images

Create Docker images for the Confluent Self-Managed Connectors you intend to deploy. Follow Confluent’s documentation for building connector-specific images. Ensure that your Dockerfile includes all necessary dependencies and configurations.

Below is an example of how to set up a Dockerfile for a Confluent self-managed JMS connector.

# Docker image for running and deploying Kafka Connect
FROM confluentinc/cp-kafka-connect-base:6.2.1

# Install the JMS connector from Confluent Hub.
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jms:11.0.11

USER root
RUN yum install -y jq findutils
RUN mkdir /usr/share/connector

# Add the connect-distributed properties so the worker starts with the right configuration.
COPY connect-distributed.properties /usr/share/connector/
RUN chown -R appuser:appuser /usr/share/connector/
USER appuser

# Add connect-log4j.properties, configured to maintain logs and identify errors.
COPY connect-log4j.properties /etc/kafka/

# Copy additional runtime dependencies (e.g. JMS client JARs) into the connector's lib folder.
COPY ./lib/12.1.0.2/*.jar /usr/share/confluent-hub-components/confluentinc-kafka-connect-jms/lib

Step 2: Build and push Docker image to Amazon Elastic Container Registry (ECR)

Build and push the Docker images to Amazon ECR using the AWS CLI to make them accessible to ECS. Below is an example of how to build the Docker image and push it to ECR.

#!/bin/sh -e

# %REPOSITORY_NAME% and %CONNECTOR_NAME% are substituted by the CI pipeline;
# BUILD_VERSION is expected as an environment variable.
CONNECTOR_NAME='%CONNECTOR_NAME%'
ECR_REPO=$(aws sts get-caller-identity --query Account --output text).dkr.ecr.ap-southeast-2.amazonaws.com/%REPOSITORY_NAME%

echo "Docker login"
aws ecr get-login-password --region ap-southeast-2 | docker login --username AWS --password-stdin $ECR_REPO

echo "Building docker image"
docker build -t $CONNECTOR_NAME .
docker tag $CONNECTOR_NAME $ECR_REPO/$CONNECTOR_NAME:$BUILD_VERSION

echo "Pushing docker image"
docker push $ECR_REPO/$CONNECTOR_NAME:$BUILD_VERSION

Step 3: Deploy Docker image to AWS Elastic Container Service (ECS)

Create an ECS cluster to host your Kafka Connect tasks. You can use the AWS Management Console or the CLI to create a cluster. Below is an example of how to pull the latest image from ECR and deploy it to the ECS cluster using AWS CLI and CloudFormation.

#!/bin/sh -e
IMAGE_VERSION=$1
STACK_NAME='%STACK_NAME%'

if [ -z "$IMAGE_VERSION" ]; then
    echo "Please provide IMAGE_VERSION"
    exit 1
fi

# Pulls the image from ECR and deploys to the ECS cluster by creating the stack using CloudFormation fargate.yml.
cmd="docker run --rm \
    -v $(pwd):/cwd \
    -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN -e AWS_REGION \
    realestate/stackup:1.4.6 $STACK_NAME up \
    -t fargate.yml \
    -o ImageVersion=$IMAGE_VERSION"

echo "updating $STACK_NAME"
eval "$cmd"
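
The fargate.yml template referenced above is not reproduced in full in this article, but its core shape is a Fargate task definition parameterised by ImageVersion. The snippet below writes an illustrative skeleton; the family name, repository name, CPU/memory values, and port are assumptions, and a real template also needs roles, networking, and a service definition:

```shell
# Writes an illustrative fargate.yml skeleton; adjust resources, roles, and
# networking to your environment before using it with stackup/CloudFormation.
cat > fargate.yml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  ImageVersion:
    Type: String
Resources:
  ConnectTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: kafka-connect-jms
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: '1024'
      Memory: '2048'
      ContainerDefinitions:
        - Name: kafka-connect
          Image: !Sub '${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/kafka-connect-jms:${ImageVersion}'
          PortMappings:
            - ContainerPort: 8083
EOF
echo "Wrote fargate.yml"
```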

Note: the steps above only cover setting up an ECS cluster with a Docker image containing the JMS self-managed connector dependencies. The following steps describe how the JMS Confluent connector can be configured and deployed into the same AWS ECS cluster.

Step 4: Create a JMS Connector configuration

The JMS Source Connector is used to move messages from any JMS-compliant broker into Apache Kafka. To understand the configuration setup, refer to the JMS Source Connector for Confluent Platform guide. The JMS Sink Connector is used to move messages from Apache Kafka to any JMS-compliant broker. To understand the configuration setup, refer to the JMS Sink Connector for Confluent Platform guide.
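
As a concrete illustration, the snippet below writes a minimal JMS source connector definition to connector.json (the file consumed by the deployment script in Step 6). The property keys follow the Confluent JMS Source Connector documentation, but the connector name, broker URL, topic, and destination names are placeholder values:

```shell
# Writes a minimal JMS source connector definition; replace the placeholder
# endpoints, topic, and destination with real values for your environment.
CONNECTOR_FILE_PATH=${CONNECTOR_FILE_PATH:-.}

cat > "${CONNECTOR_FILE_PATH}/connector.json" <<'EOF'
{
  "name": "jms-source-connector",
  "config": {
    "connector.class": "io.confluent.connect.jms.JmsSourceConnector",
    "tasks.max": "1",
    "kafka.topic": "jms-messages",
    "jms.destination.name": "inbound.queue",
    "jms.destination.type": "queue",
    "java.naming.factory.initial": "org.apache.activemq.jndi.ActiveMQInitialContextFactory",
    "java.naming.provider.url": "tcp://jms-broker.example.com:61616",
    "confluent.topic.bootstrap.servers": "broker.example.com:9092"
  }
}
EOF
echo "Wrote ${CONNECTOR_FILE_PATH}/connector.json"
```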

Step 5: Confluent cluster setup

The connectors need to be set up with the appropriate access control list for the respective topics and consumer groups in the Confluent cluster. It can be implemented using Confluent CLI or other tools like Terraform.
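As one example of the ACL setup, the standard kafka-acls tool that ships with Kafka can grant the connector's principal the access it needs. The sketch below only prints the commands (remove the leading echo assignment indirection to execute them), and the broker address, principal, topic, and group names are assumptions:

```shell
# Hypothetical values - replace with your broker, principal, topic, and consumer group.
BOOTSTRAP_SERVER="broker.example.com:9092"
PRINCIPAL="User:connect-jms"
TOPIC="jms-messages"
GROUP="connect-cluster"

# A JMS source connector needs Write on its target topic; the Connect worker
# itself needs Read on its consumer group.
WRITE_ACL="kafka-acls --bootstrap-server $BOOTSTRAP_SERVER --add --allow-principal $PRINCIPAL --operation Write --topic $TOPIC"
READ_ACL="kafka-acls --bootstrap-server $BOOTSTRAP_SERVER --add --allow-principal $PRINCIPAL --operation Read --group $GROUP"

# Print the commands; run them directly against the cluster when ready.
echo "$WRITE_ACL"
echo "$READ_ACL"
```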

Step 6: Deploy JMS Connector into AWS ECS cluster

With the AWS ECS cluster already running a Docker image that contains the necessary dependencies for the JMS connector, and the Confluent cluster setup in place, the JMS connector can be deployed via the Kafka Connect REST API. Below is an example of how to deploy a JMS connector to the AWS ECS cluster.

if [ ! -f ${CONNECTOR_FILE_PATH}/connector.json ]; then
     echo "ERROR: Failed to find Connector definition in [ ${CONNECTOR_FILE_PATH}/connector.json ]. Aborting."
     exit 1
fi

RESULT=$(cat ${CONNECTOR_FILE_PATH}/connector.json)

echo "... deploying Connector [ ${CONNECTOR_NAME} ]"
curl -i -s -X POST -H "%API_DETAILS%: ${WES_API_HEADER}" -H "X-ENTITY-NAME: ${PROJECT_NAME}" -H "Content-Type: application/json" ${CONNECT_URL}/connectors --data "$RESULT"

if [ $? -ne 0 ]; then
     echo "ERROR: Failed to deploy Connector [ ${CONNECTOR_NAME} ]. Aborting."
     exit 1
else
     echo "Successfully deployed Connector [ ${CONNECTOR_NAME} ]."
fi
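
Once the POST succeeds, the connector's state can be confirmed through the same Kafka Connect REST API. A minimal sketch, assuming CONNECT_URL and CONNECTOR_NAME are set as in the script above (the defaults below are placeholders):

```shell
CONNECT_URL=${CONNECT_URL:-http://localhost:8083}
CONNECTOR_NAME=${CONNECTOR_NAME:-jms-source-connector}

# Query the connector status endpoint and extract the first "state" field
# (RUNNING, PAUSED, or FAILED) without requiring jq.
STATUS_JSON=$(curl -s -m 5 "${CONNECT_URL}/connectors/${CONNECTOR_NAME}/status" 2>/dev/null)
STATE=$(echo "$STATUS_JSON" | sed -n 's/.*"state" *: *"\([A-Z]*\)".*/\1/p' | head -1)
echo "Connector ${CONNECTOR_NAME} state: ${STATE:-UNREACHABLE}"
```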

Conclusion

The effective implementation of Confluent Connectors within AWS ECS is a critical component of advanced real-time data processing, and LimePoint, as a Confluent Premier Partner, stands ready to offer its extensive expertise in this area.

LimePoint's partner recognition underlines our ability to assist clients in setting up Confluent clusters, configuring Kafka connectors, and developing cloud-based solutions using Confluent Kafka. Our expertise includes configuring systems using the Confluent CLI, customising connectors to meet specific needs, and designing CloudFormation templates with fargate.yml for streamlined cloud integration.

Contact us to take advantage of our knowledge and experience in navigating the complexities of Confluent Kafka Connector and AWS ECS.
