Published: Mar 16, 2024
Prerequisites:
- Access to the OpenShift environment.
- Familiarity with Kafka, Camel K, and AWS S3.
Step 1: Access Kafka Messages
- Open your OpenShift dashboard.
- Navigate to the `edge-datalake` project.
- Click on the route for `kafdrop` to open the Kafdrop UI and view the messages.
- To view the messages for a specific ship, click on its name.
- Explore the other ships within Kafka as needed (a minimal consumer sketch follows this list).
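Kafdrop is a read-only UI over the cluster's topics, so everything it displays can also be read programmatically. Below is a minimal sketch of a plain Java consumer; the topic name `olympic` and the in-cluster bootstrap address are assumptions, so substitute the values you see in your `edge-datalake` project.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ShipTopicPeek {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed in-cluster address; use the Kafka bootstrap service from your cluster.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-bootstrap.edge-datalake.svc:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ship-topic-peek");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Hypothetical topic name; pick a ship topic you saw in Kafdrop.
            consumer.subscribe(List.of("olympic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}
```

Each printed record corresponds to one message row in the Kafdrop topic view.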
Step 2: View the Camel K Integration
- Within the `edge-datalake` project, click on the `camel-k` operator.
- Review the code for `kafka-to-s3-integration-olympic` (a generic sketch of such a route follows this list).
- Navigate to `Resources` and click on `Pod`.
- View the logs within the selected pod.
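For orientation while reviewing `kafka-to-s3-integration-olympic`, here is the general shape of a Kafka-to-S3 route in Camel's Java DSL. This is not the project's actual source; the topic, broker address, and bucket name are all assumptions.

```java
import org.apache.camel.builder.RouteBuilder;

public class KafkaToS3Integration extends RouteBuilder {
    @Override
    public void configure() {
        // Hypothetical names: substitute the topic and bucket used in your environment.
        from("kafka:olympic?brokers=kafka-bootstrap.edge-datalake.svc:9092")
            .log("Received ship event: ${body}")
            // aws2-s3 is Camel's S3 component; credentials typically come from
            // a secret or the default AWS credentials provider chain.
            .to("aws2-s3://ship-datalake-bucket?useDefaultCredentialsProvider=true");
    }
}
```

A healthy route of this shape logs one line per consumed exchange, which is what you should expect to see in the pod logs from the step above.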
Step 3: Access AWS Console and View S3 Bucket
- Log in to your AWS console.
- Navigate to the S3 service and locate the relevant bucket.
- View the data within one of the files.
- Scroll to the bottom and click on `Run SQL query` (see the S3 Select sketch after this list).
- Review the results of the query.
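The console's `Run SQL query` button is backed by S3 Select, and the same query can be issued from code. Here is a rough sketch using the AWS SDK for Java (v1); the bucket name, object key, and JSON-lines input format are assumptions, so adjust them to match the file you inspected.

```java
import java.io.InputStream;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CompressionType;
import com.amazonaws.services.s3.model.ExpressionType;
import com.amazonaws.services.s3.model.InputSerialization;
import com.amazonaws.services.s3.model.JSONInput;
import com.amazonaws.services.s3.model.JSONOutput;
import com.amazonaws.services.s3.model.JSONType;
import com.amazonaws.services.s3.model.OutputSerialization;
import com.amazonaws.services.s3.model.SelectObjectContentRequest;
import com.amazonaws.services.s3.model.SelectObjectContentResult;

public class S3SelectPeek {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        SelectObjectContentRequest request = new SelectObjectContentRequest()
                // Hypothetical bucket and object key; use the ones from your console.
                .withBucketName("ship-datalake-bucket")
                .withKey("olympic/data.json")
                .withExpressionType(ExpressionType.SQL)
                // The same kind of query the console's "Run SQL query" button issues.
                .withExpression("SELECT * FROM S3Object s LIMIT 5");

        // Assumed format: newline-delimited JSON, uncompressed.
        request.setInputSerialization(new InputSerialization()
                .withJson(new JSONInput().withType(JSONType.LINES))
                .withCompressionType(CompressionType.NONE));
        request.setOutputSerialization(new OutputSerialization()
                .withJson(new JSONOutput()));

        try (SelectObjectContentResult result = s3.selectObjectContent(request);
             InputStream records = result.getPayload().getRecordsInputStream()) {
            records.transferTo(System.out);
        }
    }
}
```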
Step 4: Create an Instance for Data Push
Attempt to create an instance of either `QueenMary` or `Titanic` to push data to the S3 bucket. For detailed steps, refer to the provided Camel K Ship integration documentation; a rough sketch of such an integration follows below.
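To give a feel for what such an instance can look like (the authoritative steps remain in that documentation), here is a hedged Java DSL sketch for a hypothetical `Titanic` variant; the topic, broker address, and bucket name simply mirror the assumptions used in the earlier route sketch.

```java
import org.apache.camel.builder.RouteBuilder;

// Run on the cluster with the Camel K CLI, e.g.:
//   kamel run TitanicToS3.java -n edge-datalake
public class TitanicToS3 extends RouteBuilder {
    @Override
    public void configure() {
        // Hypothetical topic and bucket names patterned after the olympic integration.
        from("kafka:titanic?brokers=kafka-bootstrap.edge-datalake.svc:9092")
            .log("Pushing titanic event to S3: ${body}")
            .to("aws2-s3://ship-datalake-bucket?useDefaultCredentialsProvider=true");
    }
}
```

Once the integration pod is running, new messages on the ship's topic should appear as objects in the S3 bucket from Step 3.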
By following this tutorial, you should be able to view Kafka messages, integrate with AWS S3, and push data to an S3 bucket using the Data Lake project on OpenShift. Happy exploring!