Install Kafka In Docker



  1. To install packages in a Docker container, the packages should be defined in the Dockerfile using the RUN statement followed by the exact install command: RUN pip install <package> for Python packages, RUN apt-get install <package> on Debian/Ubuntu images, or RUN yum install <package> on CentOS/RHEL images. A short Dockerfile sketch follows this list.
  2. For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container. If you have not installed Docker, download the Community edition and follow the instructions for your OS.
  3. docker container run -p 8080:8080 -d employee-producer links the container's internal port 8080 to external port 8080 on the host. Now go to localhost:8080 and you can see that Tomcat has started successfully. Next we will see how to go inside the container and investigate it using the docker exec command.
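
As a small illustration of item 1, here is a hypothetical Dockerfile that installs a Python package at build time (the base image tag and package name are only examples):

    # Debian-based Python base image (example tag)
    FROM python:3.11-slim
    # install an example package at build time
    RUN pip install requests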

Install Docker on all the master and worker nodes participating in your cluster; that means you need to repeat this process on each node in turn. Note: hardware devices have unique addresses, although some virtual machines may have identical values. Kafka writes data to a scalable disk structure and replicates it for fault tolerance, and producers can wait for write acknowledgments. Stream processing with the Kafka Streams API enables complex aggregations or joins of input streams onto an output stream of processed data. The traditional messaging models are queuing and publish-subscribe.


ThingsBoard cloud


We recommend using ThingsBoard Cloud, a fully managed, scalable and fault-tolerant platform for your IoT applications.

ThingsBoard Cloud is for everyone who would like to use ThingsBoard but doesn't want to host their own instance of the platform.


This guide will help you to install and start ThingsBoard using Docker on Linux or Mac OS.

Prerequisites

Install Docker and Docker Compose for your operating system if you have not done so already.

Running

Depending on the database used, there are three types of ThingsBoard single-instance docker images:

  • thingsboard/tb-postgres - single instance of ThingsBoard with PostgreSQL database.

    Recommended option for small servers with at least 1GB of RAM and minimal load (a few messages per second); 2-4GB is recommended.

  • thingsboard/tb-cassandra - single instance of ThingsBoard with Cassandra database.

    The most performant and recommended option, but it requires at least 4GB of RAM; 8GB is recommended.

  • thingsboard/tb - single instance of ThingsBoard with embedded HSQLDB database.

    Note: not recommended for evaluation or production usage; used only for development purposes and automated tests.

In this guide the thingsboard/tb-postgres image will be used. You can choose any of the other images with different databases (see above).


Choose ThingsBoard queue service

ThingsBoard is able to use various messaging systems/brokers for storing messages and for communication between ThingsBoard services. How do you choose the right queue implementation?

  • In Memory queue implementation is built-in and the default. It is useful for development (PoC) environments and is not suitable for production deployments or any sort of cluster deployments.

  • Kafka is recommended for production deployments. This queue is used in most ThingsBoard production environments now. It is useful for both on-prem and private cloud deployments, and also if you would like to stay independent from your cloud provider. However, some providers also have managed services for Kafka. See AWS MSK for example.

  • RabbitMQ is recommended if you don’t have much load and you already have experience with this messaging system.

  • AWS SQS is a fully managed message queuing service from AWS. Useful if you plan to deploy ThingsBoard on AWS.

  • Google Pub/Sub is a fully managed message queuing service from Google. Useful if you plan to deploy ThingsBoard on Google Cloud.

  • Azure Service Bus is a fully managed message queuing service from Azure. Useful if you plan to deploy ThingsBoard on Azure.

  • Confluent Cloud is a fully managed streaming platform based on Kafka. Useful for cloud-agnostic deployments.

See the corresponding architecture page and rule engine page for more details.

ThingsBoard includes the In Memory queue service and uses it by default without extra settings.

Create a docker compose file for the ThingsBoard queue service:

Add the following lines to the yml file:
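
A minimal docker-compose.yml sketch for this option is shown below. The image, ports, volumes and service name follow the "Where:" list later in this guide; the queue environment variable name is an assumption based on the ThingsBoard configuration reference, so verify it against your ThingsBoard version:

    version: '3.0'
    services:
      mytb:
        restart: always
        image: "thingsboard/tb-postgres"
        ports:
          - "8080:9090"
          - "1883:1883"
          - "5683:5683"
        environment:
          TB_QUEUE_TYPE: in-memory  # assumed variable name; in-memory is the default, so this line is optional
        volumes:
          - ~/.mytb-data:/data
          - ~/.mytb-logs:/var/log/thingsboard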

Apache Kafka is an open-source stream-processing software platform.

Create a docker compose file for the ThingsBoard queue service:

Add the following lines to the yml file:
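
A sketch of the relevant changes, reusing the base service definition from the In Memory example above and assuming a Kafka broker is reachable at kafka:9092 inside the compose network (the variable names are assumptions; check the ThingsBoard configuration reference):

    services:
      mytb:
        # ...same service definition as in the In Memory example...
        environment:
          TB_QUEUE_TYPE: kafka          # assumed variable name
          TB_KAFKA_SERVERS: kafka:9092  # assumed variable name; address of your Kafka broker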

AWS SQS Configuration

To access the AWS SQS service, you first need to create an AWS account.

To work with the AWS SQS service, you will need to create the following credentials using this instruction:

  • Access key ID
  • Secret access key

Create a docker compose file for the ThingsBoard queue service:

Add the following lines to the yml file. Don’t forget to replace “YOUR_KEY”, “YOUR_SECRET” with your real AWS SQS IAM user credentials and “YOUR_REGION” with your real AWS SQS account region:
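
A sketch of the queue-related environment settings, reusing the base service definition from the In Memory example (the variable names are assumptions; verify them against the ThingsBoard configuration reference):

    services:
      mytb:
        # ...same service definition as in the In Memory example...
        environment:
          TB_QUEUE_TYPE: aws-sqs                           # assumed variable names below
          TB_QUEUE_AWS_SQS_ACCESS_KEY_ID: YOUR_KEY
          TB_QUEUE_AWS_SQS_SECRET_ACCESS_KEY: YOUR_SECRET
          TB_QUEUE_AWS_SQS_REGION: YOUR_REGION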

Google Pub/Sub Configuration

To access the Pub/Sub service, you first need to create a Google Cloud account.

To work with the Pub/Sub service, you will need to create a project using this instruction.

Create service account credentials with the role "Editor" or "Admin" using this instruction, and save the JSON file with your service account credentials (step 9 of the instruction).

Create a docker compose file for the ThingsBoard queue service:

Add the following lines to the yml file. Don't forget to replace “YOUR_PROJECT_ID” and “YOUR_SERVICE_ACCOUNT” with your real Pub/Sub project id and service account (the entire contents of the JSON file):
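
A sketch of the queue-related environment settings, reusing the base service definition from the In Memory example (the variable names are assumptions; verify them against the ThingsBoard configuration reference):

    services:
      mytb:
        # ...same service definition as in the In Memory example...
        environment:
          TB_QUEUE_TYPE: pubsub                                  # assumed variable names below
          TB_QUEUE_PUBSUB_PROJECT_ID: YOUR_PROJECT_ID
          TB_QUEUE_PUBSUB_SERVICE_ACCOUNT: YOUR_SERVICE_ACCOUNT  # whole contents of the JSON file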

Azure Service Bus Configuration

To access Azure Service Bus, you first need to create an Azure account.

To work with Service Bus, you will need to create a Service Bus namespace using this instruction.

Create Shared Access Signature using this instruction.

Create a docker compose file for the ThingsBoard queue service:

Add the following lines to the yml file. Don't forget to replace “YOUR_NAMESPACE_NAME” with your real Service Bus namespace name, and “YOUR_SAS_KEY_NAME”, “YOUR_SAS_KEY” with your real Service Bus credentials. Note: “YOUR_SAS_KEY_NAME” is the “SAS Policy” name and “YOUR_SAS_KEY” is the “SAS Policy Primary Key”:
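
A sketch of the queue-related environment settings, reusing the base service definition from the In Memory example (the variable names are assumptions; verify them against the ThingsBoard configuration reference):

    services:
      mytb:
        # ...same service definition as in the In Memory example...
        environment:
          TB_QUEUE_TYPE: service-bus                                 # assumed variable names below
          TB_QUEUE_SERVICE_BUS_NAMESPACE_NAME: YOUR_NAMESPACE_NAME
          TB_QUEUE_SERVICE_BUS_SAS_KEY_NAME: YOUR_SAS_KEY_NAME
          TB_QUEUE_SERVICE_BUS_SAS_KEY: YOUR_SAS_KEY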

RabbitMQ Configuration

To install RabbitMQ, use this instruction.

Create a docker compose file for the ThingsBoard queue service:

Add the following lines to the yml file. Don’t forget to replace “YOUR_USERNAME” and “YOUR_PASSWORD” with your real user credentials, “localhost” and “5672” with your real RabbitMQ host and port:
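
A sketch of the queue-related environment settings, reusing the base service definition from the In Memory example (the variable names are assumptions; verify them against the ThingsBoard configuration reference):

    services:
      mytb:
        # ...same service definition as in the In Memory example...
        environment:
          TB_QUEUE_TYPE: rabbitmq                      # assumed variable names below
          TB_QUEUE_RABBIT_MQ_USERNAME: YOUR_USERNAME
          TB_QUEUE_RABBIT_MQ_PASSWORD: YOUR_PASSWORD
          TB_QUEUE_RABBIT_MQ_HOST: localhost
          TB_QUEUE_RABBIT_MQ_PORT: "5672"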

Confluent Cloud Configuration

To access Confluent Cloud you should first create an account, then create a Kafka cluster and get your API Key.

Create a docker compose file for the ThingsBoard queue service:

Add the following line to the yml file. Don’t forget to replace “CLUSTER_API_KEY”, “CLUSTER_API_SECRET” and “localhost:9092” with your real Confluent Cloud bootstrap servers:
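
A sketch of the queue-related environment settings, reusing the base service definition from the In Memory example. Confluent Cloud is accessed through the Kafka queue type over SASL; the variable names below, especially the SASL one, are assumptions to verify against the ThingsBoard configuration reference:

    services:
      mytb:
        # ...same service definition as in the In Memory example...
        environment:
          TB_QUEUE_TYPE: kafka              # Confluent Cloud is Kafka-compatible
          TB_KAFKA_SERVERS: localhost:9092  # your Confluent Cloud bootstrap servers
          # SASL credentials; this variable name is an assumption:
          TB_QUEUE_KAFKA_CONFLUENT_SASL_JAAS_CONFIG: >-
            org.apache.kafka.common.security.plain.PlainLoginModule required
            username="CLUSTER_API_KEY" password="CLUSTER_API_SECRET";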

Where:

  • 8080:9090 - connect local port 8080 to the exposed internal HTTP port 9090
  • 1883:1883 - connect local port 1883 to the exposed internal MQTT port 1883
  • 5683:5683 - connect local port 5683 to the exposed internal CoAP port 5683
  • ~/.mytb-data:/data - mounts the host's directory ~/.mytb-data to the ThingsBoard database data directory
  • ~/.mytb-logs:/var/log/thingsboard - mounts the host's directory ~/.mytb-logs to the ThingsBoard logs directory
  • mytb - friendly local name of this container
  • restart: always - automatically start ThingsBoard in case of system reboot and restart it in case of failure
  • image: thingsboard/tb-postgres - the docker image; can also be thingsboard/tb-cassandra or thingsboard/tb

Before starting the Docker container, run the following commands to create directories for storing data and logs, and then change their owner to the docker container user. To change the owner, the chown command is used, which requires sudo permissions (the command will request a password for sudo access):
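
A sketch of these commands, assuming the ThingsBoard process inside the container runs as user and group ID 799 (check the image documentation for the exact IDs):

    mkdir -p ~/.mytb-data && sudo chown -R 799:799 ~/.mytb-data
    mkdir -p ~/.mytb-logs && sudo chown -R 799:799 ~/.mytb-logs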

NOTE: replace the directories ~/.mytb-data and ~/.mytb-logs with the directories you're planning to use in docker-compose.yml.

Open a terminal in the directory which contains the docker-compose.yml file and execute the following command to bring the docker compose project up:
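
    docker-compose up -d

The -d flag starts the containers in the background (detached mode).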

After executing this command you can open http://{your-host-ip}:8080 in your browser (for example, http://localhost:8080). You should see the ThingsBoard login page. Use the following default credentials:

  • System Administrator: sysadmin@thingsboard.org / sysadmin
  • Tenant Administrator: tenant@thingsboard.org / tenant
  • Customer User: customer@thingsboard.org / customer

You can always change the password for each account on the account profile page.

Detaching, stop and start commands

You can detach from the session terminal with Ctrl-p Ctrl-q - the container will keep running in the background.

In case of any issues you can examine the service logs for errors. For example, to see the ThingsBoard node logs, execute the following command:
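
Using the mytb service name from the compose file:

    docker-compose logs -f mytb

The -f flag follows the log output; press Ctrl-C to stop following.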

To stop the container:
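
    docker-compose stop mytb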

To start the container:
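
    docker-compose start mytb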

Upgrading


In order to update to the latest image, execute the following commands:
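
A possible sequence is sketched below, using the mytb service and the thingsboard/tb-postgres image from this guide. The upgrade-tb.sh entry point is an assumption based on the official image; check the image documentation before upgrading:

    docker-compose pull mytb
    docker-compose stop mytb
    # run the database upgrade script against the existing data directory (assumed script name)
    docker run -it -v ~/.mytb-data:/data --rm thingsboard/tb-postgres upgrade-tb.sh
    docker-compose rm mytb
    docker-compose up -d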

NOTE: if you use a different database, change the image name in all commands from thingsboard/tb-postgres to thingsboard/tb-cassandra or thingsboard/tb correspondingly.

NOTE: replace the host's directory ~/.mytb-data with the directory used during container creation.

NOTE: if you have used one database and want to try another one, remove the current docker container using the docker-compose rm command and use a different directory for ~/.mytb-data in docker-compose.yml.

Troubleshooting

DNS issues

Note: if you observe errors related to DNS issues, for example failures to resolve external host names from inside the container, you may configure your system to use Google public DNS servers. See the corresponding Linux and Mac OS instructions.
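
For example, on many Linux systems this amounts to pointing /etc/resolv.conf at the Google public DNS addresses (some distributions manage this file automatically, so follow the linked instructions for a persistent change):

    # /etc/resolv.conf
    nameserver 8.8.8.8
    nameserver 8.8.4.4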

Next steps

  • Getting started guides - These guides provide a quick overview of the main ThingsBoard features. Designed to be completed in 15-30 minutes.

  • Connect your device - Learn how to connect devices based on your connectivity technology or solution.

  • Data visualization - These guides contain instructions on how to configure complex ThingsBoard dashboards.

  • Data processing & actions - Learn how to use ThingsBoard Rule Engine.

  • IoT Data analytics - Learn how to use rule engine to perform basic analytics tasks.

  • Hardware samples - Learn how to connect various hardware platforms to ThingsBoard.

  • Advanced features - Learn about advanced ThingsBoard features.

  • Contribution and Development - Learn about contribution and development in ThingsBoard.


For local development and testing, you can run Pulsar in standalone mode on your own machine within a Docker container.

If you have not installed Docker, download the Community edition and follow the instructions for your OS.

Start Pulsar in Docker

  • For MacOS, Linux, and Windows:
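
A sketch of the command, using the ports and named volumes described in the notes below (the image tag is a placeholder; pick a concrete Pulsar release):

    docker run -it -p 6650:6650 -p 8080:8080 \
      --mount source=pulsardata,target=/pulsar/data \
      --mount source=pulsarconf,target=/pulsar/conf \
      apachepulsar/pulsar:latest \
      bin/pulsar standalone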

A few things to note about this command:

  • The data, metadata, and configuration are persisted on Docker volumes so that the container does not start 'fresh' every time it is restarted. For details on the volumes, you can use docker volume inspect <sourcename>
  • For Docker on Windows, make sure to configure it to use Linux containers

If Pulsar starts successfully, you will see INFO-level log messages in the terminal output.

Tip

When you start a local standalone cluster, a public/default namespace is created automatically. The namespace is used for development purposes. All Pulsar topics are managed within namespaces. For more information, see Topics.

Use Pulsar in Docker

Pulsar offers client libraries for Java, Go, Python and C++. If you're running a local standalone cluster, you can use one of these root URLs to interact with your cluster:

  • pulsar://localhost:6650
  • http://localhost:8080


The following example will help you get started with Pulsar quickly by using the Python client API.

Install the Pulsar Python client library directly from PyPI:
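
    pip install pulsar-client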

Consume a message

Create a consumer and subscribe to the topic:
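
A minimal sketch using the pulsar-client API; the topic name my-topic and subscription name my-sub are arbitrary examples:

    import pulsar

    # connect to the standalone cluster started above
    client = pulsar.Client('pulsar://localhost:6650')

    # subscribe to an example topic
    consumer = client.subscribe('my-topic', subscription_name='my-sub')

    while True:
        msg = consumer.receive()
        print("Received message: '%s'" % msg.data())
        consumer.acknowledge(msg)

    client.close()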

Produce a message

Now start a producer to send some test messages:
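
A matching producer sketch that sends ten test messages to the same example topic:

    import pulsar

    client = pulsar.Client('pulsar://localhost:6650')
    producer = client.create_producer('my-topic')

    # send a few test messages; payloads must be bytes
    for i in range(10):
        producer.send(('hello-pulsar-%d' % i).encode('utf-8'))

    client.close()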


Get the topic statistics

In Pulsar, you can use REST, Java, or command-line tools to control every aspect of the system. For details on APIs, refer to Admin API Overview.

In the simplest example, you can use curl to probe the stats for a particular topic:
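
For example, for the my-topic topic used above in the public/default namespace (the path follows the Pulsar v2 admin REST API):

    curl http://localhost:8080/admin/v2/persistent/public/default/my-topic/stats | python3 -m json.tool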

The output is a JSON document with the topic's statistics, including message rates, storage size and subscription details.