
Contributing to AutoMQ

Thank you for your interest in contributing! We love community contributions. Read on to learn how to contribute to AutoMQ. We appreciate first-time contributors, and we are happy to assist you in getting started. If you have questions, just reach out to us via the WeChat group or Slack!

Before getting started, please review AutoMQ's Code of Conduct. Everyone interacting in Slack or WeChat must follow the Code of Conduct.

Suggested Onboarding Path for New Contributors

Quick Start for First-Time Contributors (Recommended)

If you are new to AutoMQ, we recommend starting with the simplest path before diving into local builds.

Option 1: Quick exploration (recommended for beginners)

  • Run AutoMQ using Docker as described in the README
  • Verify you can:
    • Start the broker
    • Create a topic
    • Produce and consume messages
  • This helps you understand AutoMQ behavior without local environment setup

Option 2: DevKit local development (recommended for code contributions)

  • Use the Docker Compose-based DevKit to start a local AutoMQ cluster with MinIO and JDWP debug ports
  • Quick start:
    • cd devkit
    • just start-build (single node) or just start-build 3 (3-node cluster)
  • Useful shortcuts: just topic-list, just produce <topic>, just logs
  • See full instructions in devkit/README.md

Option 3: Manual local development setup

  • Follow the steps below to build and run AutoMQ locally with IDEA/manual configuration
  • This path is useful when you need full control over local components and configs

Tip: If you encounter setup issues, check devkit/README.md, “Local Debug with IDEA”, and S3 configuration sections below.

After gaining familiarity with AutoMQ's core concepts and behavior, contributors who want to work on code can follow the steps in this guide to build and run AutoMQ locally. For most contributors, we recommend starting with DevKit (devkit/README.md) and using the manual setup only when deeper environment customization is needed.

Code Contributions

Finding or Reporting Issues

  • Find an existing issue: Look through the existing issues. Issues open for contribution are often tagged with good first issue, and these are a great place to start. To claim an issue, simply reply with '/assign', and the GitHub bot will assign it to you.
  • Report a new issue: If you've found a bug or have a feature request, please create a new issue. Select the appropriate template (Bug Report or Feature Request) and fill out the form provided.

If you have any questions about an issue, please feel free to ask in the issue comments. We will do our best to clarify any doubts you may have.

Submitting Pull Requests

The usual workflow of code contribution is:

  1. Fork the AutoMQ repository.
  2. Clone the repository locally.
  3. Create a branch for your feature/bug fix with the format {YOUR_USERNAME}/{FEATURE/BUG} (e.g. jdoe/source-stock-api-stream-fix).
  4. Make and commit changes.
  5. Push your local branch to your fork.
  6. Submit a Pull Request so that we can review your changes.
  7. Link an existing issue (one you created via the steps above or one you claimed) that does not carry the needs triage label to your Pull Request. Otherwise, a pull request without a linked issue will be closed.
  8. Write a PR title and description that follows the Pull Request Template.
  9. An AutoMQ maintainer will trigger the CI tests for you and review the code.
  10. Review and respond to feedback and questions from AutoMQ maintainers.
  11. Merge the contribution.

Pull Request reviews are done on a regular basis.
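The branch-naming convention in step 3 can be sketched as follows. This is an illustrative demo using a throwaway repository; in practice you would run the checkout inside a clone of your AutoMQ fork.

```shell
set -e
# Demonstrate the {YOUR_USERNAME}/{FEATURE/BUG} branch convention
# in a throwaway repository (jdoe is a hypothetical username).
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
git checkout -q -b jdoe/source-stock-api-stream-fix
git branch --show-current
```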

Note

Please make sure you respond to our feedback/questions and sign our CLA.

Pull Requests without updates will be closed due to inactivity.

Requirement

Requirement             Version
Compiling requirements  JDK 17
Compiling requirements  Scala 2.13
Running requirements    JDK 17

Note: At least 8GB RAM is recommended for local development and debugging.

Tip: You can refer to the document to install Scala 2.13.

Local Debug with IDEA

Gradle

Building AutoMQ is the same as building Apache Kafka. Kafka uses Gradle as its project management tool. Gradle projects are managed by scripts written in Groovy syntax; within the Kafka project, the main project management configuration is found in the build.gradle file in the root directory, which serves a function similar to the root POM in Maven projects. Gradle also supports a separate build.gradle for each module, but Kafka does not do this; all modules are managed by the build.gradle file in the root directory.
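For readers unfamiliar with this pattern, a root build.gradle that configures every module centrally looks roughly like the following. This is an illustrative sketch, not AutoMQ's actual build file.

```groovy
// Illustrative sketch: a root build.gradle configuring all modules centrally,
// the pattern Kafka follows (not AutoMQ's actual build.gradle).
subprojects {
    apply plugin: 'java'
    java {
        toolchain {
            // JDK 17 is the compiling requirement listed above
            languageVersion = JavaLanguageVersion.of(17)
        }
    }
}
```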

It is not recommended to manually install Gradle. The gradlew script in the root directory will automatically download Gradle for you, and the version is also specified by the gradlew script.

Build

./gradlew jar -x test

Prepare S3 service

Refer to this documentation to install localstack to mock a local S3 service, or use the AWS S3 service directly.

If you are using localstack, create a bucket with the following command:

aws s3api create-bucket --bucket ko3 --endpoint-url=http://127.0.0.1:4566

Modify Configuration

Modify the config/kraft/server.properties file. The following settings need to be changed:

s3.endpoint=https://s3.amazonaws.com

# The region of S3 service
# For Aliyun, you have to set the region to aws-global. See https://www.alibabacloud.com/help/zh/oss/developer-reference/use-amazon-s3-sdks-to-access-oss.
s3.region=us-east-1

# The bucket of S3 service to store data
s3.bucket=ko3

Tip: If you're using localstack, make sure to set s3.endpoint to http://127.0.0.1:4566 (not localhost), set s3.region to us-east-1, and make sure s3.bucket matches the bucket created earlier.
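Putting the tip above together, a localstack-flavored version of the three settings in config/kraft/server.properties would look like this (the bucket name assumes the ko3 bucket created earlier):

```properties
# localstack endpoint: use 127.0.0.1, not localhost
s3.endpoint=http://127.0.0.1:4566

# The region of S3 service
s3.region=us-east-1

# The bucket of S3 service to store data (created earlier)
s3.bucket=ko3
```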

Format

Generate a cluster UUID:

KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"

Format the metadata catalog:

bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties

IDE Start Configuration

Item           Value
Main           core/src/main/scala/kafka/Kafka.scala
ClassPath      -cp kafka.core.main
VM Options     -Xmx1G -Xms1G -server -XX:+UseZGC -XX:MaxDirectMemorySize=2G -Dkafka.logs.dir=logs/ -Dlog4j.configuration=file:config/log4j.properties -Dio.netty.leakDetection.level=paranoid
CLI Arguments  config/kraft/server.properties
Environment    KAFKA_S3_ACCESS_KEY=test;KAFKA_S3_SECRET_KEY=test

Tip: If you are using localstack, any values for the access key and secret key will work. If you are using a real S3 service, set KAFKA_S3_ACCESS_KEY and KAFKA_S3_SECRET_KEY to real credentials with read/write permission on the S3 service.
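When launching outside the IDE, the same environment entries can be set as shell exports before starting the broker. The values below are the localstack placeholders from the table above; substitute real credentials for a real S3 service.

```shell
# Dummy credentials are fine for localstack (the values are not validated);
# for real S3, export keys with read/write permission on the bucket instead.
export KAFKA_S3_ACCESS_KEY=test
export KAFKA_S3_SECRET_KEY=test
echo "KAFKA_S3_ACCESS_KEY=$KAFKA_S3_ACCESS_KEY"
```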

Documentation

We welcome Pull Requests that enhance the grammar, structure, or fix typos in our documentation.

Engage with the Community

Another crucial way to contribute is by reporting bugs and helping other users in the community.

You're welcome to join the community Slack to help other users, or report bugs on GitHub.

Attribution

This contributing document is adapted from that of Airbyte.