
Quick installation of MyBPM via docker-compose - demonstration mode

You can quickly install the system on a single machine using docker-compose.

Requirements

You need any Linux system.

Install Docker together with the docker-compose plugin.

Obtain a login and password from MyBPM for access to the image repository hub.mybpm.kz

Then log in to the registry, supplying the received login and password to the command:

docker login hub.mybpm.kz
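
Before going further you can sanity-check that the required tools are on PATH. A minimal sketch (the `need` helper is an illustration, not part of MyBPM):

```shell
#!/usr/bin/env bash
# Fail early if a required command is missing from PATH.
need() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

# On the target machine you would run:
#   need docker && docker compose version
need sh && echo "sh found"   # portable demo of the helper
```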

Installation

Create an empty directory and enter it:

mkdir mybpm
cd mybpm

Now let's prepare the environment for the docker-compose.yaml file.

The MongoDB initialization file:

mkdir mongo-init
cat > mongo-init/rs-init.js

With the text:

rs.initiate();

Press Ctrl+D to finish input.
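
If you prefer not to end input with Ctrl+D, the same file can be created non-interactively with a heredoc:

```shell
#!/usr/bin/env bash
# Create the MongoDB replica-set init script without interactive input.
mkdir -p mongo-init
cat > mongo-init/rs-init.js <<'EOF'
rs.initiate();
EOF
```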

Directory for Kafka:

mkdir kf_work

Script that generates the Kafka broker cluster id:

cat > kf_work/create-kafka-cluster-id.sh

With the contents:

#!/usr/bin/env bash
set -e
cd "$(dirname "$0")" || exit 113
KAFKA_CLUSTER_ID="$(docker run --rm -i bitnami/kafka:3.5.1 kafka-storage.sh random-uuid)"
KAFKA_CLUSTER_ID="$(echo -n "$KAFKA_CLUSTER_ID" | tr -d '[:blank:]' | tr -d '\n')"
echo -n "$KAFKA_CLUSTER_ID" > kafka-cluster-id.txt

Press Ctrl+D to finish input. Then run the script:

bash kf_work/create-kafka-cluster-id.sh

The following file should appear:

kf_work/kafka-cluster-id.txt
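
In Kafka 3.x, `random-uuid` emits a 22-character URL-safe Base64 string. A quick sanity check of the generated file (the `check_cluster_id` helper is an illustration, not part of the product):

```shell
#!/usr/bin/env bash
# Verify that a Kafka cluster id looks like a 22-char URL-safe Base64 token.
check_cluster_id() {
  echo -n "$1" | grep -Eq '^[A-Za-z0-9_-]{22}$'
}

# On the target machine:
#   check_cluster_id "$(cat kf_work/kafka-cluster-id.txt)"
check_cluster_id "MkU3OEVhNTcwNTJENDM2Qk" && echo "looks valid"
```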

Kafka configuration file:

cat > kf_work/kf.server.properties

With the contents:

process.roles=broker,controller
node.id=1
controller.quorum.voters=1@kf:9091

############################# Socket Server Settings #############################

listeners=CONTROLLER://0.0.0.0:9091,IN_BROKER://0.0.0.0:9092,FROM_LOCALHOST://0.0.0.0:9093
advertised.listeners=IN_BROKER://kf:9092,FROM_LOCALHOST://localhost:10011
inter.broker.listener.name=IN_BROKER
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,IN_BROKER:PLAINTEXT,FROM_LOCALHOST:PLAINTEXT
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/kafka-data
num.partitions=4
offsets.topic.num.partitions=4
num.recovery.threads.per.data.dir=1

auto.create.topics.enable=true
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
default.replication.factor=1
unclean.leader.election.enable=true
compression.type=gzip
log.roll.hours=1

log.flush.interval.messages=10000
log.flush.interval.ms=1000

############################# Log Retention Policy #############################

log.retention.hours=-1
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

Press Ctrl+D to finish input.

Kafka startup script:

cat > kf_work/run.sh

With the contents:

#!/usr/bin/env bash
set -e
cd /kafka-data
if [ ! -f "/kafka-data/bootstrap.checkpoint" ] || [ ! -f  "/kafka-data/meta.properties" ]; then
  rm -f "/kafka-data/bootstrap.checkpoint"
  rm -f "/kafka-data/meta.properties"
  KAFKA_CLUSTER_ID="$(cat /kf_work/kafka-cluster-id.txt)"
  kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c /kf_work/kf.server.properties
fi
exec kafka-server-start.sh /kf_work/kf.server.properties

And make it executable:

chmod +x kf_work/run.sh

PostgreSQL initialization file (scripts in docker-entrypoint-initdb.d are executed only on the very first start, while the data directory is still empty):

mkdir pg-init
cat > pg-init/init.sql

With the contents:

CREATE USER mybpm WITH ENCRYPTED PASSWORD 't30my2ayTWsGKC0lf7P0SfCFc421fF';
ALTER USER mybpm WITH CREATEROLE;
CREATE DATABASE mybpm_aux1 WITH OWNER mybpm;
CREATE USER test_register_util WITH ENCRYPTED PASSWORD '25UsbGa7G76X01F30K09D7v1a96vYZUybWVsZWqN';
ALTER USER test_register_util WITH CREATEROLE;
CREATE DATABASE test_register_util_db WITH OWNER test_register_util;
CREATE USER in_migration WITH ENCRYPTED PASSWORD 'xObV19Du1pUB931H5pQ8BnlR3RU5iY2T6fu3Yk4Z';
CREATE DATABASE in_migration WITH OWNER in_migration;

File for Elasticsearch:

mkdir es
cat > es/log4j2.properties

With the contents:

status=error
appender.console.type=Console
appender.console.name=console
appender.console.layout.type=PatternLayout
appender.console.layout.pattern=[%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
######## Server JSON ############################
appender.rolling.type=RollingFile
appender.rolling.name=rolling
appender.rolling.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json
appender.rolling.layout.type=ECSJsonLayout
appender.rolling.layout.dataset=elasticsearch.server
appender.rolling.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz
appender.rolling.policies.type=Policies
appender.rolling.policies.time.type=TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval=1
appender.rolling.policies.time.modulate=true
appender.rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.rolling.policies.size.size=128MB
appender.rolling.strategy.type=DefaultRolloverStrategy
appender.rolling.strategy.fileIndex=nomax
appender.rolling.strategy.action.type=Delete
appender.rolling.strategy.action.basepath=${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type=IfFileName
appender.rolling.strategy.action.condition.glob=${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type=IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds=2GB
################################################
######## Server -  old style pattern ###########
appender.rolling_old.type=RollingFile
appender.rolling_old.name=rolling_old
appender.rolling_old.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling_old.layout.type=ECSJsonLayout
appender.rolling_old.layout.pattern=[%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
appender.rolling_old.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_old.policies.type=Policies
appender.rolling_old.policies.time.type=TimeBasedTriggeringPolicy
appender.rolling_old.policies.time.interval=1
appender.rolling_old.policies.time.modulate=true
appender.rolling_old.policies.size.type=SizeBasedTriggeringPolicy
appender.rolling_old.policies.size.size=128MB
appender.rolling_old.strategy.type=DefaultRolloverStrategy
appender.rolling_old.strategy.fileIndex=nomax
appender.rolling_old.strategy.action.type=Delete
appender.rolling_old.strategy.action.basepath=${sys:es.logs.base_path}
appender.rolling_old.strategy.action.condition.type=IfFileName
appender.rolling_old.strategy.action.condition.glob=${sys:es.logs.cluster_name}-*
appender.rolling_old.strategy.action.condition.nested_condition.type=IfAccumulatedFileSize
appender.rolling_old.strategy.action.condition.nested_condition.exceeds=2GB
################################################
rootLogger.level=info
rootLogger.appenderRef.console.ref=console
rootLogger.appenderRef.rolling.ref=rolling
rootLogger.appenderRef.rolling_old.ref=rolling_old
######## Deprecation JSON #######################
appender.deprecation_rolling.type=RollingFile
appender.deprecation_rolling.name=deprecation_rolling
appender.deprecation_rolling.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.json
appender.deprecation_rolling.layout.type=ECSJsonLayout
# Intentionally follows a different pattern to above
appender.deprecation_rolling.layout.dataset=deprecation.elasticsearch
appender.deprecation_rolling.filter.rate_limit.type=RateLimitingFilter
appender.deprecation_rolling.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.json.gz
appender.deprecation_rolling.policies.type=Policies
appender.deprecation_rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size=1GB
appender.deprecation_rolling.strategy.type=DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max=4
appender.header_warning.type=HeaderWarningAppender
appender.header_warning.name=header_warning
#################################################
logger.deprecation.name=org.elasticsearch.deprecation
logger.deprecation.level=WARN
logger.deprecation.appenderRef.deprecation_rolling.ref=deprecation_rolling
logger.deprecation.appenderRef.header_warning.ref=header_warning
logger.deprecation.additivity=false
######## Search slowlog JSON ####################
appender.index_search_slowlog_rolling.type=RollingFile
appender.index_search_slowlog_rolling.name=index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
.cluster_name}_index_search_slowlog.json
appender.index_search_slowlog_rolling.layout.type=ECSJsonLayout
appender.index_search_slowlog_rolling.layout.dataset=elasticsearch.index_search_slowlog
appender.index_search_slowlog_rolling.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
.cluster_name}_index_search_slowlog-%i.json.gz
appender.index_search_slowlog_rolling.policies.type=Policies
appender.index_search_slowlog_rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.size.size=1GB
appender.index_search_slowlog_rolling.strategy.type=DefaultRolloverStrategy
appender.index_search_slowlog_rolling.strategy.max=4
#################################################
#################################################
logger.index_search_slowlog_rolling.name=index.search.slowlog
logger.index_search_slowlog_rolling.level=trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref=index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity=false
######## Indexing slowlog JSON ##################
appender.index_indexing_slowlog_rolling.type=RollingFile
appender.index_indexing_slowlog_rolling.name=index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
_index_indexing_slowlog.json
appender.index_indexing_slowlog_rolling.layout.type=ECSJsonLayout
appender.index_indexing_slowlog_rolling.layout.dataset=elasticsearch.index_indexing_slowlog
appender.index_indexing_slowlog_rolling.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
_index_indexing_slowlog-%i.json.gz
appender.index_indexing_slowlog_rolling.policies.type=Policies
appender.index_indexing_slowlog_rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.size.size=1GB
appender.index_indexing_slowlog_rolling.strategy.type=DefaultRolloverStrategy
appender.index_indexing_slowlog_rolling.strategy.max=4
#################################################
logger.index_indexing_slowlog.name=index.indexing.slowlog.index
logger.index_indexing_slowlog.level=trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref=index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity=false
logger.com_amazonaws.name=com.amazonaws
logger.com_amazonaws.level=warn
logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.name=com.amazonaws.jmx.SdkMBeanRegistrySupport
logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.level=error
logger.com_amazonaws_metrics_AwsSdkMetrics.name=com.amazonaws.metrics.AwsSdkMetrics
logger.com_amazonaws_metrics_AwsSdkMetrics.level=error
logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.name=com.amazonaws.auth.profile.internal.BasicProfileConfigFileLoader
logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.level=error
logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.name=com.amazonaws.services.s3.internal.UseArnRegionResolver
logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.level=error
appender.audit_rolling.type=RollingFile
appender.audit_rolling.name=audit_rolling
appender.audit_rolling.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit.json
appender.audit_rolling.layout.type=PatternLayout
appender.audit_rolling.layout.pattern={\
"type":"audit", \
"timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss,SSSZ}"\
%varsNotEmpty{, "cluster.name":"%enc{%map{cluster.name}}{JSON}"}\
%varsNotEmpty{, "cluster.uuid":"%enc{%map{cluster.uuid}}{JSON}"}\
%varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
%varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
%varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
%varsNotEmpty{, "host.ip":"%enc{%map{host.ip}}{JSON}"}\
%varsNotEmpty{, "event.type":"%enc{%map{event.type}}{JSON}"}\
%varsNotEmpty{, "event.action":"%enc{%map{event.action}}{JSON}"}\
%varsNotEmpty{, "authentication.type":"%enc{%map{authentication.type}}{JSON}"}\
%varsNotEmpty{, "user.name":"%enc{%map{user.name}}{JSON}"}\
%varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
%varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
%varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
%varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
%varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
%varsNotEmpty{, "user.roles":%map{user.roles}}\
%varsNotEmpty{, "apikey.id":"%enc{%map{apikey.id}}{JSON}"}\
%varsNotEmpty{, "apikey.name":"%enc{%map{apikey.name}}{JSON}"}\
%varsNotEmpty{, "authentication.token.name":"%enc{%map{authentication.token.name}}{JSON}"}\
%varsNotEmpty{, "authentication.token.type":"%enc{%map{authentication.token.type}}{JSON}"}\
%varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
%varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
%varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
%varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
%varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
%varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
%varsNotEmpty{, "request.body":"%enc{%map{request.body}}{JSON}"}\
%varsNotEmpty{, "request.id":"%enc{%map{request.id}}{JSON}"}\
%varsNotEmpty{, "action":"%enc{%map{action}}{JSON}"}\
%varsNotEmpty{, "request.name":"%enc{%map{request.name}}{JSON}"}\
%varsNotEmpty{, "indices":%map{indices}}\
%varsNotEmpty{, "opaque_id":"%enc{%map{opaque_id}}{JSON}"}\
%varsNotEmpty{, "trace.id":"%enc{%map{trace.id}}{JSON}"}\
%varsNotEmpty{, "x_forwarded_for":"%enc{%map{x_forwarded_for}}{JSON}"}\
%varsNotEmpty{, "transport.profile":"%enc{%map{transport.profile}}{JSON}"}\
%varsNotEmpty{, "rule":"%enc{%map{rule}}{JSON}"}\
%varsNotEmpty{, "put":%map{put}}\
%varsNotEmpty{, "delete":%map{delete}}\
%varsNotEmpty{, "change":%map{change}}\
%varsNotEmpty{, "create":%map{create}}\
%varsNotEmpty{, "invalidate":%map{invalidate}}\
}%n
# "node.name" node name from the `elasticsearch.yml` settings
# "node.id" node id which should not change between cluster restarts
# "host.name" unresolved hostname of the local node
# "host.ip" the local bound ip (i.e. the ip listening for connections)
# "origin.type" a received REST request is translated into one or more transport requests. This indicates which processing layer generated the event "rest" or "transport" (internal)
# "event.action" the name of the audited event, eg. "authentication_failed", "access_granted", "run_as_granted", etc.
# "authentication.type" one of "realm", "api_key", "token", "anonymous" or "internal"
# "user.name" the subject name as authenticated by a realm
# "user.run_by.name" the original authenticated subject name that is impersonating another one.
# "user.run_as.name" if this "event.action" is of a run_as type, this is the subject name to be impersonated as.
# "user.realm" the name of the realm that authenticated "user.name"
# "user.run_by.realm" the realm name of the impersonating subject ("user.run_by.name")
# "user.run_as.realm" if this "event.action" is of a run_as type, this is the realm name the impersonated user is looked up from
# "user.roles" the roles array of the user; these are the roles that are granting privileges
# "apikey.id" this field is present if and only if the "authentication.type" is "api_key"
# "apikey.name" this field is present if and only if the "authentication.type" is "api_key"
# "authentication.token.name" this field is present if and only if the authenticating credential is a service account token
# "authentication.token.type" this field is present if and only if the authenticating credential is a service account token
# "event.type" informs about what internal system generated the event; possible values are "rest", "transport", "ip_filter" and "security_config_change"
# "origin.address" the remote address and port of the first network hop, i.e. a REST proxy or another cluster node
# "realm" name of a realm that has generated an "authentication_failed" or an "authentication_successful"; the subject is not yet authenticated
# "url.path" the URI component between the port and the query string; it is percent (URL) encoded
# "url.query" the URI component after the path and before the fragment; it is percent (URL) encoded
# "request.method" the method of the HTTP request, i.e. one of GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH, TRACE, CONNECT
# "request.body" the content of the request body entity, JSON escaped
# "request.id" a synthetic identifier for the incoming request, this is unique per incoming request, and consistent across all audit events generated by that request
# "action" an action is the most granular operation that is authorized and this identifies it in a namespaced way (internal)
# "request.name" if the event is in connection to a transport message this is the name of the request class, similar to how rest requests are identified by the url path (internal)
# "indices" the array of indices that the "action" is acting upon
# "opaque_id" opaque value conveyed by the "X-Opaque-Id" request header
# "trace_id" an identifier conveyed by the part of "traceparent" request header
# "x_forwarded_for" the addresses from the "X-Forwarded-For" request header, as a verbatim string value (not an array)
# "transport.profile" name of the transport profile in case this is a "connection_granted" or "connection_denied" event
# "rule" name of the applied rule if the "origin.type" is "ip_filter"
# the "put", "delete", "change", "create", "invalidate" fields are only present
# when the "event.type" is "security_config_change" and contain the security config change (as an object) taking effect
appender.audit_rolling.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit-%d{yyyy-MM-dd}-%i.json.gz
appender.audit_rolling.policies.type=Policies
appender.audit_rolling.policies.time.type=TimeBasedTriggeringPolicy
appender.audit_rolling.policies.time.interval=1
appender.audit_rolling.policies.time.modulate=true
appender.audit_rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.audit_rolling.policies.size.size=1GB
appender.audit_rolling.strategy.type=DefaultRolloverStrategy
appender.audit_rolling.strategy.fileIndex=nomax
logger.xpack_security_audit_logfile.name=org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
logger.xpack_security_audit_logfile.level=info
logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref=audit_rolling
logger.xpack_security_audit_logfile.additivity=false
logger.xmlsig.name=org.apache.xml.security.signature.XMLSignature
logger.xmlsig.level=error
logger.samlxml_decrypt.name=org.opensaml.xmlsec.encryption.support.Decrypter
logger.samlxml_decrypt.level=fatal
logger.saml2_decrypt.name=org.opensaml.saml.saml2.encryption.Decrypter
logger.saml2_decrypt.level=fatal

Now the docker-compose file itself:

cat > docker-compose.yaml

With the contents:

networks:
  default:
    name: mybpm-aio-network

services:

  web:
    container_name: mybpm-aio-web
    image: hub.mybpm.kz/mybpm-web-release:4.24.12.0
    restart: always
    ports:
      - "10000:8000"              #                 WEB        http://localhost:10000
    depends_on:
      - api
    environment:
      MYBPM_API_HOST: "api"
      MYBPM_API_PORT: "8080"

  api:
    container_name: mybpm-aio-api
    image: hub.mybpm.kz/mybpm-api-release:4.24.12.0
    restart: always
    ports:
      - "10001:8080"              #                 SERVER   http://localhost:10001/web/v2/test/hello
      - "10002:5005"              #                 DEBUG    localhost 10002
      - "10003:10003"              #                 JConsole localhost 10003
    volumes:
      - ~/volumes/mybpm-aio-debug/api-logs:/var/log/mybpm
    #    command:
    #      - tail
    #      - -f
    #      - /etc/hosts
    depends_on: [kf, es, zoo, mongo, pg]
    environment:
      #      MY1BPM_PLUGINS: "test"
      MYBPM_USE_SHENANDOAH: "yes"
      MYBPM_JAVA_DEBUG: "yes"
      MYBPM_JAVA_CONSOLE: "yes"
      MYBPM_JAVA_RMI_HOST: "192.168.111.111" # put your machine's IP address here
      MYBPM_JAVA_RMI_PORT: "10003"
      MYBPM_JAVA_JMX_PORT: "10003"
      MYBPM_LOGS_COLORED: "true"
      MYBPM_COMPANY_CODE: "greetgo"
      MYBPM_MONGO_SERVERS: "mongodb://mongo:27017"
      MYBPM_ZOOKEEPER_SERVERS: "zoo:2181"
      MYBPM_KAFKA_SERVERS: "kf:9092"
      MYBPM_AUX1_DB_NAME: "mybpm_aux1"
      MYBPM_AUX1_HOST: "pg"
      MYBPM_AUX1_PORT: "5432"
      MYBPM_AUX1_USER_NAME: "mybpm"
      MYBPM_AUX1_PASSWORD: "t30my2ayTWsGKC0lf7P0SfCFc421fF"
      MYBPM_ELASTIC_SEARCH_SERVERS: "es:9200"
      MYBPM_FILES_MONGO_SERVERS: "mongodb://mongo:27017"
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:8080/web/health" ]
      interval: 30s
      timeout: 10s
      retries: 30
      start_period: 10s

  pg: #              docker exec -it mybpm-aio-pg psql -U postgres
    image: postgres:13.4
    container_name: mybpm-aio-pg
    restart: always
    mem_limit: 700M
    environment:
      POSTGRES_PASSWORD: "iWAKOy4uS3v04T7bWM3SHNLiR8WyBP"
    ports:
      - "10018:5432"
    volumes:
      - ~/volumes/mybpm-aio-debug/pg-data:/var/lib/postgresql/data
      - ./pg-init:/docker-entrypoint-initdb.d
    command:
      - "docker-entrypoint.sh"
      - "-c"
      - "max_connections=900"

  mongo: #              docker exec -it mybpm-aio-mongo mongo
    image: mongo:4.4.9
    container_name: mybpm-aio-mongo
    mem_limit: 700M
    restart: always
    ports:
      - "10017:27017"
    volumes:
      - ~/volumes/mybpm-aio-debug/mongo:/data/db
      - ./mongo-init:/docker-entrypoint-initdb.d
    command:
      - docker-entrypoint.sh
      - --bind_ip_all
      - --replSet
      - main
      - --profileFilter
      - '{"command.$$db": "mybpm"}'
      - --profile
      - "1"
#      - --slowms
#      - "0"

  mongo-express:
    image: mongo-express:1.0.0-alpha.4
    container_name: mybpm-aio-mongo-express
    mem_limit: 200M
    restart: always
    depends_on:
      - mongo
    ports:
      - "10013:8081"                                        # MONGO   http://localhost:10013
    environment:
      ME_CONFIG_OPTIONS_EDITORTHEME: cobalt
      ME_CONFIG_BASICAUTH_USERNAME: admin
      ME_CONFIG_BASICAUTH_PASSWORD: 111
      ME_CONFIG_MONGODB_SERVER: mongo

  zoo:
    container_name: mybpm-aio-zoo
    image: confluentinc/cp-zookeeper:5.5.0
    user: "0:0"
    mem_limit: 200M
    restart: always
    ports:
      - "10012:2181"
    volumes:
      - ~/volumes/mybpm-aio-debug/zookeeper/data:/var/lib/zookeeper/data
      - ~/volumes/mybpm-aio-debug/zookeeper/log:/var/lib/zookeeper/log
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 11
      ZOOKEEPER_SYNC_LIMIT: 5

  zoo-navigator:
    container_name: mybpm-aio-zoo-navigator
    # noinspection SpellCheckingInspection
    image: elkozmon/zoonavigator:1.1.2
    restart: always
    mem_limit: 500M
    ports:
      - "10010:9001"                              #  http://localhost:10010
    environment:
      HTTP_PORT: "9001"
      AUTO_CONNECT_CONNECTION_ID: "MAIN"
      CONNECTION_MAIN_NAME: "main"
      CONNECTION_MAIN_CONN: "zoo:2181"

  kf:
    container_name: mybpm-aio-kf
    image: bitnami/kafka:3.5.1
    mem_limit: 1G
    restart: always
    ports:
      - "10011:9093"
      - "10015:7071"
    volumes:
      - ~/volumes/mybpm-aio-debug/kafka:/kafka-data
      - ./kf_work:/kf_work
    user: "0:0"
    entrypoint: [ /kf_work/run.sh ]
    environment:
      KAFKA_HEAP_OPTS: "-Xmx1G -Xms1G"

  kafdrop:
    # noinspection SpellCheckingInspection
    container_name: mybpm-aio-kafdrop
    # noinspection SpellCheckingInspection
    image: obsidiandynamics/kafdrop:4.0.0-SNAPSHOT
    depends_on:
      - kf
    mem_limit: 500M
    restart: always
    ports:
      - "10014:9000"                              #  http://localhost:10014
    environment:
      KAFKA_BROKERCONNECT: "kf:9092"
      SERVER_PORT: "9000"
      JVM_OPTS: "-Xms700M -Xmx700M"
      SERVER_SERVLET_CONTEXTPATH: "/"

  es:
    container_name: mybpm-aio-es
    image: elasticsearch:8.3.2
    # noinspection ComposeUnknownValues
    mem_limit: "${MYBPM_ES_MEMORY_MAIN:-3000M}"
    restart: always
    ports:
      - "10016:9200"                              #  http://localhost:10016
    # noinspection SpellCheckingInspection
    environment:
      #- cluster.name=docker-cluster
      - discovery.type=single-node
      - node.name=from-plugin
      - bootstrap.memory_lock=true
      - index.store.type=hybridfs
      - "ES_JAVA_OPTS=-Xms${MYBPM_ES_MEMORY_JAVA:-1500M} -Xmx${MYBPM_ES_MEMORY_JAVA:-1500M}"
      #      - TAKE_FILE_OWNERSHIP=true
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ~/volumes/mybpm-aio-debug/elasticsearch/data:/usr/share/elasticsearch/data
      - ~/volumes/mybpm-aio-debug/elasticsearch/logs:/usr/share/elasticsearch/logs
      - ./es/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties:ro
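
The compose file above publishes every UI and service on a 100xx host port. As a convenience sketch, the port mappings can be listed straight from the file (the `list_ports` helper is an illustration, not part of MyBPM):

```shell
#!/usr/bin/env bash
# List host:container port pairs declared in a compose file.
list_ports() {
  grep -Eo '"[0-9]+:[0-9]+"' "$1" | tr -d '"' | sort -t: -k1 -n
}

# Demo on a tiny sample; on the target machine run:
#   list_ports docker-compose.yaml
printf '    ports:\n      - "10000:8000"\n      - "10016:9200"\n' > /tmp/sample.yaml
list_ports /tmp/sample.yaml
```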

Restart script:

cat > restart.bash

With the contents:

#!/usr/bin/env bash
cd "$(dirname "$0")" || exit 131
sudo mkdir -p "$HOME/volumes/mybpm-aio-debug/elasticsearch/data"
sudo mkdir -p "$HOME/volumes/mybpm-aio-debug/elasticsearch/logs"
sudo chmod 777 -R "$HOME/volumes/mybpm-aio-debug/elasticsearch"

docker compose down
EXIT="$?"
# shellcheck disable=SC2181
if [ "$EXIT" != "0" ] ; then
  echo "%%%"
  echo "%%% ERROR of : docker compose down"
  echo "%%%"
  exit "$EXIT"
fi

docker compose up -d
EXIT="$?"
# shellcheck disable=SC2181
if [ "$EXIT" != "0" ] ; then
  echo "%%%"
  echo "%%% ERROR of : docker compose up -d"
  echo "%%%"
  exit "$EXIT"
fi

echo "%%%"
echo "%%% DONE (restart)"
echo "%%%"

And the stop script:

cat > stop.bash

With the contents:

#!/usr/bin/env bash
cd "$(dirname "$0")" || exit 131
mkdir -p ~/volumes/mybpm-aio-debug/elasticsearch
sudo chmod a+rwx ~/volumes/mybpm-aio-debug/elasticsearch
docker compose down
EXIT="$?"
# shellcheck disable=SC2181
if [ "$EXIT" != "0" ] ; then
  echo "%%%"
  echo "%%% ERROR of : docker compose down"
  echo "%%%"
  exit "$EXIT"
fi
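
restart.bash and stop.bash repeat the same exit-code block; as a sketch, that boilerplate could be collapsed into a helper function (the name `run_or_die` is an illustration, not part of the original scripts):

```shell
#!/usr/bin/env bash
# Run a command; on failure print the framed error banner and exit.
run_or_die() {
  "$@"
  local code=$?
  if [ "$code" != "0" ]; then
    echo "%%%"
    echo "%%% ERROR of : $*"
    echo "%%%"
    exit "$code"
  fi
}

# The scripts would then shrink to:
#   run_or_die docker compose down
#   run_or_die docker compose up -d
run_or_die true && echo "ok"
```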

Now run the restart script:

bash restart.bash

And watch the system logs:

docker compose logs -f api

It may happen that Elasticsearch refuses to start because it cannot access these directories:

~/volumes/mybpm-aio-debug/elasticsearch/data
~/volumes/mybpm-aio-debug/elasticsearch/logs

Grant access to them:

sudo mkdir -p "$HOME/volumes/mybpm-aio-debug/elasticsearch/data"
sudo mkdir -p "$HOME/volumes/mybpm-aio-debug/elasticsearch/logs"
sudo chmod 777 -R "$HOME/volumes/mybpm-aio-debug/elasticsearch"

When the following line appears:

INFO  Tomcat started on port(s): 8080 (http) with context path ''

it means the server has started and is ready for work.

You can then open this address in a browser:

http://localhost:10000

The login page will open. For the first sign-in you need the login and password of the root user; its password is generated randomly by the system and stored in the database. To view it, run:

docker exec -it mybpm-aio-mongo mongo

A MongoDB prompt will open:

main:PRIMARY> 

Now you can enter MongoDB commands. Switch to the required database:

use mybpm_aux

And inspect the collection:

db.PersonPassword.find()

Many documents should appear; one of them looks like this:

{ "_id" : ObjectId("3b430203076fff3236b508cf"), "initPassword" : "root : 0jAWS240G2uTQ" }

It shows the root password.

Log in with this password and change it right away, so that the generated one becomes invalid.

That's it - you can now explore the MyBPM platform in demonstration mode.