The system can be installed quickly on a single machine using docker-compose.

You need any Linux system.

Install Docker together with the docker-compose plugin.

Obtain a login and password from MyBPM for access to the image repository hub.mybpm.kz.

Then log Docker in to that registry, entering the login and password you received when prompted by the command:

docker login hub.mybpm.kz

Create an empty directory and change into it:

mkdir mybpm
cd mybpm

Now let's prepare the environment for the docker-compose.yaml file.

MongoDB initialization file:

mkdir mongo-init
cat > mongo-init/rs-init.js

With the following content:

rs.initiate();

Press Ctrl+D to finish the input.
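
Later, once the whole stack from this guide is running, you can verify that the replica set was actually initialized. This is an optional sanity check; it assumes the container name mybpm-aio-mongo defined in the compose file below:

docker exec -it mybpm-aio-mongo mongo --quiet --eval 'rs.status().ok'
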
Directory for Kafka:

mkdir kf_work

Script that generates the Kafka broker cluster ID:

cat > kf_work/create-kafka-cluster-id.sh

With the following content:

#!/usr/bin/env bash
set -e
cd "$(dirname "$0")" || exit 113
KAFKA_CLUSTER_ID="$(docker run --rm -i bitnami/kafka:3.5.1 kafka-storage.sh random-uuid)"
KAFKA_CLUSTER_ID="$(echo -n "$KAFKA_CLUSTER_ID" | tr -d '[:blank:]' | tr -d '\n')"
echo -n "$KAFKA_CLUSTER_ID" > kafka-cluster-id.txt

Press Ctrl+D to finish the input, then run the script:

bash kf_work/create-kafka-cluster-id.sh

A file should appear:

kf_work/kafka-cluster-id.txt

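To make sure the generation succeeded, you can print the file; it should contain a single short UUID-like string on one line (the exact value will differ on every machine):

cat kf_work/kafka-cluster-id.txt
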
Kafka configuration file:

cat > kf_work/kf.server.properties

With the following content:

process.roles=broker,controller
node.id=1
controller.quorum.voters=1@kf:9091

############################# Socket Server Settings #############################

listeners=CONTROLLER://0.0.0.0:9091,IN_BROKER://0.0.0.0:9092,FROM_LOCALHOST://0.0.0.0:9093
advertised.listeners=IN_BROKER://kf:9092,FROM_LOCALHOST://localhost:10011
inter.broker.listener.name=IN_BROKER
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,IN_BROKER:PLAINTEXT,FROM_LOCALHOST:PLAINTEXT
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/kafka-data
num.partitions=4
offsets.topic.num.partitions=4
num.recovery.threads.per.data.dir=1

auto.create.topics.enable=true
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
default.replication.factor=1
unclean.leader.election.enable=true
compression.type=gzip
log.roll.hours=1

log.flush.interval.messages=10000
log.flush.interval.ms=1000

############################# Log Retention Policy #############################

log.retention.hours=-1
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

Press Ctrl+D to finish the input.
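
Once the whole stack is up (see the restart.bash step below), the FROM_LOCALHOST listener advertised as localhost:10011 can be checked from the host, for example by listing topics with a throwaway Kafka container. This is an optional sketch that reuses the same bitnami/kafka image as the compose file:

docker run --rm --network host bitnami/kafka:3.5.1 kafka-topics.sh --bootstrap-server localhost:10011 --list
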
Kafka startup script:

cat > kf_work/run.sh

With the following content:

#!/usr/bin/env bash
set -e
cd /kafka-data

# Format the storage directory on first start, when the KRaft metadata files are missing.
if [ ! -f "/kafka-data/bootstrap.checkpoint" ] || [ ! -f "/kafka-data/meta.properties" ]; then
  rm -f "/kafka-data/bootstrap.checkpoint"
  rm -f "/kafka-data/meta.properties"
  KAFKA_CLUSTER_ID="$(cat /kf_work/kafka-cluster-id.txt)"
  kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c /kf_work/kf.server.properties
fi

# Start the broker with the prepared configuration.
exec kafka-server-start.sh /kf_work/kf.server.properties

And it needs the executable flag:

chmod +x kf_work/run.sh

PostgreSQL initialization file:

mkdir pg-init
cat > pg-init/init.sql

With the following content:

CREATE USER mybpm WITH ENCRYPTED PASSWORD 't30my2ayTWsGKC0lf7P0SfCFc421fF';
ALTER USER mybpm WITH CREATEROLE;
CREATE DATABASE mybpm_aux1 WITH OWNER mybpm;
CREATE USER test_register_util WITH ENCRYPTED PASSWORD '25UsbGa7G76X01F30K09D7v1a96vYZUybWVsZWqN';
ALTER USER test_register_util WITH CREATEROLE;
CREATE DATABASE test_register_util_db WITH OWNER test_register_util;
CREATE USER in_migration WITH ENCRYPTED PASSWORD 'xObV19Du1pUB931H5pQ8BnlR3RU5iY2T6fu3Yk4Z';
CREATE DATABASE in_migration WITH OWNER in_migration;

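After the stack is started, you can check that these roles and databases were created. This is an optional check; the container name mybpm-aio-pg and the psql invocation are the ones from the compose file below:

docker exec -it mybpm-aio-pg psql -U postgres -c '\l'
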
Logging configuration file for Elasticsearch:

mkdir es
cat > es/log4j2.properties

With the following content:

status=error
appender.console.type=Console
appender.console.name=console
appender.console.layout.type=PatternLayout
appender.console.layout.pattern=[%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
######## Server JSON ############################
appender.rolling.type=RollingFile
appender.rolling.name=rolling
appender.rolling.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json
appender.rolling.layout.type=ECSJsonLayout
appender.rolling.layout.dataset=elasticsearch.server
appender.rolling.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz
appender.rolling.policies.type=Policies
appender.rolling.policies.time.type=TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval=1
appender.rolling.policies.time.modulate=true
appender.rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.rolling.policies.size.size=128MB
appender.rolling.strategy.type=DefaultRolloverStrategy
appender.rolling.strategy.fileIndex=nomax
appender.rolling.strategy.action.type=Delete
appender.rolling.strategy.action.basepath=${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type=IfFileName
appender.rolling.strategy.action.condition.glob=${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type=IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds=2GB
################################################
######## Server - old style pattern ###########
appender.rolling_old.type=RollingFile
appender.rolling_old.name=rolling_old
appender.rolling_old.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling_old.layout.type=PatternLayout
appender.rolling_old.layout.pattern=[%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
appender.rolling_old.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_old.policies.type=Policies
appender.rolling_old.policies.time.type=TimeBasedTriggeringPolicy
appender.rolling_old.policies.time.interval=1
appender.rolling_old.policies.time.modulate=true
appender.rolling_old.policies.size.type=SizeBasedTriggeringPolicy
appender.rolling_old.policies.size.size=128MB
appender.rolling_old.strategy.type=DefaultRolloverStrategy
appender.rolling_old.strategy.fileIndex=nomax
appender.rolling_old.strategy.action.type=Delete
appender.rolling_old.strategy.action.basepath=${sys:es.logs.base_path}
appender.rolling_old.strategy.action.condition.type=IfFileName
appender.rolling_old.strategy.action.condition.glob=${sys:es.logs.cluster_name}-*
appender.rolling_old.strategy.action.condition.nested_condition.type=IfAccumulatedFileSize
appender.rolling_old.strategy.action.condition.nested_condition.exceeds=2GB
################################################
rootLogger.level=info
rootLogger.appenderRef.console.ref=console
rootLogger.appenderRef.rolling.ref=rolling
rootLogger.appenderRef.rolling_old.ref=rolling_old
######## Deprecation JSON #######################
appender.deprecation_rolling.type=RollingFile
appender.deprecation_rolling.name=deprecation_rolling
appender.deprecation_rolling.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.json
appender.deprecation_rolling.layout.type=ECSJsonLayout
# Intentionally follows a different pattern to above
appender.deprecation_rolling.layout.dataset=deprecation.elasticsearch
appender.deprecation_rolling.filter.rate_limit.type=RateLimitingFilter
appender.deprecation_rolling.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.json.gz
appender.deprecation_rolling.policies.type=Policies
appender.deprecation_rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size=1GB
appender.deprecation_rolling.strategy.type=DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max=4
appender.header_warning.type=HeaderWarningAppender
appender.header_warning.name=header_warning
#################################################
logger.deprecation.name=org.elasticsearch.deprecation
logger.deprecation.level=WARN
logger.deprecation.appenderRef.deprecation_rolling.ref=deprecation_rolling
logger.deprecation.appenderRef.header_warning.ref=header_warning
logger.deprecation.additivity=false
######## Search slowlog JSON ####################
appender.index_search_slowlog_rolling.type=RollingFile
appender.index_search_slowlog_rolling.name=index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
.cluster_name}_index_search_slowlog.json
appender.index_search_slowlog_rolling.layout.type=ECSJsonLayout
appender.index_search_slowlog_rolling.layout.dataset=elasticsearch.index_search_slowlog
appender.index_search_slowlog_rolling.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
.cluster_name}_index_search_slowlog-%i.json.gz
appender.index_search_slowlog_rolling.policies.type=Policies
appender.index_search_slowlog_rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.size.size=1GB
appender.index_search_slowlog_rolling.strategy.type=DefaultRolloverStrategy
appender.index_search_slowlog_rolling.strategy.max=4
#################################################
#################################################
logger.index_search_slowlog_rolling.name=index.search.slowlog
logger.index_search_slowlog_rolling.level=trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref=index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity=false
######## Indexing slowlog JSON ##################
appender.index_indexing_slowlog_rolling.type=RollingFile
appender.index_indexing_slowlog_rolling.name=index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
_index_indexing_slowlog.json
appender.index_indexing_slowlog_rolling.layout.type=ECSJsonLayout
appender.index_indexing_slowlog_rolling.layout.dataset=elasticsearch.index_indexing_slowlog
appender.index_indexing_slowlog_rolling.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
_index_indexing_slowlog-%i.json.gz
appender.index_indexing_slowlog_rolling.policies.type=Policies
appender.index_indexing_slowlog_rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.size.size=1GB
appender.index_indexing_slowlog_rolling.strategy.type=DefaultRolloverStrategy
appender.index_indexing_slowlog_rolling.strategy.max=4
#################################################
logger.index_indexing_slowlog.name=index.indexing.slowlog.index
logger.index_indexing_slowlog.level=trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref=index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity=false
logger.com_amazonaws.name=com.amazonaws
logger.com_amazonaws.level=warn
logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.name=com.amazonaws.jmx.SdkMBeanRegistrySupport
logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.level=error
logger.com_amazonaws_metrics_AwsSdkMetrics.name=com.amazonaws.metrics.AwsSdkMetrics
logger.com_amazonaws_metrics_AwsSdkMetrics.level=error
logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.name=com.amazonaws.auth.profile.internal.BasicProfileConfigFileLoader
logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.level=error
logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.name=com.amazonaws.services.s3.internal.UseArnRegionResolver
logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.level=error
appender.audit_rolling.type=RollingFile
appender.audit_rolling.name=audit_rolling
appender.audit_rolling.fileName=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit.json
appender.audit_rolling.layout.type=PatternLayout
appender.audit_rolling.layout.pattern={\
"type":"audit", \
"timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss,SSSZ}"\
%varsNotEmpty{, "cluster.name":"%enc{%map{cluster.name}}{JSON}"}\
%varsNotEmpty{, "cluster.uuid":"%enc{%map{cluster.uuid}}{JSON}"}\
%varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
%varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
%varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
%varsNotEmpty{, "host.ip":"%enc{%map{host.ip}}{JSON}"}\
%varsNotEmpty{, "event.type":"%enc{%map{event.type}}{JSON}"}\
%varsNotEmpty{, "event.action":"%enc{%map{event.action}}{JSON}"}\
%varsNotEmpty{, "authentication.type":"%enc{%map{authentication.type}}{JSON}"}\
%varsNotEmpty{, "user.name":"%enc{%map{user.name}}{JSON}"}\
%varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
%varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
%varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
%varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
%varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
%varsNotEmpty{, "user.roles":%map{user.roles}}\
%varsNotEmpty{, "apikey.id":"%enc{%map{apikey.id}}{JSON}"}\
%varsNotEmpty{, "apikey.name":"%enc{%map{apikey.name}}{JSON}"}\
%varsNotEmpty{, "authentication.token.name":"%enc{%map{authentication.token.name}}{JSON}"}\
%varsNotEmpty{, "authentication.token.type":"%enc{%map{authentication.token.type}}{JSON}"}\
%varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
%varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
%varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
%varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
%varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
%varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
%varsNotEmpty{, "request.body":"%enc{%map{request.body}}{JSON}"}\
%varsNotEmpty{, "request.id":"%enc{%map{request.id}}{JSON}"}\
%varsNotEmpty{, "action":"%enc{%map{action}}{JSON}"}\
%varsNotEmpty{, "request.name":"%enc{%map{request.name}}{JSON}"}\
%varsNotEmpty{, "indices":%map{indices}}\
%varsNotEmpty{, "opaque_id":"%enc{%map{opaque_id}}{JSON}"}\
%varsNotEmpty{, "trace.id":"%enc{%map{trace.id}}{JSON}"}\
%varsNotEmpty{, "x_forwarded_for":"%enc{%map{x_forwarded_for}}{JSON}"}\
%varsNotEmpty{, "transport.profile":"%enc{%map{transport.profile}}{JSON}"}\
%varsNotEmpty{, "rule":"%enc{%map{rule}}{JSON}"}\
%varsNotEmpty{, "put":%map{put}}\
%varsNotEmpty{, "delete":%map{delete}}\
%varsNotEmpty{, "change":%map{change}}\
%varsNotEmpty{, "create":%map{create}}\
%varsNotEmpty{, "invalidate":%map{invalidate}}\
}%n
# "node.name" node name from the `elasticsearch.yml` settings
# "node.id" node id which should not change between cluster restarts
# "host.name" unresolved hostname of the local node
# "host.ip" the local bound ip (i.e. the ip listening for connections)
# "origin.type" a received REST request is translated into one or more transport requests. This indicates which processing layer generated the event "rest" or "transport" (internal)
# "event.action" the name of the audited event, eg. "authentication_failed", "access_granted", "run_as_granted", etc.
# "authentication.type" one of "realm", "api_key", "token", "anonymous" or "internal"
# "user.name" the subject name as authenticated by a realm
# "user.run_by.name" the original authenticated subject name that is impersonating another one.
# "user.run_as.name" if this "event.action" is of a run_as type, this is the subject name to be impersonated as.
# "user.realm" the name of the realm that authenticated "user.name"
# "user.run_by.realm" the realm name of the impersonating subject ("user.run_by.name")
# "user.run_as.realm" if this "event.action" is of a run_as type, this is the realm name the impersonated user is looked up from
# "user.roles" the roles array of the user; these are the roles that are granting privileges
# "apikey.id" this field is present if and only if the "authentication.type" is "api_key"
# "apikey.name" this field is present if and only if the "authentication.type" is "api_key"
# "authentication.token.name" this field is present if and only if the authenticating credential is a service account token
# "authentication.token.type" this field is present if and only if the authenticating credential is a service account token
# "event.type" informs about what internal system generated the event; possible values are "rest", "transport", "ip_filter" and "security_config_change"
# "origin.address" the remote address and port of the first network hop, i.e. a REST proxy or another cluster node
# "realm" name of a realm that has generated an "authentication_failed" or an "authentication_successful"; the subject is not yet authenticated
# "url.path" the URI component between the port and the query string; it is percent (URL) encoded
# "url.query" the URI component after the path and before the fragment; it is percent (URL) encoded
# "request.method" the method of the HTTP request, i.e. one of GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH, TRACE, CONNECT
# "request.body" the content of the request body entity, JSON escaped
# "request.id" a synthetic identifier for the incoming request, this is unique per incoming request, and consistent across all audit events generated by that request
# "action" an action is the most granular operation that is authorized and this identifies it in a namespaced way (internal)
# "request.name" if the event is in connection to a transport message this is the name of the request class, similar to how rest requests are identified by the url path (internal)
# "indices" the array of indices that the "action" is acting upon
# "opaque_id" opaque value conveyed by the "X-Opaque-Id" request header
# "trace_id" an identifier conveyed by the part of "traceparent" request header
# "x_forwarded_for" the addresses from the "X-Forwarded-For" request header, as a verbatim string value (not an array)
# "transport.profile" name of the transport profile in case this is a "connection_granted" or "connection_denied" event
# "rule" name of the applied rule if the "origin.type" is "ip_filter"
# the "put", "delete", "change", "create", "invalidate" fields are only present
# when the "event.type" is "security_config_change" and contain the security config change (as an object) taking effect
appender.audit_rolling.filePattern=${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit-%d{yyyy-MM-dd}-%i.json.gz
appender.audit_rolling.policies.type=Policies
appender.audit_rolling.policies.time.type=TimeBasedTriggeringPolicy
appender.audit_rolling.policies.time.interval=1
appender.audit_rolling.policies.time.modulate=true
appender.audit_rolling.policies.size.type=SizeBasedTriggeringPolicy
appender.audit_rolling.policies.size.size=1GB
appender.audit_rolling.strategy.type=DefaultRolloverStrategy
appender.audit_rolling.strategy.fileIndex=nomax
logger.xpack_security_audit_logfile.name=org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
logger.xpack_security_audit_logfile.level=info
logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref=audit_rolling
logger.xpack_security_audit_logfile.additivity=false
logger.xmlsig.name=org.apache.xml.security.signature.XMLSignature
logger.xmlsig.level=error
logger.samlxml_decrypt.name=org.opensaml.xmlsec.encryption.support.Decrypter
logger.samlxml_decrypt.level=fatal
logger.saml2_decrypt.name=org.opensaml.saml.saml2.encryption.Decrypter
logger.saml2_decrypt.level=fatal

Now the docker-compose file itself:

cat > docker-compose.yaml

With the following content:

networks:
  default:
    name: mybpm-aio-network

services:

  web:
    container_name: mybpm-aio-web
    image: hub.mybpm.kz/mybpm-web-release:4.24.12.0
    restart: always
    ports:
      - "10000:8000" # WEB http://localhost:10000
    depends_on:
      - api
    environment:
      MYBPM_API_HOST: "api"
      MYBPM_API_PORT: "8080"

  api:
    container_name: mybpm-aio-api
    image: hub.mybpm.kz/mybpm-api-release:4.24.12.0
    restart: always
    ports:
      - "10001:8080" # SERVER http://localhost:10001/web/v2/test/hello
      - "10002:5005" # DEBUG localhost 10002
      - "10003:10003" # JConsole localhost 10003
    volumes:
      - ~/volumes/mybpm-aio-debug/api-logs:/var/log/mybpm
    # command:
    #   - tail
    #   - -f
    #   - /etc/hosts
    depends_on: [kf, es, zoo, mongo, pg]
    environment:
      # MY1BPM_PLUGINS: "test"
      MYBPM_USE_SHENANDOAH: "yes"
      MYBPM_JAVA_DEBUG: "yes"
      MYBPM_JAVA_CONSOLE: "yes"
      MYBPM_JAVA_RMI_HOST: "192.168.111.111" # put the IP address of your machine here
      MYBPM_JAVA_RMI_PORT: "10003"
      MYBPM_JAVA_JMX_PORT: "10003"
      MYBPM_LOGS_COLORED: "true"
      MYBPM_COMPANY_CODE: "greetgo"
      MYBPM_MONGO_SERVERS: "mongodb://mongo:27017"
      MYBPM_ZOOKEEPER_SERVERS: "zoo:2181"
      MYBPM_KAFKA_SERVERS: "kf:9092"
      MYBPM_AUX1_DB_NAME: "mybpm_aux1"
      MYBPM_AUX1_HOST: "pg"
      MYBPM_AUX1_PORT: "5432"
      MYBPM_AUX1_USER_NAME: "mybpm"
      MYBPM_AUX1_PASSWORD: "t30my2ayTWsGKC0lf7P0SfCFc421fF"
      MYBPM_ELASTIC_SEARCH_SERVERS: "es:9200"
      MYBPM_FILES_MONGO_SERVERS: "mongodb://mongo:27017"
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:8080/web/health" ]
      interval: 30s
      timeout: 10s
      retries: 30
      start_period: 10s

  pg: # docker exec -it mybpm-aio-pg psql -U postgres
    image: postgres:13.4
    container_name: mybpm-aio-pg
    restart: always
    mem_limit: 700M
    environment:
      POSTGRES_PASSWORD: "iWAKOy4uS3v04T7bWM3SHNLiR8WyBP"
    ports:
      - "10018:5432"
    volumes:
      - ~/volumes/mybpm-aio-debug/pg-data:/var/lib/postgresql/data
      - ./pg-init:/docker-entrypoint-initdb.d
    command:
      - "docker-entrypoint.sh"
      - "-c"
577 - "max-connections=900"

  mongo: # docker exec -it mybpm-aio-mongo mongo
    image: mongo:4.4.9
    container_name: mybpm-aio-mongo
    mem_limit: 700M
    restart: always
    ports:
      - "10017:27017"
    volumes:
      - ~/volumes/mybpm-aio-debug/mongo:/data/db
      - ./mongo-init:/docker-entrypoint-initdb.d
    command:
      - docker-entrypoint.sh
      - --bind_ip_all
      - --replSet
      - main
      - --profileFilter
      - '{"command.$$db": "mybpm"}'
      - --profile
      - "1"
      # - --slowms
      # - "0"

  mongo-express:
    image: mongo-express:1.0.0-alpha.4
    container_name: mybpm-aio-mongo-express
    mem_limit: 200M
    restart: always
    depends_on:
      - mongo
    ports:
      - "10013:8081" # MONGO http://localhost:10013
    environment:
      ME_CONFIG_OPTIONS_EDITORTHEME: cobalt
      ME_CONFIG_BASICAUTH_USERNAME: admin
      ME_CONFIG_BASICAUTH_PASSWORD: 111
      ME_CONFIG_MONGODB_SERVER: mongo

  zoo:
    container_name: mybpm-aio-zoo
    image: confluentinc/cp-zookeeper:5.5.0
    user: "0:0"
    mem_limit: 200M
    restart: always
    ports:
      - "10012:2181"
    volumes:
      - ~/volumes/mybpm-aio-debug/zookeeper/data:/var/lib/zookeeper/data
      - ~/volumes/mybpm-aio-debug/zookeeper/log:/var/lib/zookeeper/log
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 11
      ZOOKEEPER_SYNC_LIMIT: 5

  zoo-navigator:
    container_name: mybpm-aio-zoo-navigator
    # noinspection SpellCheckingInspection
    image: elkozmon/zoonavigator:1.1.2
    restart: always
    mem_limit: 500M
    ports:
      - "10010:9001" # http://localhost:10010
    environment:
      HTTP_PORT: "9001"
      AUTO_CONNECT_CONNECTION_ID: "MAIN"
      CONNECTION_MAIN_NAME: "main"
      CONNECTION_MAIN_CONN: "zoo:2181"

  kf:
    container_name: mybpm-aio-kf
    image: bitnami/kafka:3.5.1
    mem_limit: 1G
    restart: always
    ports:
      - "10011:9093"
      - "10015:7071"
    volumes:
      - ~/volumes/mybpm-aio-debug/kafka:/kafka-data
      - ./kf_work:/kf_work
    user: "0:0"
    entrypoint: [ /kf_work/run.sh ]
    environment:
      KAFKA_HEAP_OPTS: "-Xmx1G -Xms1G"

  kafdrop:
    # noinspection SpellCheckingInspection
    container_name: mybpm-aio-kafdrop
    # noinspection SpellCheckingInspection
    image: obsidiandynamics/kafdrop:4.0.0-SNAPSHOT
    depends_on:
      - kf
    mem_limit: 500M
    restart: always
    ports:
      - "10014:9000" # http://localhost:10014
    environment:
      KAFKA_BROKERCONNECT: "kf:9092"
      SERVER_PORT: "9000"
      JVM_OPTS: "-Xms700M -Xmx700M"
      SERVER_SERVLET_CONTEXTPATH: "/"

  es:
    container_name: mybpm-aio-es
    image: elasticsearch:8.3.2
    # noinspection ComposeUnknownValues
    mem_limit: "${MYBPM_ES_MEMORY_MAIN:-3000M}"
    restart: always
    ports:
      - "10016:9200" # http://localhost:10016
    # noinspection SpellCheckingInspection
    environment:
      # - cluster.name=docker-cluster
      - discovery.type=single-node
      - node.name=from-plugin
      - bootstrap.memory_lock=true
      - index.store.type=hybridfs
      - "ES_JAVA_OPTS=-Xms${MYBPM_ES_MEMORY_JAVA:-1500M} -Xmx${MYBPM_ES_MEMORY_JAVA:-1500M}"
      # - TAKE_FILE_OWNERSHIP=true
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ~/volumes/mybpm-aio-debug/elasticsearch/data:/usr/share/elasticsearch/data
      - ~/volumes/mybpm-aio-debug/elasticsearch/logs:/usr/share/elasticsearch/logs
      - ./es/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties:ro

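Before the first start, it can be useful to let Compose validate the resulting file (an optional sanity check):

docker compose config > /dev/null
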
Restart script:

cat > restart.bash

With the following content:

#!/usr/bin/env bash
cd "$(dirname "$0")" || exit 131
sudo mkdir -p "$HOME/volumes/mybpm-aio-debug/elasticsearch/data"
sudo mkdir -p "$HOME/volumes/mybpm-aio-debug/elasticsearch/logs"
sudo chmod 777 -R "$HOME/volumes/mybpm-aio-debug/elasticsearch"

docker compose down
EXIT="$?"
# shellcheck disable=SC2181
if [ "$EXIT" != "0" ] ; then
  echo "%%%"
  echo "%%% ERROR of : docker compose down"
  echo "%%%"
  exit "$EXIT"
fi

docker compose up -d
EXIT="$?"
# shellcheck disable=SC2181
if [ "$EXIT" != "0" ] ; then
  echo "%%%"
  echo "%%% ERROR of : docker compose up -d"
  echo "%%%"
  exit "$EXIT"
fi

echo "%%%"
echo "%%% DONE (restart)"
echo "%%%"

And the stop script:

cat > stop.bash

With the following content:

#!/usr/bin/env bash
cd "$(dirname "$0")" || exit 131
mkdir -p ~/volumes/mybpm-aio-debug/elasticsearch
sudo chmod a+rwx ~/volumes/mybpm-aio-debug/elasticsearch
docker compose down
EXIT="$?"
# shellcheck disable=SC2181
if [ "$EXIT" != "0" ] ; then
  echo "%%%"
  echo "%%% ERROR of : docker compose down"
  echo "%%%"
  exit "$EXIT"
fi

After that, run the restart script:

bash restart.bash

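You can check that all containers have started (an optional check):

docker compose ps
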
And watch the system logs:

docker compose logs -f api

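Independently of the logs, the API container can be probed from the host through the published port 10001; the path below is the same one used by the healthcheck in the compose file:

curl -f http://localhost:10001/web/health
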
It may happen that Elasticsearch refuses to start because it has no access to the directories:

~/volumes/mybpm-aio-debug/elasticsearch/data
~/volumes/mybpm-aio-debug/elasticsearch/logs

Grant access to these directories:

sudo mkdir -p "$HOME/volumes/mybpm-aio-debug/elasticsearch/data"
sudo mkdir -p "$HOME/volumes/mybpm-aio-debug/elasticsearch/logs"
sudo chmod 777 -R "$HOME/volumes/mybpm-aio-debug/elasticsearch"

When the following line appears:

INFO Tomcat started on port(s): 8080 (http) with context path ''

it means the server has started and is ready to work.

You can open the following address in a browser:

http://localhost:10000

You will see the login page. For the first login you need the login and password of the root user - the password is generated by the system at random and stored in the database. To see it, run the command:

docker exec -it mybpm-aio-mongo mongo

The MongoDB prompt will open:

main:PRIMARY>

Now you can enter MongoDB commands. Switch to the required database:

use mybpm_aux

And look at the collection:

db.PersonPassword.find()

Many documents should appear, among them one like this:

{ "_id" : ObjectId("3b430203076fff3236b508cf"), "initPassword" : "root : 0jAWS240G2uTQ" }

It shows the root password.
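
If you prefer not to enter the interactive shell, the same collection can be dumped with a single command (a convenience sketch using the mongo shell bundled in the mongo:4.4.9 image):

docker exec mybpm-aio-mongo mongo mybpm_aux --quiet --eval 'db.PersonPassword.find().forEach(printjson)'
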
Log in with this password and change it right away, so that the generated one is no longer valid.

That's all. You can now explore the MyBPM platform in demo mode.