RuoYi Project Cloud Environment Setup (KubeSphere)

1 Local Environment Setup

1. Install the middleware: MySQL, Redis, Nacos (2.x.x), and Node.js.
2. Create the three databases and import the table scripts into them; the mapping and an import sketch follow:
   - ry-cloud  -> quartz.sql, ry_20210908.sql
   - ry-config -> ry_config_20220424.sql
   - ry-seata  -> ry_seata_20210128.sql
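A minimal import sketch from the command line, assuming the scripts sit in the RuoYi-Cloud `sql/` directory and the local root account shown in the Nacos config below:

```bash
# create the three databases (names contain a hyphen, so they are back-quoted)
mysql -uroot -p -e "CREATE DATABASE IF NOT EXISTS \`ry-cloud\`  DEFAULT CHARACTER SET utf8mb4;
                    CREATE DATABASE IF NOT EXISTS \`ry-config\` DEFAULT CHARACTER SET utf8mb4;
                    CREATE DATABASE IF NOT EXISTS \`ry-seata\`  DEFAULT CHARACTER SET utf8mb4;"

# import each script into its database
mysql -uroot -p ry-cloud  < sql/quartz.sql
mysql -uroot -p ry-cloud  < sql/ry_20210908.sql
mysql -uroot -p ry-config < sql/ry_config_20220424.sql
mysql -uroot -p ry-seata  < sql/ry_seata_20210128.sql
```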
3. Modify Nacos so that it loads its configuration from MySQL.

Note: the db username and password are those of your local MySQL instance, and the configuration database Nacos connects to is ry-config.

```properties
### If use MySQL as datasource:
spring.datasource.platform=mysql
### Count of DB:
db.num=1
### Connect URL of DB:
db.url.0=jdbc:mysql://127.0.0.1:3306/ry-config?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user.0=root
db.password.0=199748
```
4. Start Nacos (default console account/password: nacos/nacos); a quick check follows the screenshot:

```bash
./startup.sh -m standalone
```
image-20220429002945661.png
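To confirm the standalone instance is up, the console answers on port 8848:

```bash
# should return HTTP 200; the console itself is at http://127.0.0.1:8848/nacos
curl -I http://127.0.0.1:8848/nacos/index.html
```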
5. In the Nacos configuration center, edit the RuoYi configuration files so the Redis and MySQL connections point to the correct hosts, and update the accounts and passwords.

6. Start Redis:

```bash
redis-server redis.conf
# verify it started
redis-cli -p 6379
```
7. Download the front-end dependencies and start the dev server:

```bash
# inside the ruoyi-ui module
npm install --registry=https://registry.npmmirror.com
# start
npm run dev
```
8. Start the back-end modules.
image-20220429003128899.png
9. The RuoYi system opens successfully.
image-20220429003210217.png

2 RuoYi Cloud Environment Setup

1631670037332-4eab3ef9-8e5f-48ef-aed5-c2792802aeb7.png

2.1 Middleware Setup

2.1.1 Three essentials of application deployment

When deploying an application, three things need to be settled:

1. How the application is deployed (the workload type).

2. What data it mounts (data and configuration files).

3. How the application is reached (its service exposure).
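In Kubernetes terms these map onto a workload, its volumes/config, and a Service. A rough kubectl view of the three (the ruoyi namespace is an assumption, matching the mysql.ruoyi / nacos.ruoyi DNS names used later):

```bash
# 1. How it is deployed: the workload objects
kubectl -n ruoyi get deployments,statefulsets

# 2. What it mounts: data volumes and configuration
kubectl -n ruoyi get pvc,configmaps,secrets

# 3. How it is reached: services and their exposed ports
kubectl -n ruoyi get svc -o wide
```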

2.1.2 Setting up MySQL and Redis

1. Set up MySQL.
2. Set up Redis (a quick in-cluster connectivity check is sketched after the screenshot below).
image-20220430185918459.png
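A quick in-cluster connectivity check from throwaway pods; the mysql.ruoyi address and password come from the steps below, while the redis.ruoyi service name and the image tags are assumptions:

```bash
kubectl -n ruoyi run mysql-check --rm -it --image=mysql:5.7 --restart=Never -- \
  mysql -hmysql.ruoyi -uroot -pwfEaycHCEf -e "SHOW DATABASES;"

kubectl -n ruoyi run redis-check --rm -it --image=redis:6.2 --restart=Never -- \
  redis-cli -h redis.ruoyi ping
```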

2.1.3 Nacos Setup

1. Migrate the local database tables into the cloud MySQL instance (a migration sketch follows the screenshot).
image-20220430190758094.png
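One way to do the migration is dump-and-restore through a temporary port-forward; a sketch, assuming the cloud MySQL service is named mysql in the ruoyi namespace and using the passwords quoted elsewhere in this article:

```bash
# expose the in-cluster MySQL on localhost:3307 for the duration of the copy
kubectl -n ruoyi port-forward svc/mysql 3307:3306 &

# dump each local database and replay it against the cloud instance
for db in ry-cloud ry-config ry-seata; do
  mysqldump -uroot -p199748 --databases "$db" \
    | mysql -h127.0.0.1 -P3307 -uroot -pwfEaycHCEf
done
```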
2. Create the Nacos configuration file:

   - application.properties

Note: change the database address to the cluster DNS name "mysql.ruoyi" and the password to the cluster password "wfEaycHCEf".

```properties
#
# Copyright 1999-2021 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#*************** Spring Boot Related Configurations ***************#
### Default web context path:
server.servlet.contextPath=/nacos
### Default web server port:
server.port=8848

#*************** Network Related Configurations ***************#
### If prefer hostname over ip for Nacos server addresses in cluster.conf:
# nacos.inetutils.prefer-hostname-over-ip=false
### Specify local server's IP:
# nacos.inetutils.ip-address=

#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
spring.datasource.platform=mysql
### Count of DB:
db.num=1
### Connect URL of DB:
db.url.0=jdbc:mysql://mysql.ruoyi:3306/ry-config?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
db.user.0=root
db.password.0=wfEaycHCEf
### Connection pool configuration: hikariCP
db.pool.config.connectionTimeout=30000
db.pool.config.validationTimeout=10000
db.pool.config.maximumPoolSize=20
db.pool.config.minimumIdle=2

#*************** Naming Module Related Configurations ***************#
### Data dispatch task execution period in milliseconds: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.delayMs
# nacos.naming.distro.taskDispatchPeriod=200
### Data count of batch sync task: Will removed on v2.1.X. Deprecated
# nacos.naming.distro.batchSyncKeyCount=1000
### Retry delay in milliseconds if sync task failed: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.retryDelayMs
# nacos.naming.distro.syncRetryDelay=5000
### If enable data warmup. If set to false, the server would accept request without local data preparation:
# nacos.naming.data.warmup=true
### If enable the instance auto expiration, kind like of health check of instance:
# nacos.naming.expireInstance=true
### will be removed and replaced by `nacos.naming.clean` properties
nacos.naming.empty-service.auto-clean=true
nacos.naming.empty-service.clean.initial-delay-ms=50000
nacos.naming.empty-service.clean.period-time-ms=30000

### Add in 2.0.0
### The interval to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.interval=60000
### The expired time to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.expired-time=60000
### The interval to clean expired metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.interval=5000
### The expired time to clean metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.expired-time=60000
### The delay time before push task to execute from service changed, unit: milliseconds.
# nacos.naming.push.pushTaskDelay=500
### The timeout for push task execute, unit: milliseconds.
# nacos.naming.push.pushTaskTimeout=5000
### The delay time for retrying failed push task, unit: milliseconds.
# nacos.naming.push.pushTaskRetryDelay=1000
### Since 2.0.3
### The expired time for inactive client, unit: milliseconds.
# nacos.naming.client.expired.time=180000

#*************** CMDB Module Related Configurations ***************#
### The interval to dump external CMDB in seconds:
# nacos.cmdb.dumpTaskInterval=3600
### The interval of polling data change event in seconds:
# nacos.cmdb.eventTaskInterval=10
### The interval of loading labels in seconds:
# nacos.cmdb.labelTaskInterval=300
### If turn on data loading task:
# nacos.cmdb.loadDataAtStart=false

#*************** Metrics Related Configurations ***************#
### Metrics for prometheus
#management.endpoints.web.exposure.include=*
### Metrics for elastic search
management.metrics.export.elastic.enabled=false
#management.metrics.export.elastic.host=http://localhost:9200
### Metrics for influx
management.metrics.export.influx.enabled=false
#management.metrics.export.influx.db=springboot
#management.metrics.export.influx.uri=http://localhost:8086
#management.metrics.export.influx.auto-create-db=true
#management.metrics.export.influx.consistency=one
#management.metrics.export.influx.compressed=true

#*************** Access Log Related Configurations ***************#
### If turn on the access log:
server.tomcat.accesslog.enabled=true
### The access log pattern:
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i
### The directory of access log:
server.tomcat.basedir=

#*************** Access Control Related Configurations ***************#
### If enable spring security, this option is deprecated in 1.2.0:
#spring.security.enabled=false
### The ignore urls of auth, is deprecated in 1.2.0:
nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**
### The auth system to use, currently only 'nacos' and 'ldap' is supported:
nacos.core.auth.system.type=nacos
### If turn on auth system:
nacos.core.auth.enabled=false
### worked when nacos.core.auth.system.type=ldap,{0} is Placeholder,replace login username
# nacos.core.auth.ldap.url=ldap://localhost:389
# nacos.core.auth.ldap.userdn=cn={0},ou=user,dc=company,dc=com
### The token expiration in seconds:
nacos.core.auth.default.token.expire.seconds=18000
### The default token:
nacos.core.auth.default.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789
### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
nacos.core.auth.caching.enabled=true
### Since 1.4.1, Turn on/off white auth for user-agent: nacos-server, only for upgrade from old version.
nacos.core.auth.enable.userAgentAuthWhite=false
### Since 1.4.1, worked when nacos.core.auth.enabled=true and nacos.core.auth.enable.userAgentAuthWhite=false.
### The two properties is the white list for auth and used by identity the request from other server.
nacos.core.auth.server.identity.key=serverIdentity
nacos.core.auth.server.identity.value=security

#*************** Istio Related Configurations ***************#
### If turn on the MCP server:
nacos.istio.mcp.server.enabled=false

#*************** Core Related Configurations ***************#
### set the WorkerID manually
# nacos.core.snowflake.worker-id=
### Member-MetaData
# nacos.core.member.meta.site=
# nacos.core.member.meta.adweight=
# nacos.core.member.meta.weight=
### MemberLookup
### Addressing pattern category, If set, the priority is highest
# nacos.core.member.lookup.type=[file,address-server]
## Set the cluster list with a configuration file or command-line argument
# nacos.member.list=192.168.16.101:8847?raft_port=8807,192.168.16.101?raft_port=8808,192.168.16.101:8849?raft_port=8809
## for AddressServerMemberLookup
# Maximum number of retries to query the address server upon initialization
# nacos.core.address-server.retry=5
## Server domain name address of [address-server] mode
# address.server.domain=jmenv.tbsite.net
## Server port of [address-server] mode
# address.server.port=8080
## Request address of [address-server] mode
# address.server.url=/nacos/serverlist

#*************** JRaft Related Configurations ***************#
### Sets the Raft cluster election timeout, default value is 5 second
# nacos.core.protocol.raft.data.election_timeout_ms=5000
### Sets the amount of time the Raft snapshot will execute periodically, default is 30 minute
# nacos.core.protocol.raft.data.snapshot_interval_secs=30
### raft internal worker threads
# nacos.core.protocol.raft.data.core_thread_num=8
### Number of threads required for raft business request processing
# nacos.core.protocol.raft.data.cli_service_thread_num=4
### raft linear read strategy. Safe linear reads are used by default, that is, the Leader tenure is confirmed by heartbeat
# nacos.core.protocol.raft.data.read_index_type=ReadOnlySafe
### rpc request timeout, default 5 seconds
# nacos.core.protocol.raft.data.rpc_request_timeout_ms=5000

#*************** Distro Related Configurations ***************#
### Distro data sync delay time, when sync task delayed, task will be merged for same data key. Default 1 second.
# nacos.core.protocol.distro.data.sync.delayMs=1000
### Distro data sync timeout for one sync data, default 3 seconds.
# nacos.core.protocol.distro.data.sync.timeoutMs=3000
### Distro data sync retry delay time when sync data failed or timeout, same behavior with delayMs, default 3 seconds.
# nacos.core.protocol.distro.data.sync.retryDelayMs=3000
### Distro data verify interval time, verify synced data whether expired for a interval. Default 5 seconds.
# nacos.core.protocol.distro.data.verify.intervalMs=5000
### Distro data verify timeout for one verify, default 3 seconds.
# nacos.core.protocol.distro.data.verify.timeoutMs=3000
### Distro data load retry delay when load snapshot data failed, default 30 seconds.
# nacos.core.protocol.distro.data.load.retryDelayMs=30000
```
3. Choose the Nacos service type. Each Nacos instance's domain name needs to stay fixed and known, so create it as a stateful service (a DNS sketch follows the screenshot).
image-20220430191658905.png
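The point of the stateful service is the stable DNS name: behind its headless service, every replica keeps a fixed, resolvable hostname across restarts. A quick check from a throwaway pod (the pod name and image are illustrative; nacos.ruoyi is the service name used in this article):

```bash
# pod-level names follow <pod>.<service>.<namespace>.svc.cluster.local,
# e.g. nacos-0.nacos.ruoyi.svc.cluster.local; here we just resolve the service itself
kubectl -n ruoyi run dns-check --rm -it --image=busybox --restart=Never -- \
  nslookup nacos.ruoyi.svc.cluster.local
```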
4. Add the Nacos container and set the environment variable MODE=standalone.
image-20220430191742714.png
5. Mount the configuration file.

6. Expose a service so Nacos can be reached from outside the cluster.

image-20220430195240357.png
7. The external connection test succeeds (a curl check follows the screenshot).
image-20220430195330444.png
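A simple reachability check from outside the cluster; the node IP and NodePort below are placeholders for whatever KubeSphere generated:

```bash
NODE_IP=1.2.3.4     # any cluster node's public IP (placeholder)
NODE_PORT=30848     # the generated external port (placeholder)
curl -I "http://${NODE_IP}:${NODE_PORT}/nacos/index.html"
```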

2.2 Service Setup

2.2.1 Nacos Configuration

1. Create a prod namespace for the production configuration.
image-20220430202117447.png
2. Import the configuration files from the public namespace into the prod namespace and rename them.
image-20220430202256423.png
3. In those configuration files, switch the Redis and MySQL settings to the cloud DNS names and passwords.

2.2.2 Packaging the back-end microservice images

1. Use Maven to build a jar for every microservice involved, e.g. ruoyi-auth.jar (a build sketch follows the screenshot).
image-20220430232445437.png
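Packaging everything in one pass from the project root is enough; a sketch:

```bash
# build every module, skipping tests
mvn clean package -DskipTests
# each service's jar lands in its module's target/ directory,
# e.g. ruoyi-auth/target/ruoyi-auth.jar
```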
2. Following the Dockerfile format, prepare one Dockerfile per microservice.

Note: activate the prod profile, change the Nacos address to the cloud DNS name, and point the Nacos config client at the prod namespace.

```dockerfile
FROM openjdk:8-jdk
LABEL maintainer=yaoqijun

ENV PARAMS="--server.port=8080 --spring.profiles.active=prod --spring.cloud.nacos.discovery.server-addr=nacos.ruoyi:8848 --spring.cloud.nacos.config.server-addr=nacos.ruoyi:8848 --spring.cloud.nacos.config.namespace=prod --spring.cloud.nacos.config.file-extension=yml"

RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone

COPY target/*.jar /app.jar
EXPOSE 8080

ENTRYPOINT ["/bin/sh","-c","java -Dfile.encoding=utf8 -Djava.security.egd=file:/dev/./urandom -jar app.jar ${PARAMS}"]
```
image-20220430233247817.png
3. Build the Docker image (a concrete example follows the screenshot):

```bash
docker build -t <image-name>:<tag> -f Dockerfile .
```
image-20220501001048486.png
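For example, for the ruoyi-system module (the tag matches the one pushed below):

```bash
docker build -t ruoyi-system:v1.0 -f Dockerfile .
```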

2.2.3 Aliyun image registry

1. Enable Alibaba Cloud's Container Registry service.
image-20220430230221100.png
2. Create the yqj_ruoyi namespace.
image-20220430230431666.png
3. Push the images to the Aliyun registry:

```bash
# 1. log in
docker login --username=yorickjun registry.cn-beijing.aliyuncs.com
# 2. tag
docker tag ruoyi-system:v1.0 registry.cn-beijing.aliyuncs.com/yqj_ruoyi/ruoyi-system:v1.0
# ...
# 3. push
docker push registry.cn-beijing.aliyuncs.com/yqj_ruoyi/ruoyi-system:v1.0
# ...
```
image-20220501093211647.png

2.2.4 Deploying the back-end microservices

The microservices are all stateless, so they need no volumes or configuration mounts. As the Dockerfile shows, each container reads its yml configuration from the specified Nacos namespace as soon as it starts.
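For reference, a rough kubectl equivalent of one such KubeSphere workload (names are illustrative; the Aliyun registry credential is assumed to be configured in the project already):

```bash
# stateless workload created straight from the pushed image
kubectl -n ruoyi create deployment ruoyi-system \
  --image=registry.cn-beijing.aliyuncs.com/yqj_ruoyi/ruoyi-system:v1.0

# in-cluster service on the port the Dockerfile exposes
kubectl -n ruoyi expose deployment ruoyi-system --port=8080
```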

2.2.5 Packaging and deploying the front end

1. Change the gateway address the front end calls in production so that it points at the cloud gateway's DNS name.
image-20220501104105508.png
2. Build the front end and put the generated dist folder into the nginx directory under ruoyi's docker build context (a copy sketch follows the screenshot):

```bash
npm run build:prod
```
image-20220501104433438.png
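A sketch of the copy step, assuming the nginx build context lives under docker/nginx in the ruoyi-ui module:

```bash
# copy the freshly built bundle into the image build context
rm -rf docker/nginx/html && mkdir -p docker/nginx/html
cp -r dist/* docker/nginx/html/
```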
3. Adjust the nginx configuration file (a sketch of the typical change follows the screenshot).
image-20220501104631781.png
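The change that matters is pointing the API location at the gateway's in-cluster DNS name. A sketch of a typical server block, written from the shell; the ruoyi-gateway service name, its 8080 port, the /prod-api/ prefix, and the file path are all assumptions:

```bash
cat > docker/nginx/conf/default.conf <<'EOF'
server {
    listen       80;
    server_name  localhost;

    # static front-end bundle
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    # forward API calls to the gateway inside the cluster
    location /prod-api/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://ruoyi-gateway.ruoyi:8080/;
    }
}
EOF
```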
4. Upload the files to the cloud server, then build the image and push it to the Aliyun registry:

```bash
# build the image (tagged directly for pushing)
docker build -t registry.cn-beijing.aliyuncs.com/yqj_ruoyi/ruoyi-ui:v1.0 -f dockerfile .
# push the image to Aliyun
docker push registry.cn-beijing.aliyuncs.com/yqj_ruoyi/ruoyi-ui:v1.0
```
5. Create the stateless service in KubeSphere and open an external port.

6. Access the site; adding data works.

image-20220501105708841.png
image-20220501105802284.png