Startup methods for images I use frequently (or occasionally)
Another way to set the container's TZ
Reference: https://github.com/spujadas/elk-docker/blob/master/start.sh
## override default time zone (Etc/UTC) if TZ variable is set
if [ ! -z "$TZ" ]; then
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
fi
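For instance, if an image's start script contains the snippet above (the sebp/elk image built from that repo does), the time zone can simply be passed at run time; Asia/Shanghai is just an example value:
docker run -d -e TZ=Asia/Shanghai --name elk sebp/elk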
CentOS with SSH
docker run -d -p 0.0.0.0:2222:22 tutum/centos6
docker run -d -p 0.0.0.0:2222:22 tutum/centos
docker run -d -p 0.0.0.0:2222:22 -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro tutum/centos6
docker run -d -p 0.0.0.0:2222:22 -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro tutum/centos
Two authentication methods are supported:
docker run -d -p 0.0.0.0:2222:22 -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro -e ROOT_PASS="mypass" tutum/centos
docker run -d -p 2222:22 -e AUTHORIZED_KEYS="`cat ~/.ssh/id_rsa.pub`" tutum/centos
docker logs <CONTAINER_ID>
ssh -p <port> root@<host>
Reference: https://hub.docker.com/r/tutum/centos/
BusyBox with ping/curl/nslookup
docker run -itd --name=test1 --net=test-network radial/busyboxplus /bin/sh
nginx
mkdir -p /data/nginx-html
echo "maotai" > /data/nginx-html/index.html
docker run -d \
--net=host \
--restart=always \
-v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /etc/localtime:/etc/localtime:ro \
-v /data/nginx-html:/usr/share/nginx/html \
--name nginx \
nginx
Deploying Portainer (a web UI for managing multiple standalone Docker nodes)
cp /etc/docker/daemon.json /etc/docker/daemon.json.bak.$(date +%F)
cat >/etc/docker/daemon.json<<EOF
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"hosts": [
"tcp://0.0.0.0:2375",
"unix:///var/run/docker.sock"
]
}
EOF
systemctl daemon-reload
systemctl restart docker && systemctl enable docker
docker run -d \
-p 9000:9000 \
--restart=always \
-v /etc/localtime:/etc/localtime:ro \
-v /var/run/docker.sock:/var/run/docker.sock \
portainer/portainer
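Before adding each node as an endpoint in Portainer, you can verify that its Docker remote API (exposed on 2375 by the daemon.json above) is reachable; a quick check, the IP being a placeholder:
curl http://192.168.x.x:2375/version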
nginx configuration
mv /etc/nginx /etc/nginx_$(date +%F)
mkdir -p /etc/nginx/conf.d/
mkdir -p /data/nginx-html
echo "maotai" > /data/nginx-html/index.html
cat > /etc/nginx/nginx.conf <<'EOF'
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
server_name_in_redirect off;
client_max_body_size 20m;
client_header_buffer_size 16k;
large_client_header_buffers 4 16k;
sendfile on;
tcp_nopush on;
keepalive_timeout 65;
server_tokens off;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_proxied any;
gzip_http_version 1.1;
gzip_comp_level 3;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
log_format json '{"@timestamp": "$time_iso8601",'
'"@version": "1",'
'"client": "$remote_addr",'
'"url": "$uri", '
'"status": $status, '
'"domain": "$host", '
'"host": "$server_addr",'
'"size":"$body_bytes_sent", '
'"response_time": $request_time, '
'"referer": "$http_referer", '
'"http_x_forwarded_for": "$http_x_forwarded_for", '
'"ua": "$http_user_agent" } ';
access_log /var/log/nginx/access.log json;
include /etc/nginx/conf.d/*.conf;
}
EOF
tree /etc/nginx/
cat > /etc/nginx/conf.d/default.conf <<'EOF'
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log json;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
EOF
tree /etc/nginx/
nginx-lb
docker run --name nginx-lb \
-d \
-v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
--net=host \
--restart=always \
-v /etc/localtime:/etc/localtime \
nginx:1.13.3-alpine
LNMP (each component in its own container)
Reference: https://github.com/micooz/docker-lnmp
docker-compose up
Start a MySQL instance
cat /root/dockerfile/mysql/start.sh
docker run -p 3306:3306 -v /data/mysql:/var/lib/mysql -v /etc/localtime:/etc/localtime --name mysql5 --restart=always -d mysql:5.6.23 --character-set-server=utf8 --collation-server=utf8_general_ci
docker run \
-p 3306:3306 \
-v /data/mysql:/var/lib/mysql \
-v /etc/localtime:/etc/localtime \
--name mysql5 \
--restart=always \
-e MYSQL_ROOT_PASSWORD=123456 \
-d mysql:5.6.23 --character-set-server=utf8 --collation-server=utf8_general_ci
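The checks below can be run from a MySQL client inside the container; a minimal sketch, using the container name and root password set above:
docker exec -it mysql5 mysql -uroot -p123456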
show VARIABLES like '%max_allowed_packet%';
show variables like '%storage_engine%';
show variables like 'collation_%';
show variables like 'character_set_%';
MySQL master/slave replication
#+++++++++++++++++++++++++++
# MySQL master/slave replication
#+++++++++++++++++++++++++++
docker run -d -e REPLICATION_MASTER=true -e REPLICATION_PASS=mypass -p 3306:3306 --name mysql tutum/mysql
docker run -d -e REPLICATION_SLAVE=true -p 3307:3306 --link mysql:mysql tutum/mysql
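To verify replication, connect to the slave with a MySQL client and run the check below (credentials are whatever tutum/mysql generated; see its docker logs):
SHOW SLAVE STATUS\G
Slave_IO_Running and Slave_SQL_Running should both be Yes.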
Install Gogs (though GitLab is recommended instead)
docker run -itd \
-p 53000:3000 -p 50022:22 \
-v /data/gogs:/data \
-v /etc/localtime:/etc/localtime \
--restart=always \
gogs/gogs
owncloud
docker run -v /data/owncloud-data:/var/www/html -v /etc/localtime:/etc/localtime -v :/var/www/html/config --restart=always -itd -p 8000:80 owncloud
Nextcloud (same idea as ownCloud; reportedly supports online Markdown note-taking and generally feels more capable)
Reference: https://hub.docker.com/_/nextcloud/
docker run -d \
-p 8080:80 \
-v nextcloud:/var/www/html \
nextcloud
Install Confluence
docker run \
-v /data/confluence/conflu_data:/var/atlassian/application-data/confluence \
-v /etc/localtime:/etc/localtime \
-v /data/confluence/server.xml:/opt/atlassian/confluence/conf/server.xml \
--restart=always \
--link mysql5:db \
--name="confluence" -d \
-p 8090:8090 \
-p 8091:8091 \
cptactionhank/atlassian-confluence
Reference: http://wuyijun.cn/shi-yong-dockerfang-shi-an-zhuang-he-yun-xing-confluence/
- Configure Confluence
- Create the database
create database confluence default character set utf8 collate utf8_bin;
grant all on confluence.* to 'confluence'@"172.17.0.%" identified by "confluenceman";
grant all on confluence.* to 'confluence'@"192.168.6.%";
grant all on confluence.* to 'confluence'@"192.168.8.%";
- Install and crack the license
1. Copy the jar out of the container and crack it on the cracking machine
docker cp confluence:/opt/atlassian/confluence/confluence/WEB-INF/lib/atlassian-extras-decoder-v2-3.2.jar ./
mv atlassian-extras-decoder-v2-3.2.jar atlassian-extras-2.4.jar
2. Copy the cracked jar back into the container
mv atlassian-extras-2.4.jar atlassian-extras-decoder-v2-3.2.jar
docker cp ./atlassian-extras-decoder-v2-3.2.jar confluence:/opt/atlassian/confluence/confluence/WEB-INF/lib/
3. Restart Confluence
docker stop confluence
docker start confluence
- 1. Paste the serial number generated by the cracking machine
- 2. Choose JDBC to connect to MySQL; for the URL use:
jdbc:mysql://db:3306/confluence?sessionVariables=storage_engine%3DInnoDB&useUnicode=true&characterEncoding=utf8
- 3. Import existing data
Reference: https://www.ilanni.com/?p=11989
e.g. xmlexport-20170902-100808-153.zip
This archive includes the database data.
- 4. Installation complete
Log in with the admin account at http://192.168.x.x:8090
admin
xxxxx
- 5. Configure email
I did not configure it via server.xml (it had problems when I tested that); instead I configured SMTP directly with a Sina mailbox:
smtp.sina.com
mt@sina.com
123456
Phabricator audit/review system (customer support files bugs for developers)
docker run -d \
-p 9080:80 -p 9443:443 -p 9022:22 \
--env PHABRICATOR_HOST=sj.pp100.net \
--env MYSQL_HOST=192.168.x.x \
--env MYSQL_USER=root \
--env MYSQL_PASS=elc123 \
--env PHABRICATOR_REPOSITORY_PATH=/repos \
--env PHABRICATOR_HOST_KEYS_PATH=/hostkeys/persisted \
-v /data/phabricator/hostkeys:/hostkeys \
-v /data/phabricator/repo:/repos \
redpointgames/phabricator
HackMD installation (an internal Markdown server; supports pasting images, permissions, and has dedicated clients, etc.)
https://github.com/hackmdio/docker-hackmd/blob/master/docker-compose.yml
docker-compose up -d
Reference (also covers data backup, etc.):
https://github.com/hackmdio/docker-hackmd https://hub.docker.com/r/hackmdio/hackmd/
Common options when starting containers
- 1. Time zone
- 2. Auto restart
- 3. Logs
docker run \
-v /etc/localtime:/etc/localtime:ro \
-v /etc/timezone:/etc/timezone:ro \
--restart=always \
...
Or on a single line:
-v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro
Logs are recorded in two places: one is the foreground output, the other is the per-container log file on the host:
docker run -it --rm -p 80:80 nginx
ll /var/lib/docker/containers/*/*.log
Log rotation for containers (otherwise the logs keep growing)
Container log files: /var/lib/docker/containers/*/*.log
docker run -d -v /var/lib/docker/containers:/var/lib/docker/containers:rw \
-v /etc/localtime:/etc/localtime:ro \
--restart=always \
tutum/logrotate
- How it works (logrotate's copytruncate option is nice: it rotates the log without the writing process having to reopen the file)
## You can exec into the container to look at the rotation policy.
#https://hub.docker.com/r/tutum/logrotate/
/ # cat /etc/logrotate.conf
/var/lib/docker/containers/*/*.log {
rotate 0
copytruncate
sharedscripts
maxsize 10M
postrotate
rm -f /var/lib/docker/containers/*/*.log.*
endscript
}
#logrotate copytruncate explained
# http://www.lightxue.com/how-logrotate-works
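To dry-run the rotation policy from the host without waiting for the schedule (a sketch; assumes the logrotate binary is on PATH inside the tutum/logrotate container, and the container name is a placeholder):
docker exec <logrotate-container> logrotate -d /etc/logrotate.conf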
#This reminds me of nginx log rotation
cat > /etc/logrotate.d/nginx <<'EOF'
/usr/local/nginx/logs/*.log {
daily
missingok
rotate 7
dateext
compress
delaycompress
notifempty
sharedscripts
postrotate
if [ -f /usr/local/nginx/logs/nginx.pid ]; then
kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
fi
endscript
}
EOF
Clean up images and volumes that have been unused for a long time
docker run -d \
--privileged \
-v /var/run:/var/run:rw \
-v /var/lib/docker:/var/lib/docker:rw \
-e IMAGE_CLEAN_INTERVAL=1 \
-e IMAGE_CLEAN_DELAYED=1800 \
-e VOLUME_CLEAN_INTERVAL=1800 \
-e IMAGE_LOCKED="ubuntu:trusty, tutum/curl:trusty" \
tutum/cleanup
# https://hub.docker.com/r/tutum/cleanup/
# IMAGE_CLEAN_INTERVAL (optional) How long to wait between cleanup runs (in seconds), 1 by default.
# IMAGE_CLEAN_DELAYED (optional) How long to wait to consider an image unused (in seconds), 1800 by default.
# VOLUME_CLEAN_INTERVAL (optional) How long to wait to consider a volume unused (in seconds), 1800 by default.
# IMAGE_LOCKED (optional) A list of images that will not be cleaned by this container, separated by ,
- How it works: it invokes a binary
/ # cat run.sh
#!/bin/sh
if [ ! -e "/var/run/docker.sock" ]; then
echo "=> Cannot find docker socket(/var/run/docker.sock), please check the command!"
exit 1
fi
if [ "${IMAGE_LOCKED}" == "**None**" ]; then
exec /cleanup \
-imageCleanInterval ${IMAGE_CLEAN_INTERVAL} \
-imageCleanDelayed ${IMAGE_CLEAN_DELAYED}
else
exec /cleanup \
-imageCleanInterval ${IMAGE_CLEAN_INTERVAL} \
-imageCleanDelayed ${IMAGE_CLEAN_DELAYED} \
-imageLocked "${IMAGE_LOCKED}"
fi
ZooKeeper cluster
version: '2'
services:
  zoo1:
    image: zookeeper
    restart: always
    container_name: zoo1
    volumes:
      - /etc/localtime:/etc/localtime
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo2:
    image: zookeeper
    restart: always
    container_name: zoo2
    volumes:
      - /etc/localtime:/etc/localtime
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
  zoo3:
    image: zookeeper
    restart: always
    container_name: zoo3
    volumes:
      - /etc/localtime:/etc/localtime
    ports:
      - "2183:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
Check:
echo stat | nc 127.0.0.1 2181
Or exec into the container to check:
#docker exec zoo1 /zookeeper-3.4.10/bin/zkCli.sh -server 127.0.0.1:2181
#/zookeeper-3.4.10/bin/zkCli.sh -server 127.0.0.1:2181
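To check all three nodes from the host in one pass (a small loop; assumes nc is installed on the host and the ports are mapped as in the compose file above):
for p in 2181 2182 2183; do echo ruok | nc 127.0.0.1 $p; echo; done
Each healthy node answers imok.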
Zabbix (monitoringartist packs all the components into a single image)
docker run \
-d \
--name dockbix-db \
-v /backups:/backups \
-v /etc/localtime:/etc/localtime:ro \
--volumes-from dockbix-db-storage \
--env="MARIADB_USER=zabbix" \
--env="MARIADB_PASS=my_password" \
monitoringartist/zabbix-db-mariadb
# Start Dockbix linked to the started DB
docker run \
-d \
--name dockbix \
-p 80:80 \
-p 10051:10051 \
-v /etc/localtime:/etc/localtime:ro \
--link dockbix-db:dockbix.db \
--env="ZS_DBHost=dockbix.db" \
--env="ZS_DBUser=zabbix" \
--env="ZS_DBPassword=my_password" \
--env="XXL_zapix=true" \
--env="XXL_grapher=true" \
monitoringartist/dockbix-xxl:latest
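The web UI then listens on port 80; the stock Zabbix frontend login is Admin / zabbix, so change it after the first login.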
Zabbix with separate components (I have not tested this)
docker run --name zabbix-server-mysql -t \
-v /etc/localtime:/etc/localtime:ro \
-v /data/zabbix-alertscripts:/usr/lib/zabbix/alertscripts \
-v /etc/zabbix/zabbix_server.conf:/etc/zabbix/zabbix_server.conf \
-e DB_SERVER_HOST="192.168.14.132" \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="Tx66sup" \
-e MYSQL_ROOT_PASSWORD="Tinsu" \
-e ZBX_JAVAGATEWAY="127.0.0.1" \
--network=host \
-d registry.docker-cn.com/zabbix/zabbix-server-mysql:ubuntu-3.4.0
docker run --name mysql-server -t \
-v /etc/localtime:/etc/localtime:ro \
-v /etc/my.cnf:/etc/my.cnf \
-v /data/mysql-data:/var/lib/mysql \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="bix66sup" \
-e MYSQL_ROOT_PASSWORD="adminsu" \
-p 3306:3306 \
-d registry.docker-cn.com/mysql/mysql-server:5.7
docker run --name zabbix-java-gateway -t \
-v /etc/localtime:/etc/localtime:ro \
--network=host \
-d registry.docker-cn.com/zabbix/zabbix-java-gateway:latest
docker run --name zabbix-web-nginx-mysql -t \
-v /etc/localtime:/etc/localtime:ro \
-e DB_SERVER_HOST="192.168.14.132" \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="TCzp" \
-e MYSQL_ROOT_PASSWORD="TC6u" \
-e PHP_TZ="Asia/Shanghai" \
--network=host \
-d registry.docker-cn.com/zabbix/zabbix-web-nginx-mysql:ubuntu-3.4.0
Docker monitoring with cAdvisor
docker run \
--volume=/:/rootfs:ro \
--volume=/var/run:/var/run:rw \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--publish=8080:8080 \
--detach=true \
--name=cadvisor \
google/cadvisor:latest
http://192.168.14.133:8080/
Running cAdvisor + InfluxDB + Grafana on CentOS 7
http://www.pangxie.space/docker/456
https://www.brianchristner.io/how-to-setup-docker-monitoring/
https://github.com/vegasbrianc/docker-monitoring/blob/master/docker-monitoring-0.9.json
- Start InfluxDB (the latest tag did not work well for me, so use 0.10)
docker run -d -p 8083:8083 -p 8086:8086 --expose 8090 --expose 8099 --name influxsrv tutum/influxdb:0.10
- Create the DB
docker exec -it influxsrv bash
# inside the container, open the influx shell and run:
influx
CREATE DATABASE cadvisor
CREATE USER "root" WITH PASSWORD 'root' WITH ALL PRIVILEGES
use cadvisor
show users
- Start cAdvisor
docker run --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --detach=true --link influxsrv:influxsrv --name=cadvisor google/cadvisor:latest -storage_driver=influxdb -storage_driver_db=cadvisor -storage_driver_host=influxsrv:8086
- Start Grafana, add the DB as a data source, and import the dashboard (see the note below)
docker run -d -p 3000:3000 -e INFLUXDB_HOST=192.168.14.133 -e INFLUXDB_PORT=8086 -e INFLUXDB_NAME=cadvisor -e INFLUXDB_USER=root -e INFLUXDB_PASS=root --link influxsrv:influxsrv --name grafana grafana/grafana
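A sketch of the manual Grafana steps: add an InfluxDB data source pointing at http://influxsrv:8086 with database cadvisor and the root/root user created above, then import the dashboard JSON from the vegasbrianc link above.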
Prometheus + Grafana (better-looking dashboards than cAdvisor + InfluxDB + Grafana)
A Prometheus & Grafana docker-compose stack
参考: https://github.com/vegasbrianc/prometheus
docker-compose up -d
elk
The ELK container needs about 2 GB of memory, so give the VM at least 2 GB
Reference: http://elk-docker.readthedocs.io/#installation
https://github.com/gregbkr/elk-dashboard-v5-docker
sysctl -w vm.max_map_count=262144
docker run -d -v /etc/localtime:/etc/localtime --restart=always -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk
docker run -d -v /etc/localtime:/etc/localtime --restart=always -p 9100:9100 mobz/elasticsearch-head:5
or
docker-compose up -d
Fully manual installation of Elasticsearch + Kibana (ELK)
useradd elk
cd /usr/local/src/
tar xf elasticsearch-5.6.4.tar.gz -C /usr/local/
tar xf kibana-5.6.4-linux-x86_64.tar.gz -C /usr/local/
ln -s /usr/local/elasticsearch-5.6.4 /usr/local/elasticsearch
ln -s /usr/local/kibana-5.6.4-linux-x86_64 /usr/local/kibana
chown -R elk. /usr/local/elasticsearch/
chown -R elk. /usr/local/kibana/
mkdir /data/es/{data,logs} -p
chown -R elk. /data
Modify the ES config (elasticsearch.yml):
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
Adjust system limits and kernel parameters:
vim /etc/security/limits.conf
* soft nproc 65536
* hard nproc 65536
* soft nofile 65536
* hard nofile 65536
sysctl -w vm.max_map_count=262144
sysctl -p
nohup /bin/su - elk -c "/usr/local/elasticsearch/bin/elasticsearch" > /data/es/es-start.log 2>&1 &
nohup /bin/su - elk -c "/usr/local/kibana/bin/kibana" > /data/es/kibana-start.log 2>&1 &
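A quick check once both have started (using the default ports, ES on 9200 and Kibana on 5601):
curl -s http://127.0.0.1:9200
curl -sI http://127.0.0.1:5601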
docker run -d -v /etc/localtime:/etc/localtime --restart=always -p 9100:9100 mobz/elasticsearch-head:5
Install the ES head plugin
First edit the ES config file elasticsearch.yml and append:
http.cors.enabled: true
http.cors.allow-origin: "*"
docker run -d -v /etc/localtime:/etc/localtime --restart=always -p 9100:9100 mobz/elasticsearch-head:5
Optimizations before installing ELK on a physical machine
sudo sysctl -w vm.max_map_count=262144
make it persistent:
$ vim /etc/sysctl.conf
vm.max_map_count=262144
Common ES operations for reference:
## Backup and scaling scripts, etc.; a bit dated, but the ideas are worth referencing: https://github.com/gregbkr/docker-elk-cadvisor-dashboards
http://192.168.14.133:9200/_cat/health?v #check cluster health
http://192.168.14.133:9200/_cat/nodes?v #check node status
http://192.168.14.133:9200/_cat/indices?v #list indices
#create an index
curl -XPUT http://vm1:9200/customer?pretty
#add a document
[es@vm1 ~]$ curl -XPUT vm1:9200/customer/external/1?pretty -d '{"name":"lisg"}'
#retrieve a document
[es@vm1 ~]$ curl -XGET vm1:9200/customer/external/1?pretty
#delete a document
[es@vm1 ~]$ curl -XDELETE vm1:9200/customer/external/1?pretty
#delete a type
[es@vm1 ~]$ curl -XDELETE vm1:9200/customer/external?pretty
#delete an index
[es@vm1 ~]$ curl -XDELETE vm1:9200/customer?pretty
#POST can add a document without specifying an ID
[es@vm1 ~]$ curl -XPOST vm1:9200/customer/external?pretty -d '{"name":"zhangsan"}'
#update a document (full replace via PUT)
[es@vm1 ~]$ curl -XPUT vm1:9200/customer/external/1?pretty -d '{"name":"lisg4", "age":28}'
#update a document with a script (dynamic scripting is disabled in 1.4.3)
[es@vm1 ~]$ curl -XPOST vm1:9200/customer/external/1/_update?pretty -d '{"script":"ctx._source.age += 5"}'
Start Jenkins
docker run -d -u root \
-p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/bin/docker \
-v /var/jenkins_home:/var/jenkins_home \
jenkins
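To confirm the Jenkins container can drive the host Docker daemon through the mounted socket and binary (the container name is a placeholder; this only works if the host's docker binary actually runs inside the image, e.g. a static binary):
docker exec -it <jenkins-container> docker ps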
Tomcat with SSH
Previously I always used single-app containers such as Tomcat: I just needed catalina.sh run to start the container in the foreground. One approach: CMD ["run.sh"], where run.sh contains the commands I want to execute.
I can also use ENTRYPOINT ["docker-entrypoint.sh"], which is more flexible because CMD can then pass arguments to that script.
A backend Tomcat container needs SSH access for management, which means sshd must also run in the foreground at the same time, so supervisor has to manage both.
But it still does not feel quite complete.
- 1. Get familiar with Dockerfile syntax
- 2. Build CentOS 7 manually
- 3. Use the official CentOS 7 image
- 4. System layer: based on the official CentOS 7, add supervisor + ssh; sshd starts as soon as the container starts
- 5. Runtime layer: install the JDK
- 6. App layer: install Tomcat and expose 8080; supervisor takes over.
A fresh summary of the supervisord.conf configuration (tomcat+ssh image)
Reference: https://github.com/zabbix/zabbix-docker/blob/3.4/web-apache-mysql/alpine/conf/etc/supervisor/conf.d/supervisord_zabbix.conf
[supervisord]
nodaemon = true
[program:sshd]
command=/usr/sbin/sshd -D
process_name=%(program_name)s
auto_start = true
autorestart = true
[program:tomcat]
command=/data/tomcat/bin/catalina.sh run
process_name=%(program_name)s
auto_start = true
autorestart = true
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
This is the Dockerfile for the tomcat+ssh image.
Prepare (download and unpack) these into the directory that contains the Dockerfile: the JDK, Tomcat, and Tomcat's server.xml (later I override it with a ConfigMap in the k8s cluster).
Dockerfile
FROM centos:6.8
# Init centos
ENV TERM="linux"
ENV TERMINFO="/etc/terminfo"
ENV LANG="en_US.UTF-8"
ENV LANGUAGE="en_US.UTF-8"
ENV LC_ALL="en_US.UTF-8"
ENV TZ="PRC"
COPY localtime /etc/localtime
#ssh
RUN yum -y install openssh-server epel-release && \
rm -f /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_rsa_key && \
ssh-keygen -q -N "" -t dsa -f /etc/ssh/ssh_host_dsa_key && \
ssh-keygen -q -N "" -t rsa -f /etc/ssh/ssh_host_rsa_key && \
sed -i "s/#UsePrivilegeSeparation.*/UsePrivilegeSeparation no/g" /etc/ssh/sshd_config && \
sed -i "s/UsePAM.*/UsePAM yes/g" /etc/ssh/sshd_config && \
sed -i 's#\#UseDNS yes#UseDNS no#g' /etc/ssh/sshd_config && \
sed -i 's#GSSAPIAuthentication yes#GSSAPIAuthentication no#g' /etc/ssh/sshd_config && \
echo "root:123456" | chpasswd && \
yum clean all
#supervisor
RUN yum -y install supervisor && \
mkdir -p /etc/supervisor/
COPY supervisord.conf /etc/supervisor/
# Prepare jdk and tomcat environment
ENV JAVA_HOME /usr/local/jdk
ENV CLASSPATH .:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar
ENV TOMCAT_HOME /data/tomcat
ENV PATH $JAVA_HOME/bin:$TOMCAT_HOME/bin:$PATH
ENV CATALINA_HOME=/data/tomcat
ENV CATALINA_BASE=/data/tomcat
#RUN export JAVA_HOME CLASSPATH TOMCAT_HOME PATH CATALINA_HOME CATALINA_BASE
# Install Oracle jdk-8u25
COPY jdk /usr/local/jdk
# Install apache-tomcat-7.0.62
RUN mkdir -p /data/tomcat && mkdir -p /data/web/elc/ && \
ulimit -SHn 65535 && \
echo '* - nofile 65536' >>/etc/security/limits.conf
COPY tomcat /data/tomcat
COPY server.xml /tmp/server.xml
RUN ln -s /tmp/server.xml /data/tomcat/conf/server.xml
WORKDIR /data/tomcat
EXPOSE 8080 22
CMD ["supervisord","-c","/etc/supervisor/supervisord.conf"]
The CentOS Dockerfile is based on: https://github.com/tutumcloud/tutum-centos/blob/master/centos6/Dockerfile
There you can set the SSH password; you can also use the pwgen tool (yum install) to generate a random password, print it on the console so it can be read with docker logs -f, and change it yourself later. See that GitHub repo for details.
Docker volumes: mounting a file from the container out to the host
I had previously used the -v option to mount data from outside into the container; when I then wanted to mount a file from inside the container out to the host,
it turned out you can only expose data that the container generates after it runs.
For example nginx: you can get nginx's access and error logs, because those logs are generated after the container starts.
docker run -itd -v /tmp/nginx/:/var/log/nginx/ -p 80:80 nginx
Another example, centos: on the host under /tmp I only found the three files hostname, hosts, and resolv.conf, which are files produced after the container runs.
docker run -itd -v /tmp/etc/:/tmp/etc/ centos
nginx Dockerfile based on CentOS
参考: https://github.com/nginxinc/docker-nginx/blob/3ba04e37d8f9ed7709fd30bf4dc6c36554e578ac/mainline/stretch/Dockerfile
FROM centos:6.8
ENV NGINX_VERSION 1.13.6
RUN CONFIG="\
--user=nginx \
--group=nginx \
--prefix=/usr/local/nginx \
--with-http_stub_status_module \
--with-http_ssl_module \
" \
&& useradd nginx -s /sbin/nologin \
&& yum install openssl openssl-devel pcre pcre-devel gcc c++ -y \
&& curl -fSL http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz -o /usr/local/src/nginx-${NGINX_VERSION}.tar.gz \
&& tar -xvf /usr/local/src/nginx-$NGINX_VERSION.tar.gz -C /usr/local/src \
&& cd /usr/local/src/nginx-$NGINX_VERSION \
&& ./configure $CONFIG \
&& make \
&& make install \
&& rm -rf /usr/local/src/*
RUN ln -sf /dev/stdout /usr/local/nginx/logs/access.log \
&& ln -sf /dev/stderr /usr/local/nginx/logs/error.log
EXPOSE 80 443
CMD ["/usr/local/nginx/sbin/nginx", "-g", "daemon off;"]