
Friday, October 6, 2023

PostgreSQL HA (High Availability) Architecture - Patroni Installation

Hello friends. In this post I will explain the Debian/Ubuntu installation of Patroni, the application that manages a PostgreSQL cluster built on streaming replication.

In this architecture we will have a 3-node cluster that uses the streaming replication infrastructure and is managed by Patroni.


The layout will be as follows: we install etcd, PostgreSQL, Patroni, and vip-manager on all 3 nodes, which completes the cluster.


1- Server images sometimes install these packages automatically by default. We first confirm that none of them are already present.

apt list --installed | grep postgres

apt list --installed | grep etcd

apt list --installed | grep vip-manager

apt list --installed | grep patroni


2- We add the node entries to the hosts file.

vim /etc/hosts

192.168.10.21 psql01

192.168.10.22 psql02

192.168.10.23 psql03
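
To make sure name resolution works on every node, a quick check like the following can be run (getent reads the entries directly from /etc/hosts):

getent hosts psql01 psql02 psql03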


3- Since we are using a Debian/Ubuntu derivative, there is no need to check SELinux. If the firewall is enabled, stopping and disabling it is sufficient; if it has to stay on, see the port list below.

systemctl stop ufw

systemctl disable ufw
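
If policy does not allow disabling the firewall completely, the cluster ports can be opened instead. This is only a sketch and assumes the default ports used throughout this guide (PostgreSQL 5432, etcd 2379/2380/2381, Patroni REST API 8009):

ufw allow 5432/tcp   # PostgreSQL
ufw allow 2379/tcp   # etcd client
ufw allow 2380/tcp   # etcd peer
ufw allow 2381/tcp   # etcd metrics
ufw allow 8009/tcp   # Patroni REST API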


4- We install etcd.

mkdir /tmp/etcd

cd /tmp/etcd

wget https://github.com/etcd-io/etcd/releases/download/v3.5.9/etcd-v3.5.9-linux-amd64.tar.gz  --no-check-certificate

tar -xzvf etcd-v3.5.9-linux-amd64.tar.gz


groupadd --system etcd

useradd --create-home --home-dir /var/lib/etcd --shell /bin/bash --system --gid etcd etcd


mkdir /var/log/etcd

chown etcd: /var/log/etcd

chmod 700 /var/log/etcd


mkdir /etc/etcd.d

chown etcd: /etc/etcd.d

chmod 700 /etc/etcd.d


mkdir /var/lib/etcd/data

chown -R etcd: /var/lib/etcd

chmod -R 700 /var/lib/etcd


install /tmp/etcd/etcd-v3.5.9-linux-amd64/etcd /usr/bin/etcd

install /tmp/etcd/etcd-v3.5.9-linux-amd64/etcdctl /usr/bin/etcdctl
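
To verify that the binaries landed in /usr/bin and are executable, the versions can be checked:

etcd --version

etcdctl version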


5- We configure etcd. This is the important part: the configuration must be entered separately on each of the 3 nodes. Only the name and the node's own IP address in the advertise/listen URL fields change; the initial-cluster list stays the same on every node.

vim /etc/etcd.d/etcd.conf

name: "psql01"

data-dir: "/var/lib/etcd/data"

strict-reconfig-check: true

enable-v2: true

logger: zap

log-outputs:

  - stdout

  - stderr


initial-cluster: "psql01=http://192.168.10.21:2380,psql02=http://192.168.10.22:2380,psql03=http://192.168.10.23:2380"

initial-cluster-state: "new"

initial-cluster-token: "23D69B8D-B236-4D74-ACC3-26ED82891DE9"

initial-advertise-peer-urls: "http://192.168.10.21:2380"


listen-peer-urls: "http://192.168.10.21:2380"

listen-client-urls: "http://192.168.10.21:2379,http://127.0.0.1:2379"

advertise-client-urls: "http://192.168.10.21:2379"

listen-metrics-urls: "http://0.0.0.0:2381"

heartbeat-interval: 150

election-timeout: 1500

------------------------------------------node2

vim /etc/etcd.d/etcd.conf

name: "psql02"

data-dir: "/var/lib/etcd/data"

strict-reconfig-check: true

enable-v2: true

logger: zap

log-outputs:

  - stdout

  - stderr


initial-cluster: "psql01=http://192.168.10.21:2380,psql02=http://192.168.10.22:2380,psql03=http://192.168.10.23:2380"

initial-cluster-state: "new"

initial-cluster-token: "23D69B8D-B236-4D74-ACC3-26ED82891DE9"

initial-advertise-peer-urls: "http://192.168.10.22:2380"


listen-peer-urls: "http://192.168.10.22:2380"

listen-client-urls: "http://192.168.10.22:2379,http://127.0.0.1:2379"

advertise-client-urls: "http://192.168.10.22:2379"

listen-metrics-urls: "http://0.0.0.0:2381"

heartbeat-interval: 150

election-timeout: 1500


------------------------------------------node3

vim /etc/etcd.d/etcd.conf

name: "psql03"

data-dir: "/var/lib/etcd/data"

strict-reconfig-check: true

enable-v2: true

logger: zap

log-outputs:

  - stdout

  - stderr


initial-cluster: "psql01=http://192.168.10.21:2380,psql02=http://192.168.10.22:2380,psql03=http://192.168.10.23:2380"

initial-cluster-state: "new"

initial-cluster-token: "23D69B8D-B236-4D74-ACC3-26ED82891DE9"                      

initial-advertise-peer-urls: "http://192.168.10.23:2380"


listen-peer-urls: "http://192.168.10.23:2380"

listen-client-urls: "http://192.168.10.23:2379,http://127.0.0.1:2379"

advertise-client-urls: "http://192.168.10.23:2379"

listen-metrics-urls: "http://0.0.0.0:2381"

heartbeat-interval: 150

election-timeout: 1500


6- We create the etcd service. The same unit file is used on all 3 nodes.

vim /etc/systemd/system/etcd.service

[Unit]

Description="etcd - A distributed, reliable key-value store for the most critical data of a distributed system"

Documentation=https://etcd.io/docs/latest

Requires=network-online.target

After=network-online.target

ConditionFileNotEmpty=/etc/etcd.d/etcd.conf


[Service]

Type=notify

User=etcd

Group=etcd

WorkingDirectory=/var/lib/etcd

Environment=ETCD_CONFIG_FILE=/etc/etcd.d/etcd.conf

Environment=GOMAXPROCS=1


StandardOutput=append:/var/log/etcd/etcd.log

StandardError=append:/var/log/etcd/etcd-error.log


ExecStart=/usr/bin/etcd


Restart=on-failure

RestartSec=10s


LimitNOFILE=65536

OOMScoreAdjust=-1000


[Install]

WantedBy=multi-user.target


7- We create the etcd profile file. The same content must be present on all 3 nodes.

vim /etc/profile.d/etcd_profile.sh

export ETCDCTL_API=2

export ETCDCTL_ENDPOINT='http://127.0.0.1:2379,http://192.168.10.21:2379,http://192.168.10.22:2379,http://192.168.10.23:2379'

export ETCD_CONFIG_FILE=/etc/etcd.d/etcd.conf
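
The profile script is only read at login, so to make the variables available in the current shell it can be sourced manually:

source /etc/profile.d/etcd_profile.sh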


8- Now we start etcd and verify it.

systemctl daemon-reload

systemctl start etcd

systemctl status etcd

systemctl enable etcd


etcdctl cluster-health

On v3 versions, cluster health can be checked with the command below.
etcdctl --cluster=true endpoint health
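
As an additional sanity check, the member list and per-endpoint status can be inspected with the v3 API (ETCDCTL_API=3 is set explicitly here because the profile above exports API version 2):

ETCDCTL_API=3 etcdctl member list

ETCDCTL_API=3 etcdctl endpoint status --cluster -w table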

9- We reboot the machine, on all 3 nodes.
reboot

10- ################### PostgreSQL installation #########
sudo sh -c 'echo "deb https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
apt list | grep postgres
sudo apt-get -y install postgresql-15 postgresql-contrib-15
sudo su - postgres
pg_lsclusters
pg_dropcluster 15 main --stop
pg_lsclusters

11- The directories for the PostgreSQL data and configuration are created. (The default main cluster was dropped in the previous step because Patroni will run initdb itself and manage the instance under /pg_data.)
mkdir -p /pg_data/15/main
chown -R postgres: /pg_data
chmod 700 -R /pg_data

12- We add the tr_TR.UTF-8 locale to the operating system. This must be done on all 3 nodes; otherwise adding it later would require a restart, so we add the language support now. A quick verification follows after the listing below.
root@psql01: localectl list-locales
C.UTF-8
en_US.UTF-8
root@psql01: vi /etc/locale.gen    (uncomment the following line)
   tr_TR.UTF-8 UTF-8
root@psql01: locale-gen 
root@psql01: localectl list-locales
C.UTF-8
en_US.UTF-8
tr_TR.UTF-8
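
To confirm the locale is now available to PostgreSQL:

locale -a | grep tr_TR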

13- ##### Patroni installation ##########
mkdir -m 700 /var/log/patroni
chown postgres: /var/log/patroni

apt-get install libpq-dev
apt-get install python3-psycopg2
apt-get install python3-dev
apt-get install python3-pip
pip3 install -U pip
pip3 install patroni[etcd]

mkdir /etc/patroni
touch /etc/patroni/env.conf
chmod 600 /etc/patroni/env.conf
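
To confirm that pip placed the Patroni binary where the systemd unit below expects it (/usr/local/bin), the following can be checked:

which patroni

patroni --version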

14- We create the Patroni profile script. [3 nodes]
vim /etc/profile.d/patroni.sh
export PATRONICTL_CONFIG_FILE=/etc/patroni/patroni.yml

15- We add the Patroni service. [3 nodes]
vim /lib/systemd/system/patroni.service
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL
After=syslog.target network.target

[Service]
Type=simple

User=postgres
Group=postgres

Environment="PATRONI_CONFIGURATION_FILE=/etc/patroni/patroni.yml"

EnvironmentFile=-/etc/patroni/env.conf

ExecStart=/usr/local/bin/patroni ${PATRONI_CONFIGURATION_FILE}
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=process
TimeoutSec=30
Restart=no

[Install]
WantedBy=multi-user.target

16- Here we enter the patroni.yml file for each of the 3 nodes. I am writing them one after another, so follow carefully: only name, connect_address, and the restapi addresses differ between nodes.

vim /etc/patroni/patroni.yml
name: "psql01"
namespace: "/postgres/"
scope: "main-postgres"

log:
  level: INFO
  dir: /var/log/patroni

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      remove_data_directory_on_rewind_failure: true
      remove_data_directory_on_diverged_timelines: true
      parameters:
        logging_collector: 'on'
        log_filename: 'postgresql-%a.log'
        log_truncate_on_rotation: 'on'
        log_rotation_size: '0'
        log_min_duration_statement: '1s'
        log_lock_waits: on
        log_checkpoints: 'on'
        log_temp_files: '0'
        log_autovacuum_min_duration: '0'
        lc_messages: 'C'
        wal_compression: "on"
        wal_log_hints: "on"
        fsync: "on"
        max_connections: "500"
        superuser_reserved_connections: "10"
        archive_mode: "on"
        archive_command: "/bin/true"
        password_encryption: "scram-sha-256"
        log_statement: ddl
        shared_preload_libraries: pg_stat_statements,auto_explain
        work_mem: '10MB'
        effective_cache_size: '6144MB'
        maintenance_work_mem: '512MB'
        shared_buffers: '2048MB'
        autovacuum_work_mem: '512MB'
        autovacuum_vacuum_threshold: 3000
        autovacuum_analyze_threshold: 1000
        autovacuum_vacuum_scale_factor: 0.002
        autovacuum_analyze_scale_factor: 0.001
        checkpoint_completion_target: '0.8'

  initdb:
    - data-checksums
    - encoding: UTF8

  pg_hba:
    - local all postgres peer
    - host all all 127.0.0.1/32 scram-sha-256
    - host replication standby 127.0.0.1/32 scram-sha-256
    - host replication standby 192.168.10.21/24 scram-sha-256
    - host replication standby 192.168.10.22/24 scram-sha-256
    - host replication standby 192.168.10.23/24 scram-sha-256
    - host all postgres 192.168.10.21/24 scram-sha-256
    - host all postgres 192.168.10.22/24 scram-sha-256
    - host all postgres 192.168.10.23/24 scram-sha-256

  users:
    standby:
      password: 123
      options:
        - replication
        - login

etcd:
  hosts: 192.168.10.21:2379,192.168.10.22:2379,192.168.10.23:2379

postgresql:
  authentication:
    superuser:
      username: postgres
      password: 123
    replication:
      username: standby
      password: 123
  connect_address: "192.168.10.21:5432"
  data_dir: /pg_data/15/main
  config_dir: /pg_data/15/main
  bin_dir: /usr/lib/postgresql/15/bin
  listen: "0.0.0.0"
  use_unix_socket: true
  pgpass: /var/lib/postgresql/patroni.pgpass
  pg_hba:
   - local all postgres peer
   - host all all 127.0.0.1/32 scram-sha-256
   - host replication standby 127.0.0.1/32 scram-sha-256
   - host replication standby 192.168.10.21/24 scram-sha-256
   - host replication standby 192.168.10.22/24 scram-sha-256
   - host replication standby 192.168.10.23/24 scram-sha-256
   - host all postgres 192.168.10.21/24 scram-sha-256
   - host all postgres 192.168.10.22/24 scram-sha-256
   - host all postgres 192.168.10.23/24 scram-sha-256

restapi:
  listen: 192.168.10.21:8009
  connect_address: 192.168.10.21:8009

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  

--------------------------node2
name: "psql02"
namespace: "/postgres/"
scope: "main-postgres"

log:
  level: INFO
  dir: /var/log/patroni

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      remove_data_directory_on_rewind_failure: true
      remove_data_directory_on_diverged_timelines: true
      parameters:
        logging_collector: 'on'
        log_filename: 'postgresql-%a.log'
        log_truncate_on_rotation: 'on'
        log_rotation_size: '0'
        log_min_duration_statement: '1s'
        log_lock_waits: on
        log_checkpoints: 'on'
        log_temp_files: '0'
        log_autovacuum_min_duration: '0'
        lc_messages: 'C'
        wal_compression: "on"
        wal_log_hints: "on"
        fsync: "on"
        max_connections: "500"
        superuser_reserved_connections: "10"
        archive_mode: "on"
        archive_command: "/bin/true"
        password_encryption: "scram-sha-256"
        log_statement: ddl
        shared_preload_libraries: pg_stat_statements,auto_explain
        work_mem: '10MB'
        effective_cache_size: '4096MB'
        maintenance_work_mem: '512MB'
        shared_buffers: '1024MB'
        autovacuum_work_mem: '512MB'
        autovacuum_vacuum_threshold: 3000
        autovacuum_analyze_threshold: 1000
        autovacuum_vacuum_scale_factor: 0.002
        autovacuum_analyze_scale_factor: 0.001
        checkpoint_completion_target: '0.8'

  initdb:
    - data-checksums
    - encoding: UTF8

  pg_hba:
    - local all postgres peer
    - host all all 127.0.0.1/32 scram-sha-256
    - host replication standby 127.0.0.1/32 scram-sha-256
    - host replication standby 192.168.10.21/24 scram-sha-256
    - host replication standby 192.168.10.22/24 scram-sha-256
    - host replication standby 192.168.10.23/24 scram-sha-256
    - host all postgres 192.168.10.21/24 scram-sha-256
    - host all postgres 192.168.10.22/24 scram-sha-256
    - host all postgres 192.168.10.23/24 scram-sha-256

  users:
    standby:
      password: 123
      options:
        - replication
        - login

etcd:
  hosts: 192.168.10.21:2379,192.168.10.22:2379,192.168.10.23:2379

postgresql:
  authentication:
    superuser:
      username: postgres
      password: 123
    replication:
      username: standby
      password: 123
  connect_address: "192.168.10.22:5432"
  data_dir: /pg_data/15/main
  config_dir: /pg_data/15/main
  bin_dir: /usr/lib/postgresql/15/bin
  listen: "0.0.0.0"
  use_unix_socket: true
  pgpass: /var/lib/postgresql/patroni.pgpass
  pg_hba:
    - local all postgres peer
    - host all all 127.0.0.1/32 scram-sha-256
    - host replication standby 127.0.0.1/32 scram-sha-256
    - host replication standby 192.168.10.21/24 scram-sha-256
    - host replication standby 192.168.10.22/24 scram-sha-256
    - host replication standby 192.168.10.23/24 scram-sha-256
    - host all postgres 192.168.10.21/24 scram-sha-256
    - host all postgres 192.168.10.22/24 scram-sha-256
    - host all postgres 192.168.10.23/24 scram-sha-256

restapi:
  listen: 192.168.10.22:8009
  connect_address: 192.168.10.22:8009

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  


--------------------------node3
name: "psql03"
namespace: "/postgres/"
scope: "main-postgres"

log:
  level: INFO
  dir: /var/log/patroni

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      remove_data_directory_on_rewind_failure: true
      remove_data_directory_on_diverged_timelines: true
      parameters:
        logging_collector: 'on'
        log_filename: 'postgresql-%a.log'
        log_truncate_on_rotation: 'on'
        log_rotation_size: '0'
        log_min_duration_statement: '1s'
        log_lock_waits: on
        log_checkpoints: 'on'
        log_temp_files: '0'
        log_autovacuum_min_duration: '0'
        lc_messages: 'C'
        wal_compression: "on"
        wal_log_hints: "on"
        fsync: "on"
        max_connections: "500"
        superuser_reserved_connections: "10"
        archive_mode: "on"
        archive_command: "/bin/true"
        password_encryption: "scram-sha-256"
        log_statement: ddl
        shared_preload_libraries: pg_stat_statements,auto_explain
        work_mem: '10MB'
        effective_cache_size: '4096MB'
        maintenance_work_mem: '512MB'
        shared_buffers: '1024MB'
        autovacuum_work_mem: '512MB'
        autovacuum_vacuum_threshold: 3000
        autovacuum_analyze_threshold: 1000
        autovacuum_vacuum_scale_factor: 0.002
        autovacuum_analyze_scale_factor: 0.001
        checkpoint_completion_target: '0.8'

  initdb:
    - data-checksums
    - encoding: UTF8

  pg_hba:
    - local all postgres peer
    - host all all 127.0.0.1/32 scram-sha-256
    - host replication standby 127.0.0.1/32 scram-sha-256
    - host replication standby 192.168.10.21/24 scram-sha-256
    - host replication standby 192.168.10.22/24 scram-sha-256
    - host replication standby 192.168.10.23/24 scram-sha-256
    - host all postgres 192.168.10.21/24 scram-sha-256
    - host all postgres 192.168.10.22/24 scram-sha-256
    - host all postgres 192.168.10.23/24 scram-sha-256

  users:
    standby:
      password: 123
      options:
        - replication
        - login

etcd:
  hosts: 192.168.10.21:2379,192.168.10.22:2379,192.168.10.23:2379

postgresql:
  authentication:
    superuser:
      username: postgres
      password: 123
    replication:
      username: standby
      password: 123
  connect_address: "192.168.10.23:5432"
  data_dir: /pg_data/15/main
  config_dir: /pg_data/15/main
  bin_dir: /usr/lib/postgresql/15/bin
  listen: "0.0.0.0"
  use_unix_socket: true
  pgpass: /var/lib/postgresql/patroni.pgpass
  pg_hba:
    - local all postgres peer
    - host all all 127.0.0.1/32 scram-sha-256
    - host replication standby 127.0.0.1/32 scram-sha-256
    - host replication standby 192.168.10.21/24 scram-sha-256
    - host replication standby 192.168.10.22/24 scram-sha-256
    - host replication standby 192.168.10.23/24 scram-sha-256
    - host all postgres 192.168.10.21/24 scram-sha-256
    - host all postgres 192.168.10.22/24 scram-sha-256
    - host all postgres 192.168.10.23/24 scram-sha-256

restapi:
  listen: 192.168.10.23:8009
  connect_address: 192.168.10.23:8009

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
----------------------------------------------
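
Before starting the service, the configuration can optionally be validated on each node. Recent Patroni releases include a built-in validator; on older versions this option may not be available:

patroni --validate-config /etc/patroni/patroni.yml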
17- We start the Patroni service. The data directory is emptied first so that Patroni can bootstrap the instance itself, then the service is started and checked.
chown -R postgres: /etc/patroni
systemctl daemon-reload
rm -rf /pg_data/15/main/*
systemctl start patroni
patronictl list
patronictl -c /etc/patroni/patroni.yml list
systemctl enable patroni
tail /var/log/patroni/patroni.log
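
Once all three members show up in patronictl list, a manual switchover is a good way to confirm that failover works end to end. This is an optional test; main-postgres is the cluster name taken from the scope value in patroni.yml:

patronictl -c /etc/patroni/patroni.yml switchover main-postgres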





18- If a newer vip-manager version is installed it may be incompatible with this etcd setup, so we will install an older vip-manager release.
####################### vip-manager installation #########
cd /tmp/
wget https://github.com/cybertec-postgresql/vip-manager/releases/download/v1.0.2/vip-manager_1.0.2-1_amd64.deb

dpkg -i /tmp/vip-manager_1.0.2-1_amd64.deb
systemctl daemon-reload
mv /etc/default/vip-manager.yml /etc/default/vip-manager.yml.backup
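
To confirm which version was actually installed:

dpkg -l | grep vip-manager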

19- We edit the vip-manager yml file. [3 nodes] Note that trigger-value must be the local node's own Patroni member name on each node.
 
vim /etc/default/vip-manager.yml
interval: 1000
trigger-key: "/postgres/main-postgres/leader"
trigger-value: "psql01"
ip: 192.168.10.25 # the virtual ip address to manage
netmask: 24 # netmask for the virtual ip
interface: eth1 #interface to which the virtual ip will be added
hosting-type: basic # possible values: basic, or hetzner.
dcs-type: etcd # etcd or consul
dcs-endpoints: # a list that contains all DCS endpoints to which vip-manager could talk.
  - http://127.0.0.1:2379
  - http://192.168.10.21:2379
  - http://192.168.10.22:2379
  - http://192.168.10.23:2379

#etcd-user: "patroni"
#etcd-password: "Julian's secret password"

retry-num: 2
retry-after: 250  #in milliseconds

verbose: false
------------------------------------node2

interval: 1000
trigger-key: "/postgres/main-postgres/leader"
trigger-value: "psql02"
ip: 192.168.10.25 # the virtual ip address to manage
netmask: 24 # netmask for the virtual ip
interface: eth1 #interface to which the virtual ip will be added
hosting-type: basic # possible values: basic, or hetzner.
dcs-type: etcd # etcd or consul
dcs-endpoints: # a list that contains all DCS endpoints to which vip-manager could talk.
  - http://127.0.0.1:2379
  - http://192.168.10.21:2379
  - http://192.168.10.22:2379
  - http://192.168.10.23:2379

#etcd-user: "patroni"
#etcd-password: "Julian's secret password"

retry-num: 2
retry-after: 250  #in milliseconds

verbose: false
---------------------------------node3
interval: 1000
trigger-key: "/postgres/main-postgres/leader"
trigger-value: "psql02"
ip: 192.168.10.25 # the virtual ip address to manage
netmask: 24 # netmask for the virtual ip
interface: eth1 #interface to which the virtual ip will be added
hosting-type: basic # possible values: basic, or hetzner.
dcs-type: etcd # etcd or consul
dcs-endpoints: # a list that contains all DCS endpoints to which vip-manager could talk.
  - http://127.0.0.1:2379
  - http://192.168.10.21:2379
  - http://192.168.10.22:2379
  - http://192.168.10.23:2379

#etcd-user: "patroni"
#etcd-password: "Julian's secret password"

retry-num: 2
retry-after: 250  #in milliseconds

verbose: false
---------------------------

20- We enable and start the vip-manager service and check it.
systemctl enable --now vip-manager
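
On whichever node currently holds the Patroni leader key, the virtual IP should now appear on the configured interface. This check assumes the eth1 interface and the 192.168.10.25 VIP from the configuration above:

ip addr show eth1 | grep 192.168.10.25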

I hope this was helpful. Best regards.


