
Tuesday, October 1, 2024

PostgreSQL: TimescaleDB and converting an existing table into a time-series table

You can improve performance by creating a time-series database in PostgreSQL and storing your existing tables as time-series tables. TimescaleDB provides both a powerful feature set and serious performance gains for this.


PostgreSQL 16 is not yet supported by the official packages, while on version 15 you can install directly. I'll share the installation steps for both 15 and 16.


If you're using PostgreSQL 15, the commands below install the TimescaleDB extension easily on a machine where PostgreSQL is already installed. Note that these steps are for Debian and its derivatives.



--for PostgreSQL 15

sudo apt update

sudo apt install timescaledb-2-postgresql-15




--for PostgreSQL 16

--install the tools TimescaleDB needs to build

sudo apt update
sudo apt install build-essential cmake git libssl-dev clang llvm libclang-dev postgresql-server-dev-16


--download the TimescaleDB source code from GitHub
cd /usr/local/src/
sudo git clone https://github.com/timescale/timescaledb.git
cd timescaledb

# Switch to the latest compatible TimescaleDB release
sudo git checkout latest


-- Build TimescaleDB
# Prepare the build
sudo ./bootstrap

# Compile
cd build && sudo make

# Install
sudo make install


Up to this point we've completed the installation for both the PostgreSQL 15 and 16 versions. Now we'll create the extension in the database.

sudo systemctl restart postgresql


-- Verify that TimescaleDB is installed
\dx

-- To load the extension, connect to a database and add TimescaleDB there.
Here we'll create a database named test_timescale_db and do our work in it.

create database test_timescale_db;

\c test_timescale_db

CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Recreate the table in our new database, e.g. by scripting its CREATE statement from pgAdmin.

>>> you perform this step through pgAdmin..

--Converting the table to a TimescaleDB hypertable
Let's convert the table into a hypertable to benefit from TimescaleDB's performance features. We'll do this using the sendtime column (we chose it because it holds the time-based data).

One more important point: if the time-series column contains NULL values, the conversion will not be allowed; the column must contain no NULLs. (Also note that if the table already held rows, create_hypertable would need migrate_data => TRUE.)

-- Connect back to the test database
\c test_timescale_db

-- Convert the table to a hypertable
SELECT create_hypertable('public.projectdata', 'sendtime');


You can also use TimescaleDB's compression feature. (Chunks are compressed manually below; alternatively, add_compression_policy can compress ageing chunks automatically.)

-- Enable compression on the table
ALTER TABLE public.projectdata SET (timescaledb.compress, timescaledb.compress_orderby = 'sendtime');

-- Compress the existing chunks
SELECT compress_chunk(chunk) 
FROM show_chunks('public.projectdata') AS chunk;



That's all for the conversion.

Next, we'll copy a table that lives in a different database on the same instance (cluster) into test_timescale_db and start using it there.

For this we can use dblink, or transfer the data with other tools such as pgAdmin or pg_dump.
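As a sketch of the pg_dump route (a hypothetical helper; the database and table names are the ones used in this example): copy_table only prints the pipeline it would run, so you can review it before removing the echo.

```shell
# Hypothetical dry-run helper: print the pg_dump | psql pipeline that would copy
# one table's rows from a source database to the target database on the same cluster.
# The target table (and its hypertable conversion) must already exist, hence --data-only.
copy_table() {
  src_db="$1"; dst_db="$2"; table="$3"
  echo "pg_dump -d $src_db -t $table --data-only | psql -d $dst_db"
}

copy_table test test_timescale_db public.projectdata
```

Remove the echo (or eval the printed string) to actually execute the transfer.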

--Copying the data with dblink

-- In the source database
CREATE EXTENSION dblink;

-- In the test database
\c test_timescale_db
CREATE EXTENSION dblink;


Then we transfer the data.

Before doing that, we check that the data contains no NULLs:

--if it does, we can pick a different time column (or clean the data first).
SELECT count(*) 
FROM public.projectdata 
WHERE sendtime IS NULL;


-- Connect to the source database over dblink, pull the rows, and insert them into the table in the new database

INSERT INTO test_timescale_db.public.projectdata
SELECT * FROM dblink('dbname=test user=postgres', 'SELECT * FROM public.projectdata') 
AS t(customername VARCHAR(25), platformname VARCHAR(25), systemname VARCHAR(25), categoryname VARCHAR(255), tagname VARCHAR(100), sensorname VARCHAR(25), tagvalue DOUBLE PRECISION, tagdate VARCHAR(10), tagtime TIME, des_unit VARCHAR(155), hour INTEGER, minute INTEGER, second INTEGER, tagdescription VARCHAR(100), tagminvalue DOUBLE PRECISION, tagmaxvalue DOUBLE PRECISION, tagunit VARCHAR(50), data_status VARCHAR(8), tagerrvalue INTEGER, tagdatetime VARCHAR(26), sendtime TIMESTAMP);


We can check whether the table has chunks:


SELECT show_chunks('public.projectdata');

--if there are chunks, we can compress them.

SELECT compress_chunk(chunk)
FROM show_chunks('public.projectdata') AS chunk;



Tuesday, March 12, 2024

Patroni "create cluster failed" error after changing the Linux locale settings

Hello,


On a Patroni cluster set up on Ubuntu/Debian derivatives, if the other nodes don't have the same locale settings, Patroni will fail outright.


So how do we solve it? Can we change both the locale and the system language to the setting we want?


When you connect to the machines over SSH with third-party tools, it can look as if the changes were applied, because no error screen is shown. In such cases you may actually need to connect directly on the server, or to the VM through its console.












First, remove the locale settings (the lines starting with export) from the ~/.bashrc and ~/.profile files.


root@node:# vi /etc/default/locale — remove the locale settings found there.

root@node:# dpkg-reconfigure locales — in the purple screen that comes up, make sure en_US.UTF-8 and tr_TR.UTF-8 are selected (this setting should match the one on the primary machine).


Afterwards we continue with our locale selection. On the next screen, choose whichever locale you want as the default; this will also make it the language of your terminal.










The important point here is that when this is done over MobaXterm or PuTTY, a failure shows no information on screen. In that case the right approach is to connect directly to the machine's console, e.g. via iDRAC, the VM portal, or vCenter.


   

Friday, October 6, 2023

PostgreSQL HA (high-availability) architecture: Patroni installation

Hello friends. Here I'll describe the Debian/Ubuntu installation of Patroni, the application that manages PostgreSQL's streaming-replication-based cluster architecture.

In this architecture we'll have a 3-node cluster using the streaming replication infrastructure, managed by Patroni.


The layout will be as follows: we'll install etcd, PostgreSQL, Patroni, and vip-manager on all 3 nodes to complete our cluster.


1- Server images sometimes install some of these packages automatically by default. We first confirm that none of them are present:

apt list --installed | grep postgres

apt list --installed | grep etcd

apt list --installed | grep vip-manager

apt list --installed | grep patroni


2- We add the nodes to the hosts file:

>>vim /etc/hosts

192.168.10.21 psql01

192.168.10.22 psql02

192.168.10.23 psql03


3- Since we're using a Debian/Ubuntu derivative, there's no need to check SELinux. If the firewall is enabled, stopping and disabling it is enough:

systemctl stop ufw

systemctl disable ufw


4- We install etcd:

mkdir /tmp/etcd

cd /tmp/etcd

wget https://github.com/etcd-io/etcd/releases/download/v3.5.9/etcd-v3.5.9-linux-amd64.tar.gz  --no-check-certificate

tar -xzvf etcd-v3.5.9-linux-amd64.tar.gz


groupadd --system etcd

useradd --create-home --home-dir /var/lib/etcd --shell /bin/bash --system --gid etcd etcd


mkdir /var/log/etcd

chown etcd: /var/log/etcd

chmod 700 /var/log/etcd


mkdir /etc/etcd.d

chown etcd: /etc/etcd.d

chmod 700 /etc/etcd.d


mkdir /var/lib/etcd/data

chown -R etcd: /var/lib/etcd

chmod -R 700 /var/lib/etcd


install /tmp/etcd/etcd-v3.5.9-linux-amd64/etcd /usr/bin/etcd

install /tmp/etcd/etcd-v3.5.9-linux-amd64/etcdctl /usr/bin/etcdctl


5- We configure etcd. Note that you'll enter a separate configuration on each of the 3 nodes; that part is important.

vim /etc/etcd.d/etcd.conf

name: "psql01"

data-dir: "/var/lib/etcd/data"

strict-reconfig-check: true

enable-v2: true

logger: zap

log-outputs:

  - stdout

  - stderr


initial-cluster: "psql01=http://192.168.10.21:2380,psql02=http://192.168.10.22:2380,psql03=http://192.168.10.23:2380"

initial-cluster-state: "new"

initial-cluster-token: "23D69B8D-B236-4D74-ACC3-26ED82891DE9"

initial-advertise-peer-urls: "http://192.168.10.21:2380"


listen-peer-urls: "http://192.168.10.21:2380"

listen-client-urls: "http://192.168.10.21:2379,http://127.0.0.1:2379"

advertise-client-urls: "http://192.168.10.21:2379"

listen-metrics-urls: "http://0.0.0.0:2381"

heartbeat-interval: 150

election-timeout: 1500

------------------------------------------node2

vim /etc/etcd.d/etcd.conf

name: "psql02"

data-dir: "/var/lib/etcd/data"

strict-reconfig-check: true

enable-v2: true

logger: zap

log-outputs:

  - stdout

  - stderr


initial-cluster: "psql01=http://192.168.10.21:2380,psql02=http://192.168.10.22:2380,psql03=http://192.168.10.23:2380"

initial-cluster-state: "new"

initial-cluster-token: "23D69B8D-B236-4D74-ACC3-26ED82891DE9"

initial-advertise-peer-urls: "http://192.168.10.22:2380"


listen-peer-urls: "http://192.168.10.22:2380"

listen-client-urls: "http://192.168.10.22:2379,http://127.0.0.1:2379"

advertise-client-urls: "http://192.168.10.22:2379"

listen-metrics-urls: "http://0.0.0.0:2381"

heartbeat-interval: 150

election-timeout: 1500


------------------------------------------node3

vim /etc/etcd.d/etcd.conf

name: "psql03"

data-dir: "/var/lib/etcd/data"

strict-reconfig-check: true

enable-v2: true

logger: zap

log-outputs:

  - stdout

  - stderr


initial-cluster: "psql01=http://192.168.10.21:2380,psql02=http://192.168.10.22:2380,psql03=http://192.168.10.23:2380"

initial-cluster-state: "new"

initial-cluster-token: "23D69B8D-B236-4D74-ACC3-26ED82891DE9"                      

initial-advertise-peer-urls: "http://192.168.10.23:2380"


listen-peer-urls: "http://192.168.10.23:2380"

listen-client-urls: "http://192.168.10.23:2379,http://127.0.0.1:2379"

advertise-client-urls: "http://192.168.10.23:2379"

listen-metrics-urls: "http://0.0.0.0:2381"

heartbeat-interval: 150

election-timeout: 1500
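Since the three node files above differ only in the node name and IP, they can also be generated with a small sketch script (the output path defaults to /tmp for illustration; on a real node you'd write /etc/etcd.d/etcd.conf):

```shell
# Sketch: generate the etcd.conf for one node; run e.g. with "psql01 192.168.10.21".
# Names, IPs, and the cluster token mirror the example cluster above.
NODE_NAME="${1:-psql01}"
NODE_IP="${2:-192.168.10.21}"
OUT="${3:-/tmp/etcd.conf}"   # use /etc/etcd.d/etcd.conf on the real node

cat > "$OUT" <<EOF
name: "$NODE_NAME"
data-dir: "/var/lib/etcd/data"
strict-reconfig-check: true
enable-v2: true
logger: zap
log-outputs:
  - stdout
  - stderr
initial-cluster: "psql01=http://192.168.10.21:2380,psql02=http://192.168.10.22:2380,psql03=http://192.168.10.23:2380"
initial-cluster-state: "new"
initial-cluster-token: "23D69B8D-B236-4D74-ACC3-26ED82891DE9"
initial-advertise-peer-urls: "http://$NODE_IP:2380"
listen-peer-urls: "http://$NODE_IP:2380"
listen-client-urls: "http://$NODE_IP:2379,http://127.0.0.1:2379"
advertise-client-urls: "http://$NODE_IP:2379"
listen-metrics-urls: "http://0.0.0.0:2381"
heartbeat-interval: 150
election-timeout: 1500
EOF
echo "wrote $OUT for $NODE_NAME ($NODE_IP)"
```

Running it once per node with that node's name and IP avoids hand-editing three nearly identical files.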


6- We create the etcd service. The same unit file goes on all 3 nodes:

vim /etc/systemd/system/etcd.service

[Unit]

Description="etcd - A distributed, reliable key-value store for the most critical data of a distributed system"

Documentation=https://etcd.io/docs/latest

Requires=network-online.target

After=network-online.target

ConditionFileNotEmpty=/etc/etcd.d/etcd.conf


[Service]

Type=notify

User=etcd

Group=etcd

WorkingDirectory=/var/lib/etcd

Environment=ETCD_CONFIG_FILE=/etc/etcd.d/etcd.conf

Environment=GOMAXPROCS=1


StandardOutput=append:/var/log/etcd/etcd.log

StandardError=append:/var/log/etcd/etcd-error.log


ExecStart=/usr/bin/etcd


Restart=on-failure

RestartSec=10s


LimitNOFILE=65536

OOMScoreAdjust=-1000


[Install]

WantedBy=multi-user.target


7- We create the etcd profile script. The same contents must be on all 3 nodes:

vim /etc/profile.d/etcd_profile.sh

export ETCDCTL_API=2

export ETCDCTL_ENDPOINT='http://127.0.0.1:2379,http://192.168.10.21:2379,http://192.168.10.22:2379,http://192.168.10.23:2379'

export ETCD_CONFIG_FILE=/etc/etcd.d/etcd.conf


8- Now we start etcd and check it:

systemctl daemon-reload

systemctl start etcd

systemctl status etcd

systemctl enable etcd


etcdctl cluster-health








On etcd v3, the health check can be done with the following command:
etcdctl --cluster=true endpoint health

9- We reboot the machines, on all 3 nodes.
reboot

10- ################### PostgreSQL installation #########
sudo sh -c 'echo "deb https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
apt update
sudo apt-get update
apt list | grep postgres
sudo apt-get -y install postgresql-15 postgresql-contrib-15
sudo su - postgres
pg_lsclusters
pg_dropcluster 15 main --stop
pg_lsclusters

11- Create the directories for the PostgreSQL data and configuration:
mkdir -p /pg_data/15/main
chown -R postgres: /pg_data
chmod 700 -R /pg_data

12- We add the tr_TR.UTF-8 locale to the systems. This must be done on all 3 nodes; otherwise adding it later would require a restart, so we add the language support now (uncomment tr_TR.UTF-8 in /etc/locale.gen):
root@psql01: localectl list-locales
C.UTF-8
en_US.UTF-8
root@psql01: vi /etc/locale.gen
   tr_TR.UTF-8 UTF-8
root@psql01: locale-gen 
root@psql01: localectl list-locales
C.UTF-8
en_US.UTF-8
tr_TR.UTF-8
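Before bootstrapping, it can help to verify a locale is actually available; a small hypothetical helper (note that locale -a prints the normalized form tr_TR.utf8):

```shell
# Hypothetical check: is a given locale available on this system?
# initdb/Patroni will fail later if the locale is missing, so test it up front.
has_locale() {
  locale -a 2>/dev/null | grep -qix "$1"
}

if has_locale "tr_TR.utf8"; then
  echo "tr_TR.UTF-8 is available"
else
  echo "tr_TR.UTF-8 missing - run locale-gen first"
fi
```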

13- ##### Patroni installation ##########
mkdir -m 700 /var/log/patroni
chown postgres: /var/log/patroni

  apt-get install libpq-dev
  apt-get install python3-psycopg2
  apt-get install python3-dev
  apt-get install python3-pip
pip3 install -U pip
pip3 install patroni[etcd]

mkdir /etc/patroni
touch /etc/patroni/env.conf
chmod 600 /etc/patroni/env.conf

14- We edit the Patroni profile script (on all 3 nodes):
vim /etc/profile.d/patroni.sh
export PATRONICTL_CONFIG_FILE=/etc/patroni/patroni.yml

15- We add the Patroni service (on all 3 nodes):
vim /lib/systemd/system/patroni.service
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL
After=syslog.target network.target

[Service]
Type=simple

User=postgres
Group=postgres

Environment="PATRONI_CONFIGURATION_FILE=/etc/patroni/patroni.yml"

EnvironmentFile=-/etc/patroni/env.conf

ExecStart=/usr/local/bin/patroni ${PATRONI_CONFIGURATION_FILE}
ExecReload=/bin/kill -s HUP $MAINPID
KillMode=process
TimeoutSec=30
Restart=no

[Install]
WantedBy=multi-user.target

16- Here we enter the patroni.yml file for each of the 3 nodes. I'm writing them one below the other; follow carefully.

vim /etc/patroni/patroni.yml
name: "psql01"
namespace: "/postgres/"
scope: "main-postgres"

log:
  level: INFO
  dir: /var/log/patroni

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      remove_data_directory_on_rewind_failure: true
      remove_data_directory_on_diverged_timelines: true
      parameters:
        logging_collector: 'on'
        log_filename: 'postgresql-%a.log'
        log_truncate_on_rotation: 'on'
        log_rotation_size: '0'
        log_min_duration_statement: '1s'
        log_lock_waits: on
        log_checkpoints: 'on'
        log_temp_files: '0'
        log_autovacuum_min_duration: '0'
        lc_messages: 'C'
        wal_compression: "on"
        wal_log_hints: "on"
        fsync: "on"
        max_connections: "500"
        superuser_reserved_connections: "10"
        archive_mode: "on"
        archive_command: "/bin/true"
        password_encryption : "scram-sha-256"
        log_statement: ddl
        shared_preload_libraries: pg_stat_statements,auto_explain
        work_mem: '10MB'
        effective_cache_size: '6144MB'
        maintenance_work_mem: '512MB'
        shared_buffers: '2048MB'
        autovacuum_work_mem: '512MB'
        autovacuum_vacuum_threshold: 3000
        autovacuum_analyze_threshold: 1000
        autovacuum_vacuum_scale_factor: 0.002
        autovacuum_analyze_scale_factor: 0.001
        checkpoint_completion_target: '0.8'

  initdb:
    - data-checksums
    - encoding: UTF8

  pg_hba:
    - local all postgres peer
    - host all all 127.0.0.1/32 scram-sha-256
    - host replication standby 127.0.0.1/32 scram-sha-256
    - host replication standby 192.168.10.21/24 scram-sha-256
    - host replication standby 192.168.10.22/24 scram-sha-256
    - host replication standby 192.168.10.23/24 scram-sha-256
    - host all postgres 192.168.10.21/24 scram-sha-256
    - host all postgres 192.168.10.22/24 scram-sha-256
    - host all postgres 192.168.10.23/24 scram-sha-256

  users:
    standby:
      password: 123
      options:
        - replication
        - login

etcd:
  hosts: 192.168.10.21:2379,192.168.10.22:2379,192.168.10.23:2379

postgresql:
  authentication:
    superuser:
      username: postgres
      password: 123
    replication:
      username: standby
      password: 123
  connect_address: "192.168.10.21:5432"
  data_dir: /pg_data/15/main
  config_dir: /pg_data/15/main
  bin_dir: /usr/lib/postgresql/15/bin
  listen: "0.0.0.0"
  use_unix_socket: true
  pgpass: /var/lib/postgresql/patroni.pgpass
  pg_hba:
   - local all postgres peer
   - host all all 127.0.0.1/32 scram-sha-256
   - host replication standby 127.0.0.1/32 scram-sha-256
   - host replication standby 192.168.10.21/24 scram-sha-256
   - host replication standby 192.168.10.22/24 scram-sha-256
   - host replication standby 192.168.10.23/24 scram-sha-256
   - host all postgres 192.168.10.21/24 scram-sha-256
   - host all postgres 192.168.10.22/24 scram-sha-256
   - host all postgres 192.168.10.23/24 scram-sha-256

restapi:
  listen: 192.168.10.21:8009
  connect_address: 192.168.10.21:8009

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  

--------------------------node2
name: "psql02"
namespace: "/postgres/"
scope: "main-postgres"

log:
  level: INFO
  dir: /var/log/patroni

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      remove_data_directory_on_rewind_failure: true
      remove_data_directory_on_diverged_timelines: true
      parameters:
        logging_collector: 'on'
        log_filename: 'postgresql-%a.log'
        log_truncate_on_rotation: 'on'
        log_rotation_size: '0'
        log_min_duration_statement: '1s'
        log_lock_waits: on
        log_checkpoints: 'on'
        log_temp_files: '0'
        log_autovacuum_min_duration: '0'
        lc_messages: 'C'
        wal_compression: "on"
        wal_log_hints: "on"
        fsync: "on"
        max_connections: "500"
        superuser_reserved_connections: "10"
        archive_mode: "on"
        archive_command: "/bin/true"
        password_encryption : "scram-sha-256"
        log_statement: ddl
        shared_preload_libraries: pg_stat_statements,auto_explain
        work_mem: '10MB'
        effective_cache_size: '4096MB'
        maintenance_work_mem: '512MB'
        shared_buffers: '1024MB'
        autovacuum_work_mem: '512MB'
        autovacuum_vacuum_threshold: 3000
        autovacuum_analyze_threshold: 1000
        autovacuum_vacuum_scale_factor: 0.002
        autovacuum_analyze_scale_factor: 0.001
        checkpoint_completion_target: '0.8'

  initdb:
    - data-checksums
    - encoding: UTF8

  pg_hba:
    - local all postgres peer
    - host all all 127.0.0.1/32 scram-sha-256
    - host replication standby 127.0.0.1/32 scram-sha-256
    - host replication standby 192.168.10.21/24 scram-sha-256
    - host replication standby 192.168.10.22/24 scram-sha-256
    - host replication standby 192.168.10.23/24 scram-sha-256
    - host all postgres 192.168.10.21/24 scram-sha-256
    - host all postgres 192.168.10.22/24 scram-sha-256
    - host all postgres 192.168.10.23/24 scram-sha-256

  users:
    standby:
      password: 123
      options:
        - replication
        - login

etcd:
  hosts: 192.168.10.21:2379,192.168.10.22:2379,192.168.10.23:2379

postgresql:
  authentication:
    superuser:
      username: postgres
      password: 123
    replication:
      username: standby
      password: 123
  connect_address: "192.168.10.22:5432"
  data_dir: /pg_data/15/main
  config_dir: /pg_data/15/main
  bin_dir: /usr/lib/postgresql/15/bin
  listen: "0.0.0.0"
  use_unix_socket: true
  pgpass: /var/lib/postgresql/patroni.pgpass
  pg_hba:
    - local all postgres peer
    - host all all 127.0.0.1/32 scram-sha-256
    - host replication standby 127.0.0.1/32 scram-sha-256
    - host replication standby 192.168.10.21/24 scram-sha-256
    - host replication standby 192.168.10.22/24 scram-sha-256
    - host replication standby 192.168.10.23/24 scram-sha-256
    - host all postgres 192.168.10.21/24 scram-sha-256
    - host all postgres 192.168.10.22/24 scram-sha-256
    - host all postgres 192.168.10.23/24 scram-sha-256

restapi:
  listen: 192.168.10.22:8009
  connect_address: 192.168.10.22:8009

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  


--------------------------node3
name: "psql03"
namespace: "/postgres/"
scope: "main-postgres"

log:
  level: INFO
  dir: /var/log/patroni

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      use_slots: true
      remove_data_directory_on_rewind_failure: true
      remove_data_directory_on_diverged_timelines: true
      parameters:
        logging_collector: 'on'
        log_filename: 'postgresql-%a.log'
        log_truncate_on_rotation: 'on'
        log_rotation_size: '0'
        log_min_duration_statement: '1s'
        log_lock_waits: on
        log_checkpoints: 'on'
        log_temp_files: '0'
        log_autovacuum_min_duration: '0'
        lc_messages: 'C'
        wal_compression: "on"
        wal_log_hints: "on"
        fsync: "on"
        max_connections: "500"
        superuser_reserved_connections: "10"
        archive_mode: "on"
        archive_command: "/bin/true"
        password_encryption : "scram-sha-256"
        log_statement: ddl
        shared_preload_libraries: pg_stat_statements,auto_explain
        work_mem: '10MB'
        effective_cache_size: '4096MB'
        maintenance_work_mem: '512MB'
        shared_buffers: '1024MB'
        autovacuum_work_mem: '512MB'
        autovacuum_vacuum_threshold: 3000
        autovacuum_analyze_threshold: 1000
        autovacuum_vacuum_scale_factor: 0.002
        autovacuum_analyze_scale_factor: 0.001
        checkpoint_completion_target: '0.8'

  initdb:
    - data-checksums
    - encoding: UTF8

  pg_hba:
    - local all postgres peer
    - host all all 127.0.0.1/32 scram-sha-256
    - host replication standby 127.0.0.1/32 scram-sha-256
    - host replication standby 192.168.10.21/24 scram-sha-256
    - host replication standby 192.168.10.22/24 scram-sha-256
    - host replication standby 192.168.10.23/24 scram-sha-256
    - host all postgres 192.168.10.21/24 scram-sha-256
    - host all postgres 192.168.10.22/24 scram-sha-256
    - host all postgres 192.168.10.23/24 scram-sha-256

  users:
    standby:
      password: 123
      options:
        - replication
        - login

etcd:
  hosts: 192.168.10.21:2379,192.168.10.22:2379,192.168.10.23:2379

postgresql:
  authentication:
    superuser:
      username: postgres
      password: 123
    replication:
      username: standby
      password: 123
  connect_address: "192.168.10.23:5432"
  data_dir: /pg_data/15/main
  config_dir: /pg_data/15/main
  bin_dir: /usr/lib/postgresql/15/bin
  listen: "0.0.0.0"
  use_unix_socket: true
  pgpass: /var/lib/postgresql/patroni.pgpass
  pg_hba:
    - local all postgres peer
    - host all all 127.0.0.1/32 scram-sha-256
    - host replication standby 127.0.0.1/32 scram-sha-256
    - host replication standby 192.168.10.21/24 scram-sha-256
    - host replication standby 192.168.10.22/24 scram-sha-256
    - host replication standby 192.168.10.23/24 scram-sha-256
    - host all postgres 192.168.10.21/24 scram-sha-256
    - host all postgres 192.168.10.22/24 scram-sha-256
    - host all postgres 192.168.10.23/24 scram-sha-256

restapi:
  listen: 192.168.10.23:8009
  connect_address: 192.168.10.23:8009

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
----------------------------------------------
17- We reload systemd and start the Patroni service (the data directory must be empty for the bootstrap, hence the rm -rf below):
chown -R postgres: /etc/patroni
systemctl daemon-reload
rm -rf /pg_data/15/main/*
systemctl start patroni
  patronictl list
  patronictl -c /etc/patroni/patroni.yml list
  systemctl enable patroni
  tail /var/log/patroni/patroni.log





18- If a newer vip-manager version is installed here, it can be incompatible with etcd, so we'll install an older vip-manager release:
 #######################VIP MANAGER kurulumu######### 
cd /tmp/
  wget https://github.com/cybertec-postgresql/vip-manager/releases/download/v1.0.2/vip-manager_1.0.2-1_amd64.deb
 
  dpkg -i /tmp/vip-manager_1.0.2-1_amd64.deb
  systemctl daemon-reload
  mv /etc/default/vip-manager.yml /etc/default/vip-manager.yml.backup

19- We edit the vip-manager yml file (on all 3 nodes):
 
vim /etc/default/vip-manager.yml
interval: 1000
trigger-key: "/postgres/main-postgres/leader"
trigger-value: "psql01"
ip: 192.168.10.25 # the virtual ip address to manage
netmask: 24 # netmask for the virtual ip
interface: eth1 #interface to which the virtual ip will be added
hosting-type: basic # possible values: basic, or hetzner.
dcs-type: etcd # etcd or consul
dcs-endpoints: # a list that contains all DCS endpoints to which vip-manager could talk.
  - http://127.0.0.1:2379
  - https://192.168.10.21:2379
  - https://192.168.10.22:2379
  - https://192.168.10.23:2379

#etcd-user: "patroni"
#etcd-password: "Julian's secret password"

retry-num: 2
retry-after: 250  #in milliseconds

verbose: false
------------------------------------node2

interval: 1000
trigger-key: "/postgres/main-postgres/leader"
trigger-value: "psql02"
ip: 192.168.10.25 # the virtual ip address to manage
netmask: 24 # netmask for the virtual ip
interface: eth1 #interface to which the virtual ip will be added
hosting-type: basic # possible values: basic, or hetzner.
dcs-type: etcd # etcd or consul
dcs-endpoints: # a list that contains all DCS endpoints to which vip-manager could talk.
  - http://127.0.0.1:2379
  - https://192.168.10.21:2379
  - https://192.168.10.22:2379
  - https://192.168.10.23:2379

#etcd-user: "patroni"
#etcd-password: "Julian's secret password"

retry-num: 2
retry-after: 250  #in milliseconds

verbose: false
---------------------------------node3
interval: 1000
trigger-key: "/postgres/main-postgres/leader"
trigger-value: "psql03"
ip: 192.168.10.25 # the virtual ip address to manage
netmask: 24 # netmask for the virtual ip
interface: eth1 #interface to which the virtual ip will be added
hosting-type: basic # possible values: basic, or hetzner.
dcs-type: etcd # etcd or consul
dcs-endpoints: # a list that contains all DCS endpoints to which vip-manager could talk.
  - http://127.0.0.1:2379
  - https://192.168.10.21:2379
  - https://192.168.10.22:2379
  - https://192.168.10.23:2379

#etcd-user: "patroni"
#etcd-password: "Julian's secret password"

retry-num: 2
retry-after: 250  #in milliseconds

verbose: false
---------------------------

20- We enable the vip-manager service and check it:
systemctl enable --now vip-manager










I hope this has been useful. Best regards.



Monday, November 9, 2020

no supported authentication methods available (server sent publickey gssapi-keyex gssapi-with-mic) on Google Cloud, AWS, Azure

 no supported authentication methods available (server sent publickey gssapi-keyex gssapi-with-mic)

Hello,

You may get an error like this on Google Cloud, AWS, Azure, or a similar platform.



The problem is in your sshd_config file.

Change PasswordAuthentication from no to yes:

 sudo vi /etc/ssh/sshd_config

PasswordAuthentication yes 

:wq!

and restart the sshd service

sudo systemctl restart sshd
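To confirm the change took effect, a small hypothetical helper can print the active (non-commented) PasswordAuthentication lines of an sshd_config-style file; here it is demonstrated on a sample file rather than the real /etc/ssh/sshd_config:

```shell
# Hypothetical helper: print the effective PasswordAuthentication lines of an
# sshd_config-style file, skipping commented-out lines.
check_pw_auth() {
  grep -Ei '^[[:space:]]*PasswordAuthentication[[:space:]]' "$1"
}

# Demonstration on a sample file:
printf '#PasswordAuthentication no\nPasswordAuthentication yes\n' > /tmp/sshd_config.sample
check_pw_auth /tmp/sshd_config.sample
```

On the real machine, point it at /etc/ssh/sshd_config after editing.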


That's all.

6 Kasım 2020 Cuma

How to move SQL Server database files (LDF and MDF) to another location (path)

DatabaseName: CND    (example)

NewPathNameData: C:\DATA1\

NewPathNameLog: C:\LOG\


Check the current database file locations:

SELECT name, physical_name AS CurrentLocation, state_desc

FROM sys.master_files

WHERE database_id = DB_ID(N'CND');

 

use master

go

ALTER DATABASE CND

    MODIFY FILE ( NAME = CND,   

                  FILENAME = 'C:\DATA1\CND.mdf');  

GO

 

ALTER DATABASE CND

    MODIFY FILE ( NAME = CND_log,   

                  FILENAME = 'C:\LOG\CND_Log.ldf');  

GO


 

ALTER DATABASE CND SET OFFLINE;  

GO


Move the data files (*.mdf and *.ldf) physically to the new location (e.g. copy and paste them to the new folder).


ALTER DATABASE CND SET ONLINE;  

GO


Check the new database file locations:

SELECT name, physical_name AS NewLocation, state_desc AS OnlineStatus

FROM sys.master_files  

WHERE database_id = DB_ID(N'CND')  

GO

10 Ekim 2020 Cumartesi

How to reset/crack the password of the "dba" account - SQL Anywhere

 

  • Trying to log in as the "dba/sysadmin" account and receiving error -103 "Invalid user ID or password."
  • We have lost/forgotten the password of the "dba/sysadmin" account.
  • How can we reset the password of the "dba" account?






Cause

  • Prior to build 17.0.5739
    • The default password for the "dba"1 account is "sql", unless it has been changed.2
    • A different password can be defined to the account at creation time or later using the "ALTER USER ..." statement.
    • A password change, after the creation of the database, will be logged in the transaction action log, with the password encrypted.
    • There is no option to start a database and print a new dba password to console (as in ASE or SQL Server).
      SQL Anywhere has been designed to run on systems where security from the operating system level cannot be guaranteed.
    • Passwords are stored encrypted from the moment they are written to the database and there is no option to decrypt them or reset it, 
      unless another administrator account or transaction log with logged password change is available (see below).
  • From build 17.0.5739 or above, a new feature has been introduced:

    New feature is added to allow to reset the forgotten password.

    1. New role (SYS_OFFLINE_RESET_PASSWORD_ROLE) is added. It can be granted by the role administrator to a user who is authorized to reset password.
    2. New server command line switch is added to reset the password of the specified user by the user whom SYS_OFFLINE_RESET_PASSWORD_ROLE role has been granted.

            -orp "{UserID|UID}=<userid>;{ NewPassword|NEWPWD}=<newpassword>;{ AuthUserID |AUTHUID}=<authuserid>;{ AuthPassword|AUTHPWD}=<authpassword>"

    • <userid>: the user whose password is going to be reset.
    • <newpassword>: new password of the <userid>
    • <authuserid>: the user who has SYS_OFFLINE_RESET_PASSWORD_ROLE granted by the role administrator
    • <authpassword>: password of <authuserid>

    If -orp is used, the server would start a database and attempt to reset the password with given parameters, and immediately stop the database and shutdown. The console output reports the success message or an error message if failed.

    Note:

    - The password reset would not be allowed if the <userid> has no password.

    - Must specify one database with the -orp option

    - No database option is not allowed with -orp server option

    - No other server option (except -o) is allowed with -orp server option

    - minimum password length of <newpassword> is 6.

 Notes:

  1. Some applications may have chosen a different account to serve as their dba but you may still be
    able to follow this KBA if you know what the name of that account is supposed to be.
  2. There is no predefined default dba or password as of version 17.

Resolution

You do have three options to resolve this situation

  • use the -orp database server option added in version 17.0.5739 or above
  • reset the password using another DBA account
  • rebuild the database using the transaction log files

Use the -orp database server option added in version 17.0.5739 or above

Example:

User ‘DBA’ had password ‘sql123’ and the user ‘reset_user’ has password ‘sql456’. As a role administrator, user ‘DBA’ granted SYS_OFFLINE_RESET_PASSWORD_ROLE to user ‘reset_user’ by the following statement.

grant role sys_offline_reset_password_role to reset_user;

Suppose that user ‘DBA’ has forgotten the password. After shutting down the server, it can be reset to ‘newpassword’ by the following server command line.

dbeng17 -orp "UID=DBA;NEWPWD=newpassword;AUTHUID=reset_user;AUTHPWD=sql456" mydb.db

Reset the password using another DBA account

The account "dba" is the default super-user of a database. Additional super-user accounts can exist in a database. If such an additional super-user exists and the credentials are known, this account can be used to reset the password of the "dba" account. Please refer to KBA 2207121  for more information about super-user account creation.

  1. In Sybase Central, use the credentials of another DBA account to log in to the database (if you do not have Sybase Central, please refer to KBA 2040854)
  2. In the left-hand Folders pane, double-click Users
  3. Right-click the user whose password you want to reset > click Properties
  4. Change the password of the account, confirm it, and click OK

Rebuild the database using the transaction log files

A rebuild can be done if one of the following conditions is met.

  • All transaction log files since the creation of the database still exist.
  • A database file backup is available, and all transaction log files created after the backup still exist.

The important point is that the database can only be rebuilt from one or more transaction log files if they contain all data definition language (DDL) and data manipulation language (DML) statements issued since the creation of the database.

Using all transaction log files, since the database was created

  1. Create a new database with the same parameters the original database was built with.
    • For example: dbinit -dba DBA,sql -p 4k demo.db
      You can find the original parameters of a database in the console/server log: near the top, the log file reports the memory, cache size, and page size in use.

  2. Translate the transaction log files into .sql script files, using the SQL Anywhere tool "dbtran".
    • For example: dbtran demo.log demo.sql
  3. If the password of the "dba" account was changed after the creation of the database, locate the "ALTER USER" statement in the created .sql files and comment it out.
    OR
    If you cannot find the ALTER USER statement, go to the bottom of the .sql file and add the following statement:
    ALTER USER dba IDENTIFIED BY sql;
    Replace sql with the new password you want for your database.
    Adding this statement overrides the previous password change.
  4. In the same order as the corresponding transaction log files were created, apply each .sql script file to the database, using e.g. Interactive SQL (dbisql)
    • For example: dbisql -c "DBF=demo;UID=DBA;PWD=sql" demo.sql

Using all transaction log files, since the backup of the database was created

  1. Translate the transaction log files into .sql script files, using the SQL Anywhere tool "dbtran".
    • For example: dbtran demo.log demo.sql
  2. If the password of the "dba" account was changed after the creation of the database, locate the "ALTER USER" statement in the created .sql files and comment it out.
    OR
    If you cannot find the ALTER USER statement, go to the bottom of the .sql file and add the following statement:
    ALTER USER dba IDENTIFIED BY sql;
    Replace sql with the new password you want for your database.
    Adding this statement overrides the previous password change.
  3. Restore the backup of the database
  4. In the same order as the corresponding transaction log files were created, apply each .sql script file to the database, using e.g. Interactive SQL (dbisql)
    • For example: dbisql -c "DBF=demo;UID=DBA;PWD=sql" demo.sql

Caveats:

Rebuilding a database generally has some side effects you should be aware of.

  • This will break any MobiLink or SQL Remote synchronization that is ongoing.
  • This should only be done in a development environment.
  • For a database of an OEM Authenticated edition, further steps might be required to re-authenticate the database.
  • If there is an event that was created and scheduled in the past, the error "Start date/time for non-recurring event %1 is in the past" might be returned. Open the translated .sql file and comment out every event scheduled in the past, or change its schedule time to a future time.

Notes: The tool "dbtran" prints the start and end offsets of the translated transaction log file. This way you can verify that no transaction log is missing: the end offset of the first transaction log should be identical to the start offset of the second transaction log, and so on.


Source: Sybase Infocenter

26 Eylül 2020 Cumartesi

attempt to fetch logical page (1:XXXXXX) in database XX failed. It belongs to allocation unit XXX not to XXX

Hello. You may encounter an error like the one below due to corruption in a table's clustered or non-clustered indexes. It can have many causes: index maintenance not being performed on time, indexes being created incorrectly, corruption of the disks holding the table data, or a crash of the system disks. So we got this error; is everything lost? No. Before giving up on the data, there are many methods to recover it.

As an example, consider the following error.

attempt to fetch logical page (1:438927981) in database 14 failed. It belongs to allocation unit 72060409138315264 not to 72060409197035520

First, let's back up the table by copying all intact rows, row by row, into a separate table.

You can run the following script for this:

 CODE:

SET NOCOUNT ON;

DECLARE @Counter INT = 1;
DECLARE @LastRecord INT = 10000000; -- set to your table's row count / maximum key value

WHILE @Counter <= @LastRecord
BEGIN
    BEGIN TRY
        -- don't forget: create your_table_new (same schema) before running this
        INSERT INTO your_table_new
        SELECT * FROM your_table WHERE your_column = @Counter;
    END TRY
    BEGIN CATCH
        -- don't forget: create the error_code table before running;
        -- log the key of the unreadable row together with the error number
        INSERT INTO error_code SELECT @Counter, ERROR_NUMBER();
    END CATCH

    SET @Counter += 1;
END;


-----------------------
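The script above assumes the salvage and error tables already exist. A minimal sketch of how they might be created (table and column names are placeholders matching the script):

```sql
-- empty copy of the damaged table's schema; no rows are copied
SELECT * INTO your_table_new FROM your_table WHERE 1 = 0;

-- records the key value of each row that could not be read, plus the error number
CREATE TABLE error_code (row_key INT, error_number INT);
```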

The table is now backed up. We can now try all kinds of recovery scenarios on the original table.


First, we try to determine whether the corruption is in the clustered index or in a non-clustered index. If it is a non-clustered index, our job is easy and we can recover the table. If it is the clustered index, it means there is a physical fault on the disk, and we will move on to the other repair steps.


 

DBCC TRACEON(3604)

DBCC PAGE('your_database', 1, 438927981, 3) WITH TABLERESULTS -- arguments: database, file_id, page_id, print_option

DBCC TRACEOFF(3604)
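The allocation unit ids in the error message can also be mapped back to an object and index through the catalog views. A sketch, using the ids from the example error above (the hobt_id join is valid for IN_ROW_DATA and ROW_OVERFLOW allocation units):

```sql
-- map allocation unit ids from the error message to their table and index
SELECT au.allocation_unit_id,
       OBJECT_NAME(p.object_id) AS table_name,
       p.index_id
FROM sys.allocation_units AS au
JOIN sys.partitions AS p
  ON au.container_id = p.hobt_id
WHERE au.allocation_unit_id IN (72060409138315264, 72060409197035520);
```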


In the DBCC PAGE output, check the Metadata: IndexId field:

 If it is 0 (heap) or 1 (clustered index), you need to restore from backup.

 If it is any value other than 0 or 1, rebuilding the corresponding non-clustered index is enough.
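If the IndexId points at a non-clustered index, the rebuild can be sketched roughly as follows (the index id 7, the table name, and the index name are hypothetical placeholders):

```sql
-- find the index name belonging to the IndexId reported by DBCC PAGE
SELECT name FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.your_table') AND index_id = 7;

-- rebuild only that index
ALTER INDEX your_index_name ON dbo.your_table REBUILD;
```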

 

 Suppose none of these helped; we will try to repair the table. Note that DBCC repair options require the database to be in single-user mode.

 

 DBCC CHECKTABLE('TabloAdi', REPAIR_REBUILD)

GO


If this does not solve it either, as a last resort we will attempt a repair that accepts possible data loss.


DBCC CHECKTABLE('TabloAdi', REPAIR_ALLOW_DATA_LOSS)

GO
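Since the repair options require exclusive access, a hedged sketch of the full sequence might look like this (database and table names are placeholders):

```sql
-- take the database to single-user mode, disconnecting other sessions
ALTER DATABASE your_db SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- last-resort repair; may discard unreadable pages
DBCC CHECKTABLE('your_table', REPAIR_ALLOW_DATA_LOSS);

-- return to normal multi-user operation
ALTER DATABASE your_db SET MULTI_USER;
```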

 

 You can browse the following site to examine index data:

 SOURCE: https://www.mssqltips.com/sqlservertip/1578/using-dbcc-page-to-examine-sql-server-table-and-index-data/