
Getting Started with ProxySQL in Kubernetes

There are plenty of ways to run ProxySQL in Kubernetes (K8S). For example, we can deploy sidecar containers on the application pods, or run a dedicated ProxySQL service with its own pods.
We are going to discuss the latter approach, which is more likely to be used when dealing with a large number of application pods. Remember that each ProxySQL instance runs a number of health checks against the database backends. These checks monitor things like server status and replication lag, so having too many proxies can cause significant overhead.
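For context, the results of these health checks end up in ProxySQL's admin interface (port 6032), in the monitor schema. A minimal sketch, assuming the default monitor tables:

SELECT hostname, port, ping_success_time_us, ping_error FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 5;
SELECT hostname, port, repl_lag FROM monitor.mysql_server_replication_lag_log ORDER BY time_start_us DESC LIMIT 5;

Each extra proxy repeats these checks against every backend, which is why the number of ProxySQL pods matters.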
Creating a Cluster
For the purpose of this example, I am going to deploy a test cluster in GKE. We need to follow these steps:
1. Create a cluster
gcloud container clusters create ivan-cluster --preemptible --project my-project --zone us-central1-c --machine-type n2-standard-4 --num-nodes=3
2. Configure command-line access
gcloud container clusters get-credentials ivan-cluster --zone us-central1-c --project my-project
3. Create a Namespace
kubectl create namespace ivantest-ns
4. Set the context to use our new Namespace
kubectl config set-context $(kubectl config current-context) --namespace=ivantest-ns
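As an optional sanity check before going further, confirm the cluster is reachable and the context points at the new namespace:
kubectl config current-context
kubectl get nodes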
Dedicated Service Using a StatefulSet
One way to implement this approach is to have ProxySQL pods use persistent volumes to store the configuration. We can rely on ProxySQL Cluster mode to make sure the configuration is kept in sync.
For simplicity, we are going to use a ConfigMap with the initial config for bootstrapping the ProxySQL service for the first time.
Exposing the passwords in the ConfigMap is far from ideal, and so far the K8S community hasn't settled on a way to reference Secrets from a ConfigMap.
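One possible workaround, not used in the rest of this walkthrough, is to store the whole proxysql.cnf in a Secret instead of a ConfigMap and mount it the same way; a sketch:
kubectl create secret generic proxysql-secret --from-file=proxysql.cnf
In the StatefulSet below, the proxysql-config volume would then reference secret: / secretName: proxysql-secret instead of configMap: / name: proxysql-configmap.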
1. Prepare a file for the ConfigMap
tee proxysql.cnf <<EOF
datadir="/var/lib/proxysql"

admin_variables=
{
admin_credentials="admin:admin;cluster:secret"
mysql_ifaces="0.0.0.0:6032"
refresh_interval=2000
cluster_username="cluster"
cluster_password="secret"
}

mysql_variables=
{
threads=4
max_connections=2048
default_query_delay=0
default_query_timeout=36000000
have_compress=true
poll_timeout=2000
interfaces="0.0.0.0:6033;/tmp/proxysql.sock"
default_schema="information_schema"
stacksize=1048576
server_version="8.0.23"
connect_timeout_server=3000
monitor_username="monitor"
monitor_password="monitor"
monitor_history=600000
monitor_connect_interval=60000
monitor_ping_interval=10000
monitor_read_only_interval=1500
monitor_read_only_timeout=500
ping_interval_server_msec=120000
ping_timeout_server=500
commands_stats=true
sessions_sort=true
connect_retries_on_failure=10
}

mysql_servers =
(
{ address="mysql1" , port=3306 , hostgroup=10, max_connections=100 },
{ address="mysql2" , port=3306 , hostgroup=20, max_connections=100 }
)

mysql_users =
(
{ username = "myuser", password = "password", default_hostgroup = 10, active = 1 }
)

proxysql_servers =
(
{ hostname = "proxysql-0.proxysqlcluster", port = 6032, weight = 1 },
{ hostname = "proxysql-1.proxysqlcluster", port = 6032, weight = 1 },
{ hostname = "proxysql-2.proxysqlcluster", port = 6032, weight = 1 }
)
EOF
2. Create the ConfigMap
kubectl create configmap proxysql-configmap --from-file=proxysql.cnf
3. Prepare a file with the StatefulSet
tee proxysql-ss-svc.yml <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: proxysql
  labels:
    app: proxysql
spec:
  replicas: 3
  serviceName: proxysqlcluster
  selector:
    matchLabels:
      app: proxysql
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: proxysql
    spec:
      restartPolicy: Always
      containers:
      - image: proxysql/proxysql:2.3.1
        name: proxysql
        volumeMounts:
        - name: proxysql-config
          mountPath: /etc/proxysql.cnf
          subPath: proxysql.cnf
        - name: proxysql-data
          mountPath: /var/lib/proxysql
          subPath: data
        ports:
        - containerPort: 6033
          name: proxysql-mysql
        - containerPort: 6032
          name: proxysql-admin
      volumes:
      - name: proxysql-config
        configMap:
          name: proxysql-configmap
  volumeClaimTemplates:
  - metadata:
      name: proxysql-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    app: proxysql
  name: proxysql
spec:
  ports:
  - name: proxysql-mysql
    nodePort: 30033
    port: 6033
    protocol: TCP
    targetPort: 6033
  - name: proxysql-admin
    nodePort: 30032
    port: 6032
    protocol: TCP
    targetPort: 6032
  selector:
    app: proxysql
  type: NodePort
EOF
4. Create the StatefulSet
kubectl create -f proxysql-ss-svc.yml
5. Prepare the definition of the headless Service (more on this later)
tee proxysql-headless-svc.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: proxysqlcluster
  labels:
    app: proxysql
spec:
  clusterIP: None
  ports:
  - port: 6032
    name: proxysql-admin
  selector:
    app: proxysql
EOF
6. Create the headless Service
kubectl create -f proxysql-headless-svc.yml
7. Verify the Services
kubectl get svc

NAME              TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
proxysql          NodePort    10.3.249.158   <none>        6033:30033/TCP,6032:30032/TCP   12m
proxysqlcluster   ClusterIP   None           <none>        6032/TCP                        8m53s
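It is also worth confirming that the three pods and their volume claims came up; a quick check:
kubectl get pods -l app=proxysql
kubectl get pvc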
Pod Name Resolution
By default, each pod gets a DNS name of the form pod-ip-address.my-namespace.pod.cluster-domain.example.
The headless Service causes K8S to auto-create a DNS record with each pod's FQDN as well. The result is that we will have the following entries available:
proxysql-0.proxysqlcluster
proxysql-1.proxysqlcluster
proxysql-2.proxysqlcluster
We can then use these to set up the ProxySQL cluster (the proxysql_servers part of the configuration file).
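To confirm the per-pod records resolve, a quick lookup from a throwaway pod in the same namespace works (the busybox image here is just an assumption; any image with nslookup will do):
kubectl run -i --rm --tty dns-test --image=busybox --restart=Never -- nslookup proxysql-0.proxysqlcluster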
Connecting to the Service
To test the service, we can run a container that includes a MySQL client and connect its console output to our terminal. For example, use the following command (which also removes the container/pod after we exit the shell):
kubectl run -i --rm --tty percona-client --image=percona/percona-server:latest --restart=Never -- bash -il
Connections from other pods should be sent to the Cluster-IP on port 6033 and will be load balanced. We can also use the DNS name proxysql.ivantest-ns.svc.cluster.local that got auto-created.
mysql -umyuser -ppassword -h10.3.249.158 -P6033
If the client is connecting from outside the cluster, use one of the nodes' external IP addresses and port 30033 instead:
mysql -umyuser -ppassword -h<node-external-ip> -P30033
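As a quick routing check once connected through port 6033, asking the backend for its hostname shows which server actually handled the query (a sketch; it assumes myuser exists on the backends as configured above):
mysql -umyuser -ppassword -h10.3.249.158 -P6033 -e 'SELECT @@hostname;'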
Cleanup Steps
In order to remove all the resources we created, run the following steps:
kubectl delete statefulsets proxysql
kubectl delete service proxysql
kubectl delete service proxysqlcluster
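Note that deleting the StatefulSet does not remove the PersistentVolumeClaims created from its volumeClaimTemplates. They follow the usual <template-name>-<pod-name> naming, so in this example they can be removed with:
kubectl delete pvc proxysql-data-proxysql-0 proxysql-data-proxysql-1 proxysql-data-proxysql-2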
Final Words
We have seen one of the possible ways to deploy ProxySQL in Kubernetes. The approach presented here has a few shortcomings but is good enough for illustrative purposes. For a production setup, consider looking at the Percona Kubernetes Operators instead.