How to override an ASP.NET Core configuration array setting using environment variables

Given the following appsettings.json, we want to override the AppSettings/EnabledWorkspace array via environment variables:

{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "System": "Information",
      "Microsoft": "Information"
    }
  },
  //Basic Auth for Tapd api
  "AppSettings": {
    "EnabledWorkspace": [ "58645295", "44506107", "84506239" ]
  }
}

Set the environment variables like this (a double underscore separates the section path, and the array index becomes the last segment):

AppSettings__EnabledWorkspace__0 = 58645295
AppSettings__EnabledWorkspace__1 = 44506107
AppSettings__EnabledWorkspace__2 = 84506239
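
For example, a minimal sketch of setting these overrides from a Linux shell before starting the app (MyApp.dll is a placeholder for your actual entry assembly):

# Override the EnabledWorkspace array with three elements via the environment
export AppSettings__EnabledWorkspace__0=58645295
export AppSettings__EnabledWorkspace__1=44506107
export AppSettings__EnabledWorkspace__2=84506239
dotnet MyApp.dll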

 

Refer to:

How to override an ASP.NET Core configuration array setting using environment variables

Configuration in ASP.NET Core

Disable the Out of Memory killer in Linux

By default Linux has a somewhat brain-damaged concept of memory management: it lets you allocate more memory than your system has, then randomly shoots a process in the head when it gets in trouble. (The actual semantics of what gets killed are more complex than that – Google “Linux OOM Killer” for lots of details and arguments about whether it’s a good or bad thing).


To restore some semblance of sanity to your memory management:

  1. Disable the OOM Killer (Put vm.oom-kill = 0 in /etc/sysctl.conf)
  2. Disable memory overcommit (Put vm.overcommit_memory = 2 in /etc/sysctl.conf)
    Note that this is a trinary value: 0 = “estimate if we have enough RAM”, 1 = “always say yes”, 2 = “say no if we don’t have the memory”.

These settings will make Linux behave in the traditional way (if a process requests more memory than is available malloc() will fail and the process requesting the memory is expected to cope with that failure).

Reboot your machine to make it reload /etc/sysctl.conf, or use the proc filesystem to apply the change right away, without a reboot:

echo 2 > /proc/sys/vm/overcommit_memory 
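
To make the overcommit setting survive a reboot as well, a minimal sketch (run as root) is to append it to /etc/sysctl.conf and reload:

echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf
sysctl -p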

 

refer: https://serverfault.com/questions/141988/avoid-linux-out-of-memory-application-teardown

[Kong] Batch change SNIs’ certificate

On Kong 0.13.1 I have a few SNIs bound to a certificate that is about to expire, so I wrote a small shell script to bind those SNIs to a new certificate (jq must be installed first):

#!/bin/sh
# Fetch all SNIs from the Kong Admin API (requires jq).
SNIS=$(curl -s "http://kong-admin.kong:8001/snis")
LEN=$(echo "$SNIS" | jq '.data | length')

# POSIX sh has no C-style loop; in bash you could use: for (( i=0; i<LEN; i++ ))
for i in $(seq 0 $((LEN-1)))
do
  sni=$(echo "$SNIS" | jq -r ".data[$i].name")

  # Only re-bind SNIs under the two domains of interest.
  if echo "$sni" | grep -q "domain1.com" || echo "$sni" | grep -q "domain2.com"; then
    curl -X PATCH "http://kong-admin.kong:8001/snis/${sni}" \
      -H "Content-Type: application/json" \
      --data "{ \"ssl_certificate_id\": \"CHANGE TO YOUR NEW CERT ID\"}"
  fi
done
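
If you need to look up the id of the new certificate first, the Admin API's /certificates endpoint can be queried the same way (a sketch against the same Kong Admin address as above):

curl -s "http://kong-admin.kong:8001/certificates" | jq '.data[].id'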

Getting real client IP in Docker Swarm

When a service is deployed with stack deploy in Docker Swarm, the service cannot see the client's real IP address by default. There is a GitHub issue tracking this: Unable to retrieve user’s IP address in docker swarm mode.

The current workaround is to publish the ports in host mode. Take Kong as an example.

Default (ingress) port publishing mode:

version: "3.7"
services:
  kong-proxy:
    image: kong:1.0.3-alpine
    deploy:
      mode: global
      labels:
        - "tier=frontend"
      restart_policy:
        condition: any
    ports:
      - "80:8000"
      - "443:8443"
    depends_on:
      - database-postgresql
    environment:
      KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
      KONG_DATABASE: postgres
      KONG_PG_DATABASE: kong
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: PaSsW0rd
      KONG_PG_HOST: database-postgresql
      KONG_PG_PORT: "5432"

    volumes:
      - type: "bind"
        source: "/var/log/kong/"
        target: "/usr/local/kong/logs/"
#        read_only: true
    networks:
      - backend
      - frontend
networks:
  frontend:
  backend:

 

Change the ports to host mode:

version: "3.7"
services:
  kong-proxy:
    image: kong:1.0.3-alpine
    deploy:
      mode: global
      labels:
        - "tier=frontend"
      restart_policy:
        condition: any
    ports:
      - target: 8000
        published: 80
        mode: host
      - target: 8443
        published: 443
        mode: host
    depends_on:
      - database-postgresql
    environment:
      KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
      KONG_DATABASE: postgres
      KONG_PG_DATABASE: kong
      KONG_PG_USER: kong
      KONG_PG_PASSWORD: PaSsW0rd
      KONG_PG_HOST: database-postgresql
      KONG_PG_PORT: "5432"

    volumes:
      - type: "bind"
        source: "/var/log/kong/"
        target: "/usr/local/kong/logs/"
#        read_only: true
    networks:
      - backend
      - frontend
networks:
  frontend:
  backend:
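
To verify, hit the proxy from a machine outside the swarm and check the access log on the node that served the request; with host-mode publishing the log should show the client's real IP instead of an address from the ingress overlay network (the host path below comes from the bind mount above, and the exact file name depends on your Kong logging configuration):

# from a client machine; <swarm-node> is a placeholder for a node's address
curl -s -o /dev/null http://<swarm-node>/

# on that swarm node
tail -n 5 /var/log/kong/access.log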

 

Aggregate in MongoDB in C#

Filter records from the “OperationSession” collection, sort by “CreateTime” descending, group by “WorldId”, pick the first record from each group, then sort and page the result:

Way #1:

db.getCollection('OperationSession').aggregate(
[
  { "$match": {"ActivityId":74,"GameId":2109} },
  { "$sort":{ "CreateTime" : -1} },
  { "$group":
    { 
        _id:"$WorldId",
        SessionId:{"$first": "$_id" },
        GameId:{"$first": "$GameId" },
        WorldId:{"$first": "$WorldId" },
        ActivityId:{"$first": "$ActivityId" },
        Type:{"$first": "$Type" },
        Status:{"$first": "$Status" },
        ActivityStatus:{"$first": "$ActivityStatus" }
     }
   },
   { "$sort":{ "WorldId" : 1} },
   { "$skip": 20},
   { "$limit": 10}
  ]
)

 

Way #2:

db.OperationSession.aggregate()
      .match({"ActivityId":74,"GameId":2109})
      .sort({"CreateTime":-1})
      .group({
            "_id":"$WorldId",
            "SessionId":{"$first": "$_id" },
            "GameId":{"$first": "$GameId" },
            "WorldId":{"$first": "$WorldId" },
            "ActivityId":{"$first": "$ActivityId" },
            "Type":{"$first": "$Type" },
            "Status":{"$first": "$Status" },
            "ActivityStatus":{"$first": "$ActivityStatus" }
        })
      .sort({"WorldId":1})
      .skip(20)
      .limit(10)

 

In C#:

collection.Aggregate<DataEntity.OperationSession>()
                .Match(s => s.ActivityId == 74 && s.GameId == 2109)
                .SortByDescending(s => s.CreateTime)
                .Group(
                    s => s.WorldId,
                    s => new Interface.OperationSession
                    {
                        SessionId = s.Select(x => x.Id).First(),
                        GameId = s.Select(x => x.GameId).First(),
                        WorldId = s.Select(x => x.WorldId).First(),
                        ActivityId = s.Select(x => x.ActivityId).First(),
                        Type = s.Select(x => x.Type).First(),
                        Status = s.Select(x => x.Status).First(),
                        ActivityStatus = s.Select(x => x.ActivityStatus).First()
                    })
                .SortBy(s => s.WorldId)
                .Skip(20)
                .Limit(10).ToList();

Split string to array by delimiter in shell

#!/bin/bash

STR="Sarah,Lisa,Jack,Rahul,Johnson" #String with names
IFS=',' read -ra NAMES <<< "$STR" #Convert string to array

#Print all names from array
for name in "${NAMES[@]}"; do
  echo "$name"
done

#Print all indices from array
for index in "${!NAMES[@]}"; do
  echo "$index"
done

ref: https://tecadmin.net/split-a-string-on-a-delimiter-in-bash-script/

Notify script is not working on Keepalived v1.3.5

On our internal network we rely on Keepalived's VIP mechanism and health-check scripts to provide highly available services and smart service selection. However, the internal network devices and environment are complex, so ARP broadcasts inside the overlay network built on top of it do not propagate reliably: after a Keepalived master/backup failover, clients never learn the MAC address of the new active node and therefore cannot reach the correct service node. The idea was to use Keepalived's notify scripts to run a script on all clients after a state transition and update the MAC address there.

The script is simple: it defines the list of client nodes and SSHes into each one to set the MAC address (passwordless login between the nodes is configured via SSH keys):

#!/bin/bash

# Client nodes whose ARP cache entry for the VIP needs to be refreshed.
declare -a nodes=(
  "192.168.126.8"
  "192.168.126.9"
  "192.168.126.10"
  "192.168.126.11"
  "192.168.126.12"
  "192.168.126.13"
  "192.168.126.14"
  "192.168.126.15"
)

# Only the node that currently holds the VIP should announce its MAC.
ip addr show eth0 | grep -q 192.168.126.99
foundVip=$?

if [ $foundVip -eq 0 ]; then
  MAC=$(ip addr show eth0 | grep ff:ff:ff:ff:ff:ff | awk '{print $2}')
  for ip in "${nodes[@]}"; do
    ssh root@$ip arp --set 192.168.126.99 $MAC
  done
fi

Environment: CentOS 7.6, kernel 4.20.5, Keepalived v1.3.5

Following the usual convention for health-check scripts (track_script and MISC_CHECK in real_server), I put the script under /usr/libexec/keepalived, gave it execute permission, and verified that it works when run on its own. I then configured the notify script path and the executing user root in the vrrp_instance, like this:

vrrp_instance VI_POSTGRESQL {
...
    virtual_ipaddress {
        192.168.126.99/24 dev eth0 label eth0:1
    }
...
    notify "/usr/libexec/keepalived/update_mac_for_vip.sh" root
}

After this change I stopped and started Keepalived several times, but the notify script was never executed. After a lot of Googling, some sources mentioned that one of Keepalived's patches changed the notify configuration into an array, so I tried configuring the notify scripts as a list, like this:

notify {
    "/usr/libexec/keepalived/update_mac_for_vip.sh" root
}

Still no luck.
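
At this point it is worth double-checking that the script itself still runs cleanly by hand and looking at the Keepalived log around a failover (journalctl assumes systemd, as on CentOS 7):

bash /usr/libexec/keepalived/update_mac_for_vip.sh; echo "exit code: $?"
journalctl -u keepalived -e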

 

By chance I moved the script from /usr/libexec/keepalived to /etc/keepalived, updated /etc/keepalived/keepalived.conf accordingly, and restarted Keepalived, and the notify mechanism started working.

I have not dug into the root cause or tried other directories yet; recording it here for future reference.

 

Install Python 3.6 on CentOS 7

1) Install the IUS repository

#Install the EPEL dependency
sudo yum install epel-release

#Install the IUS repository
sudo yum install https://centos7.iuscommunity.org/ius-release.rpm

2) Install Python 3.6

sudo yum install python36u

#Create a symlink (optional)
sudo ln -s /bin/python3.6 /bin/python3

3) Install pip3 (optional)

sudo yum install python36u-pip

#Create a symlink to pip3 (optional)
sudo ln -s /bin/pip3.6 /bin/pip3
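
To confirm the installation (assuming the optional symlinks above were created), check the reported versions:

python3 --version
pip3 --version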

Kubernetes 1.13.3 external etcd clean up

If something goes wrong while setting up Kubernetes, you can run kubeadm reset to reset the cluster state. However, if you use an external etcd cluster, kubeadm reset does not clear the data stored in it, which means that if you run kubeadm init again you will see data left over from the previous Kubernetes cluster.

To query and manually clean up the external etcd cluster (using Kubernetes 1.13.3 as an example):

1. Query all keys:

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes -e ETCDCTL_API=3 \
  k8s.gcr.io/etcd:3.2.24 etcdctl \
  --cert="/etc/kubernetes/pki/etcd/healthcheck-client.crt" \
  --key="/etc/kubernetes/pki/etcd/healthcheck-client.key" \
  --cacert="/etc/kubernetes/pki/etcd/ca.crt" \
  --endpoints https://etcd1.cloud.k8s:2379 get "" --prefix

2. Delete all keys:

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes -e ETCDCTL_API=3 \
  k8s.gcr.io/etcd:3.2.24 etcdctl \
  --cert="/etc/kubernetes/pki/etcd/healthcheck-client.crt" \
  --key="/etc/kubernetes/pki/etcd/healthcheck-client.key" \
  --cacert="/etc/kubernetes/pki/etcd/ca.crt" \
  --endpoints https://etcd1.cloud.k8s:2379 del "" --prefix

A few key points about these commands:

  1. They run etcdctl from the docker image k8s.gcr.io/etcd:3.2.24; an externally installed etcdctl works as well
  2. The docker -e flag sets ETCDCTL_API=3 so that etcdctl uses API version 3
  3. The external etcd CA and client certificates are mounted into the container to connect to the etcd cluster
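
After the delete, the same query can be re-run with --keys-only (a standard etcdctl v3 flag) to confirm that no keys remain:

docker run --rm --net host -v /etc/kubernetes:/etc/kubernetes -e ETCDCTL_API=3 \
  k8s.gcr.io/etcd:3.2.24 etcdctl \
  --cert="/etc/kubernetes/pki/etcd/healthcheck-client.crt" \
  --key="/etc/kubernetes/pki/etcd/healthcheck-client.key" \
  --cacert="/etc/kubernetes/pki/etcd/ca.crt" \
  --endpoints https://etcd1.cloud.k8s:2379 get "" --prefix --keys-only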

Reference:

External etcd clean up: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-reset/#external-etcd-clean-up