Red Hat OpenStack 4.0 vlan mode for multi-nodes installation

Scenario One : 1 controller node + 2 compute nodes + neutron network + vlan mode

Server Spec : 4 x 1Gb/s NICs per server

Physical Switch and port arrangement :
nic1 –> ext –> vlan10, untagged
nic2 –> mgt –> vlan20, untagged
nic3 –> vm/instance network –> vlan30, trunk enabled (carries the tenant VLAN tags)
nic4 –> unused –> vlan40, untagged

Network Topology and arrangement

OS layer software and network arrangement :
Controller node :
Base OS : RHEL 6.5
Disable NetworkManager : "service NetworkManager stop ; chkconfig NetworkManager off"
Disable SELinux : vi /etc/selinux/config , set SELINUX=disabled

cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=controller.zzzzz.com
cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
BOOTPROTO="static"
IPV6INIT="yes"
MTU="1500"
IPADDR=172.16.26.102
NETMASK=255.255.0.0
GATEWAY=172.16.1.254
DNS1=8.8.8.8
ONBOOT="yes"
TYPE="Ethernet"
cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.43.102
DEFROUTE=no
cat /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
TYPE=Ethernet
ONBOOT=no
DEFROUTE=no
cat /etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE=eth3
TYPE=Ethernet
ONBOOT=no
DEFROUTE=no

Compute node 1:
Base OS : RHEL 6.5
Disable NetworkManager : "service NetworkManager stop ; chkconfig NetworkManager off"
Disable SELinux : vi /etc/selinux/config , set SELINUX=disabled

cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=compute1.zzzzz.com
cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
BOOTPROTO="static"
IPV6INIT="yes"
MTU="1500"
IPADDR=172.16.26.103
NETMASK=255.255.0.0
GATEWAY=172.16.1.254
DNS1=8.8.8.8
ONBOOT="yes"
TYPE="Ethernet"
cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.43.103
DEFROUTE=no
cat /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
TYPE=Ethernet
ONBOOT=no
DEFROUTE=no
cat /etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE=eth3
TYPE=Ethernet
ONBOOT=no
DEFROUTE=no

Compute node 2:
Base OS : RHEL 6.5
Disable NetworkManager : "service NetworkManager stop ; chkconfig NetworkManager off"
Disable SELinux : vi /etc/selinux/config , set SELINUX=disabled

cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=compute2.zzzzz.com
cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth0"
BOOTPROTO="static"
IPV6INIT="yes"
MTU="1500"
IPADDR=172.16.26.104
NETMASK=255.255.0.0
GATEWAY=172.16.1.254
DNS1=8.8.8.8
ONBOOT="yes"
TYPE="Ethernet"
cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.43.104
DEFROUTE=no
cat /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
TYPE=Ethernet
ONBOOT=no
DEFROUTE=no
cat /etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE=eth3
TYPE=Ethernet
ONBOOT=no
DEFROUTE=no

Reboot all machines

Use subscription-manager to register the machines to RHSM :
Please follow the SOP in section 2.1.2 of this guide : Red_Hat_Enterprise_Linux_OpenStack_Platform-4-Getting_Started_Guide-en-US

Make sure every server's yum repositories are set up correctly.
They must be able to access the RHEL 6 and openstack-4.0 packages:
rhel-6-server-openstack-4.0-rpms(Red Hat OpenStack 4.0 (RPMs))
rhel-6-server-rpms(Red Hat Enterprise Linux 6 Server (RPMs))

Service placement:
Controller node(192.168.43.102) : keystone,mysqld,glance,cinder,swift,ceilometer,heat,neutron(server,l3 agent, openvswitch plugin,dhcp agent,lbaas-agent,metadata-agent),nova-compute,nova-api,nova-cert,nova-conductor,nova-scheduler
Controller node(172.16.26.102) : horizon,vncproxy,nagios
Compute node(192.168.43.103,192.168.43.104) : nova-compute,openvswitch plugin

Install the openstack deployment tool packstack:
yum install openstack-packstack -y

Run the packstack using this answer file: multi-node-vlan.txt
packstack --answer-file=multi-node-vlan.txt
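The full answer file is not reproduced here, but for a VLAN-mode deployment the packstack keys that matter look roughly like the excerpt below. This is a sketch based on the Havana-era packstack key names; the VLAN range, bridge/interface names, and host IPs are this lab's assumptions, so adjust them to your environment:

```ini
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:30:30
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth2
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth2:eth2
CONFIG_NOVA_COMPUTE_HOSTS=192.168.43.102,192.168.43.103,192.168.43.104
```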

Post Install

1.
Controller node : switch the token format to UUID and restart Keystone
vi /etc/keystone/keystone.conf
token_format = UUID
service openstack-keystone restart

2.
Controller and compute nodes : bring up eth2 (the instance/VLAN trunk NIC)
vi /etc/sysconfig/network-scripts/ifcfg-eth2
ONBOOT=yes
service network restart
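Step 2 can be scripted rather than edited by hand on each node. A minimal sketch, demonstrated here on a temporary copy of the stock ifcfg-eth2; on a real node you would point CFG at /etc/sysconfig/network-scripts/ifcfg-eth2 and follow up with service network restart:

```shell
# Work on a temp copy of the ifcfg-eth2 shown earlier.
CFG=$(mktemp)
printf 'DEVICE=eth2\nTYPE=Ethernet\nONBOOT=no\nDEFROUTE=no\n' > "$CFG"

# Flip ONBOOT so eth2 comes up on the next network restart.
sed -i 's/^ONBOOT=no/ONBOOT=yes/' "$CFG"
grep '^ONBOOT' "$CFG"    # → ONBOOT=yes
```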

3. Configure the bridge for the external network : br-ex
Controller node
vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=172.16.26.102
NETMASK=255.255.0.0
GATEWAY=172.16.1.254
ONBOOT=yes

vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
service network restart

Verify the Installation :

Get the admin login password by "cat ~/keystonerc_admin" (packstack writes the credentials file into root's home directory)

Log in to the dashboard and create the networks : http://172.16.26.102/dashboard

Create external network and subnet:
Networks > Create Network > Name : pub > Project : admin > tick "Admin State" and "External Network"
Networks > pub > Create Subnet > Subnet Name = pubsub , Network Address = 172.16.0.0/16, IP Version=IPv4, Gateway IP = 172.16.1.254 > Subnet Detail > Allocation Pools = 172.16.26.30,172.16.26.100 > Create
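The same external network and subnet can also be created from the CLI with the Havana-era neutron client. The commands below are a sketch of the dashboard steps above, not verified against this exact deployment; source the admin credentials first:

```shell
source ~/keystonerc_admin
neutron net-create pub --router:external=True
neutron subnet-create pub 172.16.0.0/16 --name pubsub \
    --gateway 172.16.1.254 --disable-dhcp \
    --allocation-pool start=172.16.26.30,end=172.16.26.100
```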

Create Router and set gateway:
Click Project > Routers > Create Router > Router name = router1 > Create router
Click Project > Routers > Click "Set Gateway" for router1 and select the pub network

Create private network:
Click Project > Networks > Create Network > Network Name = priv > Subnet * > Subnet Name = privsub , Network Address = 10.0.0.0/24, IP Version=IPv4, Gateway IP = leave blank > Create

Create or upload image:
Click Project > Images & Snapshots > Name = Cirros_img > Description = CirrOS > Image Source = Image Location > Image Location = http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img > Format = QCOW2 – QEMU Emulator > Tick Public > Create Image

Create Instance :Click Project > Instances > Launch Instance > Availability Zone = nova > Instance Name = test > Flavor = m1.tiny > Instance Count = 3 > Instance Boot Source = Boot from image > Select Image = Cirros_img > Networking to select priv > Launch

vSphere 5.5 highlights : VSAN & vFRC

Although this is a bit old now (from October), today I am in the mood to introduce the two new features VMware rolled out: VSAN & vFRC.

VSAN is Virtual SAN: as the name says, a virtualized SAN, which is to say storage.
What is so special about storage? The "v" in front: the storage itself has been virtualized. A few years ago the word "virtualization" made MIS people cringe, but by now it is accepted, because everything must move to the cloud and be automated.

These days, when designing a virtualization architecture, storage is the biggest headache after the network. Storage is the heart of the whole virtualized environment.
Three problems will bother you: HA, Performance, Scalability.

  1. How do you design the architecture to achieve HA or redundancy?
  2. How do you increase performance so VMs can read and write faster?
  3. When the number of VMs and their I/O demands grow, how do you expand to cope?

VSAN answers those three questions as follows:

  • Under VSAN, the hypervisor and the storage are one unit, a so-called hyperconverged infrastructure. Each host contributes its local, partition-free HDDs & SSDs to the VSAN storage pool. VSAN needs at least three hosts, each with at least one SSD and one HDD free of any partitions. The SSD does not count toward capacity; it mainly serves as the read cache and write buffer. As for HA, with this three-host setup the rule is: (2n+1) = number of hosts required, where n is the number of hosts or HDDs allowed to fail. This is defined in a storage policy, and n=1 is the default policy. A policy can be applied to individual VMs or VMDK objects, and you can define your own, e.g. n=2, which then requires 5 hosts, each again with at least one partition-free SSD and HDD. Some VMs matter more, so give them a better policy. This is so-called policy-driven storage, the foundation of software-defined storage. See the pictures:

VSAN as scale out architecture both for storage and hypervisor

1 host or 1 HDD failure is tolerable for this VM


  • Besides the number of tolerable host/HDD failures, a storage policy has two more parameters that define performance requirements: Number of disk stripes per object and Flash read cache reservation. Number of disk stripes per object says how many HDDs this object should be striped across, possibly HDDs within one host, possibly spread over several hosts, with the aim of increasing read/write performance. Flash read cache reservation defines how much SSD or flash to reserve as cache for that VM/VMDK object, specifically to boost read performance, which indirectly boosts writes as well. In other words, when what VSAN gives you by default is not enough, this is the remedy; you rarely need to go that far. See the pictures:
Storage policy

Parameters can be configured for a vm storage policy

  • The third question is about scalability. Under VSAN, if you want more VMs you add hosts, and if you want more space you add HDDs & SSDs; it is purely horizontal scaling. Compute power grows with the hosts, capacity grows with the HDDs and SSDs, and the configurable maximums grow along with them. Note that the current beta's configurable maximums are limited:
    http://www.virtual-blog.com/2013/09/vmware-virtual-san-scalability-limits-vsan/

Some screenshots to get to know VSAN

VSAN status

VSAN's disk groups & backing devices

The other highlight is vFRC (Virtual Flash Read Cache). Forget VSAN for a moment: vFRC is an acceleration option for centralized storage. It uses SSDs or PCIe flash devices on the hosts to build a cache, and an appropriate amount of cache is assigned to each individual VM. The procedure is to format the hosts' SSDs or PCIe flash devices as VFFS through the vSphere Web Client; the VMs on the host can then consume it, improving read speed and response time. VFFS is itself a kind of cluster file system, spanning multiple hosts. See the pictures:

An aggregated flash pool to offer read cache

VFFS Pool

Add vFRC

Create or add vFRC from SSD backing devices

Assign vFRC to a VM

Flash Read Cache Advanced setting


I personally really like the VSAN architecture; it is a very good fit for VDI. Simple concept, simple setup, simple scaling: three hosts are enough to start a VDI deployment. The biggest difference from the traditional approach is the VM storage policies: within the resources at hand (so many hosts, SSDs, HDDs) you define various policies and apply them to VMs of different importance. Traditionally these would have been different tiers of storage. VSAN certainly has an API, and that should be the foundation of software-defined storage.

vCenter : Prepare SQL DB for vCloud and VDI installation

For the detailed steps, please refer to :
http://vmwaremine.com/2012/11/12/prepare-dbs-for-vsphere-5-1-installation-or-upgrade-part-1/

The important thing is to execute the following 4 DB scripts in MSSQL Management Studio, which will create the relevant DBs and users. Please change the passwords in the scripts accordingly.

vcdb_db_schema  <DB schema for vCenter Server>
vum_db_schema  <DB schema for VMWare Update Manager>
RSA_db_schema  <DB schema for vCenter SSO RSA DB>
RSAUser_db_schema <DB schema for vCenter SSO RSA USER>

Then configure the privilege for vpxuser and vumuser.

Then configure the MS SQL ODBC connections on the vCenter server.
Note : only the 32-bit ODBC tool (C:\Windows\SysWOW64\odbcad32.exe) can be used to establish the VUM DB connection.
For the VCDB, you can use the built-in 64-bit ODBC tool on 2008R2.

vCloud Director : How to renew SSL certificate?

By default, the SSL certificates generated by the following commands are valid for only 90 days :

#ssh root@vcloudip
#cd /opt/vmware/vcloud-director/jre/bin/
#./keytool -keystore certificates.ks -storetype JCEKS -storepass yourpasswd -genkey -keyalg RSA -alias http
#./keytool -keystore certificates.ks -storetype JCEKS -storepass yourpasswd -genkey -keyalg RSA -alias consoleproxy

We can create longer-lived certificates (for example, 360 days) by adding the -validity option, as in the following commands:

#ssh root@vcloudip
#cd /opt/vmware/vcloud-director/jre/bin/
#./keytool -keystore certificates.ks -storetype JCEKS -storepass yourpasswd -validity 360 -genkey -keyalg RSA -alias http
#./keytool -keystore certificates.ks -storetype JCEKS -storepass yourpasswd -validity 360 -genkey -keyalg RSA -alias consoleproxy
#cp certificates.ks /opt/vmware/vcloud-director/ssl/
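Before copying the keystore into place, you can confirm the new validity period with keytool's standard -list option (same keystore path and password as above):

```shell
#./keytool -list -v -keystore certificates.ks -storetype JCEKS -storepass yourpasswd
```

Check the "Valid from ... until ..." line printed for each alias (http and consoleproxy).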

After creating the certificates, use the following commands to replace the old certificate:

#ssh root@vcloudip
#service vmware-vcd stop
#cd /opt/vmware/vcloud-director/bin/
#./configure
Specify your generated SSL certificate's path(this example: /opt/vmware/vcloud-director/ssl/certificates.ks)
Enter the keystore and certificate passwords.
Please enter the password for the keystore:
Please enter the private key password for the 'http' SSL certificate:yourpasswd
Please enter the private key password for the 'consoleproxy' SSL certificate:yourpasswd
Choose "Yes" when it asks you to start the VCD service.

On the browser side, clear the browser's cookies and cache, then open the vCloud Director URL.

A brief introduction to Zabbix

Zabbix Logo

Zabbix is a monitoring suite
that can monitor the devices on your network.

The main monitoring methods it uses are Zabbix Agent, SNMP, SNMP trap, IPMI, SSH, TELNET, WEB, Database, JMX, and custom scripts (which Zabbix calls External check).

The Zabbix Agent comes in two modes, active and passive:

Passive Agent : the zabbix server periodically polls the zabbix agent on the client device to fetch its readings
Active Agent : the zabbix agent on the client device periodically reports its own readings

The Zabbix Agent has many built-in OS-level monitoring items:
https://www.zabbix.com/documentation/2.0/manual/config/items/itemtypes/zabbix_agent

You can add your own methods to the Zabbix Agent to cover extra items; Zabbix calls this a UserParameter. For example, getting the IPMI SDR via KCS. Anything the OS can read can be monitored.
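A UserParameter is a single line in zabbix_agentd.conf. For example, a flexible key that reads one IPMI sensor via ipmitool; the key name and the command here are illustrative, not Zabbix built-ins:

```ini
# /etc/zabbix/zabbix_agentd.conf
# $1 is the sensor name passed in the item key, e.g. ipmi.sdr[CPU1 Temp]
UserParameter=ipmi.sdr[*],ipmitool sdr get "$1" | grep 'Sensor Reading' | cut -d: -f2
```

After restarting the agent you can test it with zabbix_get -s <host> -k 'ipmi.sdr[CPU1 Temp]'.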

Zabbix's core monitoring concepts : Host, Item, Trigger, Action

Host : the device to be monitored
Item : the monitoring method, i.e. how a reading is collected
Trigger : the logic that decides whether a collected value falls within the defined range
Action : what to do when it does not,
             e.g. email someone, message someone over an XMPP-based IM, send an SMS, or run scripts

Zabbix's main configuration objects : Host, Item, Trigger, Template

Host : which device to monitor
item : what to monitor on that host, which readings to collect, and how
trigger : what happens when a collected value goes out of the defined range
template : a template bundles the three painstakingly configured objects above; it can be exported to XML and imported into another Zabbix for reuse. On import you can leave out the Host so the template stays generic. Everything is done in the Web UI, which is very convenient.

Differences from Nagios :

1. Zabbix does not need RRD installed to get graphing; it has simple graphing built in, and every collected value comes with a basic trend chart.
2. A Nagios NRPE plugin (perl script) decides on the client side whether something is a problem and reports the verdict back;
Zabbix instead does the logic on the server side, so the thresholds are very flexible to set and change.
For example :
    Problem: If cpu1 temperature over 60 °C for last 10 minutes,
        then define it as a warning event.
    Recovery: If cpu1 temperature is within 20~60 °C for last 10 minutes,
        then define it as a recovery event.
3. All monitoring configuration is done in the WEB UI with mouse and keyboard.
4. Built-in monitoring for IPMI devices; version 2.2 adds the ability to monitor discrete sensors.
5. The front end is PHP.
6. Zabbix is tied to a DB; Nagios needs no DB.
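In Zabbix 2.0-style syntax, the two temperature rules in point 2 map onto trigger expressions roughly like this (the item key cpu1.temp is made up for the example):

```ini
Problem:  {host:cpu1.temp.min(10m)}>60
Recovery: {host:cpu1.temp.max(10m)}<60 & {host:cpu1.temp.min(10m)}>20
```

min over the last 10 minutes being above 60 means every sample was above 60, which matches the "for last 10 minutes" wording.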

Some screenshots:

Installing multiple servers remotely with PXE

Required equipment

1. 1 PXE server (dhcp, tftp, apache)

2. 1~40 blank servers

3. every server needs BMC functionality (without it you can still power on each machine and adjust the BIOS by hand)

4. one 48-port switch

Tools used on the PXE server

1. dhcpd, tftpd, apache

2. ipmitool (to control the BIOS boot sequence)
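If every server has a BMC, the boot-order step can be driven from the PXE server with standard ipmitool subcommands. A sketch, with made-up BMC addresses and credentials:

```shell
# Tell each node to PXE-boot on the next power-up, then power it on.
for bmc in 10.0.0.{101..140}; do
    ipmitool -I lanplus -H "$bmc" -U admin -P password chassis bootdev pxe
    ipmitool -I lanplus -H "$bmc" -U admin -P password power on
done
```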

Prepare the tools

#yum install dhcp tftp-server httpd

Configure dhcpd.conf
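A minimal PXE-capable dhcpd.conf looks roughly like the fragment below; the subnet, range, and server IP are made up for illustration. next-server points at the tftp server and filename at the pxelinux boot loader:

```ini
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.140;
    option routers 192.168.1.1;
    next-server 192.168.1.10;   # the tftp server
    filename "pxelinux.0";      # boot loader fetched over tftp
}
```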


Deploying a cluster with perceus

perceus is a piece of software (a single command) that can be used to deploy, install, and manage cluster systems.
It deploys on a master-slave model: the master is the machine that manages and assigns the vnfs images and IPs to the slave nodes.
Once assigned, each node only has to boot in PXE mode and it will pull the OS the master assigned to it (transferred via tftp).
Different cluster groups for different services can be deployed very quickly. Because everything runs in memory, the nodes do not necessarily need a disk,
hence the name diskless node.

A present-day master machine can withstand 32 nodes booting at the same time. If the vnfs image is made lean enough, 512 nodes boot in roughly 4 minutes.
See the official manual : http://altruistic.infiscale.org/docs/

Enough talk, let's get started:

1. Download perceus from the official site http://www.perceus.org/site/html/download.html ; this example uses the redhat 5 build

#wget http://altruistic.infiscale.org/rhel/5/RPMS/x86_64/perceus-1.5.2-2111.x86_64.rpm


Some useful commands for DNS bind9

bind9 is the DNS server package used on Linux.

The bind configuration files shrug off small mistakes; even with a minor error, the server can still restart successfully.
That is a troublesome property: DNS changes take some time to take effect, and discovering the error only then wastes time.

Several bind-related commands can help you confirm the configuration is error-free before restarting named (the dns server). As follows:

1. named-checkconf (checks whether the named.conf file is OK)

named-checkconf /var/named/named.conf

2. named-checkzone (checks whether an individual zone file is OK)

named-checkzone zawmin.com /var/named/named.zawmin.com.conf

3. named-bootconf (converts an old bind4 named.boot file into bind8 format)
named-bootconf < named.boot > named.conf
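The two checks combine naturally into a guard in front of any restart. A small sketch using the commands above, with the same example paths; adjust them to your layout:

```shell
# Restart named only if both the main config and the zone file pass.
named-checkconf /var/named/named.conf \
    && named-checkzone zawmin.com /var/named/named.zawmin.com.conf \
    && service named restart
```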

Customizing the Linux shell environment

If you ssh into Linux with putty and the default directory color in the terminal's ls output is too dark, you can adjust it as follows:

1. Log in
2. $vi ~/.bash_profile

Add the following two lines:

LS_COLORS="no=00:fi=00:di=00;94:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35:"
export LS_COLORS

3.$source ~/.bash_profile

This example changes di from 34 (dark blue) to 94 (light blue).

While at it, also brighten the shell prompt and make it show path info by adding the following two lines to ~/.bash_profile,

PS1="\n\e[1;37m[\e[0;32m\u\e[0;35m@\e[0;32m\h\e[1;37m]\e[1;37m[\e[0;31m\w\e[1;37m]\n$ "
export PS1

4.$source ~/.bash_profile

The file types and their color codes can be found in the table below.