I built a home NAS server running NetBSD 5.1 back in Sep. 2010, on a Gigabyte D525 (Atom) board with 4GB RAM and 4*2TB HDs. I configured RAIDframe with RAID5 across 3*2TB drives plus one hot standby, which gave me 4TB of usable space.
I had root on a RAID1 (mirrored) raid partition and two (2) 2TB RAID5 partitions across 3 drives. This has worked very nicely for me, hosting all of the data needed for home.
It has also been my NAT box, running BIND9, sshd, httpd (Apache), dhcpd, ntpd, smtpd, ownCloud, Samba, etc.
Now it is the end of 2013 and I am running out of disk space; more importantly, I have not done any OS upgrade since NetBSD 5.1 (now they have 6.2).
I want to upgrade the disks to at least 3TB, or even 4TB, but more than anything I would like to run ZFS filesystems with snapshot capability….
For this, I do not think that staying with NetBSD is a good idea, and my choices are now narrowed down to either FreeNAS or NAS4Free….
Let’s see what happens….
Here are some configurations that I would like to save for future reference:
# atactl wd0 identify
Model: ST2000DL003-9VT166, Rev: CC32, Serial #: 5YD2####
Device type: ATA, fixed
Cylinders: 16383, heads: 16, sec/track: 63, total sectors: 268435455
Device supports command queue depth of 31
Device capabilities:
DMA
LBA
ATA standby timer values
IORDY operation
IORDY disabling
Device supports following standards:
ATA-4 ATA-5 ATA-6 ATA-7
Command set support:
READ BUFFER command (enabled)
WRITE BUFFER command (enabled)
Host Protected Area feature set (enabled)
look-ahead (enabled)
write cache (enabled)
Power Management feature set (enabled)
Security Mode feature set (disabled)
SMART feature set (enabled)
FLUSH CACHE EXT command (enabled)
FLUSH CACHE command (enabled)
Device Configuration Overlay feature set (enabled)
48-bit Address feature set (enabled)
Automatic Acoustic Management feature set (enabled)
SET MAX security extension (disabled)
DOWNLOAD MICROCODE command (enabled)
World Wide name
WRITE DMA/MULTIPLE FUA EXT commands
General Purpose Logging feature set
SMART self-test
SMART error logging
Serial ATA capabilities:
1.5Gb/s signaling
3.0Gb/s signaling
Native Command Queuing
PHY Event Counters
Serial ATA features:
Device-Initiated Interface Power Managment (disabled)
Software Settings Preservation (enabled)
# fdisk /dev/rwd0d
NetBSD disklabel disk geometry:
cylinders: 3876021, heads: 16, sectors/track: 63 (1008 sectors/cylinder)
total sectors: 3907029168
BIOS disk geometry:
cylinders: 1024, heads: 255, sectors/track: 63 (16065 sectors/cylinder)
total sectors: 3907029167
Partition table:
0: NetBSD (sysid 169)
start 63, size 3907029105 (1907729 MB, Cyls 0-243201/80/63), Active
1:
2:
3:
Bootselector disabled.
First active partition: 0
# disklabel -r /dev/wd0
type: ESDI
disk: ST2000DL003-9VT1
label: fictitious
flags:
bytes/sector: 512
sectors/track: 63
tracks/cylinder: 16
sectors/cylinder: 1008
cylinders: 3876021
total sectors: 3907029168
rpm: 3600
interleave: 1
trackskew: 0
cylinderskew: 0
headswitch: 0 # microseconds
track-to-track seek: 0 # microseconds
drivedata: 0
5 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 16777216 63 RAID # (Cyl. 0*- 16644*)
b: 1945125888 16777280 RAID # (Cyl. 16644*- 1946332*)
c: 3907029105 63 unused 0 0 # (Cyl. 0*- 3876020)
d: 3907029168 0 unused 0 0 # (Cyl. 0 - 3876020)
e: 1945125888 1961903168 RAID # (Cyl. 1946332*- 3876020*)
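As a sanity check on the label above (my own arithmetic, not part of the original output): the three RAID partitions tile the disk in order without overlapping, leaving only a small unused tail after the e partition.

```python
# (offset, size) in 512-byte sectors, copied from the disklabel above
parts = {
    "a": (63, 16777216),            # RAID1 root component
    "b": (16777280, 1945125888),    # first RAID5 component
    "e": (1961903168, 1945125888),  # second RAID5 component
}
total_sectors = 3907029168

prev_end = 0
for name in "abe":
    offset, size = parts[name]
    assert offset >= prev_end, f"partition {name} overlaps the previous one"
    prev_end = offset + size

print(total_sectors - prev_end)  # sectors left unused at the end -> 112
```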
$ cat raid0.conf
# raidctl config file for /dev/rraid0d
START array
# numRow numCol numSpare
1 2 1
START disks
/dev/wd0a
/dev/wd1a
START spare
/dev/wd4a
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
128 1 1 1
START queue
fifo 100
$ cat raid1.conf
# raidctl config file for /dev/rraid1d
START array
# numRow numCol numSpare
1 3 1
START disks
/dev/wd0b
/dev/wd1b
/dev/wd3b
START spare
/dev/wd4b
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
64 1 1 5
START queue
fifo 100
$ cat raid0.addwedges
newfs -O 1 -b 16k -f 4k
$ cat raid1.addwedges
#gpt create raid1
#gpt add -b 34 -i 1 -s 4294966973 -t ufs raid1
#gpt label -i 1 -l "raid1-export1" raid1
#dkctl raid1 addwedge raid1-export1 34 4294966973 ffs
# newfs -O 2 -b 64k -f 8k /dev/rdk0
$ cat notes
# raidctl -v -C raid0.conf.start raid0   # force-configure the set from the config file
# raidctl -v -I 2011041101 raid0         # write component labels with a serial number
# raidctl -v -i raid0                    # initialize parity
RAID and file system performance tuning
http://mail-index.netbsd.org/current-users/2008/08/26/msg004155.html
http://mail-index.netbsd.org/netbsd-users/2010/04/09/msg006043.html
http://mail-index.netbsd.org/netbsd-users/2010/04/09/msg006047.html
http://mail-index.netbsd.org/netbsd-users/2010/04/11/msg006050.html
http://mail-index.netbsd.org/current-users/2008/08/29/msg004212.html
64K stripe size + 64K bsize = optimal performance
32K stripe size + 32K bsize = very good
32K stripe size + 64K bsize = very good
BAD:
64K stripe size + 16K block size + 2K frag size
64K stripe size + 32K block size + 2K frag size
32K stripe size + 16K block size + 2K frag size
GOOD(3 disks RAID5):
64K stripe size + 64K block size + 4K frag size
32K stripe size + 32K block size + 4K frag size
32K stripe size + 64K block size + 8K frag size
raid1 stripe size w/ 2 disks(128 1 1 1) : 64k
raid5 stripe size w/ 3 disks(128 1 1 5) : 128k : bad
raid5 stripe size w/ 3 disks( 64 1 1 5) : 64k
raid5 stripe size w/ 3 disks( 32 1 1 5) : 32k
raid5 stripe size w/ 4 disks( 64 1 1 5) : 96k : bad
raid5 stripe size w/ 4 disks( 32 1 1 5) : 48k
64K MAXPHYS value - the largest amount RAIDframe will ever be handed for one IO
default newfs parameter
-b 16k -f 2k
raid1 optimal parameter:
(http://zhadum.org.uk/2008/07/25/raid-and-file-system-performance-tuning/)
64k(128 1 1 1) + 16k block size + 2k frag size
3 disks RAID5:
64K stripe size + 64K block size + 8K frag size
32K stripe size + 32K block size + 4K frag size
32K stripe size + 64K block size + 8K frag size
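A side note on where those stripe sizes come from (my own arithmetic, not from the mailing-list threads): the data laid down per full stripe is sectPerSU × 512 bytes per data column — RAID 5 gives one column's worth to parity, and RAID 1 mirrors, so only one copy counts. A quick check against the numbers above:

```python
SECTOR = 512  # bytes per sector

def stripe_kb(sect_per_su, num_col, raid_level):
    """Data per full stripe in KB, given a RAIDframe layout line."""
    # RAID 5 spends one column on parity; RAID 1 keeps one data copy.
    data_cols = num_col - 1 if raid_level == 5 else 1
    return sect_per_su * SECTOR * data_cols // 1024

print(stripe_kb(128, 2, 1))  # raid1, 2 disks (128 1 1 1) -> 64
print(stripe_kb(128, 3, 5))  # raid5, 3 disks (128 1 1 5) -> 128
print(stripe_kb(64, 3, 5))   # raid5, 3 disks ( 64 1 1 5) -> 64
print(stripe_kb(64, 4, 5))   # raid5, 4 disks ( 64 1 1 5) -> 96
print(stripe_kb(32, 4, 5))   # raid5, 4 disks ( 32 1 1 5) -> 48
```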