====== Preparing a ZFS Pool for Samba ======

The task: prepare a disk and a ZFS pool on it that will later be shared over Samba.

First, create a GPT partitioning scheme on the disk:
<code>
gpart create -s gpt /dev/ada1   # create a GPT partition table on the disk
</code>

Next, create a freebsd-zfs partition labeled disk1, using the whole disk:
<code>
gpart add -t freebsd-zfs -l disk1 /dev/ada1
</code>

<note>
You can list all your labels like this:
<code>
ls /dev/gpt
</code>
</note>
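To see the labels together with the partition layout, gpart itself can show them per disk (a quick check for the disk used above):
<code>
gpart show -l ada1
</code>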

To create a partition of a specific size (2 TB in this case), again giving the label after the -l option:
<code>
gpart add -t freebsd-zfs -l diskmirror1 -s 2T /dev/ada1
</code>

<note important>
Note! When partitioning a disk with gpart (and likewise when running zpool commands), it is recommended to use labels. That way the pool does not depend on device numbering: even if a disk later shows up under a different device name, ZFS will still find it by its label.
</note>
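A quick way to confirm which physical device each label is currently bound to (glabel is in the base system):
<code>
glabel status
</code>
The output lists each label (e.g. gpt/disk1) next to its device, which is why a pool built on /dev/gpt/* names survives device renumbering.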

Create a mirror on ZFS:
  * First create a plain pool on a single disk (note that the disk is referred to by its GPT label; the mount point here is only an example):
<code>
zpool create -m /zfsdatamirror zfsdatamirror /dev/gpt/diskmirror1
</code>
  * and then attach a second disk (again via its label, assuming it was labeled diskmirror2 in the same way), which turns the pool into a "mirror":
<code>
zpool attach zfsdatamirror /dev/gpt/diskmirror1 /dev/gpt/diskmirror2
</code>
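After attaching, the pool should show both disks (and a resilver in progress); a quick check using the pool name from above:
<code>
zpool status zfsdatamirror
</code>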

Now let's instead make a "stripe" of two disks (the labels diskstripe1 and diskstripe2 are assumed to have been created as shown above):
<code>
zpool create -m /zfsdatastripe zfsdatastripe /dev/gpt/diskstripe1 /dev/gpt/diskstripe2
</code>
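To see how the disks are laid out inside the pool (data is striped across the top-level vdevs):
<code>
zpool list -v zfsdatastripe
</code>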

Now create a filesystem named Designers on the stripe (the same works on the mirror):
<code>
zfs create zfsdatastripe/Designers
</code>
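The new filesystem is mounted automatically under the pool's mount point; a quick check:
<code>
zfs list -r zfsdatastripe
</code>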

Next, a short set of useful commands for a pool named zfspool:
<code>
zfs set mountpoint=/zfspool zfspool    # set the mount point (the path is an example)
zfs set aclmode=passthrough zfspool    # chmod does not modify ACL entries
zfs set aclinherit=passthrough zfspool # inherit ACL entries unmodified
zfs set atime=off zfspool              # do not update access times (performance)
zfs set exec=off zfspool               # forbid executing files from this filesystem
zfs set setuid=off zfspool             # ignore the setuid bit
zfs set compression=gzip zfspool       # enable gzip compression
zfs set dedup=on zfspool               # enable deduplication
</code>
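To verify what actually got applied, zfs get accepts a comma-separated list of properties:
<code>
zfs get mountpoint,aclmode,aclinherit,atime,exec,setuid,compression,dedup zfspool
</code>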

Enable ZFS in rc.conf, if not already done:
<code>
echo 'zfs_enable="YES"' >> /etc/rc.conf
</code>

In /boot/loader.conf, make sure the ZFS module gets loaded at boot:
<code>
zfs_load="YES"
</code>
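After a reboot (or a manual kldload zfs) you can confirm the module is loaded:
<code>
kldstat | grep zfs
</code>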

If for some reason the pool did not get mounted after a reboot:
<code>
zpool import -R /mnt_new -f zfspool
</code>
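If the pool name is unknown, running zpool import without arguments lists all pools available for import:
<code>
zpool import
</code>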

====== ZFS Tuning Excerpts ======

I came across a post online describing a real FreeBSD + Samba + ZFS setup and its tuning; the relevant excerpts are reproduced below.

System
=======================
  * Case - Supermicro SC733T-645B
  * MB - Supermicro X7SBA
  * CPU - Intel Core 2 Duo E8400
  * RAM - CT2KIT25672AA800
  * RAM - CT2KIT25672AA80E
  * Disk - Intel X25-V SSD (ada0, boot)
  * Disk - WD1002FAEX (ada1, ZFS "data" pool)
  * Disk - WD2001FASS (ada2, ZFS "backups" pool)

Samba
=======================

In smb.conf:

[global]
socket options = TCP_NODELAY SO_SNDBUF=131072 SO_RCVBUF=131072
use sendfile = no
min receivefile size = 16384
aio read size = 16384
aio write size = 16384
aio write behind = yes
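Before restarting Samba it is worth validating the configuration; testparm checks smb.conf syntax (the path below is the usual FreeBSD location and may differ on your install):
<code>
testparm -s /usr/local/etc/smb.conf
</code>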

ZFS pools
=======================
  pool: backups
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        backups     ONLINE       0     0     0
          ada2      ONLINE       0     0     0

errors: No known data errors

  pool: data
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          ada1      ONLINE       0     0     0

errors: No known data errors

ZFS tuning
=======================
Your tunings here are "wild" (all over the map). Your use
of vfs.zfs.txg.synctime is probably hurting you quite badly, in
addition to your choice to enable prefetching (every ZFS FreeBSD system
I've used has benefited tremendously from having prefetching disabled,
even on systems with 8GB RAM and more). You do not need
vm.kmem_size_max, only vm.kmem_size (see the loader.conf excerpt below).
Also get rid of your vdev tunings, I'm not sure why you have those.

My relevant /boot/loader.conf tunings (please note that
the version of FreeBSD you're running, and build date, matters greatly
here so do not just blindly apply these without thinking first):

# We use Samba built with AIO support; we need this module!
aio_load="YES"

# Increase vm.kmem_size to allow for ZFS ARC to utilise more memory.
vm.kmem_size="..."
vfs.zfs.arc_max="..."

# Disable ZFS prefetching
# http://
# Increases overall speed of ZFS, but when disk flushing/writes occur,
# system is less responsive (due to extreme disk I/O).
# NOTE: Systems with 8GB of RAM or more have prefetch enabled by
# default.
vfs.zfs.prefetch_disable="1"

# Decrease ZFS txg timeout value from 30 (default) to 5 seconds.  This
# should increase throughput and decrease the "bursty" behaviour that can
# happen during immense I/O with ZFS.
# http://
# http://
vfs.zfs.txg.timeout="5"

sysctl tunings
=======================
Please note that the below kern.maxvnodes tuning is based on my system
usage, and yours may vary, so you can remove or comment out this option
if you wish. The same goes for vfs.ufs.dirhash_maxmem. As for
vfs.zfs.txg.write_limit_override, you may want to leave it
commented out for starters; it effectively "rate limits" ZFS writes, and
this smooths out overall performance (otherwise I was seeing what
appeared to be incredible network transfer speed, then the system would
churn hard for quite some time on physical I/O, then fast network speed,
then physical I/O again, etc... very "bursty" behaviour overall).

# Increase send/receive buffer maximums from 256KB to 16MB.
# FreeBSD 7.x and later will auto-tune the size, but only up to the max.
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216

# Double send/receive TCP datagram memory allocation.  This defines the
# amount of memory taken up by default *per socket*.
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=131072

# dirhash_maxmem defaults to 2097152 (2048KB).  We have seen the system hit
# this limit a few times, so we should increase dirhash_maxmem to
# something like 16MB (16384*1024).
vfs.ufs.dirhash_maxmem=16777216

#
# ZFS tuning parameters
# NOTE: Be sure to see /boot/loader.conf for additional tunings.
#

# Increase number of vnodes; we've seen vfs.numvnodes reach 115,000
# at times.
kern.maxvnodes=250000

# Set TXG write limit to a lower threshold.  This helps "level out"
# the throughput rate (see "zpool iostat").  A value of 256MB works well
# for systems with 4GB of RAM, while 1GB works well for us w/ 8GB on
# disks which have 64MB cache.
vfs.zfs.txg.write_limit_override=1073741824
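These sysctl values can be tried at runtime before committing them to /etc/sysctl.conf, for example (one variable shown as an illustration):
<code>
sysctl net.inet.tcp.sendbuf_max=16777216
</code>
Note that the vfs.zfs.* loader tunables from loader.conf above only take effect at boot.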

{{tag>samba FreeBSD ZFS}}