root/www/trunk/documentation/zfs.html
Revision: 642
Committed: Fri Oct 9 21:29:01 2015 UTC (8 years, 6 months ago) by laffer1
Content type: text/html
File size: 8997 byte(s)
Log Message:
add languages back to website files

File Contents

<!DOCTYPE html>
<html lang="en-US">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>MidnightBSD ZFS Documentation</title>
<link rel="shortcut icon" href="/favicon.ico">
<link rel="stylesheet" type="text/css" href="../css/essence.css">
<!-- Begin Cookie Consent plugin by Silktide - http://silktide.com/cookieconsent -->
<script type="text/javascript">
window.cookieconsent_options = {"message":"This website uses cookies to ensure you get the best experience on our website","dismiss":"Got it!","learnMore":"More info","link":null,"theme":"dark-top"};
</script>
<script type="text/javascript" src="//s3.amazonaws.com/cc.silktide.com/cookieconsent.latest.min.js"></script>
<!-- End Cookie Consent plugin -->
</head>
<body>
<div id="globe">
<div id="header"><h1 title="MidnightBSD Home"><a href="../" title="MidnightBSD Home">MidnightBSD: The BSD For Everyone</a></h1></div>
<!--#include virtual="/menu.html"-->
<div class="clear"></div>
<div id="text">
<h2><img src="../images/oxygen/doc32.png" alt="" /> ZFS Documentation</h2>
<div id="toc">
<h3>Contents</h3>
<ul>
<li><a href="#s1b">Introduction</a></li>
<li><a href="#s1c">Preparing</a></li>
<li><a href="#s1d">Examples</a></li>
<li><a href="#s1e">Recovery</a></li>
<li><a href="#s1f">Snapshots</a></li>
<li><a href="#s1g">Send/Receive</a></li>
<li><a href="#s1h">Advanced format hard drives (4k sector)</a></li>
<li><a href="index.html"><strong>Documentation</strong></a></li>
<li><a href="../wiki/"><strong>MidnightBSD Wiki (more)</strong></a></li>
</ul>
</div>
<h3 id="s1a">ZFS</h3>
<h4 id="s1b">Introduction</h4>
<p>ZFS is a file system originally developed by Sun Microsystems for Solaris. It was released as open
source under the CDDL with OpenSolaris. It was ported to FreeBSD during the 7.0-CURRENT development
cycle and imported into MidnightBSD with 0.3-CURRENT.</p>

<p>ZFS is considered an alternative file system to UFS2 in MidnightBSD. It has independent RAID features
that are not tied to GEOM classes. It does not make use of the VFS cache and has some issues with
NFS. Advantages include support for very large file systems and large pools of disks. It supports
checksum-based data integrity checking and can repair bad data when raidz is used.</p>

<p>MidnightBSD includes ZFS file system and storage pool version 6. You may access pools created
on other operating systems at or below this version. If you upgrade a pool to version 6, you will no
longer be able to read it on older versions.</p>
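
<p>To see which pool versions the current system supports, and to upgrade an older pool once you are
sure it no longer needs to be read elsewhere, commands along these lines should work (mpool is an
example pool name):</p>
<code>zpool upgrade -v<br>
zpool upgrade mpool</code>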

<h4 id="s1c">Preparing</h4>
<p>ZFS can be used in two ways. You may either dedicate entire disks to ZFS (recommended) or use
GPT partitions (mnbsd-zfs in 0.4-CURRENT) to add to a pool. ZFS shines when used with its RAID features.
</p>
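
<p>As a sketch of the GPT approach, assuming a single disk ada0 and the label disk0 (both example
names), a pool could be built on a partition like this:</p>
<code>gpart create -s gpt ada0<br>
gpart add -t mnbsd-zfs -l disk0 ada0<br>
zpool create tank /dev/gpt/disk0</code>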

<p>If you're going to use RAID, determine how many disks you want to use. It's best to group drives
of identical size. If possible, use the same brand and model of drive when mirroring.
If you have two drives, use a mirror. If you have more than two drives, consider using raidz. You
may add multiple mirror sets (two drives at a time) to the pool.</p>
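
<p>For example, a second mirror set can be added to an existing pool two drives at a time (pool and
device names here are only examples):</p>
<code>zpool add mpool mirror /dev/ad4 /dev/ad5</code>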

<p>ZFS also supports adding spare drives to the pool. They will be used automatically when a drive fails.</p>
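
<p>If a drive fails and no spare is configured, a replacement disk can be swapped in by hand; the
device names below are examples:</p>
<code>zpool replace mpool /dev/ad1 /dev/ad4</code>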

<p>It is strongly recommended to use ZFS only with amd64 MidnightBSD and only on systems with more
than 1GB of RAM. It will require tuning sysctls to get the right balance of memory usage. In particular,
you need to watch the ARC size, as it can grow very large.</p>
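
<p>A minimal sketch of such tuning, assuming the loader tunable names used by the FreeBSD-derived ZFS
port of this era (verify the exact names on your system with <code>sysctl -a | grep zfs</code>), added
to /boot/loader.conf:</p>
<code># cap the ARC at 512MB; pick a value suited to your workload<br>
vfs.zfs.arc_max="512M"</code>
<p>The current ARC size can then be watched with <code>sysctl kstat.zfs.misc.arcstats.size</code>.</p>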

<p>MidnightBSD does not support booting from ZFS at this time; it may be added in a future release. You need
a UFS/UFS2 partition for /, including /boot, but /var, /tmp, /usr and /home can be on ZFS.</p>
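
<p>As a sketch of that layout, assuming a pool named mpool and that any existing data has already been
moved aside, a ZFS-backed /home could be created like this (the same pattern applies to /var, /tmp
and /usr, though move those with care on a running system):</p>
<code>zfs create -o mountpoint=/home mpool/home</code>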

<h4 id="s1d">Examples</h4>

<p>In these examples, mpool and tank are used as pool names. You can pick any name for the pool, but tank is very
common. After creating a pool named tank, you'll see /tank.</p>

<p>You will most likely want to add <code>zfs_enable="YES"</code> to /etc/rc.conf so that ZFS is loaded on system startup.</p>

<p>Create a mirror</p>
<code>zpool create mpool mirror /dev/ad0 /dev/ad1</code>

<p>Add a spare drive</p>
<code>zpool add mpool spare /dev/ad3</code>

<p>Check status</p>
<code>zpool status</code>

<p>List information about pools</p>
<code>zpool list</code>
<code>zfs list</code>

<p>Create file systems</p>
<code>zfs create mpool/data</code>
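
<p>File systems can be tuned individually with properties; for example, to enable compression on the
new file system and confirm the setting (available properties depend on the ZFS version in use):</p>
<code>zfs set compression=on mpool/data<br>
zfs get compression mpool/data</code>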

<p>Use raidz instead (RAID 5-like mode)</p>
<code>zpool create tank raidz /dev/ad0 /dev/ad1 /dev/ad2</code>

<p>Scrub data (check for errors)</p>
<code>zpool scrub tank</code>

<h4 id="s1e">Recovery</h4>

<p>During a hardware upgrade, such as moving to a new motherboard or controller,
one might find their zpool damaged. Usually the cause is that the device name has
changed. For instance, a recent upgrade moved ad6 to ad12.</p>
<p>To fix this problem, several steps are required:</p>
<ul><li><code>rm /boot/zfs/zpool.cache &amp;&amp; shutdown -r now</code></li>
<li>
<code>zpool list</code>. This should not show the pool.
</li>
<li><code>zpool import</code>. It should show you possible pools to recover.</li>
<li>
Finally, try <code>zpool import <em>name of pool</em></code>.
</li>
</ul>

<p>To verify it worked, run <code>zpool list</code>.</p>

<h4 id="s1f">Snapshots</h4>

<p>A ZFS snapshot is a point-in-time copy or bookmark of your data. You can use
it to compare changes made to a file system or to back up a file system. This lets
you get your data back after trying an upgrade, and it is a handy way
to make copies of jails easily.
</p>

<p>You can create a snapshot named 1 using the following:</p>
<code>zfs snapshot tank/test@1</code>
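
<p>To return the file system to the state captured in that snapshot, discarding any changes made since,
you can roll back (note that rollback destroys everything written after the snapshot):</p>
<code>zfs rollback tank/test@1</code>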

<p>You can also apply a snapshot recursively to all file systems in a pool
with the <code>-r</code> flag:</p>

<code>zfs snapshot -r tank/home@now</code>

<p>As more changes occur to a file system, the amount of disk space a snapshot
takes up increases. You will want to purge old snapshots to free disk space
when they are no longer needed.</p>

<code>zfs destroy tank/home@now</code>

<p>You can use the rename command to rename a snapshot, the hold command to
prevent removal of a snapshot, and more. Consult the
relevant man pages for more information.</p>
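
<p>Depending on the ZFS version installed, renaming and holding a snapshot take roughly the following
forms (the snapshot names and the hold tag are examples):</p>
<code>zfs rename tank/home@now tank/home@before-upgrade<br>
zfs hold mytag tank/home@before-upgrade<br>
zfs release mytag tank/home@before-upgrade</code>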

<p>Finally, you can list snapshots:</p>
<code>zfs list -t snapshot</code>

<p>You can also make <code>zfs list</code> show snapshots by default by changing a pool property:
<code>zpool set listsnapshots=on tank</code></p>

<h4 id="s1g">Using Send and Receive</h4>
<p>You can send a snapshot to the same or another pool with the
zfs send and receive commands. This can be used to back up ZFS file systems
to another location, such as an external disk.</p>

<p>To back up the snapshot named 1 from the file system test:
<code>
zfs send tank/test@1 | zfs receive tank/testback
</code>
</p>
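
<p>Snapshots can also be sent to another machine over ssh, and the <code>-i</code> flag sends only the
changes between two snapshots. Host, pool and snapshot names below are examples:</p>
<code># full copy to another host<br>
zfs send tank/test@1 | ssh backuphost zfs receive backup/test<br>
# incremental follow-up with only the changes between snapshots 1 and 2<br>
zfs send -i tank/test@1 tank/test@2 | ssh backuphost zfs receive backup/test</code>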


<h4 id="s1h">Advanced format hard drives (4k sector)</h4>

<p>Many 4k sector drives do not report their sector size properly, in a misguided attempt
at backward compatibility. ZFS works fine with drives that report their sector size correctly,
but for the rest the following workaround is recommended.
</p>
<p>
<code>
# create GPT partition tables on both disks<br>
gpart create -s gpt ada0<br>
gpart create -s gpt ada1<br>

# create partitions<br>
gpart add -a 1m -t mnbsd-zfs -l drive0 ada0<br>
gpart add -a 1m -t mnbsd-zfs -l drive1 ada1<br>

# use gnop to make 4k friendly devices<br>
gnop create -S 4k gpt/drive0<br>
gnop create -S 4k gpt/drive1<br>

# make a mirror<br>
zpool create mpool mirror /dev/gpt/drive0.nop /dev/gpt/drive1.nop<br>

# export pool and remove virtual devices<br>
zpool export mpool<br>
gnop destroy gpt/drive0.nop<br>
gnop destroy gpt/drive1.nop<br>

# import and keep labels (via -d flag)<br>
zpool import -d /dev/gpt mpool<br>
</code>
</p>
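
<p>To confirm that the resulting pool is using 4k blocks, look for an ashift value of 12 in the pool
configuration reported by zdb; the exact invocation varies between versions, but something like the
following is commonly used:</p>
<code>zdb -C mpool | grep ashift</code>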


<div id="disqus_thread"></div>
<script type="text/javascript">
var disqus_shortname = 'midnightbsd';

(function() {
var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
dsq.src = 'http://' + disqus_shortname + '.disqus.com/embed.js';
(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();
</script>
<noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript>
<a href="http://disqus.com" class="dsq-brlink">blog comments powered by <span class="logo-disqus">Disqus</span></a>
</div>
<!--#include virtual="/footer.html"-->
</body>
</html>
