Friday, May 13, 2011

NetApp Training Brain Dump: Aggregate/RAID Group/Shelf Planning Calculator

Capacity Planning Calculator
Part of understanding how to implement a NetApp system is figuring out the layout of your disks. Doing the math a couple of times definitely helps, but at some point it's nice to have a Capacity Planning Calculator. So here it is: a quick and easy calculator in Excel format to help you figure out the relationship between your RAID group size and quantity, shelves, drive sizes, and spare requirements.
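
If you want to sanity-check the spreadsheet, the underlying math is simple enough to sketch in a few lines of Python. This is my own rough model of what the sheet does (the function name and structure are mine, not anything from ONTAP), chaining together the reserve and parity rules listed in the notes below:

def usable_tb(drives, drive_size_gb, raid_group_size, spares=2):
    """Rough usable capacity (TB) for a RAID-DP aggregate.

    A planning estimate only -- not ONTAP's actual math:
      - RAID group size includes the 2 RAID-DP parity drives
      - 7% manufacturer reserve, 10% WAFL reserve (see notes below)
    """
    non_spares = drives - spares
    raid_groups = -(-non_spares // raid_group_size)  # ceiling division
    data_drives = non_spares - 2 * raid_groups       # drop RAID-DP parity
    raw_tb = data_drives * drive_size_gb / 1024      # GB -> TB
    return raw_tb * (1 - 0.07) * (1 - 0.10)          # disk + WAFL reserves

# Example: one 24-drive shelf of 600 GB SAS, one RAID group, two spares
print(round(usable_tb(24, 600, raid_group_size=22), 2))  # -> 9.81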

Please note that although this calculator is true to NetApp's documentation, there's no way for it to be 100% accurate without using NetApp's actual ONTAP code, which I don't have access to :-) This calculator is meant for planning in a simple, understandable format. The technical notes are below, and I highly recommend reading them before working with the tool.

Capacity Planning Calculator Link (not a virus, I promise): http://www.box.net/shared/l8v66b8jzx
I password-protected portions of the calculations to make it clear what you can edit and to keep life simple. If you wish to improve upon or edit this sheet, the password is netapp.

Enjoy!

Courtesy: me!



Notes (You're gonna want to read these):
-  Disk manufacturers reserve 7% of space to account for failed sectors.
-  WAFL reserves 10%.
-  Fields you may alter are marked white.  Do not change fields marked grey.
-  Two drives per RG are reserved for RAID-DP parity. You may edit the number of spares you wish to keep (see the worked example after these notes).
-  All numbers are in TB. Convert your drive size to TB (e.g. 300 GB ÷ 1024 = 0.293 TB).
-  The largest 15k SAS disk available as of 5.13.2011 is 600GB
-  The largest FC disk available as of 5.13.2011 is 750GB
-  Aggregate size limits do not count space lost to parity, spares, or disk reserve. See NetApp's documentation on the NOW site (login required).
-  Assumes full shelves.
-  If you want a super deep dive into space reservation with ONTAP, try here: http://rogerluethy.wordpress.com/2011/01/14/play-with-netapp-numbers/
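
To tie those notes together, here's the same arithmetic worked by hand for a two-shelf layout, since the RAID group count is where the math usually trips people up. The shelf count, 450 GB drive size, and RAID group size of 16 are just numbers I picked for illustration, not a sizing recommendation:

# Two 24-drive shelves of 450 GB disks, RAID group size 16, two hot spares.
drives      = 48 - 2                            # 46 non-spare drives
raid_groups = -(-drives // 16)                  # ceil(46/16) = 3 RAID groups
data_drives = drives - 2 * raid_groups          # 46 - 6 parity = 40 data drives
raw_tb      = data_drives * 450 / 1024          # ~17.58 TB raw
usable_tb   = raw_tb * (1 - 0.07) * (1 - 0.10)  # 7% disk + 10% WAFL reserves
print(f"{usable_tb:.2f} TB usable")             # -> 14.71 TB usable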

