Since I'm always looking out for the underdog, I'm aware that many customers can't afford to buy 20 shelves of disks at a time. And even though they are smaller customers, getting in on the ground floor with great technology and earning a customer's loyalty is a big priority for any smart company.
If you've just invested in a small NetApp deployment, here are the questions you should be asking yourself:
1. How can I get the most usable space out of my investment?
2. How can I ensure full redundancy and data protection?
3. What configuration will squeeze the most performance out of this system?
4. Where are my performance bottlenecks today in #3's configuration?
5. How long will it take to saturate that bottleneck, and what will be my plans to expand?
I'm going to discuss the configuration options that will both maximize your initial investment and set you up for success in the long term. Be aware that this is a textbook study of tradeoffs between stability, scalability, space, and performance.
A few basics:
1. Each controller needs to put its root volume somewhere. Where you put it makes a big difference when working with fewer than 100 disks.
a. For an enterprise user, the recommended configuration is to create a 3-disk aggregate whose only responsibility is to hold this root volume, which requires no more than a few GB of space. If you only purchased 24 or 48 disks, you could understandably consider this to be pretty wasteful.
The rationale behind this setup is that you isolate these three OS disks from other IO, making your base OS more secure. More importantly, if you ever have to recover the OS due to corruption, the file system check only has to run against 3 disks rather than several dozen. Lastly, if your root volume ever runs out of space, expect a system panic. By creating a 3-disk aggregate, you protect it from being crowded out by expanding snapshots.
b. For a smaller deployment, another option is to create one aggregate spanning those 24-48 disks and have the root volume reside there. This is a valid option, and one that many customers take.
2. Each RAID group (RG) dedicates 2 disks to parity (RAID-DP). Consider this when looking at space utilization; see the sketch after this list.
3. You typically want to avoid a mixed-ownership loop/stack: within a stack of shelves, do your best to have all disks owned by a single controller. This is not always achievable right away, but it may become achievable after an expansion.
4. Before creating an RG plan for any SAN, read TR-3437 and understand it thoroughly. It covers everything you need to know.
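To make the parity and spare overhead concrete, here is a minimal sketch in Python (a hypothetical helper with illustrative numbers, not a NetApp tool) counting how many disks in a fully populated 24-disk shelf actually end up holding data:

```python
# Minimal sketch: how many disks hold data once RAID-DP parity and spares
# are accounted for. The function name and numbers are illustrative only.

PARITY_PER_RG = 2  # RAID-DP: one parity disk + one double-parity disk per RG

def data_disks(shelf_disks, raid_group_sizes, spares):
    """Data disks left on a shelf after spares and RAID-DP parity."""
    assert sum(raid_group_sizes) + spares == shelf_disks
    parity = PARITY_PER_RG * len(raid_group_sizes)
    return shelf_disks - spares - parity

# One 24-disk shelf split into RGs of 11 and 12, with one spare held back:
print(data_disks(24, [11, 12], spares=1))  # -> 19 data disks out of 24
```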
Scenario: You purchase a FAS3100 series cluster with 2 shelves of 450GB SAS disks and no Flash Cache. Here are a few of the options available. Note: these IOPS numbers are not meant to be accurate, just to illustrate the relative merits of each configuration.
1. Create two stacks of 1 shelf. Create two RGs per shelf, of 11 and 12 disks, combined into one aggregate. Leave one spare disk per shelf. Results:
Usable space: 12.89TB
IOPS the disks can support: 175 IOPS/disk * 19 data disks = 3325 IOPS per controller.
Advantages: Easily expandable (existing RGs can be expanded onto new shelves, improving stability), full controller CPU utilization available, volumes on each controller are shielded in terms of performance from volumes on the other controller.
Disadvantages: Lower usable space, lower IOPS available for any one volume, no RG dedicated to root volume.
2. Create two stacks of 1 shelf. Create 1 RG per shelf of 23 disks each, combined in one aggregate. Leave one spare disk per shelf. Results:
Usable space: 14.26TB
IOPS the disks can support: 175 IOPS/disk * 21 data disks = 3675 IOPS per controller.
Advantages: Higher usable space, full controller CPU utilization available, volumes on each controller are shielded in terms of performance from volumes on the other controller.
Disadvantages: Lower IOPS available for any one volume, no RG dedicated to root volume, lower data protection because of the large RG size, lower stability when expanded because the entire RG is located in one shelf.
3. Create 1 stack of 2 shelves for an active/passive config. Create 4 RGs (14, 14, 15, 3), with the large RGs combined into one aggregate and the 3-disk RG in another. Leave one spare disk per shelf. Results:
Usable space: 12.39TB
IOPS the disks can support: 175 IOPS/disk * 37 data disks = 6475 IOPS, all on the active controller.
Advantages: High IOPS available for any one volume, expandable (existing RGs can be expanded onto new shelves, improving stability).
Disadvantages: Lower usable space, no RG dedicated to active controller root volume, only half the CPU power of the cluster used.
4. Create 1 stack of 2 shelves for an active/passive config. Create 3 RGs (22, 21, 3), with the two largest combined into one aggregate and the 3-disk RG in another. Leave one spare disk per shelf. Results:
Usable space: 13.08TB
IOPS the disks can support: 175 IOPS/disk * 39 data disks = 6825 IOPS, all on the active controller.
Advantages: Highest IOPS available for any one volume.
Disadvantages: Lower usable space, no RG dedicated to the root volume on the active controller, lower data protection because of the large RG size, lower stability when expanded because entire RGs are confined to the original two shelves, only half the CPU power of the cluster used.
5. Create 1 stack of 2 shelves for an active/passive config. Create 4 RGs (20, 20, 3, 3), with the two largest combined into one aggregate and the two 3-disk RGs as dedicated root aggregates. Leave one spare disk per shelf. Results:
Usable space: 11.89TB
IOPS the disks can support: 175 IOPS/disk * 36 data disks = 6300 IOPS, all on the active controller.
Advantages: RGs dedicated to the root volumes, high IOPS for the active controller.
Disadvantages: Lower usable space, lower data protection because of the large RG size, lower stability when expanded because entire RGs are confined to the original two shelves, only half the CPU power of the cluster used.
Here's the breakdown:
Option                               Usable space   Rough IOPS             Dedicated root aggregate
1. Two stacks, RGs of 11+12/shelf    12.89TB        3325 per controller    None
2. Two stacks, RG of 23/shelf        14.26TB        3675 per controller    None
3. Active/passive, 14+14+15+3        12.39TB        6475 (active only)     Passive controller only
4. Active/passive, 22+21+3           13.08TB        6825 (active only)     Passive controller only
5. Active/passive, 20+20+3+3         11.89TB        6300 (active only)     Both controllers
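If it helps to sanity-check the arithmetic, here is a small Python sketch (hypothetical names; 175 IOPS/disk is the same rough planning figure used above, not a measured value) that reproduces the workload data-disk and IOPS counts for the five layouts. Data disks inside a dedicated 3-disk root RG are excluded, since they aren't available to workloads:

```python
# Back-of-envelope comparison of the five layouts above. Illustrative only.

IOPS_PER_DISK = 175   # rough planning figure from the scenario, not measured
PARITY_PER_RG = 2     # RAID-DP

def workload_data_disks(raid_groups):
    """Data disks serving workloads; 3-disk root RGs contribute nothing."""
    return sum(size - PARITY_PER_RG for size in raid_groups if size > 3)

layouts = {
    "1: two stacks, RGs 11+12 (per controller)": [11, 12],
    "2: two stacks, RG of 23 (per controller)":  [23],
    "3: active/passive, 14+14+15+3":             [14, 14, 15, 3],
    "4: active/passive, 22+21+3":                [22, 21, 3],
    "5: active/passive, 20+20+3+3":              [20, 20, 3, 3],
}

for name, rgs in layouts.items():
    disks = workload_data_disks(rgs)
    print(f"{name}: {disks} data disks, ~{disks * IOPS_PER_DISK} IOPS")
```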
This post is long enough already, so I'll keep the conclusion short: understand the requirements of your application, and use the examples above to help customize a NetApp system to meet those specs at a low price.