Preparing for a deep dive into NetApp technology! In an intelligence report to King George in 1776, England's spies wrote that John Adams's strength was that he "sees large things largely." I try to take that approach when learning a new technology: don't get caught in the minutiae, grasp the big picture first. The next few posts will be my journey into that, and in trying to encapsulate complex ideas I'm sure I'll be slightly incorrect in some of these statements. Nuance comes with time! So here we go, basic terms, spelled out in English:
Product Definitions:
- FAS system (aka filer): NetApp's term for the custom machine that manages the storage. Roughly equivalent in purpose to HP EVA, IBM XIV, etc. Capable of serving storage over Ethernet as a NAS (file-based protocols like HTTP, FTP, CIFS, NFS) or over SAN block-based protocols (FCoE, iSCSI, or FC). FAS (Fabric-Attached Storage) designates that the filer can operate on FCoE, iSCSI, or FC rather than simply as a NAS device.
- SnapVault: NetApp's disk-to-disk backup solution. Allows full or incremental backups to be transferred to a NetApp storage system. OSSV (Open Systems SnapVault) extends this so non-NetApp servers can back up directly to a NetApp system.
- SnapMirror: Replication of volumes or qtrees, either asynchronous (scheduled) or synchronous (real time). Effectively creates a software-layer mirror of the data. You can't mirror an aggregate with SnapMirror from what I've read; mirroring whole aggregates is SyncMirror's job.
- MetroCluster: NetApp's DR implementation. Two options: stretch (both controllers in one datacenter) or fabric-attached (replication across an ISL (inter-site link) with one controller in each datacenter).
- SyncMirror: Synchronous mirroring of an entire aggregate by writing to two plexes at once. This aggregate-level mirroring is what underpins MetroCluster.
- SnapDrive: Host-side software (Windows or UNIX flavors) that lets a server provision LUNs and manage Snapshot copies directly from the host, so snapshots are consistent with what the OS and applications see.
- FlexShare: Allows you to set processing priority for volumes within an aggregate.
- iGroup: Initiator group. All LUNs are mapped to an iGroup, which handles LUN masking based upon the client system. The iGroup basically contains the specifications for the OS/application combo to communicate with the LUN. Typically, each server (or cluster) should have its own iGroup based upon the OS, application (SQL, VMware, etc.), and SAN protocol.
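To make the iGroup idea concrete, here's a minimal sketch using Data ONTAP 7-Mode CLI syntax. The names (sql01_iscsi, the IQN, the LUN path) are made-up examples, not anything from a real system:

```shell
# Create an iSCSI (-i) initiator group for a Windows (-t windows) SQL server,
# keyed to that server's iSCSI initiator name (IQN)
igroup create -i -t windows sql01_iscsi iqn.1991-05.com.microsoft:sql01

# Map a LUN to that iGroup at LUN ID 0 -- only initiators in the
# group can see the LUN (this is the masking)
lun map /vol/sqlvol/sqldata.lun sql01_iscsi 0
```

The OS type on the iGroup matters because the filer tunes SCSI behavior (and LUN geometry) to match what that OS expects.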
Break it down: There are a few layers where the building blocks of storage are combined to form higher level concepts for easier management, each with NetApp-specific jargon. No worries, I'm here to translate and simplify:
- Layer 1: Disk drives. duh.
- Layer 2: RAID Group. This is a group of up to 28 disks operating as a pool of storage, with 16 being the common best practice. You want all the RGs in a specific aggregate to be the same size. With RAID-DP there are two parity disks per RG.
- Layer 2.5: Plex. A plex is a physical copy of the WAFL storage within the aggregate. A mirrored aggregate consists of two plexes; unmirrored aggregates contain a single plex. Take 11 players from the Chicago Bears and NE Patriots and they're a football team. Move them around a bit and you can put them in shotgun formation. You can say they're a set of players (aggregate), that they're distinctly Bears or Patriots (volumes in the aggregate), and that they're in a formation (plex)...there are many ways to view the organization of data.
- Layer 3: Aggregate. This is a group of RAID Groups. A RAID Group cannot be assigned to more than one aggregate.
- Layer 4: Volume. This is space carved out inside an aggregate. Typically this is space for 1 LUN + reserve space.
- Layer 5: LUN. This is space carved out inside a volume. There can be multiple LUNs per volume, but that can be inadvisable. The LUN is the actual virtual disk being presented to the server.
- Layer 6: QTree. Essentially, this is a special directory carved out inside a volume for particular data, sometimes with a hard quota. Despite the layer number, a qtree actually lives inside a volume rather than inside a LUN; LUNs can themselves be placed inside qtrees.
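The layers above map pretty directly onto commands. A minimal sketch, again in Data ONTAP 7-Mode syntax with made-up names and sizes:

```shell
# Layer 2/3: build an aggregate from 16 disks using RAID-DP,
# with a RAID group size of 16 (one full RG)
aggr create aggr1 -t raid_dp -r 16 16

# Layer 4: carve a 500 GB flexible volume out of the aggregate
vol create sqlvol aggr1 500g

# Layer 6: create a qtree (special directory) inside the volume
qtree create /vol/sqlvol/q1

# Layer 5: create a 400 GB LUN inside the qtree, typed for Windows
lun create -s 400g -t windows /vol/sqlvol/q1/sqldata.lun
```

Note the volume was sized larger than the LUN, leaving reserve space as mentioned in Layer 4.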
I'll keep these definitions updated as I learn the nuances or need to make corrections.