Welcome

Welcome to my blog of papers on IT technology. Please provide feedback and suggestions. Enjoy.

Monday, April 13, 2015

Unsafe PowerShell Functions

Recently I have been swimming in the deep end of the PowerShell pool, and I discovered to my chagrin that some commands in PowerShell are unsafe. Let me describe what I mean by unsafe, as there are multiple interpretations depending on the context.


An unsafe command is one that interprets an empty or undefined variable as the wildcard * .


The example that got me into this is the Exchange 2010 module's Get-Mailbox command. In this context, "Get-Mailbox $_" is equivalent to "Get-Mailbox" when $_ is undefined or blank.
This means the command pulls all mailboxes, rather than erroring out or querying for the data.

One thing I realized is that you can protect the code with a conditional statement that filters out empty values before they reach Get-Mailbox:

... | Where-Object { $_ -ne "" } | Get-Mailbox

or, for those of you who like short aliases in your code:

... | ? { $_ -ne "" } | Get-Mailbox

(Note that Get-Mailbox binds its identity from the pipeline, so the trailing $_ in my original one-liner was unnecessary; $_ is only valid inside a script block.)

Monday, January 27, 2014

Availability scoring.

Oh what to do: the boss wants the system up 100% of the time, but he wants to spend nothing to do it.

How do I cope?

What if there were a way to give them a number they could quantify, so that the cost of each step toward making a system more reliable could be weighed as a cost-to-benefit ratio?

High availability is more complicated than having five 9s. There are a lot of things that can be done to improve the availability profile of an IT system. These things are varied and diverse, and have different effects on the system. In communicating all the things that need to be done, it's easy to get lost in the weeds.
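To put "five 9s" in concrete terms, here is a quick bit of arithmetic (plain math, nothing from the spreadsheet below) converting an availability percentage into allowed downtime per year:

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # about 525,960 minutes

def downtime_minutes_per_year(availability_pct):
    """Minutes of allowed downtime per year at a given availability %."""
    return (1 - availability_pct / 100) * MINUTES_PER_YEAR

for label, pct in [("two 9s", 99.0), ("three 9s", 99.9),
                   ("four 9s", 99.99), ("five 9s", 99.999)]:
    print(f"{label}: {downtime_minutes_per_year(pct):10.1f} minutes/year")
```

Five 9s allows only about 5.3 minutes of downtime per year, which is why each additional 9 gets so much more expensive.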

One of the problems is that non-techies see only the cost, and the big companies' marketing of expensive solutions and fancy equipment. But it's not just about cost; it's about taking advantage of what you already have and what you could easily do. Of course, good-quality equipment is required for good results; unreliable, inexpensive hardware is, well, unreliable. However, there are many things that can be done with new or existing equipment to get the maximum reliability it is capable of. Some things can be done inexpensively, and we need to track them and see that they are both done and maintained.

To overcome this, I created a spreadsheet with the Availability Scoring system as a prototype. The scoring I used is subjective, based on my perspective as a Microsoft Exchange engineer; it's a starting point for discussion. Your site and technology might change which options you have to select from to make the scores work. A different backup strategy would have a different maintainability profile, and of course there are different considerations for different kinds of servers and systems. Take the ball and run with it; I am sure there are a lot of things I didn't think of that could improve the reliability of the servers in your environment. Look at the aspects of the scoring system below and think of features and functions that you're not using that could be easily or cheaply added to the system.

Excel Calculator Spreadsheet for Availability calculation

(It is a multi-page spreadsheet done with Microsoft Excel. Page 1 is instructions, Page 2 is the input page, and Page 3 is the results page.)

These are the aspects of the availability scoring:

Resistance: A measure of durability. How does this feature, or set of features, impact resistance to an outage caused by a failure or assault?

Resilience: Recovery from a failure. Can the system repair, reset, or overcome an attack or failure, and bounce back from an outage?

Maintainability: How the feature impacts upkeep, and the cost in labor and time to keep the system running smoothly, covering aspects such as adding users, user support, configuration, patching, service, upgrades, etc.

Recoverability: The ability of a system to be repaired after a failure, be it a partial, minor, major, catastrophic, or total failure.

Security: A configuration, service, or device that protects the C.I.A. (confidentiality, integrity, availability) of the system.
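The scoring itself is just arithmetic. As a hypothetical sketch (the aspect weights and the 0-10 scale here are my illustration, not the spreadsheet's actual formulas), a Python version of the idea might look like:

```python
# Hypothetical sketch of combining subjective per-aspect scores (0-10 each)
# into a single 0-100 availability score. Weights are illustrative only;
# the real spreadsheet's formulas may differ.
ASPECTS = ["resistance", "resilience", "maintainability",
           "recoverability", "security"]

def availability_score(scores, weights=None):
    """Weighted average of per-aspect scores, scaled to 0-100."""
    weights = weights or {a: 1.0 for a in ASPECTS}
    total_weight = sum(weights[a] for a in ASPECTS)
    raw = sum(scores[a] * weights[a] for a in ASPECTS) / total_weight
    return raw * 10  # 0-10 average scaled to a 0-100 score

current = {"resistance": 6, "resilience": 4, "maintainability": 7,
           "recoverability": 3, "security": 5}
print(availability_score(current))  # 50.0
```

The cost-to-benefit angle falls out naturally: re-score the system with a proposed improvement applied, and compare the score delta against the cost of the change.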

In the right hands this can be a powerful tool for an organization to manage, communicate, and comprehend the complexity of making its IT infrastructure much more reliable and maintainable.

Oh, and leave me a note of what you make of it and how it helps you!

Thursday, November 17, 2011

Solar Max

Solar Maximum is a term the doomsday crowd is bandying about as the next cause of our extinction. The end of the world has been predicted at least 242 times, and such predictions have a consistent track record: as of my current knowledge, the world has not ended yet. I predict that the trend will continue.

But will Solar Max do anything? The answer is yes, it will have a real and measurable impact on the world. Billions of dollars of damage to infrastructure were incurred during the last solar maximum, in 2000.

Living with a star has its hazards. There is space weather, due to the nature of how the Sun produces light. For the purposes of protecting electronics and communication systems, it can be thought of as a solar lightning storm, and it's a much more regular event than you would expect. At the scale of the solar system, such particle emissions are directional and affect the other planets as well. The current Cycle 24 is expected to be a large one, second only to the historical record set in 1954. The impact on radio communications is significant and has been an issue of concern to amateur radio operators for some time.

Preparations
The same steps for protecting electrical and communication infrastructure from lightning and other atmospheric storms are prescribed for this. It is an excellent opportunity to review the existing protections in place, and to maintain or replace aged components and protection systems. As lightning arresters, surge suppressors, and other protection devices provide protection, they deteriorate and lose the capacity to protect the systems behind them. How often should a protection device be replaced? It depends largely on its age and the number of surges it has handled. This is also a good opportunity to inspect, and to install protection in places where it may never have been installed.

- Electrical grounding for equipment in the office, home, and data center should be inspected and repaired as necessary; this is an important factor in proper system operation for many computers.

- Lightning arresters, surge suppressors, and other protections should be placed on circuits that penetrate exterior walls. It is also time to inspect and replace older protection units:
   - Power
   - Cable TV
   - Phone
   - Antenna
   - Internet
   - and other signal lines.

As a rule of thumb, if it can carry electricity it can carry lightning.

As lightning and the impact of solar storms can penetrate most building materials, the surge suppressors used in data centers should be reviewed for age and condition. Smaller desktop units should be maintained, and replaced if they are more than 3 years old.

A further step that can help eliminate the impact of some EMI and RFI is twisting a cable bundle, the way twisted-pair cable is twisted. This exposes different sides of the wires to the electromagnetic field, which cancels or reduces the induced voltages and helps minimize the impact of whatever energy might be induced.

Monday, August 15, 2011

Cyber warfare and the national integrity of information technology.

Cyber War

Foreign and domestic forces are dedicated to the usurpation, vandalism, and destruction of the IT resources of innocent organizations at home and abroad. The nature of computer and Internet technology makes this theater of battle a turbulent landscape. The combatants range from aggressors such as individual computer hackers and vandals, ad-hoc groups, and crime syndicates, to foreign governments and their agents. The home team could be a government agency, a large corporation, a small business, home users, or sometimes a child using his home video game console. This war has no respect of persons.

The advent and prevalence of broadband to the home, and the ubiquitous reach of the Internet into culture, commerce, and government, has opened the battleground to almost everyone. Broadly, there are initiatives and steps that must be taken to defend these resources. In this paper I will address some of the steps that could change the tide of the war. Technologies that will help already exist and don't require invention; they only need refinement and adoption into mainstream deployment. There will always be more to do, as the technological realm is continually evolving. Since the war is on every Internet doorstep, countermeasures must be put at every doorstep.

Information has become the ultimate commodity. Beyond what books have been to humanity, the Internet represents an increase of dimensions in humanity's ability to communicate and keep records. Mastering this new technology will be as profound as the commercialization of paper was to the world. As of now, we have only seen the beginning, and our choices here will have a profound impact on the future.

Levels of impact

Change is the heart of the Internet. Change, as a physics professor at my school defined it, is one of the constants of the universe. The Internet is driven by constant innovation on fertile ground, and this changes how things "can be" almost constantly. This makes how government and business do things sometimes incompatible with the best practices of keeping information technology safe. Everything we do in regard to policy must work first to enable the possibilities and future that the Internet presents, and make it safe secondarily. "There's no reward in life without risk." - Barry J. Farber

Security is a limiting word; given the infinite potential of the Internet and its future, the concept needs to be expanded. Succinctly, we are talking about integrity, information flow, and command. Since all the pieces of the Internet interconnect and relate, all levels of the system affect the integrity of the Internet. Command is the control of a system or data to take an action of some kind. Integrity of command involves the technical accuracy of the actions performed, but also who can give the instruction, and what it may affect. Commands can be complex, simple, or abstract, and what we view as data has developed bewildering complexity in the digital realm. Controlling this is the essence of the ongoing battle. In this view, security is part of maintaining the whole integrity of the system and the Internet, not just protection at single points.

The integrity of the national Internet and its infrastructure depends on all the components working together. The controls must also work to enable the attributes that let the Internet continue to thrive. The Internet is a dangerous idea, like democracy, electricity, and airplanes. The fields in which such ideas grow can be tilled and fertilized, but the weeds must be controlled prudently. Wisdom is needed, as sometimes the weeds can be the germ of the next dangerous idea.

Eliminate duplication of effort:

How government acquires its Information Technology (IT) can benefit from efficiencies of scale. Taking a model from the military's NMCI approach and developing an ongoing solution would give the government, as an organization, a better ability to defend itself from the impact of the battles being fought. It would also keep federal IT resources from becoming unwitting accessories for IT assailants. Some of these strategies would also reduce the cost of IT and increase the productivity gained from it.

The national Internet structure has architectural limitations, and there are other changes that would provide better integrity to the national resource we call the Internet.
Finally, the endpoints of the great communication grid need different paradigms for the future. Some of these things are still developing, and some are waiting to be used.

Secure application design

While the operating system is part of the integrity and secure operation of an IT system (computer, server, or other network component), the applications that run on these systems are a weak link in the IT integrity equation. Many vendors are having to review their products for vulnerable coding techniques. Building software with good coding practice is not sufficient; code with exploitable vulnerabilities can be well written and follow good practice. Security and vulnerability considerations must be treated separately, and as just as critically important.

Digital Certificates

At the heart of any security system is the concept of non-repudiation. Repudiation is the claim that "I didn't send that; it's not mine," which is what we see when a vandal, forger, or thief burgles an IT resource. Many computers are used in the flood of unsolicited commercial email (spam) on the Internet. The computers and users that send the bulk of the spam are not willingly involved; their computers have been taken over, or their resources stolen. As an indicator of intent, the people who do this use the word "owned" to indicate that they have stolen ownership of the computer and its resources; it is a term taken from military-themed video games. Non-repudiation is the process of showing the legitimacy of a transaction, email, or piece of information. A digital signature, or other method, is used to press a computer version of a wax seal onto the document to authenticate it. Of course it is far more sophisticated than wax and an ornate impression, but it serves as the modern digital equivalent.

Many Internet protocols have an inherent anonymous trust. The computers don't check who a packet is from, only that it conforms to the software's requirements. Except for intentional roadblocks and protection technologies that destroy unsolicited or undesirable packets, most traffic is accepted blindly and delivered to its destination very reliably. Special protocols have been created to overcome this; they use non-repudiation methods to ensure that the information comes from a valid source. IPsec, virtual private networks, Secure Shell, and HTTPS (SSL) provide varying levels of protection.

Certificate technologies have been developed as a method to add a level of trust to these protocols and to the Internet. Computer and user certificates will enhance computer integrity and security by refusing or restricting unsigned information or data. This serves to harden communications technology against attack, and to provide traceability against hostile computer activity. The technology has gained wide use, has proven very reliable, and has extended to new applications. There are many more areas that could benefit from its adoption.
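The sign-and-verify idea can be sketched in a few lines. A caveat up front: real certificate systems use asymmetric keys (e.g., RSA), so the verifier never holds the signing secret; Python's standard library has no public-key crypto, so this sketch substitutes an HMAC shared-secret "signature" purely to illustrate the wax-seal check. The key and messages are hypothetical.

```python
import hashlib
import hmac

# Illustrative only: an HMAC shared-secret signature standing in for a
# real certificate-based (asymmetric) digital signature.
SECRET = b"demo-signing-key"  # hypothetical key for this sketch

def sign(message: bytes) -> str:
    """Press the 'wax seal' onto a message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    """Check that the seal matches the message it arrived with."""
    return hmac.compare_digest(sign(message), signature)

msg = b"Wire $100 to account 42"
sig = sign(msg)
print(verify(msg, sig))                            # seal intact: True
print(verify(b"Wire $9999 to account 666", sig))   # tampered: False
```

Any change to the message breaks the seal, which is what gives the recipient grounds to accept or repudiate it.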

System Integrity

Many strategies have been developed to protect computers from malware: computer viruses, worms, and other similar destructive code. All of these strategies should continue to be advanced. It is one matter to have the technology and another to deploy it. Three fundamental strategies are:

Detecting malware - This has a solid track record, but is reactive in nature: signature updates must be distributed before software can detect the alarming rate of new variants. In many situations this is the only available option, and it is highly successful in cleaning data received from the Internet. Heuristic approaches, which analyze computer code for suspicious signs, have gained some ground over the years; they have proven technically challenging to implement, but still hold much promise. Detection is widely used and for the most part successful, but it has as a prerequisite that the software is known to be malignant; sleeper code is a difficult threat.

Integrity monitoring - This is the opposite approach: detecting and verifying known-good signatures. The challenge here is maintaining a database of what is good or safe code, and minimizing false alerts.
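The known-good approach boils down to comparing a file's hash against a baseline recorded while the system was trusted. A minimal sketch (the file names and contents are hypothetical):

```python
import hashlib

# Sketch of integrity monitoring: compare current file hashes against a
# database of known-good signatures captured while the system was trusted.
def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

known_good = {}  # path -> expected hash (the "known good" database)
known_good["app.exe"] = sha256_of(b"original program bytes")

def check(path: str, current_bytes: bytes) -> str:
    if path not in known_good:
        return "UNKNOWN"   # not in the database: flag for review
    if sha256_of(current_bytes) != known_good[path]:
        return "MODIFIED"  # content changed since the baseline
    return "OK"

print(check("app.exe", b"original program bytes"))  # OK
print(check("app.exe", b"patched by malware"))      # MODIFIED
```

The hard part, as noted above, isn't the comparison; it's keeping the known-good database current through legitimate patching without drowning in false alerts.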

Signed code techniques
It is becoming obvious that another strategy should be considered and developed: the signed-code approach. Personal computer software is constructed from much the same building blocks as the next piece of software, program files and other related files. An integrity algorithm should be deployed to government computers to minimize the load time and performance degradation involved with scanning strategies. This technique also protects systems from rare, but difficult to find, file corruption problems.

Secure by Default

Many PCs are sold on the market unpatched and unsecured, in the belief that the manufacturer and vendor will not have liability for a security breach. Consumer and small-business computers should be sold and provided in the opposite mode. Personal computers should be deployed in a secure condition: services and functionality should be secure and disabled by default, and the new owner should be given an easy-to-use utility to enable the features he needs. The idea that features and functions should be enabled only as needed is wise policy and should be advanced, to prevent malware from getting a foothold. A federal agency (DISA) already produces vulnerability-scanning software; this technology could be advanced and made more widely available. Standards, and advances in this technology, would enable computer companies to make this a common practice.

Internet Version 6

The current version of the Internet in use in the USA is based on an old protocol standard. While this was suitable for much of the growth of the Internet, our IT needs have outgrown it. As a security measure, IPv6 will serve us well; as a strategic matter, however, the USA is falling behind the world in capability by delaying the upgrade. Parts of the global Internet are now accessible only via IPv6 networks, and as time goes on, the portion of the Internet that is inaccessible to the majority of US businesses and households is growing.
The standard the rest of the world is adopting is a new architecture and design: Internet Protocol version 6 (IPv6). US agencies, companies, and organizations need to convert their systems to this new protocol. Major changes were made that eliminate the weaknesses of the antiquated protocol. As much as possible, and as rapidly as feasible, IPv4 systems should be upgraded or replaced. This would strategically increase accessibility to the entire Internet and keep the US at least in step with the rest of the IT world.
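For a feel of the scale difference between the two protocols, Python's standard ipaddress module can show it directly (the 2001:db8:: address below is from the documentation-only prefix):

```python
import ipaddress

# The address-space jump from IPv4 to IPv6, via the stdlib ipaddress module.
ipv4_space = ipaddress.ip_network("0.0.0.0/0")  # all of IPv4
ipv6_space = ipaddress.ip_network("::/0")       # all of IPv6
print(ipv4_space.num_addresses)   # 4294967296 (2**32)
print(ipv6_space.num_addresses)   # 2**128, roughly 3.4e38

# Parsing works the same way for both versions:
addr = ipaddress.ip_address("2001:db8::1")  # documentation prefix
print(addr.version)               # 6
```

The exhaustion of the 2**32 IPv4 space is what makes the conversion a matter of when, not if.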

Signed email

Email presents a common vector for the distribution of malware, viruses, scams, and other despicable mail. Unsolicited commercial email (UCE) and malware take advantage of the anonymity of the standard Internet email protocol. An advancement that should be adopted to eliminate this weakness is the widespread use of cryptographic signatures for email. All email could be processed much more efficiently if a signature placed on legitimate email could be nearly instantly validated and the mail delivered. We are granted free speech by the Constitution, but not anonymous speech; other, better methods exist for legitimate anonymous speech.

Retire antiquated IT

Agencies are still deploying PCs with Windows XP, and have delayed upgrade plans almost indefinitely. While Windows XP is a solid, reliable old workhorse, it was written and developed well before the war over data was near the level it is at today. The product reaches end of life in 2014. Newer technologies, as well as methods of backwards compatibility, exist that make this dithering fertile ground for exploitation.

Newer operating systems have been designed and created with security at the core, and with potential for the future.

Secure baseline computers.

Redundant work is done to secure PCs when they are purchased; many agencies have their own way of securing systems, and much best-practice time and effort is wasted reproducing the same product. Secure standard system images should be centrally developed and made available for agencies to work with. By aggregating resources and knowledge, a much superior secure IT platform could be established for the federal government, and other agencies would benefit from the power of purchasing at scale, which would let manufacturers serve a common platform of millions rather than many thousands of diverse configurations. This would also allow rapid response to new threats.
Having a common platform and hardware control is necessary as advanced techniques such as JTAG hacks, thumb-drive attacks, and other cyber-attacks develop and mature.

The DOD has already fallen victim to thumb-drive vulnerabilities, and responded by banning a useful technology. This strategy has its limits and exacts a price in productivity. While it may ultimately have been the only wise choice the DOD could make, it is a destructive choice if adopted on a larger scale.

IT secret service (ITSS)

Enforcement is in many regards reactive, and needs to become much more proactive. Much as postal mail fraud has inspectors to investigate and correct suspicious activities, the Internet, in its international scope, needs a supranational Internet inspector general to enforce the integrity of the Internet in the interest of humanity. Respecting the sovereignty of nations is crucial, as is maintaining political neutrality toward the organizations of the world. This body should facilitate the ongoing patrol and sentry duty of the Internet, and assist in the technical enforcement and identification of offenders and collaborators, whether complicit or unwitting, computer or human.

Sanction against IT terror hosting Organizations.

Internet sanctions have happened before, with devastating effects; such actions must be considered in ways that prudently serve the integrity and progress of technology and the Internet. These actions should be performed with prudent deliberation and with as sound a judgment as possible.

New strategies exist that take this idea and leverage it for proactive control of network integrity. They work on the model of licensure and leasing: a computer applies for permission to connect to the Internet, and permission is granted after the prerequisite conditions are met.

There are technologies that could leverage a lesser level of sanction for network immunity. Such a system, blocking and reacting to malevolent or unapproved behaviors, could go a long way toward providing an intelligent, self-protecting capability for the Internet. It could serve organizations of all sizes, and would benefit from the advice of the ITSS above.

The Great Firewall question

China has made the bold move of enforcing censorship on the Internet, and has established an ethical quandary for corporations from free societies: how to respect Chinese sovereignty while maintaining the ethics of a company from a democratic republic.
The Chinese government is trying to block information, computer data, that it sees as dangerous. This is a seed for something greater. There are hostile entities attacking the US, and infrastructure technology needs to be implemented to protect the US network infrastructure. There are systems on the Internet that necessarily perform these functions, and they have been evolving. It's not simply a matter of suppression of information, but of defense of infrastructure.

What can technology and systems develop to protect against dangerous and destructive information, and against hackers communicating across political boundaries? This is a hard problem that, if prudently designed for, could guard against terrorist and malevolent forces.

The future

New concepts and methodologies are developed constantly in the IT realm. Wisdom and insight are needed to cultivate new IT paradigms, as computer technologies have repeatedly defied conceived limitations and proven to be one of the dangerous ideas.

Conclusion

We stand at a crossroads between an indescribably amazing future and the tragedy of a dark age. The choices we make now need to be prudent and wise, providing a nurturing environment for the innovation of the Internet and a hindrance to its evils. Wisdom, and a constructive imbalance toward supporting innovation, are paramount to achieving a great destiny.

A great hope is that many organizations, out of self-preservation, have deployed and developed some of the technologies above, and their evolution and improvement is well underway. As the cyber battleground changes and grows, all who depend on computer technology will need to become ever better soldiers. The war will never be won, but the price of the future is eternal vigilance.

Monday, August 1, 2011

Loading a Windows 7 Tablet PC

Hardware
I just picked up an Archos 9 tablet from one of my favorite scratch-and-dent tech suppliers, http://www.techforless.com/ . These are products for tech people who are not timid, people who are their own tech support. :-)


Maybe some of my techniques will be helpful to you.
You get something that is a little distressed, on the assumption you can make use of it, and little to no tech support.

Blank Hard drive 
My tablet came with no OS and a blank hard drive. It had a license sticker on the back, and no OS restore CD.
Which is fine for me, as it is now running fine with Windows 7 Ultimate and Office 2010. I had some concern that the drive had failed, but it worked fine. Perhaps some security wonk wiped it clean.

I read a variety of mixed reviews on this device; in my opinion, it's great as a tablet PC and does an amazing job of running some heavyweight software. It's not (though not so obvious to some people) a high-end, souped-up workstation PC. It runs what I need, and it browses the web just fine.

So I thought I would write up the steps I took to rebuild the OS, and share them with the group.

Who Needs a CD?
First I generated a bootable thumb drive.
I went to the computer store and bought an 8 GB high-speed thumb drive; that size had enough space for a copy of Windows 7 and all the other stuff I wanted to load on this new PC.
There are lots of directions out there, involving some fancy formatting, for making a thumb drive bootable. I tried a bunch of things, and nothing worked.

I finally used the Microsoft utility for this (Windows7-USB-DVD-tool.exe), and shazam, it worked like a charm. (I wonder what other OSes I can load on that thumb drive...)



Customizing under the hood

Once I got that going, I had to locate the appropriate drivers for the computer I was working with.

This is largely going to the website for the manufacturer doing head scratching and navigating, and downloading all the appropriate files. I packed them all in a directory for later use.

I booted the computer from the newly loaded thumb drive and installed the OS with very little fuss. I did need the mini-replicator port to get enough open USB ports to do this, and I needed a keyboard and mouse to accomplish anything, as the touch-screen drivers were not working properly yet.

My next step was to load the various drivers I had previously downloaded. It took me a little while to sort everything out, for lack of experience with the software, but I was able to figure out how to calibrate the touch screen (digitizer) and get the on-screen keyboard driver to work. That took a few hours. Much learning; it will be much easier next time.

License key woes
Meanwhile, the OS was squawking at me that it was not activated and that I had a pirated copy. That was because I hadn't put the activation key into the OS yet.

I started the activation process and kept getting weird errors when I tried to activate the OS with my license activation key (yes, it is a legit key). Very frustrating; it took a few tries, and some searching of the Internet for a clue to what the obscure error code was saying. It turned out to be dead easy: I needed to set the date and time on the computer. Then the key worked like a charm.

Install Office
I loaded up Office 2010 on the computer and activated that.


Quilting the OS
It was time to patch the thing.
(After completing the patching, there are about 75 patches on it.)
I used my favorite tool for this: AutoPatcher.

It will create a package of all the patches needed for a Windows OS, and MS Office.

I set this up on the thumb drive, and had a working PC download all the patches to the thumb drive.  
This minimized the security risk of putting an unpatched OS on the wire to download the updates.
It's one thing to put a handful of patches on a PC, versus 70-some-odd patches. This tool makes it fairly painless.

Enter the Acrobat 
Then it was time for the Adobe products: Acrobat Reader, Flash, and so on.
They also need a round or two of patching.

Video entertainment
VLC media player, in case I want to watch some video.

Cleaning crew
Then it was time to turn to performance issues. I loaded two free utilities to help with that.
First was CCleaner, to remove the accumulated garbage on the computer.

I used this to get rid of all the leftover and unneeded files on the computer.  Windows sure is messy, and leaves quite a clutter.

Organize
Once the garbage was cleaned out, I added Defraggler to the  mix to defragment the hard drive.

This is a free defragmenting utility that optimizes the hard drive. And boy did it need defragmenting.

This really helped the performance. Again, this is a tablet PC, and while I believe its CPU is mighty powerful, the system has only 1 GB of RAM to play with, so every bit helps. Defragmenting minimizes the amount of work the computer has to do to get anything done. It can also do an offline defrag of the registry files and the page file.

Vandal Resistance
I added Anti-virus software, provided to me as a free perk by work.

PREY 
I installed Prey open source anti-theft software. http://preyproject.com/
Great for any device with a built-in video camera. Turns a thief into potential prey. :-)

Trash Filtering
I set up an OpenDNS account, installed OpenDNS Updater, and set the DNS settings on the computer to do Internet filtering, ahead of the antivirus software.

Tick Tock
I also set the computer to automatically synchronize its clock to the Internet on a periodic basis, so that I wouldn't have to deal with the clock problem again. (This is a bunch of advanced settings that require enabling a system service and making a few command-line changes; the process is a separate article.)

Wallet survival.
Not counting the Microsoft software, all the components were free. (I have a Microsoft TechNet license, so the Microsoft side isn't really free either.) All of the above didn't cost more than the hardware.

If AV ain't free
If I had to pay for antivirus (and I have three systems that need something), I have been using Trend Micro Internet Security. I have found that the price on this product varies quite a bit; I bought my copy from amazon.com for $13 plus shipping, but you can pay up to $70 for the same product.

My Stuff
And now for my applications... oh Transcender, where are you? One of the points of this was to load Transcender software to help me study.

Conclusion
With some free tools and digging I saved a bucket of money, and ended up with a tablet PC my way.

Thursday, December 30, 2010

Many ways to skin a mailbox server

I was working on the design of my first Exchange 2010 enterprise mailbox server cluster, trying to make sense of all the options. I wanted to see what the best potential option was, how features affected the number of hard drives, and what the impact of using 1 TB SATA drives was in comparison to 600 GB SAS drives.
To understand the impact of the various options on the design and performance of the Exchange system, I downloaded and used the "E2010 Mailbox Server Role Requirements Calculator". This is currently at http://msexchangeteam.com/files/12/attachments/entry453145.aspx .
This is an amazing work of art, that shows what you can do with Microsoft Excel, and horribly useful when trying to account for all the factors that impact the performance of the storage array.
An excellent and in depth article about the details of the storage calculator is located  on the Microsoft exchange team blog is located at: http://msexchangeteam.com/archive/2009/11/09/453117.aspx
The article puts the challenge for me to a fine point.
“Previous versions of Exchange were somewhat rigid in terms of the choices you had in designing your mailbox server role.
The flexibility in the architecture with Exchange 2010, allows you the freedom to design the solution to meet your needs.”
While redundancy in most of the Exchange 2010 server roles is straightforward, the mailbox role in 2010 has a number of features or options that can be deployed. Some of these features can be combined with others, and some cannot.
Picking these options changes how the calculator allocates drives and other factors. The assumptions you put in affect the outcome, but while puttering with the calculator I got some wild and intractable numbers, or at least, in my daily tumble of interruptions, couldn't keep track of all the options. So I got a quiet moment and decided to create a table of all the options I was choosing, while intentionally keeping the other inputs the same.
Running through the various iterations and possibilities, I learned some rules about DAGs that I did not see documented anywhere else.
There were questions that I asked, or that other engineers I worked with asked, and I ran simulations through the calculator to see how they would impact drive counts and design.
While much of the world revolves around money, Exchange storage revolves around spindles. Drive performance is measured in IOPS (Input/Output Operations Per Second), so IOPS has long been the standard way to abstract the performance of a given hard drive or drive array. Previously there were literally hundreds of different kinds of storage disks to choose from, and IOPS was the basis for comparison. Now that the types of drives available for use in server arrays have been pared down to a dozen or so, the current calculator uses a pull-down table to select from a much shorter list. Given that I had reduced the possible drives for the array down to two options, I could see plainly in my results that the number of spindles (disks) was more important than the size of the disks.
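To make the spindles-over-size point concrete, the basic arithmetic can be sketched in a few lines. The per-mailbox IOPS profile, per-drive IOPS figures, and overhead factor below are illustrative assumptions of mine, not values taken from the calculator, which also models RAID write penalties, database copies, log I/O, and background maintenance:

```python
import math

def spindles_needed(mailboxes, iops_per_mailbox, iops_per_drive, overhead=1.2):
    """Rough spindle count: total random IOPS divided by what one drive
    delivers, padded by a safety overhead factor. Illustrative only."""
    total_iops = mailboxes * iops_per_mailbox * overhead
    return math.ceil(total_iops / iops_per_drive)

# 6000 mailboxes at an assumed 0.10 IOPS each:
sata_7k2 = spindles_needed(6000, 0.10, 75)    # assuming ~75 IOPS per 7.2k SATA drive
sas_15k  = spindles_needed(6000, 0.10, 175)   # assuming ~175 IOPS per 15k SAS drive
print(sata_7k2, sas_15k)
```

Note that these IOPS-only counts come out far below the totals in the tables that follow, because the calculator layers capacity, copy, and restore requirements on top of raw I/O.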

General parameters in design:
4 Mailbox servers at primary site
1 DAG
Log drives and data drives are all the same size
6000 Mailboxes
1 GB mailboxes

Options:
                RAID – Redundant Array of Independent Disks. This is an option in the calculator that uses the standard RAID modes to configure disks.
                JBOD – "Just a Bunch Of Disks": placing a database and its transaction logs on a single disk without leveraging RAID.
                2nd Site – Site redundancy; having email servers at more than one data center.
                Log Drives – Separate drives for storage of the log files.
                DB Copies – The number of database copies.
                Disks @ Site – The number of disks/spindles at each data center. The first seven options have only one data center, so there are no disks in the secondary column.

Tables:
These are the tables I created from all the versions of the data I put into the calculator.

DRIVE TYPE: 1000 GB 7.2k RPM

                                                          DISKS @ Site
             RAID   JBOD   2nd Site   Log Drives   DB Copies   Primary   Secondary
Option 1      x                                        2          76         x
Option 2             x                                 2           x         x
Option 3      x                            x           2          84         x
Option 4             x                                 3          52         x
Option 5      x                                        3         100         x
Option 6      x                            x           3         116         x
Option 7             x                     x         3 to 5        x         x
Option 8      x               x            x           4          84        84
Option 9      x               x                        4          76        76
Option 10            x        x                        4          36        36

DRIVE TYPE: 600 GB 15k RPM

                                                          DISKS @ Site
             RAID   JBOD   2nd Site   Log Drives   DB Copies   Primary   Secondary
Option 1      x                                        2          96         x
Option 2             x                                 2           x         x
Option 3      x                            x           2         104         x
Option 4             x                                 3          76         x
Option 5      x                                        3         108         x
Option 6      x                            x           3         136         x
Option 7             x                     x         3 to 5        x         x
Option 8      x               x            x           4         104       104
Option 9      x               x                        4          84        84
Option 10            x        x                        4          52        52

Some things I learned:
·         JBOD is not permitted with fewer than 3 copies of the mailbox database, or with separate log drives
·         A drive size of 2000 GB didn't reduce the drive/spindle count
·         The number of drives increased at a mailbox-size threshold of 1128 MB (1491 MB in another model)
·         The number of drives did not decrease with smaller mailboxes
·         Only 100 databases are permitted per DAG, so "max active DBs" = 100 / copies of the DB
·         The SPECint value is NOT the CPU clock speed; 100 is a good number to pick
·         You could have different types of hard drives for active and passive copies, but it's an administrative "challenge" if there are many copies
·         Log drives, if used, could be smaller drives
·         Expen$ive $AN storage is not necessary for a high-availability solution with Exchange 2010
·         4 JBOD copies of the mailbox database on 4 servers is more fault tolerant than 2 copies on RAID 10 on 2 servers
·         JBOD minimizes the impact of a single disk failure in a properly maintained system
·         JBOD maximizes the backup performance and reduces backup time
·         The space requirements for public folder storage also need to be considered
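The JBOD and DAG-limit rules above can be restated as tiny checks. This is my own summary of the calculator's observed behavior, not Microsoft's documented logic:

```python
def jbod_allowed(db_copies, separate_log_drives):
    """JBOD needs at least 3 database copies and co-located logs;
    with fewer copies, or logs on separate disks, the calculator
    rejects the layout (as Options 2 and 7 in the tables show)."""
    return db_copies >= 3 and not separate_log_drives

def max_active_dbs(db_copies, dag_db_limit=100):
    """With a 100-database-per-DAG limit, each additional copy reduces
    the number of distinct active databases the DAG can host."""
    return dag_db_limit // db_copies

print(jbod_allowed(2, False))   # Option 2: rejected
print(jbod_allowed(3, False))   # Option 4: valid
print(max_active_dbs(4))        # 4 copies leave room for 25 active databases
```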

Comments:
Given the calculations and the reduction in drives that JBOD provided, it seems the logical choice from both a cost perspective and a high-availability perspective.
Using standalone, self-sufficient mailbox storage servers seems the most efficient option. Multiple mailbox servers could be configured as a kind of RAIS (redundant array of independent servers), which could be used as the basis for a DAG or cluster of mailbox servers. Since they would have no single point of failure or contention in a SAN array, this solution would be easily maintained, as each server is completely independent of outside systems or factors.
The advantage of a SAN is not beneficial in this case, as the number of drives/spindles matters more than raw storage space. A SAN has tremendous features for managing and sharing drives among numerous servers with regular needs. But since large Exchange server designs use multiple servers and numerous spindles, not just storage space, to get response time, spending money on SAN storage is, in this case, frivolous. The increase in components and in the complexity of maintenance and supportability provides no benefit here, as you would simply be adding the same number of drives in a SAN as you would in a DAS solution. The SAN equipment produced by the major vendors provides great performance, but the primary factor in this case is drive performance per dollar.
If one steps back and takes a purely functional view, there are a lot of hard drives involved in satisfying the requirements. While some solutions exist, and may be agreed on, to provide a "complete solution" to an organization's storage requirements, the counter-effect is that these solutions may provide features and functionality that are not required or beneficial for a high-availability messaging system.
The HP DL180 G6 rack server is an excellent example of a server with storage and processing combined in a single chassis, and is what I would recommend based on my research. The server can be selected with a drive cage with room for 14 hard drives in 2U of rack space, using 1 TB HDs and 2 SSDs. These servers provide the required storage and processing in the same rack space a SAN solution would require. Additional storage could be added by attaching an HP StorageWorks MSA70 to each server, increasing the total capacity by 25 drives. These systems are inexpensive as enterprise mail servers go, and can provide increased separation of the cluster nodes to increase resilience to failures of the entire cluster. This approach also minimizes the overall rack space required for this solution.
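As a rough check that the drive counts from the tables fit this chassis: the bay figures come from the hardware described above, but the function itself is just my back-of-the-envelope math:

```python
import math

def msa70_shelves_needed(total_drives, servers=4, internal_bays=14, shelf_bays=25):
    """MSA70 shelves each server needs to reach its share of the drive count,
    given the DL180 G6's 14 internal bays and 25 bays per MSA70 shelf."""
    drives_per_server = math.ceil(total_drives / servers)
    overflow = max(0, drives_per_server - internal_bays)
    return math.ceil(overflow / shelf_bays)

print(msa70_shelves_needed(76))   # Option 1: 19 drives per server, 1 shelf each
print(msa70_shelves_needed(36))   # Option 10: 9 drives per server, fits internally
```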

Conclusion:
Using the sizing calculator for an Exchange deployment, whether it is new or being reviewed for growth, is a very helpful and powerful way to understand how much computer storage and processing resources are required to provide an acceptable level of performance.
In this example we can see that the storage requirements come out to approximately 10 times the mailbox size, based on the requirements designed into the mailbox sizing calculator. This is a significant amount of storage, and would be a surprise to any organization contemplating a large deployment. These resources will need to be in place.
It would be useful to consider the optimal mailbox size, or the number of mailboxes, that you can accommodate with a given number of drives. The space requirements for public folder storage also need to be considered.
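As a sanity check on that 10x figure, using the general parameters from this design (the overhead multiplier is the observed output of the calculator, not one of its inputs):

```python
mailboxes = 6000
mailbox_gb = 1
overhead_factor = 10   # approximate multiplier observed in the calculator output

raw_data_gb = mailboxes * mailbox_gb            # 6,000 GB of actual mailbox data
provisioned_gb = raw_data_gb * overhead_factor  # ~60,000 GB to provision
print(provisioned_gb)

# For comparison, Option 1 (RAID, 2 copies) called for 76 x 1000 GB drives,
# or 76,000 GB raw: the same ballpark once RAID overhead is included.
```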