Musings on hardware prices - opps

Michael T. Halligan michael at halligan.org
Tue Aug 23 20:34:22 PDT 2005


>- wow.. long reply .. :-)
>  
>
You should read my project plans. It could be worse. My father was the
equivalent of an infrastructure architect consultant
at EDS for a decade. The proposals he wrote just to quote the scoping
of a large project were usually 50 pages. I think my way
of rebelling as a teenager was focusing on SMB instead of the Fortune 100.

>- simple summary ..
>	- there's no $$$ in hardware ... maybe 3% -5% margin
>	so you spend $1,000 up front to make $50 bucks when they pay up
>
>	- the way dell makes $$$ is they charge their 10% or 30% of
>	prepaid maintenance 
>  
>

Dell has a brilliantly designed manufacturing technique, and apparently
treats their staff very well. W. Edwards Deming
would be in awe of their efficiency... and rolling in his grave over
their quality problems.

Dell is something like 37% profitable overall. I've heard (but not
researched) that they are somewhere around 20%
profitable on average for every piece of hardware they sell. I can
*somewhat* believe that, given the difference between
sales-rep quotes and web quotes. I can think of at least 20 companies
I've helped start up (as a consultant) who had
no idea about negotiating with vendors. The typical response seemed to
be, "You can do that?? But the price they..."

It's a lot like the car industry in that respect. When everything
changed, the new guys on the block, selling their cars
for 2/3 of what the big five were selling them for, were also 3x as
profitable per car... Eventually they retrofitted quality,
and destroyed America's auto industry. Imagine if they had listened to
Deming and the Theory Z guys BEFORE they entered
this market.

My prediction is that Dell is either going to have to change their
thinking on quality and stop their low-end price wars, or sell
off their server manufacturing arm... The next HP merger screw-up, maybe?


>>Which distributor? The only large distributor I know that's easy to deal 
>>with is Ingram..
>>    
>>
>
>ingram is a joke in my book .. but that's just me 
>
>  
>
Who is left? I haven't really researched this in quite a while, but I
know (or rather, think I know) that IM purchased Avad, Nimax, D&H, and
Tech Pacific.
I might be mistaken, but I thought they also acquired Synnex and GnR.


>and it also depends on if we're talking "component" distributors
>vs "PC" distributors ( see below )
>  
>
>> Acma is a tier down from Malabs. I'd be happy to find out how to
>>work in-between MA & Ingram.
>>    
>>
>
>Malabs is worst than ingram in terms of ontime delivery and pricing
>and they're what i call a consumer store without a physical store
>
>- other thing that would help is for folks to stop advertising
>  parts they do NOT have in stock in the silly computer mags ( compuser )
>	- 
>	- they all get those parts within an hr at a couple of 
>	- the distributors
>  
>

My experience has been entirely the opposite. Ingram barely returns your
phone calls if
you're not doing $50k per month with them. Until recently, I dealt with
MAlabs via AIM,
which was pretty convenient. Of the last 100 purchases I've made through
MAlabs, all but
5 arrived the next day. Those 5 arrived within 3 days (beating their 5-6
business day estimate)
and weren't items they kept in stock. We've also done a few same-day
purchases from MA.

MA isn't the best at volume pricing discounts, but they're OK. The
rest of the small dealers
I've dealt with in the bay area aren't even worth calling. Acma does OK,
but I've just been
happier with MA. It was always nice knowing I could IM my MA rep and
get 48 hard drives, 4
3ware controllers, and the rest of the gear to build a pair of 6TB file
servers 24 hours later, at
wholesale prices.

If there's a better option for small things like this, I'd love to know. 
I maybe build two servers
a year at this point, and only for a customer  who really wants to 
accept the pain of
custom-built servers. I won't even build my own gear. I'd rather focus 
on my business, and I
am not in the business of building servers.



>>Hardware sucks.
>>    
>>
>
>or can be fun .. depends ... on point of view
>  
>

Fun? Hardware has always been designed for functionality. Assembling
computers is just a nightmare.
A real infrastructure requires a massive amount of documentation,
process development, and testing. I
could build 200 servers with decent specs for $300k. I'd rather spend
$450k, lay the responsibility for
warranty, parts replacement, etc. on the vendor, and focus on building a
well-managed infrastructure.



>>On the low-end, I can buy 30 servers from Dell for the same price as 10 
>>IBM or HP's low-end
>>servers (I'm talking IBM's x336 or HP's DL14X line.. Not their better 
>>gear).
>>    
>>
>
>DL series is compaq ?? and they're 1 level above dell's worst performance
>boxes
>  
>
DL is Compaq/HP. I've never known anybody to complain about reliability
on the DLs. I've got
a dozen DL145s (dual Opteron) deployed in 2 datacenters for an IP
Anycast DNS service, and
they're rather awesome. IPMI v2, a working remote console (ssh into it
and you've got a built-in
terminal emulator). Easy to work with, cheap, and very powerful.

A bit higher up, the DL380s are workhorses. The iLO makes life a lot
easier.



>>My assumption
>>with Dell is usually to expect the worst, 30% of servers to be in some 
>>state of failure at any time.
>>    
>>
>
>yup... seems to be par ... and i say one gets what one pays for ??
>
>  
>
Yes. Now the math here works out so that
1000 servers from Dell cost $1M
1000 HP servers cost $2M
1000 IBM servers cost $2.8M

If you buy those 1000 servers, using the 30% rule, that means you only
"needed" 769 of them, and
over-bought 231 servers for redundancy and extra performance, just in
case. A project I
helped a VAR bid out two years ago makes a great example.

$1.5M - 1000 servers from Dell
$1M - Implementation/development/support for one year
$250k - Networking gear
$200k - Backup solution
$300k - Storage solution
$525k - Power for three years for all gear
------------
$3.8M

The company itself ended up folding, but the spec that was written was
pretty good. The VAR itself
actually sold HP and IBM gear, and found that the cost would be
$1.3M higher for HP, $1.9M higher
for IBM gear. These were the numbers with their markup. For a service
that cared about bandwidth and
compute power from their individual nodes, "good" hardware was not
justifiable. In actuality, the Linux/cheap-server
model was perfect for this, since the redundancy was already
designed to scale out rather than up.
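The arithmetic above can be sketched out in a few lines (this is just a restatement of the quoted figures, not new data; the line-item names are abbreviated from the bid):

```python
# Sketch of the over-provisioning math above, using the quoted figures.
# The 30% rule: assume up to 30% of the cheap boxes may be in some
# failure state, so a 1000-server purchase only guarantees ~769 working.
failure_rate = 0.30
purchased = 1000

needed = round(purchased / (1 + failure_rate))  # servers actually required
spares = purchased - needed                     # bought as headroom

# Line items from the VAR bid (in $M)
bid = {
    "1000 Dell servers": 1.5,
    "Implementation/dev/support (1 yr)": 1.0,
    "Networking gear": 0.25,
    "Backup solution": 0.2,
    "Storage solution": 0.3,
    "Power (3 yrs)": 0.525,
}
total = sum(bid.values())  # 3.775, i.e. roughly $3.8M

print(f"needed ~{needed}, spares ~{spares}, bid total ${total}M")
```

The point of writing it out: the 30% failure assumption is already priced into the $3.8M Dell number, and the HP/IBM deltas quoted below come on top of that.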




>>As much as I despise
>>Dell's component choices and technical support, it's a financial AND 
>>technical decision dell is making really hard.
>>    
>>
>
>yup...if one is interested in pricing ... dell is hard to beat
>
>if the customer is intested in performace/reliability ... than dell is 
>out of the loop in my book
>  
>
Blanket statements are really hard to make on empirical evidence. My rule
of thumb is
to expect Dell gear to be down 30% more often than HP or IBM. That's to
build in room for
problems. In actual experience, I can look over uptime reports for two
customers in 2004. One
of them had 100 Dell servers. One of them had 45 HP servers. The Dell
environment, overbuilt,
had 99.99% uptime. The HP environment, if I adjust the numbers, had about
99.95% uptime. Measuring service
uptime during a 12-hour business day, both achieved five 9s.

The difference? The Dell environment was more of a pain to install 
initially, and had more server failures.
Adjusted, the HP environment cost about $70k more than Dell. Staffing 
costs were similar.
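To make those percentages concrete, here's a rough sketch of what they allow in downtime per year (the uptime figures are just the ones quoted above):

```python
# Convert an uptime percentage into downtime per year, to put the
# 99.99% vs. 99.95% comparison above in concrete terms.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(uptime_pct: float) -> float:
    """Minutes of allowed downtime per year at a given uptime %."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

print(f"99.99% uptime -> {downtime_minutes(99.99):.0f} min/yr down")  # ~53
print(f"99.95% uptime -> {downtime_minutes(99.95):.0f} min/yr down")  # ~263
```

So the gap between the two environments was roughly three and a half hours of downtime a year, which is why the extra $70k for HP is hard to justify here.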

Dell is cheap. Hardware is cheap. If you build right, hardware is not 
important.

>>>	- i havent seen any dead/broken ibm boxes ... 
>>>	- i havent seen too many dead/broken hp boxes
>>>	- i have seen too too too many dead/broken dells and compaqs
>>> 
>>>      
>>>
>>Then you haven't worked in a large enough installation. 
>>    
>>
>
>maybe ... i deal with x,000's of boxes ...
>
>  
>
In a single infrastructure? I find it hard to even imagine either
mentality... the mentality of
building your own, or the mentality of overpaying by an order of
magnitude for hardware that in the end
is just cheap commodity crap.


>>Hardware dies.
>>    
>>
>
>yup...
>
>the $100M question is ... 
>	- why does it die
> 	- is there any pattern to these dead boxes
>	- how does it die
>	...
>  
>
Depending on the app, does it really matter? A lot of my work has been
in large environments with small
classes of functionality, where 1000 servers may have only 6 functions.
In other environments, all of my
statements are pointless.

We talked to a company a few months ago (hi, Jim) with several thousand
servers. Each server is individually
important, running a unique simulation. If the simulation crashes, money
is lost. In that situation, both
expensive vendor support and very stable, expensive hardware are a
necessity.


>>Commodity hardware is cheap, no matter who is selling it to you.
>>    
>>
>
>cheap is good ....
>
>google also proved that beyond any doubts
>  
>

They also proved that power is expensive :)  The last speculation I
heard was that they have 300k servers.
I read somewhere that they average $1k per server. That means $300M in
hardware. I estimate $16 per server
per month in power and A/C. That's $57.6M per year just in power. I like
to use N/30 to estimate the monthly
cost of financing a piece of hardware. With these numbers, Google is
paying $10M per month for their
servers and $4.8M per month for their power and A/C. In the bay area, a
SAGE Level I admin makes a salary of $40k
per year. He/she costs you $55k after benefits and taxes, or $26 per
hour. A server costs Google $49 per month before
bandwidth, development costs, etc.

Hardware is very, very cheap.
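The back-of-the-envelope math above can be written out explicitly (every input here is one of the speculative estimates quoted in the paragraph, not a researched figure):

```python
# Back-of-the-envelope cost model from the paragraph above.
# All inputs are the speculative estimates quoted there.
servers = 300_000
cost_per_server = 1_000           # $1k average per server -> $300M total
power_per_server_month = 16       # $16/server/month for power & A/C

hardware_total = servers * cost_per_server        # $300M in hardware
monthly_financing = hardware_total / 30           # N/30 rule: $10M/month
monthly_power = servers * power_per_server_month  # $4.8M/month
yearly_power = monthly_power * 12                 # $57.6M/year

# Fully loaded monthly cost per server (before bandwidth, dev, etc.)
per_server_month = (monthly_financing + monthly_power) / servers  # ~$49

print(f"${monthly_financing/1e6:.1f}M/mo financing, "
      f"${monthly_power/1e6:.1f}M/mo power, "
      f"~${per_server_month:.0f}/server/month")
```

Which is the point: at roughly $49 a month, a server costs about two hours of a junior admin's loaded time.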

>  
>
>>IBM's is the best .
>>    
>>
>
>and you pay for that "name brand" too and supposedly get the 
>knowledgeable world famous ibm support instead of the unisys zombies
>though i ran into 1 knowledgeable unisys dude that knows how to
>rebuild a dead hw raid system with 3 out of 12 disk failures
>( it's actually super-flaky dell hw ( 1tb of storage at $20K ) that has
>  since been decommisioned/replaced by a $6K linux box, but at least
>  it works even if its over priced )
>  
>

If you follow my thread back (I admit I'm talking a lot), you'll see
that I specifically mentioned
IBM's embedded diagnostics (or I believe I did... I was too zealous with
the quote deletion). Brand name
or not, no other PC-based server comes close.

And, yeah, Unisys sucks. Dell uses them, IBM uses them, HP probably
uses them. That's really where
Dell falls apart. Their "professional services" and "enterprise support"
would be better staffed through
Manpower Inc.

$20k for 1TB of Dell hardware must be rather old. Last year I was
buying 1.8TB SCSI JBODs from Dell for
around $8k apiece. I was paying $7,999.10 too much for them. The ONLY
piece of storage I would consider
buying from Dell is their tape libraries (remanufactured from Quantum).
But I'd buy an external support contract,
and probably figure out who else remanufactures them and try to get
their firmware. Dell likes to release broken
firmware, as you'd know if you've ever called Dell support only to be
read a script of "Did you upgrade this firmware?
Did you upgrade this firmware? I can't help you until you flash this
<click>". I think it's their hobby.


>i re-invent the wheel when the current products out in the market
>doesn't solve the customers problems
>
>people like to use name brand or actually been there done that,
>and now they come looking for "me" ... which i like, whether its
>me or you guys that can also deliver custom solutions too
>
>and yes... i do have free time, because i do things "my way"
>since its my time and $$$ that they will have to pay ...
>  
>

As a consultant, I have to put my customer's needs first and foremost.
If I can save
my customer $100k of my implementation time by recommending software
that does
everything they need, but that I personally think isn't the greatest,
I've got to do right
by my customer. My example is configuration management systems. They all
suck.
The best option out there is also the most labor-intensive: CFEngine.
Unfortunately, if it
comes down to a choice between 400 hours to set up CFEngine perfectly in
a large environment, or
200 hours plus $40k in licensing fees for Altiris, Altiris it is.

I just have to accept that there are always several right ways, and
there is usually only one
that makes the right compromise between perfection and budget.

>but sometimes i eat the big one... live and learn ..
>
>  
>


>>Can you give me a list of your customers then?
>>    
>>
>
>i will assume that you will trade your customers list ?? :-)
>  
>
Only the ones whose long-term management contracts aren't expiring soon
*grins*


>>If you're doing things 
>>the hard way, building servers
>>by hand, writing your own tools instead of utilizing existing free and 
>>commercial ones, you are probably
>>not doing right by your customers.
>>    
>>
>
>that'd depend on the customer specs and budget and expectations ??
>  
>

I guess, depending on the circumstance, that's true. I avoid "BIG"
business; I don't like
the politics, the paperwork, or the long sales cycle. A good friend of
mine works in
a completely different world. He's a VP at a very large bank. When I
tell him I saved a
customer $400k in staffing costs with a redesign, he'll look at me and
say, "Money? We
can always make more of that".



>>I personally couldn't look a 
>>customer in the face, and tell them that
>>it makes sense for me to bill an extra 300 hours on a 300 server 
>>deployment,
>>    
>>
>
>i dont pre-assume anything like that to make a pointless point ??
>
>but i do say .. bring me a (signed) legit offer and i will match it
>or tell them to go with the other deal
>  
>
My experience might be a bit skewed. I've walked into far, far too many
contracts in
the bay area that open with the words "the last consultant..." Looking
at other people's
specs, questionable billing practices, and in general just sloppy work,
I find it odd that
businesses in the bay area hire consultants, period.

I won't even let money be mentioned until I really understand the
situation. For larger projects,
the process is: first bid the scope, then bid the design, then bid the
implementation. Hourly billing
is avoided as much as possible. My opinion is that if you can't fix-bid
the project in a way that you
profit comfortably, the price is agreed to be reasonable, and the scope
is quickly definable, then either
you shouldn't be bidding it, or the project needs full-time employees.


>>Isn't it our job to gather requirements, and understand specs, then make 
>>recommendations? That's certainly part of how I operate.
>>    
>>
>
>that's the point ...and offer complete ( working ) solutions in addition
>to "recommendations" that meets their specs and/or fix their
>specs/requirements
>
>  
>
Agreed.

>>I think you put a lot of faith on parts selection.
>>    
>>
>
>i do ... that is precisely the difference between bying from
>tom-dick-n-harry and their gorilla vs buying the same identical parts
>from some other distributors and NOT having these so called "hardware
>problems"
>
>one hopefully learns over the 30yrs of building hardware and systems 
>  
>
In the end, we're talking commodities. A long time ago I worked for a
turn-key Beowulf provider.
While trying to create a better and cheaper parts pipeline (the Beowulf
industry sucks; there is no industry, it
only pushes hardware, and the rest of the work is done by slave labo...
err, grad students),
we discovered hardware brokers. There are hardware brokers out there who
buy and sell hardware
like a commodity. It's no different than pork bellies or soybean futures.

The product you see is the same product that 100 different manufacturers
put out. You can buy
100 Nvidia graphics cards. You can buy 100 Intel Ethernet boards. I
don't believe there is any
massively significant deviation between most standard components at
scale. As long as the parts you're
buying are manufactured to "server" tolerances (which drastically
limits your selection), you're OK.
Either way, the big box pushers have FAR better economies of scale than
you or I.




More information about the Baylisa mailing list